106 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The development of the internet of things (IoT) is expanding to an ultra-large scale, providing numerous services across different domains and environments. The use of middleware eases application development by providing the necessary functional capability. This paper presents a new form of middleware for controlling smart devices installed in an intelligent environment. This new middleware functions seamlessly with any manufacturer API or bespoke controller program. It acts as an all-encompassing top layer of middleware in an intelligent environment control system, capable of handling numerous different types of devices simultaneously. It also protects against de-synchronization of the data stored in clone devices: clone devices are regularly synchronized with their original master, such that locally stored representations are continuously updated with the known true state values.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1.'>Introduction</ns0:head><ns0:p>Architectures for intelligent environments typically require some form of middleware layer, enabling installed hardware (smart devices) and software (agents or controller programs) to link and operate together. Various types of middleware have been created to perform this crucial role, such as Universal Plug and Play (UPnP), Web Services, and bespoke mechanisms written by device manufacturers <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>.</ns0:p><ns0:p>Middleware mechanisms vary in terms of functionality, with different strengths and weaknesses.</ns0:p><ns0:p>These strengths and weaknesses largely depend on their deployment conditions. This presents an unenviable challenge for programmers designing control systems, particularly for complex smart devices or intelligent environments. Selecting a particular mechanism requires trading off resources or functionality to a certain extent <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>. Some bespoke control system middleware designs recover the lost functionality by reproducing components that extend the chosen mechanism. Alternatively, programmers could implement an architecture where multiple independent middleware layers run simultaneously within a single control system. Both of these solutions naturally require greater coding effort to implement, which is a lengthier process than the default implementation.</ns0:p><ns0:p>A potential risk of data de-synchronization also prevails within such systems <ns0:ref type='bibr' target='#b3'>[3]</ns0:ref>. For instance, a device state change applied using one middleware system may not be reflected by all the alternative mechanisms in use. If their state data became desynchronized, smart devices controlled by agents may exhibit erroneous or undesirable behaviours, which could prove costly and dangerous. These issues also emerge due to hardware action request response times. If the new state data needs to be shared amongst several components within the middleware (e.g., to</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>Literature Review</ns0:head></ns0:div>
<ns0:div><ns0:head n='2.1'>Related Work</ns0:head><ns0:p>Currently, many off-the-shelf commercial devices possess inbuilt intelligent functionality (i.e., they either contain or can be accessed in some way via a computer system) <ns0:ref type='bibr' target='#b4'>[4]</ns0:ref>. To facilitate the usability of their products, hardware manufacturers include pre-installed middleware designed for computer control. Previously, control systems were device-centric; for example, the control system for washing machine cycle programming. However, with the widespread use of networking platforms such as Ethernet, Wi-Fi, and Bluetooth, interconnectivity between different devices began to emerge. Being proponents of Pervasive Computer Science concepts, such as the Internet-of-Things <ns0:ref type='bibr' target='#b5'>[5]</ns0:ref>, several hardware manufacturers chose to implement middleware for their products around established common frameworks, such as UPnP.</ns0:p><ns0:p>This was due to a belief that, by adopting a common middleware architecture, customers would use smart devices created by rival companies within their proprietary personal home networks.</ns0:p><ns0:p>However, this concept has yet to become a mainstream commercial reality. Various larger multinational hardware manufacturers, such as Apple or Sony, are adopting more protectionist marketing strategies, confining interconnectivity to their own products and services. For example, the Apple iWatch smart device will only work when paired with an Apple iPhone 5 or newer; it is not compatible with any older iPhone models or any rival smartphone handset <ns0:ref type='bibr' target='#b6'>[6]</ns0:ref>. Such smart devices operate using completely bespoke proprietary middleware systems, designed solely by a particular company.</ns0:p><ns0:p>The current literature shows that the use of middleware in IoT is limited <ns0:ref type='bibr' target='#b7'>[7]</ns0:ref>. Ngu et al. <ns0:ref type='bibr' target='#b9'>[8]</ns0:ref> designed an IoT application for real-time prediction of blood alcohol content based on smartwatch sensor data. The study also conducted a survey on the competencies of current IoT middleware.</ns0:p></ns0:div>
<ns0:div><ns0:p>In addition, the challenges and enablers associated with IoT middleware were presented in order to embrace the heterogeneity of IoT devices. Fremantle and Scott <ns0:ref type='bibr' target='#b10'>[9]</ns0:ref> utilized a structured search approach to identify 54 particular IoT middleware frameworks and examined the security frameworks associated with each. A total of 12 requirements (including integrity and confidentiality, access control, consent, policy-based security, authentication, federated identity, and device identity) were used from the first stage to validate the competencies of each system. Elkhoder et al <ns0:ref type='bibr' target='#b11'>[10]</ns0:ref> proposed middleware that accounts for emerging attributes such as seamless communication, lightweight operation, and mobility across different heterogeneous networks and domains. It involves a context-adaptive technique, which allows the user to manage the location information shared by things on the basis of policy enforcement and a context-aware mechanism. This mechanism accounts for both the preferences and informed consent of the user.</ns0:p><ns0:p>Jyothi <ns0:ref type='bibr' target='#b12'>[11]</ns0:ref> identified managing data volumes and supporting semantic modeling as open issues, specifically managing the crowdsourcing of different domains. There is scope for research towards a generic IoT middleware system that is relevant across all regions, by making all the functional aspects reusable so they can be included as enablers in the middleware system.</ns0:p><ns0:p>The survey by Razzaque, Milojevic-Jevric, and Palade <ns0:ref type='bibr' target='#b13'>[12]</ns0:ref> showed that the use of middleware assists development through its integration of heterogeneous communication and computing devices. It also states that middleware provides interoperability support for applications across different devices and services. Jeon and Jung <ns0:ref type='bibr' target='#b15'>[13]</ns0:ref> revealed that, compared to Californium, a middleware for effective association in IoT environments with vigorous performance, the average request rate was elevated by 25 percent, power consumption was reduced by up to 68%, and average response time was reduced by 90% when resource management was utilized.</ns0:p></ns0:div>
<ns0:div><ns0:p>Lastly, the latency and power consumption of IoT devices can be reduced by the proposed platform. According to da Cruz et al <ns0:ref type='bibr' target='#b16'>[14]</ns0:ref>, middleware plays an important role as it is accountable for covering the intelligence part of IoT: it makes decisions, allows devices to communicate, and integrates data from devices on the basis of the data collected. Afterward, a reference architecture model for IoT middleware was investigated based on IoT platform requirements, detailing the effective operational approaches of each proposed module and proposing fundamental security features for this software type. Zhao et al <ns0:ref type='bibr' target='#b17'>[15]</ns0:ref> proposed a support-communication-computing stack for integrating effective open-source projects, in order to devise techniques that allow sufficiently uniform human-thing associations and to develop implementation foundations for cutting-edge technologies, including semantic reasoning and fog computing.</ns0:p><ns0:p>Due to the lack of adoption of common frameworks for building complex intelligent environments or devices, the addition of a bespoke middleware layer is critical. This layer is designed to collectively handle the respective control systems of each installed smart device type. This extra layer of middleware wraps the individual bespoke device controller programs into a single API, representing the entire intelligent environment. Such layers have also been created by different Computer Science projects, which create middleware for intelligent environments and link their creations into bespoke device controllers. Although all might concentrate on the same area of the control system architecture, the actual functionality can vary broadly in design. For example, Román et al. <ns0:ref type='bibr' target='#b19'>[16]</ns0:ref> created their bespoke Gaia infrastructure to allow distributed collections of smart objects and environments to be represented and accessed via an interface. Later, Roalter et al. <ns0:ref type='bibr' target='#b20'>[17]</ns0:ref> chose to integrate a control system for a robot into their intelligent environment middleware,</ns0:p></ns0:div>
<ns0:div><ns0:p>allowing it to seamlessly access sensors and actuators within the smart space. In their OpenCOPI middleware, Lopes et al. <ns0:ref type='bibr' target='#b22'>[18]</ns0:ref> utilized a web ontology language based around semantic web services to create a mediator regulating access between ubiquitous applications and available service providers. Stavropoulos et al. <ns0:ref type='bibr' target='#b23'>[19]</ns0:ref> also used Web Services as the base for their aWESoME infrastructure, which, in addition to being an intelligent environment controller, focused on promoting 'energy savings' by consuming low power in its operations. Finally, Olaru et al. <ns0:ref type='bibr' target='#b24'>[20]</ns0:ref> used a context-aware multi-agent system as their middleware base for controlling ambient intelligence exhibited within an environment.</ns0:p><ns0:p>Different middleware platforms for networked robotic systems were discussed against these criteria <ns0:ref type='bibr' target='#b25'>[21]</ns0:ref>. The majority of the middleware platforms have varied objectives, including reusability, development process, self-discovery, self-configuration, supporting QoS, flexibility, and integration. In addition, the study discussed several middleware platforms for networked robotic systems in terms of self-adaptation, discovery and higher-level abstractions, collaboration support, and other advanced characteristics for integration. Rodriguez-Molina and Kammen <ns0:ref type='bibr' target='#b26'>[22]</ns0:ref> demonstrated the collection of services offered by a community of researchers, developers, and scientists. Furthermore, their middleware solution utilizes an API that explains how services are accessed from both the applications and the hardware embedded for a Smart Grid-like deployment. Moreover, the authors indicated that boundaries exist between the hardware and the network that are compliant with a standard describing the sub-systems of the software. Vikash et al <ns0:ref type='bibr' target='#b27'>[23]</ns0:ref> conducted a survey on middleware for WSNs towards IoT, offering a comparative view of different middleware and how this middleware technology can be utilized to address several issues emerging in the development of IoT applications. Rodriguez-Molina et al <ns0:ref type='bibr' target='#b29'>[24]</ns0:ref></ns0:p></ns0:div>
<ns0:div><ns0:p>identified several functionalities regarded as essential for a semantic middleware architecture conjoined with maritime operations, including access to the application layer, context awareness, and device and service registration. In addition, Rodriguez-Molina et al <ns0:ref type='bibr' target='#b29'>[24]</ns0:ref> interwove other technologies with the middleware, such as acoustic networks and wireless communications. Under such circumstances, Rodriguez-Molina et al <ns0:ref type='bibr' target='#b30'>[25]</ns0:ref> established an approach for interchanging information at the data level among independent maritime vehicles, which is of significant importance as the required information has to be defined, along with the size of the transferred data. Rodriguez-Molina et al <ns0:ref type='bibr' target='#b30'>[25]</ns0:ref> put forward the Maritime Data Transfer Protocol for interchanging standardized information at the data level among independent maritime vehicles, together with the procedures needed for information interchange.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>Techniques/Protocols/Paradigms used in the Development of Proposed Work</ns0:head><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref> highlights the integration of the additional middleware layer within the control system architecture using the 'Device Wrapper' node. Without this layer of middleware, it would be necessary to learn the interface of, and control, each smart device individually. This requires the use of multiple different programming languages, such as Java, Python, C, C++ or C#, and possibly OS-dependent software packages. For example, the University of Essex currently has two full-scale intelligent environment test-beds, each using a different style of additional middleware layer to amalgamate devices from various manufacturers. These are the iClassroom and the iSpace.</ns0:p><ns0:p>[Insert Figure <ns0:ref type='figure'>1</ns0:ref> Here] The iClassroom <ns0:ref type='bibr' target='#b31'>[26]</ns0:ref> is an intelligent environment customised to resemble a university or school teaching room (Figure <ns0:ref type='figure' target='#fig_16'>2</ns0:ref>). Most smart technologies are used directly to augment presentations or other teaching strategies to enhance student learning experiences. The middleware in the environment uses Web Services to wrap a diverse collection of devices into a single common interfacing mechanism. The Web Services system uses Eclipse's Jetty Web Server <ns0:ref type='bibr' target='#b32'>[27]</ns0:ref> for its operations.</ns0:p><ns0:p>[Insert Figure <ns0:ref type='figure' target='#fig_16'>2</ns0:ref> Here] The iClassroom uses a centralized configuration, with all the middleware running from a single server. In terms of functionality, the main user interface is a bespoke website hosted on the environment server. This website contains numerous hyperlinks to the Web Services used to control each device. The user interface can be loaded in any standard browser application, including on smartphones and tablets. Through the user interface, smart devices in the environment can be connected to, controlled, and monitored via the Web Services system from any remote location capable of accessing the iClassroom network.</ns0:p><ns0:p>[Insert Figure <ns0:ref type='figure' target='#fig_20'>3</ns0:ref> Here] The iSpace <ns0:ref type='bibr' target='#b33'>[28,</ns0:ref><ns0:ref type='bibr' target='#b35'>29]</ns0:ref> is an intelligent environment test-bed customized to resemble a typical household environment. The multi-roomed space includes a full-sized lounge/kitchen, study, bathroom, and bedroom, each connected by a central hallway. Unlike the iClassroom, most of the smart technologies in the iSpace are deliberately concealed within hollow walls and ceilings (Figure <ns0:ref type='figure' target='#fig_20'>3</ns0:ref>). This gives unknowing visitors to the space an initial impression that they are in a normal (i.e., non-augmented) environment. The middleware in this intelligent environment is based upon UPnP, with controllers for over sixty smart devices linked together into a single API.</ns0:p><ns0:p>More specifically, the Youpi UPnP stack <ns0:ref type='bibr' target='#b36'>[30]</ns0:ref> was used to implement wrappers for each smart device.</ns0:p></ns0:div>
<ns0:div><ns0:p>The iSpace comprises a distributed configuration, with the middleware and device control code split across several different computers, each connected via a common network.</ns0:p><ns0:p>The Youpi UPnP wrappers allow each smart device to broadcast its existence on the same network <ns0:ref type='bibr' target='#b38'>[31]</ns0:ref>. The iSpace allows programmers to create bespoke controller code to better integrate the smart devices into their projects. To discover them, programmers must create a Youpi UPnP control point within their code and perform a search. This can be either for a specific device or a general search that returns a set of every smart device discovered on the network.</ns0:p><ns0:p>Using a returned smart device instance, a control program can then isolate specific state variables, actions, and arguments to either monitor or modify the associated hardware. Using UPnP, user control programs can 'subscribe' to individual state variables within a smart device instance. A change in the state of a subscribed variable automatically flags an associated listener within the user's interface program.</ns0:p><ns0:p>For a large environment like the iSpace, implementation and maintenance of the UPnP middleware require significant programming effort. To operate a wrapper for each smart device instance, a significant amount of information concerning the hardware and its uses is required.</ns0:p><ns0:p>Initially, every state variable, action, and argument of each device type needed to be individually declared. These were associated with dedicated UPnP state listeners to allow the correct functioning of the subscription system. Finally, it was necessary to individually initialize and start each instance of a smart device in the iSpace, which included assigning unique attributes such as names and UUIDs.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.'>The Environment Development Kit (EDK)</ns0:head><ns0:p>The new system is created from scratch and is not an extension or reconfiguration of a preexisting architecture. Thereby, steps could be taken to address the synchronisation and resource issues outlined above. Among its properties, each device handled by the system can be: e) Assigned a unique controller program instance or one collectively shared by a group.</ns0:p><ns0:p>The system operates in either a centralized or distributed context and is not OS dependent.</ns0:p><ns0:p>Controllers assigned to individual devices or groups of devices can be updated using a 'hot-swap', without needing to restart any part of the system (as long as the device is not being accessed at the time of modification). Additionally, the inventory of devices does not have to be declared before the system can run, as new devices can be discovered and handled by the system at any time.</ns0:p><ns0:p>This is similar to various middleware previously designed for intelligent environments, such as UPnP. A multicast communications system allows devices to be linked to agents and other user programs. Multicast is less dependable than alternative communication methods such as TCP, because there is no guarantee that intended targets will receive a message broadcast to the network. However, it is significantly faster than TCP and can share a message across multiple targets in a subscribed group simultaneously, which are the main reasons behind its usage in this middleware system. Similar multicast systems are in use in computer game virtual worlds, where the environment is divided into regions, each with an assigned group containing all the players, objects and</ns0:p></ns0:div>
<ns0:div><ns0:p>other updatable information present in that area <ns0:ref type='bibr' target='#b39'>[32,</ns0:ref><ns0:ref type='bibr' target='#b40'>33]</ns0:ref>. When an entity moves from one region to another, it changes its subscription to the corresponding multicast group while simultaneously leaving the previous one.</ns0:p><ns0:p>[Insert Figure <ns0:ref type='figure' target='#fig_40'>4</ns0:ref> Here] Users are able to access the EDK network, search for represented smart devices, and send action requests to specific discovered instances via an inbuilt control point. Devices discovered by each control point instance are automatically 'cloned' and locally stored. Clone devices are exact copies of their originals, except that they possess no controller, so they cannot directly interface with real hardware in the environment. Cloned devices receive state updates from the original 'master' device. These synchronisation updates occur regularly.</ns0:p><ns0:p>Typically, this means an update several times a minute, although the interval between messages can be increased or decreased according to how important it is for clones to return the most up-to-date state information possible.</ns0:p><ns0:p>For actuators, EDK control points bypass the clone representation and transmit a state change action request directly to the master device. Multicast communication is used to receive a response from the device indicating the success or failure of the requested change.</ns0:p><ns0:p>However, if the new state was applied to the master device, the local clone would be altered during the next synchronisation update. To speed up this process, the EDK was designed so that changes automatically and immediately prompt a synchronisation update, temporarily overriding the scheduled system (which has its internal timer reset). The EDK provides interface methods that can be used with a broad range of bespoke sophistication for devices or environments.</ns0:p><ns0:p>The EDK comes packaged with classes for representing several basic actuator and sensor types, used by master device representations, and subsequently by control point instances when creating their clones. Furthermore, the system is equally capable of handling unknown device types encountered in an ad-hoc manner. This is achieved by creating a cloned instance of the discovered device using a generic class appropriate for representing its declared features and variables. Several different generic classes exist, depending on whether the device is an actuator or a sensor and the number of variables it possesses. Because of this feature, control points do not require setup or an inventory of an intelligent environment before being connected. Additionally, it is possible to use the EDK with intelligent environments based around both centralized and distributed architectures, as there is no requirement to host master devices and clones on the same computer for them to communicate.</ns0:p></ns0:div>
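<ns0:div><ns0:p>The group-subscription model described above can be illustrated with the standard java.net classes. This is an illustrative sketch only, not the EDK's own communications code (which is not reproduced in the text); the group address and port are hypothetical placeholders.</ns0:p><ns0:code>
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

public class MulticastSketch {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("239.255.10.1"); // hypothetical multicast group address
        MulticastSocket socket = new MulticastSocket(5050);        // hypothetical port
        socket.joinGroup(group);                                   // subscribe, like entering a region

        byte[] buffer = new byte[1024];
        DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
        socket.receive(packet);                                    // blocks until a group message arrives
        System.out.println(new String(packet.getData(), 0, packet.getLength()));

        socket.leaveGroup(group);                                  // unsubscribe when leaving the region
        socket.close();
    }
}
</ns0:code></ns0:div>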
<ns0:div><ns0:head n='3.1'>Implementing Master Devices</ns0:head><ns0:p>Three key components are required for a device to exist on an EDK network: 1) A representation identifying the attributes of the device and declaring whether it is an actuator or sensor.</ns0:p><ns0:p>2) A gateway to the target EDK network from where the device will be accessible.</ns0:p><ns0:p>3) A controller connecting the virtual EDK representation with some counterpart hardware that exists in the real world. Together, these components allow a 'master' device to be created using the EDK API. Master devices can be implemented and deployed remotely on one or several networked computers. Depending upon personal preference, a master device application can also be used to generate one or several different actuators and sensors, with no restriction on device type.</ns0:p><ns0:p>Multiple different control programs can be imported into the same implementation and, if desired, a group of devices can either share a single instance of the same controller or each be allocated their own.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1.1'>Step One: Declaring Constants</ns0:head><ns0:p>[Insert Figure <ns0:ref type='figure' target='#fig_42'>5</ns0:ref> Here] Firstly, several constants need to be declared to allow the communication and control systems to operate correctly. Figure <ns0:ref type='figure' target='#fig_42'>5</ns0:ref> provides examples of these; each attribute is explained further during the implementation process.</ns0:p></ns0:div>
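<ns0:div><ns0:p>As the original Figure 5 is not reproduced here, the following is a minimal Java sketch of what such a constants block could look like. All values are placeholders: the port 8000 and the 'devicecontroller'/'Controller' names are taken from examples mentioned later in the text, while the remaining values are assumptions, not the authors' actual configuration.</ns0:p><ns0:code>
// Hypothetical example values; real deployments substitute their own.
public static final String NETWORK_ADDRESS      = "239.255.10.1";    // must be a valid multicast address (Step Three)
public static final int    COMMUNICATIONS_PORT  = 5050;              // may need opening on network firewalls
public static final int    WEBSERVICES_PORT     = 8000;              // internal HTTP server port (Step Six, optional)
public static final String CONTROLLER_DIRECTORY = "controllers/";    // folder holding controller .jar files
public static final String CONTROLLER_PACKAGE   = "devicecontroller";// package of the controller's main class
public static final String CONTROLLER_CLASS     = "Controller";      // name of the controller's main class
</ns0:code></ns0:div>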
<ns0:div><ns0:head n='3.1.2'>Step Two: Creating Device Instances</ns0:head><ns0:p>[Insert Figure <ns0:ref type='figure' target='#fig_22'>6</ns0:ref> Here] [Insert Figure <ns0:ref type='figure' target='#fig_23'>7</ns0:ref> Here] The EDK API contains several different classes that can be used to create new representations of individual master devices. The selection of the class for a given situation depends upon several factors, specifically the number of state variables supported by the device along with its identification as an actuator or a sensor. For common device types, the EDK API contains several dedicated classes, many of which contain bespoke convenience methods that provide better control of specific state variables. For example, Figures <ns0:ref type='figure' target='#fig_47'>6 and 7</ns0:ref> show the creation of an instance of each of two different types of light-emitting devices. Both types have a state variable 'Power' declared, which controls whether the device is on or off. To support these actions, both the 'BooleanLight' and 'DimmableLight' classes contain two convenience methods, 'turnOn' and 'turnOff', which allow the 'Power' state variable to be set without the need to declare or process any new values directly. The 'DimmableLight' class used in Figure <ns0:ref type='figure' target='#fig_23'>7</ns0:ref> additionally contains 'getBrightness' and 'setBrightness' methods, corresponding to the declared 'Brightness' state variable, used to control the emitted light level.</ns0:p><ns0:p>Without these convenience methods, the state variable name would need to be declared, along with any new state value (if applicable), in a formatted String in order to perform the same function. If the device being used is uncommon or unknown for some reason, instances of the EDK API's generic series of classes can be created. These create representations based purely on the number of declared state variables. For actuators, these would be 'SingleVariableActuator' and 'MultiVariableActuator', whereas for sensors the appropriate generic classes would be 'SingleVariableSensor' and 'MultiVariableSensor'. Each class contains several different constructors based on the available information about the device being created. For instance, in Figures <ns0:ref type='figure' target='#fig_47'>6 and 7</ns0:ref>, the constructor is provided with a name for the device instance, the supported state variables (provided in a String array for multi-variable devices, as seen in Figure <ns0:ref type='figure' target='#fig_23'>7</ns0:ref>), and a description of the device itself. In this instance, all other required variables, such as a unique UUID, are allocated to the new device by the EDK API. Figures <ns0:ref type='figure' target='#fig_47'>6 and 7</ns0:ref> also show how a 'StateChangeListener' can be attached to individual master device instances, which flags whenever the value of any supported state variable is changed, either by some attached hardware or due to a user request. 1 Note: When implementing master devices, it is highly recommended that each instance is given a unique name value. It should always be ensured that device instances have different UUID values, which can be generated randomly using the Java SDK UUID class.</ns0:p></ns0:div>
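<ns0:div><ns0:p>Since Figures 6 and 7 are not reproduced here, the following Java sketch illustrates the pattern the text describes. The class names, constructor arguments, and listener concept follow the description above, but the exact signatures (including the 'addStateChangeListener' method name and the listener callback shape) are assumptions.</ns0:p><ns0:code>
// Single-variable actuator: assumed argument order of name, state variable, description.
BooleanLight light1 = new BooleanLight("Light1", "Power", "A simple on/off light");

// Multi-variable actuator: state variables are passed as a String array.
DimmableLight dimmer1 = new DimmableLight("Dimmer1",
        new String[] {"Power", "Brightness"}, "A dimmable spotlight");

// Flagged whenever hardware or a user request changes a state variable.
dimmer1.addStateChangeListener(new StateChangeListener() {
    @Override
    public void stateChanged(String variableName, String newValue) {
        System.out.println(variableName + " changed to " + newValue);
    }
});
</ns0:code></ns0:div>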
<ns0:div><ns0:head n='3.1.3'>Step Three: Adding Communications</ns0:head><ns0:p>[Insert Figure <ns0:ref type='figure' target='#fig_24'>8</ns0:ref> Here] An instance of the 'Communications' class must be created in order for master devices to be able to connect to an EDK network. Figure <ns0:ref type='figure' target='#fig_24'>8</ns0:ref> shows how to do this, using two of the variables from Figure <ns0:ref type='figure' target='#fig_42'>5</ns0:ref> mentioned earlier to supply values for the network address and communications port. The EDK uses multicast communication to allow master devices to send updates to any running control points that have joined the same group. As a consequence, it is important to ensure that the value used for the 'NETWORK_ADDRESS' variable is a valid multicast address. It may also be necessary to open the value used for 'COMMUNICATIONS_PORT' on firewalls, which may otherwise block the sending or receiving of multicast packets on the network.</ns0:p></ns0:div>
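<ns0:div><ns0:p>In place of the unreproduced Figure 8, a one-line Java sketch of the described call; the constructor's argument order is an assumption based on the text.</ns0:p><ns0:code>
// Joins the multicast group on which master devices broadcast their updates.
Communications communications = new Communications(NETWORK_ADDRESS, COMMUNICATIONS_PORT);
</ns0:code></ns0:div>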
<ns0:div><ns0:head n='3.1.4'>Step Four: Loading Device Controllers</ns0:head><ns0:p>[Insert Figure <ns0:ref type='figure' target='#fig_25'>9</ns0:ref> Here] Control system programs for individual smart devices are created independently of the EDK.</ns0:p><ns0:p>Once implemented, a device control system can be uploaded into a master device via the EDK API. Figure <ns0:ref type='figure' target='#fig_25'>9</ns0:ref> shows how to do this. 'DeviceController.jar' is the filename of the controller being uploaded, from the designated 'CONTROLLER_DIRECTORY'. To integrate the controller program with the master device, the EDK needs to know the package ('CONTROLLER_PACKAGE') and the name of the main class ('CONTROLLER_CLASS').</ns0:p><ns0:p>As before, examples of these values are provided in Figure <ns0:ref type='figure' target='#fig_42'>5</ns0:ref>.</ns0:p></ns0:div><ns0:div><ns0:head n='3.1.5'>Step Five: Creating a Device Hub</ns0:head><ns0:p>Controllers are loaded for use with the device classes created in Step Two, which often contain several convenience methods for performing device-specific actions. To use these methods, state variables must be declared using the specific names specified by the Javadoc information for the relevant class. The next step is to create a hub to process devices. The 'communications' variable of the 'DeviceHub' constructor should be the same instance of the 'Communications' class created back in Step Three. Once the hub is initialised, each of the smart device instances created in Step Two needs to be individually added. As these are master devices, they also need to be associated with their respective controller programs, loaded during Step Four. Figure <ns0:ref type='figure' target='#fig_26'>10</ns0:ref> provides an example of how to implement this for light devices.</ns0:p><ns0:p>The purpose of the 'DeviceHub' class differs slightly depending upon whether it is used within an implementation for master devices or an EDK control point. For master devices, the hub liaises with the communication system created in Step Three to have each of its stored devices access their associated controllers and perform a live update of their recorded state variables. Typically, this involves each device accessing the real-world hardware to which its control system connects it. Once acquired, the up-to-date state variable information is then passed back to the communications system, where it is transmitted to the EDK network and used to synchronize any listening clone device representations.</ns0:p></ns0:div>
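<ns0:div><ns0:p>As Figures 9 and 10 are not reproduced here, the following Java sketch suggests how Steps Four and Five could look together. The 'DeviceHub' name and its 'communications' constructor argument come from the text; the controller-loading call and the 'addDevice' pairing of device and controller are hypothetical shapes chosen for illustration.</ns0:p><ns0:code>
// Step Four (hypothetical loader call): upload a controller .jar into the EDK.
Object controller = EDK.loadController(CONTROLLER_DIRECTORY + "DeviceController.jar",
        CONTROLLER_PACKAGE, CONTROLLER_CLASS);

// Step Five: create the hub around the Step Three communications instance,
// then register each master device together with its controller.
DeviceHub hub = new DeviceHub(communications);
hub.addDevice(light1, controller);   // a controller instance may be unique...
hub.addDevice(dimmer1, controller);  // ...or shared by a group of devices
</ns0:code></ns0:div>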
<ns0:div><ns0:head n='3.1.6'>Step Six: Adding Web Services, (Optional)</ns0:head><ns0:p>[Insert Figure <ns0:ref type='figure' target='#fig_27'>11</ns0:ref> Here] Using the Web Services interface method with an EDK middleware implementation is optional, so this step can be ignored if desired. However, enabling the Web Services system only requires the code shown in Figure <ns0:ref type='figure' target='#fig_27'>11</ns0:ref>, using the 'WEBSERVICES_PORT' value declared in Step One, which in the case of this example is shown in Figure <ns0:ref type='figure' target='#fig_42'>5</ns0:ref>.</ns0:p><ns0:p>Once added to an instance of the 'DeviceHub' class, the EDK mechanism will auto-generate a Web Services control interface for each of the declared smart devices and add them to the internal HTTP server. Currently, two different commands are implemented for sensors (i.e., about and get), and three for actuators (i.e., about, get and set). The Web Services interface can be loaded using any standard Web Browser, including on most mobile devices, such as smartphones and tablet computers. The syntax for an EDK Web Service is naturally bespoke to each situation where it is used, but the basic URL structure is built from the following elements: '8000' is the port number for the Web Server, as declared by the value of the 'WEBSERVICES_PORT' constant created back in Step One; 'Light1' and 'Dimmer1' are the names of the smart devices being accessed, as declared when their representations were created in Step Two.</ns0:p></ns0:div>
<ns0:div><ns0:head>'about'</ns0:head><ns0:p>A command to display general information about the specified device.</ns0:p></ns0:div>
<ns0:div><ns0:head>'get'</ns0:head><ns0:p>A command to return the name of each state variable supported by the specified device, along with its currently recorded value.</ns0:p><ns0:p>'set?Power:1' A command to set state variable 'Power' to a value of '1'. (Note: 0 = OFF, 1 = ON.)</ns0:p><ns0:p>'get?Brightness' A command to return the current value of state variable 'Brightness'.</ns0:p><ns0:p>'set?Brightness:75' A command to set state variable 'Brightness' to a value of '75'.</ns0:p></ns0:div>
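<ns0:div><ns0:p>Assembled from the elements above, example request URLs might look as follows. The '{host}' placeholder and the exact path ordering are assumptions, since the text lists the URL components but not a complete example.</ns0:p><ns0:code>
http://{host}:8000/Light1/about
http://{host}:8000/Light1/set?Power:1
http://{host}:8000/Dimmer1/get
http://{host}:8000/Dimmer1/set?Brightness:75
</ns0:code></ns0:div>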
<ns0:div><ns0:head n='3.1.7'>Creating a Control Point</ns0:head><ns0:p>[Insert Figure <ns0:ref type='figure' target='#fig_19'>12</ns0:ref> Here] An instance of the 'ControlPoint' class allows external client programs to access devices via an EDK network. Figure <ns0:ref type='figure' target='#fig_19'>12</ns0:ref> shows the API code required to perform this task. The String variable used in the 'ControlPoint' constructor is a bespoke name for the specific instance being created. If more than one control point is being used within a client program (i.e., to access different EDK networks), this variable can be used to identify specific instances.</ns0:p><ns0:p>A control point effectively acts as a portal into the EDK middleware system, allowing users to search for groups of master devices or specific master devices on the associated network. The communications system supplied to a control point provides the details of the EDK network being accessed and is also responsible for processing state update messages from the master devices. The control point itself automatically generates an internal 'DeviceHub' instance, which is used to create and store clone device instances based on the information received by the communications system from the EDK network. The control point accesses the stored clone devices and uses their information to provide returnable results for user searches.</ns0:p></ns0:div>
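<ns0:div><ns0:p>A minimal Java sketch of the described construction, standing in for Figure 12; the assumption is that the constructor takes the bespoke instance name plus the 'Communications' instance describing the target network, and the name used here is hypothetical.</ns0:p><ns0:code>
Communications communications = new Communications(NETWORK_ADDRESS, COMMUNICATIONS_PORT);
ControlPoint controlPoint = new ControlPoint("iSpaceClient", communications); // name is illustrative
</ns0:code></ns0:div>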
<ns0:div><ns0:head n='3.1.8'>Searching for Devices</ns0:head></ns0:div>
<ns0:div><ns0:p>To search for known master devices present on an EDK network, a 'ControlPoint' instance can use its 'searchForDevices' method (Figure <ns0:ref type='figure' target='#fig_29'>13</ns0:ref>). Instances of 'ControlPoint' will only become aware of master devices upon receiving an update packet from them. Therefore, upon initially starting, a delay might occur before all master devices present on a network are discovered, depending on where each device is in its update cycle when the control point joins the EDK multicast group. It is typically a good idea to enclose the search command in a 'for', 'do-while' or 'while' loop to keep the control point scanning the network until a non-empty array is returned or the desired device is found. Alternatively, this command could be repeatedly called from within an isolated thread, which runs continuously in the background, allowing devices that start broadcasting after the control point's initial search also to be detected.</ns0:p></ns0:div>
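<ns0:div><ns0:p>A sketch of the polling loop recommended above. The 'searchForDevices' name is from the text; the returned 'Device[]' type and the polling interval are assumptions.</ns0:p><ns0:code>
// Keep scanning until at least one master device has announced itself.
// (Assumes the enclosing method handles or declares InterruptedException.)
Device[] devices = controlPoint.searchForDevices();
while (devices.length == 0) {
    Thread.sleep(1000);                        // wait for the next update cycle
    devices = controlPoint.searchForDevices();
}
</ns0:code></ns0:div>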
<ns0:div><ns0:head n='3.1.9'>Searching for Specific Devices</ns0:head><ns0:p>[Insert Figure <ns0:ref type='figure' target='#fig_30'>14</ns0:ref> Here] If details of the target device or devices are known ahead of time, a client program can alternatively use one of the more specific search methods of the 'ControlPoint' class. Figure <ns0:ref type='figure' target='#fig_30'>14</ns0:ref> shows three such methods, each targeting different attributes of master devices. Firstly, the 'searchForDevicesByType' method can be used to filter the known list of master devices by their type, returning an array of any matching instances stored in the control point's 'DeviceHub'.</ns0:p><ns0:p>The topmost example in Figure <ns0:ref type='figure' target='#fig_30'>14</ns0:ref> uses this method to search for all instances of a 'BooleanLight'. The 'Actuator' and 'Sensor' classes in the API both contain numerous other declared variables that can be used with this method, each representing one of the pre-formed</ns0:p></ns0:div>
<ns0:div><ns0:p>device types. The method can alternatively be passed one of the 'MULTI_VARIABLE_DEVICE' variables (also found in the 'Actuator' and 'Sensor' classes), or a bespoke device name entered as a String.</ns0:p><ns0:p>The two remaining search methods shown in Figure <ns0:ref type='figure' target='#fig_30'>14</ns0:ref> are each designed to return only a single device, matching either a specified name or UUID criterion. If no matching device is found, then a 'null' value is returned. If, for some reason, two different master devices existed on an EDK network and both were called 'Light1', the 'searchForDeviceByName' method would only return the first instance it encountered upon scanning the contents of the control point's device hub. The same applies to the 'searchForDeviceByUUID' method if both devices shared the same UUID value. 1</ns0:p></ns0:div>
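<ns0:div><ns0:p>A sketch of the three specific searches described above. The method names are from the text, while the 'BOOLEAN_LIGHT' constant name and the example arguments are hypothetical.</ns0:p><ns0:code>
Device[] lights = controlPoint.searchForDevicesByType(Actuator.BOOLEAN_LIGHT); // by pre-formed type
Device named    = controlPoint.searchForDeviceByName("Dimmer1");               // by unique name
Device byUuid   = controlPoint.searchForDeviceByUUID("example-uuid-value");    // by UUID; null if absent
</ns0:code></ns0:div>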
<ns0:div><ns0:head n='3.1.10'>Processing EDK Smart Devices</ns0:head><ns0:p>The instance of 'DeviceHub' created by a control point is used to locally store the clones created to represent networked master devices. The device hub creates clones automatically whenever information received from the communications system does not match any previously known representation. Clone devices are generally stored locally on the same computer as the control point used to create them. Aside from not possessing controllers for hardware, they are identical to the master devices that spawned them in every way. Even the unique attributes of the original master device are copied, including its name and UUID value. As such, by using the associated classes within the EDK API, the details and states of each clone can be read, and in the case of actuators manipulated, in the same manner as if connecting directly to any master device instance.</ns0:p><ns0:p>[Insert Figure <ns0:ref type='figure' target='#fig_62'>15</ns0:ref> Here] The EDK API contains several different methods for reading the state of smart devices. 1 Note: When implementing master devices, it is highly recommended that each instance is given a unique name value. It should always be ensured that device instances have different UUID values, which can be generated randomly using the Java SDK UUID class. Among the examples in Figure <ns0:ref type='figure' target='#fig_62'>15</ns0:ref>, the topmost method is a general 'getState' command, which is common to every EDK device. When called, the 'getState' command will return a single String representation of the entire device, or more specifically, its state variable values. Responses are always sent in pairs, with the name of the state variable followed by its current value. For instance, in the case of the 'Light1' 'BooleanLight' device used in the implementation examples earlier, a 'getState' request could result in either of the following String responses;</ns0:p><ns0:formula xml:id='formula_0'>Power:0 Power:1</ns0:formula><ns0:p>Here, 'Power' is the name of the only state variable included in a 'BooleanLight' object, while zero (off) and one (on) are the possible current values of that state. The colon separating the two values acts as a key in a split command to allow easy separation of the variable name and its value.</ns0:p><ns0:p>In the case of multi-variable devices, which contain more than one state variable, an additional tilde key ('~') is added to separate the individual attributes. So, for the 'Dimmer1' 'DimmableLight' used in earlier examples, a 'getState' request could return;</ns0:p><ns0:p>Power:0~Brightness:100</ns0:p><ns0:p>Here, 'Power' is the first state variable and 'Brightness' is the second, with current recorded values of zero (off) and one hundred (percentage of maximum illumination), respectively 1 .</ns0:p></ns0:div>
<ns0:div><ns0:p>Figure <ns0:ref type='figure' target='#fig_62'>15</ns0:ref> also shows three methods of the API that can be used to return only the current value of a specific state variable, rather than the name/value pairs shown above. Generally, this is achieved by specifying the name of the required state variable as a variable in the method call.</ns0:p><ns0:p>The second and third examples in Figure <ns0:ref type='figure' target='#fig_62'>15</ns0:ref> demonstrate this process for a 'BooleanLight' and 'DimmableLight' respectively. Typically, the returned state values are in a String format, but they can easily be converted, as shown with the 'Brightness' variable example, where the value is converted into an integer once returned.</ns0:p><ns0:p>In many cases, the String-to-integer conversion can be avoided, as performed in the third example of Figure <ns0:ref type='figure' target='#fig_62'>15</ns0:ref>. This is because EDK API devices include convenience methods which automatically return state variable values in their most appropriate format. This is demonstrated by the bottommost method in Figure <ns0:ref type='figure' target='#fig_62'>15</ns0:ref>, which uses the 'getBrightness' method found in the 'DimmableLight' class, which automatically returns the state value as an integer.</ns0:p><ns0:p>All that is required to use these bespoke convenience methods is to cast the generic 'Device' object returned by a control point search into the appropriate device type class, as shown in the example.</ns0:p></ns0:div>
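<ns0:div><ns0:p>A sketch of the reading styles just described, reusing the 'named' device from the earlier search sketch. 'getState' and 'getBrightness' are named in the text; the per-variable getter shown here ('getStateVariable') is a hypothetical name for the style of call the text describes.</ns0:p><ns0:code>
String whole = named.getState();                        // e.g. "Power:0~Brightness:100"
String power = named.getStateVariable("Power");         // single value as a String (assumed name)
int level    = Integer.parseInt(named.getStateVariable("Brightness")); // manual conversion
int direct   = ((DimmableLight) named).getBrightness(); // convenience method returns an int directly
</ns0:code></ns0:div>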
<ns0:div><ns0:head n='3.1.11'>Writing to Smart Devices</ns0:head><ns0:p>[Insert Figure <ns0:ref type='figure' target='#fig_65'>16 Here]</ns0:ref> In the example provided by Figure <ns0:ref type='figure' target='#fig_64'>16</ns0:ref>, an array returned by a control point search (as described in Section 3.1.8) is scanned for a specific device called 'Dimmer1', which is also a 'DimmableLight'. If found, the value of the state variable 'Power' is requested from the device.</ns0:p><ns0:p>If the value returned for 'Power' is zero (off), then a command is sent to turn the light on. In any other circumstance, the command is to turn the light off. In the example, the generic 'device' variable taken from the 'Device' array is cast into a 'DimmableLight' object. This allows the programmer to access the two convenience methods 'turnOff' and 'turnOn', which are contained within the Actuator class. The EDK API contains several models for intelligent devices that can be used in place of the more generic actuator and sensor classes. Some of these classes also contain further convenience methods specific to that device type. For example, the 'DimmableLight' class also contains a 'setBrightness' method to allow a specific value to be entered for a 'Brightness' state variable, controlling the amount of light emitted by the device.</ns0:p></ns0:div>
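<ns0:div><ns0:p>A sketch of the toggling logic described for Figure 16, reusing the 'devices' array from the search sketch. The cast and the 'turnOn'/'turnOff' convenience methods follow the text; 'getName' and the per-variable getter are assumed accessor names.</ns0:p><ns0:code>
for (Device device : devices) {
    if ("Dimmer1".equals(device.getName())) {          // find the target device
        DimmableLight dimmer = (DimmableLight) device; // cast to reach convenience methods
        if ("0".equals(dimmer.getStateVariable("Power"))) {
            dimmer.turnOn();                           // 'Power' was 0 (off), so switch on
        } else {
            dimmer.turnOff();                          // otherwise switch off
        }
    }
}
</ns0:code></ns0:div>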
<ns0:div><ns0:head n='3.1.12'>Implementing EDK Compatible Device Controllers</ns0:head><ns0:p>[Insert Figure <ns0:ref type='figure' target='#fig_32'>17</ns0:ref> Here] Recalling Section 3.1.1, two of the constants that needed to be declared when implementing a master device (as shown in Figure <ns0:ref type='figure' target='#fig_42'>5</ns0:ref>) were 'CONTROLLER_CLASS' and 'CONTROLLER_PACKAGE'. The origins of the values used in the Figure <ns0:ref type='figure' target='#fig_42'>5</ns0:ref> example can be seen in Figure <ns0:ref type='figure' target='#fig_32'>17</ns0:ref>. The value that should be used for the 'CONTROLLER_CLASS' constant is the name of the main class of the control system program, which in the example is simply 'Controller'. Additionally, the value for 'CONTROLLER_PACKAGE' is the name of the package containing the declared main class, in this case 'devicecontroller'.</ns0:p><ns0:p>When implementing a control program for smart devices to be used with an EDK middleware system, Figure <ns0:ref type='figure' target='#fig_32'>17</ns0:ref> shows the minimum classes required for integration. More specifically, the 'getState' and 'setState' methods are both essential and should be used to directly return or update the current state of the associated hardware, respectively. If additional code is required to create a link with the associated hardware, such as using a third-party software package (e.g., RXTX or a manufacturer API), then all this code should be placed into a constructor within the main class, as shown by the 'Controller' constructor in Figure <ns0:ref type='figure' target='#fig_32'>17</ns0:ref>. 1 If necessary, the constructor code should establish a connection with the smart device hardware and then maintain it as a global variable that can be accessed directly by the 'getState' and 'setState' methods. Alternatively, the constructor could be used to start an isolated thread, which uses locally declared variables, also accessible by the 'getState' and 'setState' methods, to handle state action requests. It is essential that no code necessary to directly establish a connection with hardware is included in either the 'getState' or 'setState' method, as doing so could lead to overall instability in the EDK middleware system.</ns0:p></ns0:div>
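<ns0:div><ns0:p>A minimal Java skeleton of the controller layout described above, standing in for Figure 17. The class and package names come from the text; the method signatures and the stub hardware link are assumptions.</ns0:p><ns0:code>
package devicecontroller;

public class Controller {

    private String power = "0"; // stands in for a live hardware link

    public Controller() {
        // Establish the hardware connection here (e.g., via RXTX or a
        // manufacturer API) and keep it available to getState/setState,
        // or start an isolated worker thread as described in the text.
    }

    public String getState() {
        return "Power:" + power;   // EDK-formatted state String
    }

    public String setState(String newValue) {
        power = newValue;          // push the new value to the hardware here
        return getState();         // recommended: echo the resulting state
    }
}
</ns0:code></ns0:div>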
<ns0:div><ns0:head n='3.1.13'>Controllers for Multi-Variable Devices</ns0:head><ns0:p>[Insert Figure <ns0:ref type='figure' target='#fig_49'>18</ns0:ref> Here] Figure <ns0:ref type='figure' target='#fig_49'>18</ns0:ref> highlights how the controller code in Figure <ns0:ref type='figure' target='#fig_32'>17</ns0:ref> can be extended to handle smart devices with multiple state variables. The code presented in Figure <ns0:ref type='figure' target='#fig_32'>17</ns0:ref> could function with a multi-variable smart device, but it is often desirable to separate certain state variables to better structure the program code, or for efficiency. Figure <ns0:ref type='figure' target='#fig_49'>18</ns0:ref> shows a template for the controller used with the 'DimmableLight' device 'Dimmer1' example mentioned throughout this guide. Notice how the 'getState' and 'setState' methods have been retained (for handling the 'Power' state variable), although the 'Brightness' state variable has been separated and given its own handling methods, namely 'getBrightness' and 'setBrightness'. To add additional 'get' and 'set'</ns0:p><ns0:p>1 Note: Methods used in device controllers must return state values as a single String using the expected EDK formats, as discussed in Section 3.1.10.</ns0:p><ns0:p>Note: It is recommended that the 'setState' method returns the same value as an action request to 'getState' once completing its operation.</ns0:p></ns0:div>
<ns0:div><ns0:p>methods, it is necessary to ensure that their suffix is named the same as the state variable they are expected to handle (e.g., 'getPower', 'getBrightness', 'setPower', 'setBrightness', etc.). 1 When accessing a loaded device controller, an EDK implementation will first search through the methods of the declared controller class (i.e., 'CONTROLLER_CLASS') to see if it contains a bespoke match for the current state variable it needs to process. If no appropriate method matching the state variable can be found, the system will then automatically default to either the 'getState' or 'setState' method, depending upon which action is being performed.</ns0:p><ns0:p>Additionally, in Figure <ns0:ref type='figure' target='#fig_49'>18</ns0:ref>, the package name has changed, reflecting that it is a different device controller program from the Figure <ns0:ref type='figure' target='#fig_32'>17</ns0:ref> example. To be loaded correctly into the EDK and used with a master device, the Figure <ns0:ref type='figure' target='#fig_49'>18</ns0:ref> example would need to specify 'devicecontrollermultiple' as the value for 'CONTROLLER_PACKAGE' during the first implementation step.</ns0:p><ns0:p>1 Note: Any 'set' methods should also only expect to receive a single variable containing the new state value to be processed, as shown in Figures <ns0:ref type='figure' target='#fig_67'>17 and 18</ns0:ref>. No other variables must be added for the methods to be compatible. As with 'setState', it is recommended that any 'set' method returns the same value as an action request to 'getState' once completing its operation.</ns0:p></ns0:div>
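<ns0:div><ns0:p>A sketch of the multi-variable controller template described above, standing in for Figure 18. The variable-specific method suffixes follow the naming rule in the text; the signatures and stub state remain assumptions.</ns0:p><ns0:code>
package devicecontrollermultiple;

public class Controller {

    private String power = "0";        // stands in for the real hardware link
    private String brightness = "100";

    public Controller() { /* establish the hardware connection here */ }

    // Defaults, used when no variable-specific method matches:
    public String getState() { return "Power:" + power; }
    public String setState(String newValue) { power = newValue; return getState(); }

    // Variable-specific handlers: the suffix must match the state variable name.
    public String getBrightness() { return "Brightness:" + brightness; }
    public String setBrightness(String newValue) { brightness = newValue; return getBrightness(); }
}
</ns0:code></ns0:div>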
<ns0:div><ns0:head n='4.'>Evaluation</ns0:head></ns0:div>
<ns0:div><ns0:head n='4.1'>Evaluation Strategy</ns0:head><ns0:p>One of the criteria used to measure the success of the EDK is whether it processes action requests for smart devices as effectively as existing middleware solutions. Since UPnP and Web Services largely inspired the new mechanism, examples of these architectures are used as benchmarks. Additionally, testing the EDK with both sensors and actuators is necessary to acquire accurate results for both sending and receiving state variable data. Therefore, the middleware was integrated into the control systems of two different intelligent environments located at the University of Essex, namely the iClassroom and iSpace test-beds.</ns0:p><ns0:p>The intelligent environments chosen for the evaluation were selected as benchmarks to compare the processing time taken by the different components of the EDK (i.e., API control and Web Services) against that needed by the equivalent benchmark systems. If the EDK proved capable of performing the same tasks as an existing middleware system in approximately the same time frame, it would be considered a viable alternative for use in an intelligent environment control system. Incidentally, it was anticipated that the EDK would actually require a significantly lower processing time than UPnP or Web Services, mostly due to its greater emphasis on locally processed variables.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Evaluation Results</ns0:head><ns0:p>To evaluate the EDK middleware system, each mechanism was tested separately, with no other programs running at the time. For each middleware system, five hundred 'get' or 'set' requests were made to the same sensor or actuator, respectively. This workload was split across five sessions of one hundred requests each. To keep the test fair, each session was performed using the same computer, which both ran the timer program measuring the processing intervals and made the state requests to the relevant middleware implementation.</ns0:p></ns0:div>
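<ns0:div><ns0:p>For illustration, a session of the kind described could be timed with a loop such as the following Java sketch. This is not the authors' actual timer program; the 'sensor.getState()' call stands in for whichever middleware request is under test.</ns0:p><ns0:code>
long start = System.nanoTime();
for (int i = 0; i < 100; i++) {   // one session of one hundred requests
    sensor.getState();            // the action request being measured
}
long elapsed = System.nanoTime() - start;
System.out.println("Mean time per request: " + (elapsed / 100) + " ns");
</ns0:code></ns0:div>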
<ns0:div><ns0:head n='4.2.1'>System Validation: Sensors</ns0:head><ns0:p>The first experiment in the project evaluation focused specifically on comparing the middleware used to access sensors; in other words, devices whose functionality consists only of sending state data. The iClassroom's Jetty-based Web Services implementation automatically requested a current reading from the relevant sensor hardware whenever it was called.</ns0:p><ns0:p>However, the iSpace UPnP mechanism functioned in a very similar style to the EDK middleware design. Specifically, whenever called, the 'getState' command would access the control system for a specific device and return the last recorded value obtained from the actual sensor itself. A 'getState' method call never actually resulted in the hardware being accessed directly in order to take a reading. Updates to the recorded light level value were handled by an independent thread located within the program code for the sensor, which automatically took a new reading from the hardware intermittently. In the case of the iSpace UPnP system, a new reading was recorded approximately once every ten seconds. For the EDK, this interval was reduced to approximately three seconds, in order to reduce the margin of error between the recorded and actual sensor values.</ns0:p><ns0:p>[Insert Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> Here] The experiment involved measuring the time required for each middleware system to return a lux value for a specific light sensor Phidget. As can be seen from the results displayed in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>, on average for each session run, both device interface methods offered by the EDK middleware required significantly less time to process individual action requests than either the Jetty-based Web Services or Youpi UPnP implementations. In the case of the Direct API interface method, the processing time was reduced to nanoseconds, as the sensor value returned in response to each action request was taken from a locally stored variable within the clone of the real sensor generated by the EDK. As mentioned above, the Youpi UPnP middleware used a similar method, but the recorded values for the light sensor were stored on a remote server, hence additional processing time was required for the system to access and retrieve the data values. Unlike the API interface method, action requests made using the EDK Web Services were directed at the real light sensor, yet they still required a fraction of the processing time of the Jetty-based system, consistently throughout the evaluation.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2.2'>System Validation: Actuators</ns0:head><ns0:p>The second experiment performed for the evaluation focused on actuators. More specifically, this involved devices containing one or more variables whose state could be modified via the middleware accessing an attached control system. When setting the state of a variable, the success of the corresponding hardware update was not always guaranteed. This is because many devices do not provide any feedback to the controller code concerning whether a transmitted command has been received or acted upon. Both the Jetty-based Web Services and Youpi UPnP middleware sent acknowledgments back to the control programs where action requests originated, but always assumed a positive outcome. In contrast, the EDK did not send any acknowledgments in direct response to individual state change requests. The success or failure of an update could instead be determined by observing the next set of variable settings transmitted by a real device to its clones.</ns0:p><ns0:p>[Insert Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref> Here] This experiment involved each middleware system accessing a single DMX-controlled dimmable spotlight and alternating its brightness level between fifty and one hundred percent. Unlike for the sensor 'getState' action requests, when setting a device's state, the EDK's Direct API interface method targeted the original 'master' representation rather than a clone.</ns0:p><ns0:p>Effectively, a state change action request was forwarded to the original device using the inbuilt multicast communication system. As the system did not require any acknowledgment, the process could end at that point, unlike the UPnP and Web Services mechanisms, which required some kind of response, even if based upon an assumption.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3'>Evaluation Summary</ns0:head><ns0:p>Concerning the processing times for action requests, on average, in every session of the evaluation, both interface mechanisms of the EDK were found to be significantly superior to the benchmark Jetty-based Web Services and Youpi UPnP middleware systems. This result was not entirely unexpected as the EDK utilised locally stored variables much more than either of the benchmark systems. Furthermore, in cases where remote access is necessary, (i.e. when setting the state of an actuator), the EDK does not require any confirmation of a device receiving an action state request to be returned, further reducing the processing times.</ns0:p><ns0:p>For experiment two, the additional 'power' state variable of the DMX spotlight was manually set to an 'on' state prior to each experiment session to observe the brightness changes but was otherwise unused. However, it should be noted that for each action request sent to the UPnP middleware system, it was necessary for its control point to first perform a search of all the DMX PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52107:1:4:NEW 3 Apr 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science light state variables in order to isolate the one controlling the brightness before the new state could be set. As more than one state variable existed within the device (i.e., power and brightness), this may have caused the action request processing time to be extended, depending upon which was discovered first. This additional step was not necessary for the EDK or Jettybased Web Services, where the individual state variables were already declared within the device controller code. This may explain why the UPnP mechanism required more time than any other mechanism in each session when handling an actuator, despite being faster than Jetty-based Web Services in three out of five sessions during the earlier sensor experiment.</ns0:p><ns0:p>[Insert Figure <ns0:ref type='figure' target='#fig_51'>19</ns0:ref> Here] [Insert Figure <ns0:ref type='figure' target='#fig_34'>20</ns0:ref> Here] Finally, another observation made during the evaluation was that for each batch of one hundred action requests performed, while the average processing times generated by Jetty-based Web Services and Youpi UPnP fluctuated noticeably between sessions, the times for both EDK methods remained largely static. This effect is illustrated by the graphical representations of the results from the evaluation presented in Figures <ns0:ref type='figure' target='#fig_71'>19 and 20</ns0:ref>. These figures also clearly highlight how the EDK was capable of outperforming the two benchmark middleware mechanisms consistently throughout the entire evaluation.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.'>Conclusions</ns0:head><ns0:p>This paper discussed recent work performed into creating a new middleware architecture for an intelligent environment control system. It shared insights on how control system developers often require some functionality trade-off when integrating one of the numerous existing architectures. Alternatively, an existing mechanism may be manually augmented with additional The evaluation experiments both indicated overwhelmingly that the two interface methods offered by the EDK architecture implementation could process state action requests to sensors and actuators faster than either benchmark mechanism. Out of the ten sessions performed, either benchmark did not outperform the EDK in a single case. EDK and UPnP used stored variables when returning sensor state data, rather than accessing the actual device directly. These stored variables were updated periodically by an independent thread, which noted the actual sensor readings. However, in the UPnP implementation, a ten-second delay between individual readings was observed along with an update of the stored variable. For the EDK implementation, this delay was reduced to three seconds, yet the mechanism was still capable of processing action requests faster than the UPnP benchmark. The results strongly indicated that the EDK is a viable alternative to UPnP and Web Services in a smart device or intelligent environment control system.</ns0:p><ns0:p>Another important issue for EDK was to resolve the data de-synchronised using multiple device interfacing methods. As the new mechanism was designed from the ground up, steps could be taken to allow synchronisation between the API and Web Services interface methods to be practically guaranteed. For this, both interface methods use the same sets of variables to store descriptions of individual master or clone devices. Also, clone devices were regularly synchronised with their original master; locally stored representations were constantly being updated with the known true state values. This helped defense against de-synchronization of data stored in clone devices.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1'>Future Work</ns0:head><ns0:p>With the initial framework for the EDK established, future researches can expand the existing middleware architecture to integrate additional functionality. Firstly, an internal mechanism can be introduced to add an internal authentication system, which allows instances of smart devices represented by the mechanism to optionally be assigned with a password. Unless Manuscript to be reviewed Computer Science of a smart device is located. Technically this could already be achieved to some extent using the existing mechanism, for example, by manipulating the name variable when creating device instances. However, with the augmentations would allow the handling of scenarios such as the device changing locations to a different intelligent environment. It would also make it easier to represent multiple different environments on the same network. This could potentially allow the creation of agents or control programs that use devices from several different environments collaboratively, as part of their functionality. Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Figure Legends</ns0:note><ns0:note type='other'>Computer Science Figure 1</ns0:note><ns0:note type='other'>Computer Science Figure 2</ns0:note><ns0:note type='other'>Computer Science Figure 3</ns0:note><ns0:note type='other'>Computer Science Figure 4</ns0:note><ns0:note type='other'>Computer Science Figure 5</ns0:note><ns0:note type='other'>Computer Science Figure 6</ns0:note><ns0:note type='other'>Computer Science Figure 7</ns0:note><ns0:note type='other'>Computer Science Figure 8</ns0:note><ns0:note type='other'>Computer Science Figure 9</ns0:note><ns0:note type='other'>Computer Science Figure 10</ns0:note><ns0:note type='other'>Computer Science Figure 11</ns0:note><ns0:note type='other'>Computer Science Figure 12</ns0:note><ns0:note type='other'>Computer Science Figure 13</ns0:note><ns0:note type='other'>Computer Science Figure 14</ns0:note><ns0:note type='other'>Computer Science Figure 15</ns0:note><ns0:note type='other'>Computer Science Figure 16</ns0:note><ns0:note type='other'>Computer Science Figure 17</ns0:note><ns0:note type='other'>Computer Science Figure 18</ns0:note><ns0:note type='other'>Computer Science Figure 19</ns0:note><ns0:note type='other'>Computer Science Figure 20</ns0:note><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52107:1:4:NEW 3 Apr 2021) Manuscript to be reviewed Computer Science allocation issues. This led to the creation of the Environment Development Kit (EDK), which was written using only standard Java SDK, with no extensions or third-party APIs. The EDK architecture design declared smart device present within an Intelligent Environment to be; a) Discoverable on a network. b) Subscribed to using listeners, which monitored for state change events. c) Accessible and/or controllable via a common API. 
d) Accessible and/or controllable via a set of auto-generated Web Services.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure</ns0:head><ns0:label /><ns0:figDesc /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>4</ns0:head><ns0:label /><ns0:figDesc>shows complete EDK system architecture for controlling a single, smart device via a thirdparty interface program. Allowing users to create control points, the API provides a series of expandable classes and PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52107:1:4:NEW 3 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>deployed on the network, the virtual device representation would monitor or change the state of the real hardware-based upon received user instructions. It would also periodically broadcast details of itself and the current state of the hardware to the EDK network, which is received by PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52107:1:4:NEW 3 Apr 2021) Manuscript to be reviewed Computer Science any active control points and used to create or synchronize associated 'clone' devices.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>show the creation PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52107:1:4:NEW 3 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Note: All supported state variable names must be declared when creating a new device instance. Several pre-formed device types are included within the EDK API, PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52107:1:4:NEW 3 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Note: If an application is intended to deploy multiple master device instances, it is recommended to create each instance at this stage before continuing. Note: To prevent processing errors, individual state variables should never include the colon or tilde characters (i.e., : or ~) in their names or possible returnable values. PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52107:1:4:NEW 3 Apr 2021) Manuscript to be reviewed Computer Science 3.1.5 Step Five: Creating a Device Processing Hub [Insert Figure 10 Here]</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>to be added after the creation of the device hub. The variable 'WEBSERVICES_PORT' is the last of the declared constants create back in Step One, PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52107:1:4:NEW 3 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>http://<Computer IP Address>:<Web Server Port>/<Device Name>/<Command> So, based upon the 'BooleanLight' and 'DimmableLight' smart device examples used throughout this tutorial, some acceptable URLs would be; http://127.0.0.1:8000/Light1/about http://127.0.0.1:8000/Light1/get http://127.0.0.1:8000/Light1/set?Power:1 http://127.0.0.1:8000/Dimmer1/get?Brightness http://127.0.0.1:8000/Dimmer1/set?Brightness:75 '127.0.0.1' The IP address of the computer running the master device representation being accessed, (likely to be different from localhost).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>[</ns0:head><ns0:label /><ns0:figDesc>Insert Figure 13 Here] PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52107:1:4:NEW 3 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>smart device types included in the EDK. Alternatively, to search for a device type not included within the standard API, programs can use the 'SINGLE_VARIABLE_DEVICE' and PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52107:1:4:NEW 3 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52107:1:4:NEW 3 Apr 2021) Manuscript to be reviewed Computer Science selection of the possible options is listed in Figure 15. The determination of best method largely depends upon what information is required about the state of a device and how it is subsequently used.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>1</ns0:head><ns0:label /><ns0:figDesc>Note: To prevent processing errors individual state variables should never include the colon or tilde characters (i.e.: or ~) in their names or possible returnable values. PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52107:1:4:NEW 3 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>, 'Actuator.DIMMABLE_LIGHT' could also have been used instead of the String value 'DimmableLight.' PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52107:1:4:NEW 3 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52107:1:4:NEW 3 Apr 2021) Manuscript to be reviewed Computer Science advantage of the established middleware in their existing control systems. During the evaluation, EDK and benchmark middleware systems were tested by controlling a DMX-based dimmable spotlight and a Phidgets light sensor. Primarily, it assesses the average processing speed required</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head /><ns0:label /><ns0:figDesc>information about their current state back to a requesting program. The Jetty-based Web Services system in the iClassroom linked directly into individual device control programs, which PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52107:1:4:NEW 3 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>Table 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>lists the average processing times required by each batch of one hundred action requests made during the experiment. As with the sensor experimentation, both EDK interface mechanisms were found on average to require significantly less processing times than the Jettybased Web Services and Youpi UPnP systems. It is worth noting that the difference between the EDK and Jetty-based Web Services was not as profound as for the sensors. This was due to the PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52107:1:4:NEW 3 Apr 2021) Manuscript to be reviewed Computer Science controller code for the DMX lights adding a delay of approximately one hundred milliseconds to the processing of each action request via a compulsory sleep command. As the EDK Web Services were directed at the representation of the actual light rather than a local clone, the delay was unavoidable.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52107:1:4:NEW 3 Apr 2021) Manuscript to be reviewed Computer Science code to overcome any missing functionality, which could result in detrimental synchronization issues within the resulting control system. This project aimed to create a new middleware, capable combining the positive features of two well-known architectures (namely UPnP and Web Services) into a single mechanism, to overcome functionality and synchronisation issues. Most importantly, it envisaged that the new mechanism would be required to process state action requests as quickly as the two benchmark middleware architectures. Complex, intelligent environment developers must implement their additional bespoke layer of middleware to link devices and services provided by different manufacturers. iClassroom and iSpace control system architectures in two full-scale intelligent environments are also presented. It also discusses the Environment Development Kit (EDK) as the new alternative middleware as an outcome of the research performed by this project. It assesses how viable the new middleware design was as an alternative to UPnP and Web Services. This strategy involved two separate experiments testing sensors and actuators independently due to their different properties. It showed that a comparative analysis of the average action request processing times required by the EDK, plus implementations of the benchmark UPnP and Web Services systems.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_18'><ns0:head /><ns0:label /><ns0:figDesc>the correct password was supplied during communication, neither control points nor other interfaces could access the device, nor a clone of the original be created as a result of a search request or update processes. Therefore, future study should focus on EDK development, where its variables and methods allow control points and other interfaces to identify the specific environment where an instance PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52107:1:4:NEW 3 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_19'><ns0:head>Figure 1 :Figure 2 :</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 1: An example of a two-tiered middleware architecture for controlling a smart device</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_20'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3: The iSpace intelligent environment and its middleware architecture</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_21'><ns0:head>Figure 4 :Figure 5 :</ns0:head><ns0:label>45</ns0:label><ns0:figDesc>Figure 4: The EDK middleware architecture</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_22'><ns0:head>Figure 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6: Creating a New Single Variable Smart Device</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_23'><ns0:head>Figure 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7: Creating a New Multi-Variable Smart Device</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_24'><ns0:head>Figure 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8: Creating the Communications System</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_25'><ns0:head>Figure 9 :</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9: Loading device control systems</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_26'><ns0:head>Figure 10 :</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10: Creating a Hub and Adding Master Devices</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_27'><ns0:head>Figure 11 :</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11: Enabling the EDK Web Services</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_28'><ns0:head>Figure 12 :</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12: Code for creating an EDK Control Point instance</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_29'><ns0:head>Figure 13 :</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Figure 13: Searching for Known Master Devices on an EDK Network</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_30'><ns0:head>Figure 14 :</ns0:head><ns0:label>14</ns0:label><ns0:figDesc>Figure 14: Three Methods for Finding Specific Devices</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_31'><ns0:head>Figure 15 :Figure 16 :</ns0:head><ns0:label>1516</ns0:label><ns0:figDesc>Figure 15: Methods for Reading the Values of Device State Variables</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_32'><ns0:head>Figure 17 :</ns0:head><ns0:label>17</ns0:label><ns0:figDesc>Figure 17: A Controller Template for a Single Variable Smart Device</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_33'><ns0:head>Figure 18 :Figure 19 :</ns0:head><ns0:label>1819</ns0:label><ns0:figDesc>Figure 18: A Controller Template for a Dimmable Light, (multi-variable smart device)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_34'><ns0:head>Figure 20 :</ns0:head><ns0:label>20</ns0:label><ns0:figDesc>Figure 20: A graph showing average processing times for setting a light (experiment two)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_35'><ns0:head>Figure1:Figure1:</ns0:head><ns0:label /><ns0:figDesc>Figure1: An example of a two-tiered middleware architecture for controlling a smart device</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_36'><ns0:head>Figure 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2: The iClassroom intelligent environment and its middleware architecture</ns0:figDesc><ns0:graphic coords='49,42.52,204.37,525.00,525.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_37'><ns0:head>Figure 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2: The iClassroom intelligent environment and its middleware architecture</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_38'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3: The iSpace intelligent environment and its middleware architecture</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_39'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3: The iSpace intelligent environment and its middleware architecture</ns0:figDesc><ns0:graphic coords='50,42.52,204.37,525.00,525.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_40'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: The EDK middleware architecture</ns0:figDesc><ns0:graphic coords='51,42.52,204.37,525.00,525.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_41'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: The EDK middleware architecture</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_42'><ns0:head>Figure 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5: An Example of Communication and Controller Constants</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_43'><ns0:head>Figure 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5: An Example of Communication and Controller Constants</ns0:figDesc><ns0:graphic coords='52,42.52,204.37,525.00,525.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_44'><ns0:head>Figure 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6: Creating a New Single Variable Smart Device</ns0:figDesc><ns0:graphic coords='53,42.52,204.37,525.00,525.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_45'><ns0:head>Figure 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6: Creating a New Single Variable Smart Device</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_46'><ns0:head>Figure 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7: Creating a New Multi-Variable Smart Device</ns0:figDesc><ns0:graphic coords='54,42.52,204.37,525.00,525.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_47'><ns0:head>Figure 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7: Creating a New Multi-Variable Smart Device</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_48'><ns0:head>Figure 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8: Creating the Communications System</ns0:figDesc><ns0:graphic coords='55,42.52,204.37,525.00,525.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_49'><ns0:head>Figure 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8: Creating the Communications System</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_50'><ns0:head>Figure 9 :</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9: Loading device control systems</ns0:figDesc><ns0:graphic coords='56,42.52,204.37,525.00,525.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_51'><ns0:head>Figure 9 :</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9: Loading device control systems</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_52'><ns0:head>Figure 10 :</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10: Creating a Hub and Adding Master Devices</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_53'><ns0:head>Figure 10 :</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10: Creating a Hub and Adding Master Devices</ns0:figDesc><ns0:graphic coords='57,42.52,204.37,525.00,525.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_54'><ns0:head>Figure 11 :</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11: Enabling the EDK Web Services</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_55'><ns0:head>Figure 11 :</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11: Enabling the EDK Web Services</ns0:figDesc><ns0:graphic coords='58,42.52,204.37,525.00,525.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_56'><ns0:head>Figure 12 :</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12: Code for creating an EDK Control Point instance</ns0:figDesc><ns0:graphic coords='59,42.52,204.37,525.00,525.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_57'><ns0:head>Figure 12 :</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12: Code for creating an EDK Control Point instance</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_58'><ns0:head>Figure 13 :</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Figure 13: Searching for Known Master Devices on an EDK Network</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_59'><ns0:head>Figure 13 :</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Figure 13: Searching for Known Master Devices on an EDK Network</ns0:figDesc><ns0:graphic coords='60,42.52,204.37,525.00,525.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_60'><ns0:head>Figure 14 :</ns0:head><ns0:label>14</ns0:label><ns0:figDesc>Figure 14: Three Methods for Finding Specific Devices</ns0:figDesc><ns0:graphic coords='61,42.52,204.37,525.00,525.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_61'><ns0:head>Figure 14 :</ns0:head><ns0:label>14</ns0:label><ns0:figDesc>Figure 14: Three Methods for Finding Specific Devices</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_62'><ns0:head>Figure 15 :</ns0:head><ns0:label>15</ns0:label><ns0:figDesc>Figure 15: Methods for Reading the Values of Device State Variables</ns0:figDesc><ns0:graphic coords='62,42.52,204.37,525.00,525.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_63'><ns0:head>Figure 15 :</ns0:head><ns0:label>15</ns0:label><ns0:figDesc>Figure 15: Methods for Reading the Values of Device State Variables</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_64'><ns0:head>Figure 16 :</ns0:head><ns0:label>16</ns0:label><ns0:figDesc>Figure 16: An example of sending action state change requests to a master device</ns0:figDesc><ns0:graphic coords='63,42.52,204.37,525.00,525.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_65'><ns0:head>Figure 16 :</ns0:head><ns0:label>16</ns0:label><ns0:figDesc>Figure 16: An example of sending action state change requests to a master device</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_66'><ns0:head>Figure 17 :</ns0:head><ns0:label>17</ns0:label><ns0:figDesc>Figure 17: A Controller Template for a Single Variable Smart Device</ns0:figDesc><ns0:graphic coords='64,42.52,204.37,525.00,525.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_67'><ns0:head>Figure 17 :</ns0:head><ns0:label>17</ns0:label><ns0:figDesc>Figure 17: A Controller Template for a Single Variable Smart Device</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_68'><ns0:head>Figure 18 :Figure 18 :</ns0:head><ns0:label>1818</ns0:label><ns0:figDesc>Figure 18: A Controller Template for a Dimmable Light, (multi-variable smart device)</ns0:figDesc><ns0:graphic coords='65,42.52,204.37,525.00,525.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_69'><ns0:head>Figure 19 :Figure 19 :</ns0:head><ns0:label>1919</ns0:label><ns0:figDesc>Figure 19: A graph showing average processing times for setting a light (experiment one)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_70'><ns0:head>Figure 20 :</ns0:head><ns0:label>20</ns0:label><ns0:figDesc>Figure 20: A graph showing average processing times for setting a light (experiment two)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_71'><ns0:head>Figure 20 :</ns0:head><ns0:label>20</ns0:label><ns0:figDesc>Figure 20: A graph showing average processing times for setting a light (experiment two)</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 (on next page)</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Average middleware processing times for taking a reading from a light sensor</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52107:1:4:NEW 3 Apr 2021)Manuscript to be reviewedComputer Science1</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Average middleware processing times for taking a reading from a light sensor Average Processing Time Required (milliseconds)</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Session 1</ns0:cell><ns0:cell>Session 2</ns0:cell><ns0:cell>Session 3</ns0:cell><ns0:cell>Session 4</ns0:cell><ns0:cell>Session 5</ns0:cell></ns0:row><ns0:row><ns0:cell>Jetty-based Web</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>80.03</ns0:cell><ns0:cell>77.43</ns0:cell><ns0:cell>55.58</ns0:cell><ns0:cell>61.48</ns0:cell><ns0:cell>60.65</ns0:cell></ns0:row><ns0:row><ns0:cell>Services</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Youpi UPnP</ns0:cell><ns0:cell>52.51</ns0:cell><ns0:cell>35.51</ns0:cell><ns0:cell>99</ns0:cell><ns0:cell>22.01</ns0:cell><ns0:cell>106.56</ns0:cell></ns0:row><ns0:row><ns0:cell>Web Services (EDK)</ns0:cell><ns0:cell>1.99</ns0:cell><ns0:cell>1.66</ns0:cell><ns0:cell>1.61</ns0:cell><ns0:cell>1.64</ns0:cell><ns0:cell>1.40</ns0:cell></ns0:row><ns0:row><ns0:cell>Direct API Control</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>0.00542</ns0:cell><ns0:cell>0.00532</ns0:cell><ns0:cell>0.00516</ns0:cell><ns0:cell>0.00517</ns0:cell><ns0:cell>0.00519</ns0:cell></ns0:row><ns0:row><ns0:cell>(EDK)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Average middleware processing times changing the brightness of a dimmable light Average</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Processing Time Required (milliseconds) Session 1 Session 2 Session 3 Session 4 Session 5</ns0:head><ns0:label /><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Jetty-based</ns0:cell><ns0:cell>127.09</ns0:cell><ns0:cell>133.46</ns0:cell><ns0:cell>112.57</ns0:cell><ns0:cell>113.92</ns0:cell><ns0:cell>118.29</ns0:cell></ns0:row><ns0:row><ns0:cell>Web Services</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Youpi UPnP</ns0:cell><ns0:cell>171.36</ns0:cell><ns0:cell>232.33</ns0:cell><ns0:cell>212.22</ns0:cell><ns0:cell>238.08</ns0:cell><ns0:cell>180.49</ns0:cell></ns0:row><ns0:row><ns0:cell>Web Services</ns0:cell><ns0:cell>101.83</ns0:cell><ns0:cell>102.45</ns0:cell><ns0:cell>102.35</ns0:cell><ns0:cell>102.26</ns0:cell><ns0:cell>102.09</ns0:cell></ns0:row><ns0:row><ns0:cell>(EDK)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Direct API</ns0:cell><ns0:cell>0.67</ns0:cell><ns0:cell>0.64</ns0:cell><ns0:cell>0.60</ns0:cell><ns0:cell>0.59</ns0:cell><ns0:cell>0.55</ns0:cell></ns0:row><ns0:row><ns0:cell>(EDK)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>2 PeerJ Comput. Sci. reviewing PDF | (CS-2020:08:52107:1:4:NEW 3 Apr 2021)</ns0:note></ns0:figure>
</ns0:body>
" | "January 20, 2023
Response Sheet
Reviewer 1
Comment 1: The authors mention “the developed mechanism serves as a viable alternative to the commonly used Universal Plug and Play (UPnP) and Web Services middleware systems”, which is quite a statement since WS and UPnP have been standards in industry and research for decades. At this stage of their middleware development, it is hard to make such a claim, so the authors should humble down this sentence.
Comment No.
Page No.
Line No.
Actual
Content
Removed/ Edited/ Replaced Content
1
1
NA
The authors mention “the developed mechanism serves as a viable alternative to the commonly used Universal Plug and Play (UPnP) and Web Services middleware systems”, which is quite a statement since WS and UPnP have been standards in industry and research for decades. At this stage of their middleware development, it is hard to make such a claim, so the authors should humble down this sentence.
Catered (The sentence has been removed from the abstract).
Comment 1: Line 74 “For instance, if a device state variable, updated using one middleware system, was not reflected by all the alternative mechanisms used”. This looks like an incomplete sentence that should be finished.
Comment 2: It is not clear what the advantages of the middleware architecture are compared to the existing literature. An open issues section, where it is mentioned what open issues are being tackled by the proposal, must be added. Likewise, the contributions made by the proposal should be made clear in the manuscript in the first section.
Comment No.
Page No.
Line No.
Actual
Content
Removed/ Edited/ Replaced Content
1
2
38-40
For instance, if a device state variable, updated using one middleware system, was not reflected by all the alternative mechanisms used
For instance, the device state variability; updated using one middleware system, cannot be reflected by all the alternative mechanisms used.
2
3
58-66
NA
The core contribution of this study is an evaluation of both the state-of-the-art and state… fulfils a series of non-functional requirements.
Comment 1: The references provided in literature could be expanded with usage of middleware in other international research projects. A (non-exhaustive) list of references that could be added is: A Review of Middleware for Networked Robots, Middleware Architectures for the Smart Grid: A Survey on the State-of-the-Art, Middleware Technologies for Smart Wireless Sensor Networks towards Internet of Things: A Comparative Review, Taxonomy and Main Open Issues, An optimized, data distribution service-based solution for reliable data exchange among autonomous underwater vehicles, Maritime Data Transfer Protocol (MDTP): A Proposal for a Data Transmission Protocol in Resource-Constrained Underwater Environments Involving Cyber-Physical Systems or IoT Middleware: A Survey on Issues and Enabling Technologies. Of course, these are suggested just as a means to improve the paper, they are not mandatory to be included if the authors think otherwise and provide other similar ones.
Comment No.
Page
No.
Line No.
Actual
Content
Removed/ Edited/ Replaced Content
Remarks
1
8-9
144-169
-
Different middleware platforms were discussed on these criteria for networked robotic… independent maritime vehicles, and the procedures that are needed for information interchange
Catered
Comment 1: The middleware architecture that has been created lacks a specific name, it is only referenced as part of the project (ScaleUp). That makes hard to understand the differentiating value of this architecture compared to the other.
Comment 2: There is very little information about fundamentals of middleware: a) is a Message-Oriented Middleware? A Middleware architecture? Or something else? b) How do the PDUs interchanged look like (if it applies)? c) What software components are included in the MW? d) What software layers are used within the middleware itself (Hardware Abstraction Layer, Service Application Points, etc.)? All these questions should be answered thoroughly.
Comment No.
Page No.
Line No.
Actual
Content
Removed/ Edited/ Replaced Content
Remarks
1
NA
NA
NA
Not catered (The name of the architecture is already presented as “EDK API”.
2
10-26
226-564
NA
NA
Not catered (All the required information is already mentioned in Section 3).
Reviewer 2
Comment 1: Inline 41 authors present a “new form”, in line 45, authors present a “new mechanism” and in line 49, authors present a “system”. It is necessary to concrete the product, or solution, presented in the paper. Title names a middleware, it is necessary to concrete the contribution of the paper.
Comment 2: Lines 47 (end) and 48: '...types of the device...” change for “...types of devices
Comment 4: Line 49: authors relate a “clone devices” that implies that the system presented uses clonation, this aspect must be presented previously.
Comment No.
Page No.
Line No.
Actual
Content
Removed/ Edited/ Replaced Content
Remarks
1
1
7-10
NA
The development of the internet of things (IoT) expands to an ultra-large-scale, which provides numerous services across different domains and environments. The use of middleware eases application development by providing the necessary functional capability. This paper presents a new form of middleware for controlling smart devices installed in an intelligent environment.
Catered
2
1
12-14
NA
It acts as an all-encompassing top layer of middleware in an intelligent environment control system capable of handling numerous different types of devices simultaneously.
Catered
3
1
14
NA
This protected de-synchronization of data stored in clone devices.
Catered
Comment 1: Line 57: I suggest to change “omnipresent” for “necessary” or “ubiquitous”, instead omnipresent is correct, it is more frequently to use the alternatives mentioned above.
Comment No.
Page No.
Line No.
Actual
Content
Removed/ Edited/ Replaced Content
Remarks
1
2
22-23
NA
Architectures for intelligent environments typically require some form of necessary middleware layer
Catered
Comment 1: Line 109: instead of “bespoke” is correct, I suggest to use the more frequently expression “custom-made” or “proprietary” when the concept is used in commercial trades (Apple or Sony mentions).
Comment 2: Lines 124 and 125 present the interesting result that can be used in the paper (requirements), I suggest authors select from the 12 requirements which ones accomplish the system presented (for the sake of providing scientific validity to the work presented).
Comment 3: Line 183 (figure 1): If the objective of the figure is to demonstrate where the wrapper should be inserted, a figure is not necessary, it is enough to explain in the text the sequence between the device and the programming interface. However, I suggest extending the figure with the context of layers in which a middleware acts.
Comment 4: Line 184: the reference Doolet et al., 2011 probably is the reference Dooley et al, 2011.
Comment 5: Line 190 (figure 2): I don’t know the significance of the home environment pictures related to the blocks diagram used in the figure.
Comment 6: Figures 1, 2, and 3 represents the role of the main blocks, or functionalities, offered between devices and interface programs. It could be interesting to unite them in a comparative figure where device and programming interfaces were common and the different ways of implementing middleware could be compared.
Comment No.
Page
No.
Line No.
Actual
Content
Removed/ Edited/ Replaced Content
Remarks
1
4
79
This was due to a belief that by adopting common middleware architecture, customers would use smart devices created by rival companies, within their be-spoke personal home networks.
This was due to a belief that by adopting common middleware architecture, customers would use smart devices created by rival companies, within their proprietary personal home networks.
Catered
2
4
93-96
NA
A total of 12 requirements (integrity and confidentiality, access control, consent, policy-based security, authentication, federated identity, and device identity) were used from the first stage for validating the competencies of each system.
Catered
3
NA
NA
NA
NA
Catered (See Figure 1)
4
NA
NA
NA
NA
Not catered (its already present in correct form)
5
NA
NA
NA
NA
Catered (Figure 2 is revised) See Figure 2
6
NA
NA
NA
NA
Not catered (Figures 1 and 2 are revised based on the recommendations so there is no need for comparison)
Comment 1: Figure 4: a code snippet can be useful in an implementation section, but a code declaration does not seem convenient in this section. In the text, authors declare that figure 4 represents an architecture, but no code.
Comment 2: Line 258: an example of a middleware based on publish-subscribe can be found in this reference (I suggest to include in the paper):
“Simó-Ten, J. E., Munera, E., Poza-Lujan, J. L., Posadas-Yagüe, J. L., & Blanes, F. (2017, June). CKMultipeer: connecting devices without caring about the network. In International Symposium on Distributed Computing and Artificial Intelligence (pp. 189-196). Springer, Cham.”
Comment No.
Page
No.
Line No.
Actual
Content
Removed/ Edited/ Replaced Content
Remarks
1
NA
NA
NA
NA
Catered (Figure 4 is revised; See it)
2
12
252
NA
where the environment is divided into regions, each with their assigned group containing all the players, objects and other updatable information present in that area
Catered
" | Here is a paper. Please give your review comments after reading it. |
107 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Performance problems in applications should ideally be detected as soon as they occur, i.e., directly when the causing code modification is added to the code repository. To this end, complex and cost-intensive application benchmarks or lightweight but less relevant microbenchmarks can be added to existing build pipelines to ensure performance goals.</ns0:p><ns0:p>In this paper, we show how the practical relevance of microbenchmark suites can be improved and verified based on the application flow during an application benchmark run. We propose an approach to determine the overlap of common function calls between application and microbenchmarks, describe a heuristic which identifies redundant microbenchmarks, and present a recommendation algorithm which reveals relevant functions that are not covered by microbenchmarks yet. A microbenchmark suite optimized in this way can easily test all functions determined to be relevant by application benchmarks after every code change, thus significantly reducing the risk of undetected performance problems.</ns0:p><ns0:p>Our evaluation using two time series databases shows that, depending on the specific application scenario, application benchmarks cover different functions of the system under test. Their respective microbenchmark suites cover between 35.62% and 66.29% of the functions called during the application benchmark, offering substantial room for improvement. Through two use cases (removing redundancies in the microbenchmark suite and recommendation of yet uncovered functions), we decrease the total number of microbenchmarks and increase the practical relevance of both suites. Removing redundancies can significantly reduce the number of microbenchmarks (and thus the execution time as well) to ~10% and ~23% of the original microbenchmark suites, whereas recommendation identifies up to 26 and 14 yet uncovered functions to benchmark to improve the practical relevance.</ns0:p><ns0:p>By utilizing the differences and synergies of application benchmarks and microbenchmarks, our approach potentially enables effective software performance assurance with performance tests of multiple granularities.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>While application benchmarks are the gold standard and very powerful as they benchmark complete systems, they are hardly suitable for regular use in continuous integration pipelines due to their long execution time and high costs <ns0:ref type='bibr' target='#b11'>(Bermbach et al., 2017b;</ns0:ref><ns0:ref type='bibr' target='#b8'>Bermbach and Tai, 2014)</ns0:ref>. Alternatively, less complex and therefore less costly microbenchmarks could be used, which are also easier to integrate into build pipelines due to their simpler setup <ns0:ref type='bibr' target='#b49'>(Laaber and Leitner, 2018)</ns0:ref>. However, a simple substitution can be dangerous: on one hand, it is not clear to what extent a microbenchmark suite covers the functions used in production; on the other hand, often only a complex application benchmark is suitable to evaluate complex aspects of a system. To link both benchmark types, we introduce the term practical relevance which refers to the extent to which a microbenchmark suite targets code segments that are also invoked by application benchmarks.</ns0:p><ns0:p>In this paper, we aim to determine, quantify, and improve the practical relevance of a microbenchmark suite by using application benchmarks as a baseline. In real setups, developers often do not have access to a (representative) live system, e.g., generally-available software such as database systems are used by many companies which install and deploy their own instances and, consequently, the software's developers usually do not have access to the custom installations and their production traces and logs. In addition, software is used differently by each customer which results in different load profiles as well as varying configurations. Thus, it is often reasonable to use one or more application benchmarks as the next accurate proxy to simulate and evaluate a representative artificial production system. The execution of these benchmarks for each code change is very expensive and time-consuming, but a lightweight microbenchmark suite that has proven to be practically relevant could replace them to some degree.</ns0:p><ns0:p>To this end, we analyze the called functions of a reference run, which can be (an excerpt from) a production system or an application benchmark, and compare them with the functions invoked by microbenchmarks to determine and quantify a microbenchmark suite's practical relevance. If every called function of the reference run is also invoked by at least one microbenchmark, we consider the respective microbenchmark suite as 100% practically relevant as the suite covers all functions used in the baseline execution.</ns0:p>
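<ns0:p>Stated as a formula (notation introduced here purely for illustration): let F_app denote the set of functions called during the reference run and F_micro the union of all functions called by the microbenchmark suite. The practical relevance of the suite can then be quantified as</ns0:p>

\[ \mathit{relevance} = \frac{\lvert F_{app} \cap F_{micro} \rvert}{\lvert F_{app} \rvert} \]

<ns0:p>which equals 1 (i.e., 100%) exactly if F_app is a subset of F_micro, i.e., if every function of the reference run is invoked by at least one microbenchmark.</ns0:p>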
<ns0:p>Based on this information, we devise two optimization strategies to improve the practical relevance of microbenchmark suites according to a reference run: (i) a heuristic for removing redundancies in an existing microbenchmark suite and (ii) a recommendation algorithm which identifies uncovered but relevant functions.</ns0:p><ns0:p>In this regard, we formulate the following research questions:</ns0:p><ns0:p>RQ 1 How to determine and quantify the practical relevance of microbenchmark suites?</ns0:p><ns0:p>Software source code in an object-oriented system is organized in classes and functions. At runtime, executed functions call other classes and functions, which leads to a program flow that can be depicted as a call graph. This graph represents which functions call which other functions and adds additional meta information such as the duration of the executed function. If these graphs are available for a reference run and the respective microbenchmark suite, it is possible to compare the flow of both graphs and quantify to which degree the current microbenchmark suite reflects the use in the reference run, or rather the real usage in production. Our evaluation with two well-known time series databases shows that their microbenchmark suites cover about 40% of the functions called during application benchmarks. The majority of the functionality used by an application benchmark, our proxy for a production application, is therefore uncovered by the microbenchmark suites of our study objects.</ns0:p><ns0:p>RQ 2 How to reduce the execution runtime of microbenchmark suites without affecting their practical relevance?</ns0:p><ns0:p>If there are many microbenchmarks in a suite, they are likely to have redundancies and some functions will be benchmarked by multiple microbenchmarks. By creating a new subset of the respective microbenchmark suite without these redundancies, it is possible to achieve the same coverage level with fewer microbenchmarks, which significantly reduces the overall runtime of the microbenchmark suite. Applying this optimization as part of our evaluation shows that this can reduce the number of microbenchmarks by 77% to 90%, depending on the application and benchmark scenario.</ns0:p>
<ns0:p>RQ 3 How to increase the practical relevance of microbenchmark suites?</ns0:p><ns0:p>If the microbenchmark suite's coverage is not sufficient, the uncovered graph of the application benchmark can be used to locate functions which are highly relevant for practical usage. We present a recommendation algorithm which provides a fast and automated way to identify these functions that should be covered by microbenchmarks. Our evaluation shows that an increase in coverage from the original 40% to up to 90% with only three additional microbenchmarks is theoretically possible. An optimized microbenchmark suite could, e.g., serve as an initial and fast performance smoke test in continuous integration or deployment (CI/CD) pipelines or for developers who need quick performance feedback on their recent changes.</ns0:p><ns0:p>After applying both optimizations, it is possible to cover a maximum portion of an application benchmark with a minimum suite of microbenchmarks, which has several advantages. First of all, this helps to identify important functions that are relevant in practice and ensures that their performance is regularly evaluated via microbenchmarks. Instead of a suite that checks rarely used functions, code sections that are relevant for practical use are evaluated frequently. Second, microbenchmarks evaluating functions that are already implicitly covered by other microbenchmarks are selectively removed, achieving the same practical relevance with as few microbenchmarks as possible while reducing the runtime of the total suite. Furthermore, the effort for the creation of microbenchmarks is minimized because the microbenchmarks of the proposed functions will cover a large part of the application benchmark call graph and fewer microbenchmarks are necessary. Developers will still have to design and implement performance tests, but the identification of highly relevant functions for actual operation is facilitated and functions that implicitly benchmark many further relevant functions are pointed out, thus covering a broad call graph. Ultimately, the optimized microbenchmark suite can be used in CI/CD pipelines more effectively: it is possible to establish a CI/CD pipeline which, e.g., executes the comparatively simple and short but representative microbenchmark suite after each change in the code. The complex and cost-intensive application benchmark can then be executed more sparsely, e.g., for each major release.</ns0:p><ns0:p>In this sense, the application benchmark remains the gold standard revealing all performance problems, while the optimized microbenchmark suite is an easy-to-use and fast heuristic which offers quick insights into performance, yet with lower accuracy.</ns0:p><ns0:p>It is our hope that this study contributes to solving the problem of performance testing as part of CI/CD pipelines and enables a more frequent validation of performance metrics to detect regressions sooner. Our approach can give targeted advice to developers to improve the effectiveness and relevance of their microbenchmark suite. Throughout the rest of the paper, we will always use an application benchmark as the reference run, but our approach can, of course, also use other sources as a baseline.</ns0:p></ns0:div>
<ns0:div><ns0:head>Contributions:</ns0:head><ns0:p>• An approach to determine and quantify the practical relevance of a microbenchmark suite.</ns0:p><ns0:p>• An adaptation of the Greedy-based algorithm proposed by <ns0:ref type='bibr' target='#b19'>Chen and Lau (1998)</ns0:ref> to remove redundancies in a microbenchmark suite.</ns0:p><ns0:p>• A recommendation strategy inspired by <ns0:ref type='bibr' target='#b67'>Rothermel et al. (1999)</ns0:ref> for new microbenchmarks which aims to cover large parts of the application benchmark's function call graph.</ns0:p><ns0:p>• An empirical evaluation which analyzes and applies the two optimizations to the microbenchmark suites of two large open-source time series databases.</ns0:p><ns0:p>Paper Structure: After summarizing relevant background information in Section 2, we present our approach to determine, quantify, and improve microbenchmark suites in Section 3. Next, we evaluate our approach by applying the proposed algorithms to two open-source time series databases in Section 4 and discuss its strengths and limitations in Section 5. Finally, we outline related work in Section 6 and conclude in Section 7.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>BACKGROUND</ns0:head><ns0:p>This section introduces relevant background information, in particular on benchmarking and time series databases.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1'>Benchmarking</ns0:head><ns0:p>Benchmarking aims to determine quality of service (QoS) by stressing a system under test (SUT) in a standardized way while observing its reactions, usually in a test or staging environment <ns0:ref type='bibr'>(Bermbach et al., 2017b,a)</ns0:ref>. To provide relevant results, benchmarks must meet certain requirements such as fairness, portability, or repeatability <ns0:ref type='bibr' target='#b45'>(Huppler, 2009;</ns0:ref><ns0:ref type='bibr'>Bermbach et al., 2017b,a;</ns0:ref><ns0:ref type='bibr' target='#b30'>Folkerts et al., 2013)</ns0:ref>. In this paper, we deal with two different kinds of benchmarks: application benchmarks, which evaluate a complete application system, and microbenchmarks, which evaluate individual functions or methods. Functional testing as well as monitoring are not a focus of this work, but are of course closely related <ns0:ref type='bibr' target='#b11'>(Bermbach et al., 2017b)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1.1'>Application Benchmark</ns0:head><ns0:p>In a so-called application benchmark, the SUT is first set up and brought into a defined initial state, e.g., using warmup requests or by inserting initial data. Next, an evaluation workload is sent to the SUT and the relevant metrics are collected. On the one hand, this method is very powerful because many relevant aspects and conditions can be simulated in a defined testbed; on the other hand, it is very expensive and time-consuming, not only in the preparation but also in the periodic execution.</ns0:p><ns0:p>The evaluation of an entire system involves several crucial tasks to finally come up with a relevant comparison and conclusion, especially in dynamic cloud environments <ns0:ref type='bibr'>(Bermbach et al., 2017b,a;</ns0:ref><ns0:ref type='bibr' target='#b33'>Grambow et al., 2019b)</ns0:ref>. During the design phase, it is necessary to think in detail about the specific requirements of the benchmark and its objectives. While defining (and generating) the workload, many aspects must be taken into account to ensure that the requirements of the benchmark are not violated and to guarantee a relevant result later on <ns0:ref type='bibr' target='#b45'>(Huppler, 2009;</ns0:ref><ns0:ref type='bibr'>Bermbach et al., 2017b,a;</ns0:ref><ns0:ref type='bibr' target='#b30'>Folkerts et al., 2013)</ns0:ref>. This is especially difficult in dynamic cloud environments, because it is hard to reproduce results due to performance variations inherent in cloud systems, random fluctuations, and other cloud-specific characteristics <ns0:ref type='bibr' target='#b54'>(Lenk et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b26'>Difallah et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b30'>Folkerts et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b65'>Rabl et al., 2010)</ns0:ref>. To set up an SUT, all components have to be defined and initialized first. This can be done with the assistance of automation tools (e.g., <ns0:ref type='bibr' target='#b38'>Hasenburg et al. (2019</ns0:ref><ns0:ref type='bibr' target='#b37'>, 2020a)</ns0:ref>). However, automation tools still have to be configured first, which further complicates the setup of application benchmarks. During the benchmark run, all components have to be monitored to ensure that there is no bottleneck inside the benchmarking system, e.g., to avoid quantifying the resources of the benchmarking client's machine instead of the maximum throughput of the SUT. Finally, the collected data needs to be transformed into relevant insights, usually in a subsequent offline analysis <ns0:ref type='bibr' target='#b11'>(Bermbach et al., 2017b)</ns0:ref>. Together, these factors imply that truly continuous application benchmarking, e.g., applied to every code change, will usually be prohibitively expensive in terms of both time and monetary cost.</ns0:p></ns0:div>
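<ns0:p>For intuition, the measurement core of an application benchmark client can be reduced to a loop like the following sketch (a simplified illustration; sutURL and the request count are hypothetical placeholders, and real clients additionally handle warmup phases, error recording, and latency distributions rather than plain averages):</ns0:p>

package main

import (
	"fmt"
	"net/http"
	"time"
)

// sutURL is a hypothetical endpoint of the deployed system under test.
const sutURL = "http://sut.example.com/query"

func main() {
	const requests = 1000
	var total time.Duration
	ok := 0
	for i := 0; i < requests; i++ {
		start := time.Now()
		resp, err := http.Get(sutURL) // send one workload request
		if err != nil {
			continue // a real client would record errors as a metric, too
		}
		resp.Body.Close()
		total += time.Since(start) // collect the latency metric
		ok++
	}
	if ok > 0 {
		fmt.Printf("avg latency over %d successful requests: %v\n", ok, total/time.Duration(ok))
	}
}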
<ns0:div><ns0:head n='2.1.2'>Microbenchmarks</ns0:head><ns0:p>Instead of benchmarking the entire SUT at once, microbenchmarks focus on benchmarking small code fragments, e.g., single functions. Here, only individual critical or often used functions are benchmarked on a smaller scale (hundreds of invocations) to ensure that there is no performance drop introduced with a code change or to estimate rough function-level metrics, e.g., average execution duration or throughput.</ns0:p><ns0:p>They are usually defined in only a few lines of code; while they are also executed repeatedly, running microbenchmarks takes considerably less time than the execution of an application benchmark. Moreover, they are usually easier to set up and to execute as there is no complex SUT which needs to be initialized first. They are therefore more suitable for frequent use in CI/CD pipelines but also have to cope with variability in cloud environments <ns0:ref type='bibr' target='#b52'>(Leitner and Bezemer, 2017;</ns0:ref><ns0:ref type='bibr' target='#b49'>Laaber and Leitner, 2018;</ns0:ref><ns0:ref type='bibr' target='#b50'>Laaber et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b12'>Bezemer et al., 2019)</ns0:ref>. Finally, they cannot cover all aspects of an application benchmark and are, depending on the concrete use-case, usually considered less relevant individually.</ns0:p></ns0:div>
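For illustration, the following is a minimal sketch of such a microbenchmark using Go's standard testing package, the same mechanism used by the projects we study later; the function under test, sum, is a hypothetical example and not taken from any of the studied systems:

```go
package demo

import "testing"

// sum is a hypothetical function under test.
func sum(values []float64) float64 {
	var total float64
	for _, v := range values {
		total += v
	}
	return total
}

// BenchmarkSum is a microbenchmark: the testing framework calls it
// repeatedly with increasing b.N until the configured measurement time
// is reached, then reports the average time per operation.
func BenchmarkSum(b *testing.B) {
	values := make([]float64, 1024)
	for i := range values {
		values[i] = float64(i)
	}
	b.ResetTimer() // exclude the setup above from the measurement
	for i := 0; i < b.N; i++ {
		sum(values)
	}
}
```

Such benchmarks are executed with `go test -bench`, which also accepts flags such as `-benchtime` to control the measurement duration.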
<ns0:div><ns0:head n='2.2'>Time Series Database Systems</ns0:head><ns0:p>In this paper, we use time series database systems (TSDBs) as study objects. TSDBs are designed and optimized to receive, store, manage, and analyze time series data (Dunning et al., 2014). Time series data usually comprises sequences of timestamped data, often numeric values such as measurement values. As these values tend to arrive in-order, TSDB storage layers are optimized for append-only writes because only a few straggler values arrive late, e.g., due to network delays. Moreover, the stored values are rarely updated as the main purpose of TSDBs is to identify trends or anomalies in incoming data, e.g., for identifying failure situations. Due to this, TSDBs are optimized for fast aggregation queries over time ranges.</ns0:p></ns0:div>
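As a brief illustration of this data model, the following sketch (with invented types and values, not taken from any actual TSDB implementation) shows append-only writes and a typical aggregation over a time range:

```go
package demo

import "time"

// Point is a timestamped measurement, the basic unit of time series data.
type Point struct {
	Timestamp time.Time
	Value     float64
}

// Series is an append-only sequence of points; stored values are rarely updated.
type Series []Point

// Append adds a new measurement at the end of the series.
func (s *Series) Append(t time.Time, v float64) {
	*s = append(*s, Point{Timestamp: t, Value: v})
}

// Avg computes the average value in [from, to), a typical TSDB aggregation.
func (s Series) Avg(from, to time.Time) float64 {
	var sum float64
	var n int
	for _, p := range s {
		if !p.Timestamp.Before(from) && p.Timestamp.Before(to) {
			sum += p.Value
			n++
		}
	}
	if n == 0 {
		return 0
	}
	return sum / float64(n)
}
```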
<ns0:div><ns0:head n='3'>APPROACH</ns0:head><ns0:p>We aim to determine and quantify the practical relevance of microbenchmark suites, i.e., to what extent the functions invoked by application benchmarks are also covered by microbenchmarks. Moreover, we want to improve microbenchmark suites by identifying and removing redundancies as well as recommending important functions which are not covered yet.</ns0:p><ns0:p>Our basic idea is based on the intuition that, regardless of whether software is evaluated by an application benchmark or a microbenchmark, both types evaluate the same source code and algorithms. Since an application benchmark is designed to simulate realistic operations in a production-near environment, it can reasonably serve as a baseline or reference execution for quantifying relevance in the absence of a real production trace. On the other hand, microbenchmarks are written to check the performance of individual functions, and multiple microbenchmarks are bundled as a microbenchmark suite to analyze the performance of a software system. Both benchmark types run against the same source code and generate a program flow (call graph) with functions as nodes and function calls as edges (in the following, we exclusively refer to functions, but our approach can similarly be used for methods and procedures, depending on the SUT's programming language). We propose to analyze these graphs to (i) determine the coverage of both types to quantify the practical relevance of a microbenchmark suite, (ii) remove redundancies by identifying functions (call graph nodes) which are covered by multiple microbenchmarks, and (iii) recommend functions which should be covered by microbenchmarks because of their usage in the application benchmark. In a perfectly benchmarked software project, the ideal situation in terms of our approach would be that all practically relevant functions are covered by exactly one microbenchmark. To check and quantify this for a given project and to improve it subsequently, we propose the approach illustrated in Figure <ns0:ref type='figure'>1</ns0:ref>.</ns0:p><ns0:p>To use our approach, we assume that the software project complies with best practices for both benchmarking domains, e.g., Bermbach et al. (2017b); Damasceno Costa et al. (2019). It is necessary that there is both a suite of microbenchmarks and at least one application benchmark for the respective SUT.</ns0:p><ns0:p>The application benchmark must rely on realistic scenarios to generate a relevant program flow and must run against an instrumented SUT which can create a call graph during the benchmark execution. During that tracing run, actual measurements of the application benchmark do not matter. The same applies to the execution of the microbenchmarks, where it also must be possible to reliably create the call graphs for the duration of the benchmark run. These call graphs can subsequently be analyzed structurally to quantify and improve the microbenchmark suite's relevance. We will discuss this in more detail in Section 3.1.</ns0:p><ns0:p>1 https://www.influxdata.com
2 https://victoriametrics.com
3 https://prometheus.io
4 http://opentsdb.net</ns0:p></ns0:div>
<ns0:div><ns0:p>We propose two concrete methods for optimizing a microbenchmark suite: (1) an algorithm to remove redundancies in the suite by creating a minimal subset of microbenchmarks which structurally covers the application benchmark graph to the same extent (Section 3.2); (2) a recommendation strategy to suggest individual functions which are currently not covered by microbenchmarks but which are relevant for the application benchmark and will cover a large part of its call graph (Section 3.3).</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Determining and Quantifying Relevance</ns0:head><ns0:p>After executing an application benchmark and the microbenchmark suite on an instrumented SUT, we retrieve one (potentially large) call graph from the application benchmark run and many (potentially small) graphs from the microbenchmark runs, one for each microbenchmark. In these graphs, each function represents a node and each edge represents a function call. Furthermore, we differentiate in the graphs between so-called project nodes, which refer directly to functions of the SUT, and non-project nodes, which represent functions from libraries or the operating system. After all graphs have been generated, the next step is to determine the function coverage, i.e., which functions are called by both the application benchmark and at least one microbenchmark.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> shows an example: The application benchmark graph covers all nodes from node 1 to node 19 and has two entry points, node 1 and node 8. These entry points, when invoked, call other functions, which again call other functions (cycles are possible, e.g., in the case of recursion). Nodes in the graph can be either project functions of the SUT or functions of external libraries. There are also two microbenchmarks in this simple example, benchmark 1 and benchmark 2. While benchmark 1 only covers two nodes, benchmark 2 covers four nodes and seems to be more practically relevant (we will discuss this later in more detail).</ns0:p><ns0:p>To determine the function coverage, we iterate through all application benchmark nodes and identify all microbenchmarks which cover each function. As a result, we get a list of coverage sets, one for each microbenchmark, where each entry describes the overlap of nodes (functions) between the application benchmark call graph and the respective microbenchmark graph. Next, we count (i) all project-only functions and (ii) all functions which are called during the application benchmark and in at least one microbenchmark. Finally, we calculate two different coverage metrics: first, the project-only coverage of all executed functions in comparison to the total number of project functions in the application benchmark; second, the overall coverage, including external functions.</ns0:p><ns0:p>For our example application benchmark call graph in Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>: $\text{coverage}_{\text{project-only}} = \frac{5}{10} = 0.5$ and $\text{coverage}_{\text{overall}} = \frac{6}{19} \approx 0.316$. Note that these metrics would not change if there were a third microbenchmark covering a subset of already covered nodes, e.g., node 14 and node 17.</ns0:p></ns0:div>
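A minimal sketch of this computation over node sets follows; the node names are invented, and in our setup the sets would come from the pprof call graphs:

```go
package main

import "fmt"

// NodeSet represents the functions (call graph nodes) observed in one run.
type NodeSet map[string]bool

// union merges the node sets of several microbenchmark call graphs.
func union(sets ...NodeSet) NodeSet {
	out := NodeSet{}
	for _, s := range sets {
		for n := range s {
			out[n] = true
		}
	}
	return out
}

// coverage returns the fraction of reference nodes that also appear in covered.
func coverage(reference, covered NodeSet) float64 {
	if len(reference) == 0 {
		return 0
	}
	hits := 0
	for n := range reference {
		if covered[n] {
			hits++
		}
	}
	return float64(hits) / float64(len(reference))
}

func main() {
	// Toy data: the application benchmark touches four functions,
	// two microbenchmarks touch two each (names are invented).
	app := NodeSet{"f1": true, "f2": true, "f3": true, "f4": true}
	micro1 := NodeSet{"f1": true, "f2": true}
	micro2 := NodeSet{"f2": true, "f4": true}
	fmt.Printf("coverage = %.2f\n", coverage(app, union(micro1, micro2))) // 0.75
}
```

Restricting the reference set to project nodes yields the project-only coverage; using all nodes yields the overall coverage.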
<ns0:div><ns0:head n='3.2'>Removing Redundancies</ns0:head><ns0:p>Our first proposed optimization removes redundancies in the microbenchmark suite and achieves the same coverage level with fewer microbenchmarks. For example, the imaginary third benchmark mentioned above (covering nodes 14 and 17 in Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>) would be redundant, as all nodes are already covered by other microbenchmarks. To identify a minimal set of microbenchmarks, we adapt the Greedy algorithm proposed by <ns0:ref type='bibr' target='#b19'>Chen and Lau (1998)</ns0:ref> and rank the microbenchmarks based on the number of reachable function nodes that overlap with the application benchmark (instead of all reachable nodes as proposed in <ns0:ref type='bibr' target='#b19'>(Chen and Lau, 1998)</ns0:ref>), as defined in Algorithm 1.</ns0:p><ns0:p>After analyzing the graphs, we get coverage sets of overlaps between the application benchmark and the microbenchmark call graphs (input C). First, we sort them based on the number of covered nodes in descending order, i.e., microbenchmarks which cover many functions of the application benchmark are moved to the top (line 3). Next, we pick the first coverage set as it covers the most functions and add the respective microbenchmark to the minimal set (lines 4 to 8). Afterwards, we have to remove the covered set of the selected microbenchmark from all coverage sets (lines 9 to 11) and sort the coverage set again to pick the next microbenchmark. We repeat this until there are no more microbenchmarks to add (i.e., all microbenchmarks are part of the minimal set and there is no redundancy) or until the picked coverage set would not add any covered functions to the minimal set (line 6).</ns0:p><ns0:p>In this work, we sort the coverage sets by their number of covered nodes and do not include any additional criteria to break ties. This could, however, result in an undefined outcome if there are multiple coverage sets with the same number of covered additional functions, but this case is a rare event and did not occur in our study. Still, including other secondary sorting criteria such as the distance to the graph's root node or the total number of nodes in the coverage set might improve this optimization further.</ns0:p></ns0:div>
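A compact sketch of this greedy selection is shown below; instead of repeatedly re-sorting, it picks the microbenchmark with the largest remaining overlap in each round, which is equivalent. The NodeSet type is the one from the sketch above, and the identifiers are ours, not from the studied projects:

```go
// minimalSuite greedily builds a minimal microbenchmark set; coverage maps
// each microbenchmark name to its overlap with the application benchmark
// call graph (one coverage set per microbenchmark, as in Section 3.1).
func minimalSuite(coverage map[string]NodeSet) []string {
	var selected []string
	covered := NodeSet{}
	for {
		best, bestGain := "", 0
		for name, nodes := range coverage {
			gain := 0
			for n := range nodes {
				if !covered[n] {
					gain++
				}
			}
			if gain > bestGain {
				best, bestGain = name, gain
			}
		}
		// Stop once no remaining microbenchmark adds newly covered functions.
		if bestGain == 0 {
			return selected
		}
		selected = append(selected, best)
		for n := range coverage[best] {
			covered[n] = true
		}
		delete(coverage, best)
	}
}
```

Because Go's map iteration order is randomized, ties between equally large coverage sets are broken arbitrarily in this sketch, mirroring the currently undefined tie-breaking discussed above.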
<ns0:div><ns0:head n='3.3'>Recommending Additional Microbenchmark Targets</ns0:head><ns0:p>A well-designed application benchmark will trigger the same function calls in an SUT as a production use would. A well-designed microbenchmark for an individual function will also implicitly call the same functions as in production or during the application benchmark. In this second optimization of the microbenchmark suite, we rely on these assumptions to selectively recommend uncovered functions for further microbenchmarking. This allows developers to directly implement new microbenchmarks that will cover a large part of the uncovered application benchmark call graph and thus increase the coverage levels (see Section 3.1).</ns0:p><ns0:p>Similar to the removal of redundancies, we build on the idea of a well-known, greedy test case prioritization algorithm proposed by <ns0:ref type='bibr' target='#b67'>Rothermel et al. (1999)</ns0:ref> to recommend functions for benchmarking that are not covered yet. In particular, we adapt Rothermel's additional algorithm, which iteratively prioritizes tests whose coverage of new parts of the program (that have not been covered by previously prioritized tests) is maximal. Instead of using the set of all covered methods by a microbenchmark suite, our adaptation uses the function nodes from the application benchmark that are not covered yet as greedy criteria to optimize for.</ns0:p><ns0:p>Algorithm 2 defines the recommendation algorithm. The algorithm requires as input the call graph from the application benchmark, the graphs from the microbenchmark suite, and the coverage sets determined in Section 3.1, as well as the upper limit n of recommended functions.</ns0:p></ns0:div>
<ns0:div><ns0:head>Algorithm 2: Recommending functions which are not covered by microbenchmarks yet.</ns0:head><ns0:p>Input: A, M, C - application benchmark CG, microbenchmark CGs, coverage sets
Input: n - number of microbenchmarks to recommend
Result: R - set of recommended functions to microbenchmark
1  R ← ∅
2  notCovered ← {a | a ∈ A ∧ IsProjectNode(a)} \ C_total
3  N ← ∅
4  while n > 0 do
5      foreach function f_a ∈ notCovered do
6          additionalNodes ← DetermineReachableNodes(f_a) ∩ notCovered
           N ← N ∪ {additionalNodes}
7      end
8      SortByNumberOfNodes(N)
9      largestAdditionalSet ← RemoveFirst(N)
10     if |largestAdditionalSet| == 0 then
11         return R
12     end
13     R ← R ∪ largestAdditionalSet[0]
14     n ← n − 1
15     notCovered ← notCovered \ largestAdditionalSet
16 end</ns0:p><ns0:p>First, we determine the set of nodes (functions) in the application benchmark call graph which are not covered by any microbenchmark (line 2). Next, we determine the reachable nodes for each function in this set, only considering project nodes, and store the results in another set N (lines 5 to 7). To link back to our example graph in Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>, the resulting set for function 3 (covered by neither benchmark 1 nor benchmark 2) would be nodes 3, 5, and 6 (node 12 is not a project node and thus not part of the reachable nodes). Third, we sort the set N by the number of nodes in each element, starting with the set with the most nodes in it (line 8). If two functions cover the same number of project nodes, we determine the distance to the closest root node and select functions that have a shorter distance. If the functions are still equivalent, we include the number of covered non-project nodes as a third factor and favor the function with higher coverage.</ns0:p><ns0:p>Finally, we pick the first element and add the respective function to the recommendation set R (lines 9 to 13), update the not covered functions (line 15), and run the algorithm again to find the next function which adds the most additional nodes to the covered set. Our algorithm ends if there are n functions in R (i.e., the upper limit for recommendations is reached) or if the function which would be added to the recommendation set R does not add additional functions to the covered set (line 11).</ns0:p></ns0:div>
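A Go sketch of this recommendation step, under the same assumptions as the earlier sketches (NodeSet as defined above; reachable is a stand-in for DetermineReachableNodes on the application benchmark call graph; the secondary tie-breakers on root distance and non-project coverage are omitted for brevity):

```go
// recommend returns up to n uncovered application benchmark functions whose
// reachable project nodes add the most additional coverage.
func recommend(notCovered NodeSet, reachable func(string) NodeSet, n int) []string {
	var recs []string
	for ; n > 0; n-- {
		best := ""
		bestSet := NodeSet{}
		for f := range notCovered {
			gain := NodeSet{}
			for node := range reachable(f) {
				if notCovered[node] {
					gain[node] = true
				}
			}
			if len(gain) > len(bestSet) {
				best, bestSet = f, gain
			}
		}
		if len(bestSet) == 0 {
			return recs // a further recommendation would add nothing
		}
		recs = append(recs, best)
		for node := range bestSet {
			delete(notCovered, node) // shrink the uncovered set greedily
		}
	}
	return recs
}
```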
<ns0:div><ns0:head n='4'>EMPIRICAL EVALUATION</ns0:head><ns0:p>We empirically evaluate our approach on two open-source TSDBs written in Go, namely InfluxDB and VictoriaMetrics, which both have extensive developer-written microbenchmark suites. As the application benchmark and, therefore, the baseline, we encode three application scenarios in YCSB-TS (https://github.com/TSDBBench/YCSB-TS). Alongside, we run the custom microbenchmark suites of the respective systems.</ns0:p><ns0:p>We start by giving an overview of YCSB-TS and both evaluated systems (Section 4.1). Next, we describe how we run the application benchmark using three different scenarios (Section 4.2) and the microbenchmark suite (Section 4.3) to collect the respective sets of call graphs. Finally, we use the call graphs to determine the coverage for each application scenario and quantify the practical relevance (Section 4.4) before removing redundancies in the benchmark suites (Section 4.5) and recommending functions which should be covered by microbenchmarks for every investigated project (Section 4.6).</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Study Objects</ns0:head><ns0:p>To evaluate our approach, we need an SUT which comes with a developer-written microbenchmark suite and which is compatible with an application benchmark. For this, we particularly looked at TSDBs written in Go, as they are compatible with the YCSB-TS application benchmark and since Go contains a microbenchmark framework as part of its standard library. Furthermore, Go provides a tool called pprof which allows us to extract the call graphs of an application using instrumentation. Based on these considerations, we decided to evaluate our approach with the TSDBs InfluxDB and VictoriaMetrics (see Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>).</ns0:p><ns0:p>YCSB-TS is a specialized fork of YCSB <ns0:ref type='bibr' target='#b20'>(Cooper et al., 2010</ns0:ref>), an extensible benchmarking framework for data serving systems, adapted for time series databases. Every YCSB experiment is usually divided into a load phase, which preloads the SUT with initial data, and a run phase, which executes the actual experiment queries.</ns0:p><ns0:p>InfluxDB is a popular TSDB with more than 400 contributors and more than 19,000 stars on GitHub. VictoriaMetrics is an emerging TSDB (the first version was released in 2018) which has already collected more than 2,000 stars on GitHub. Both systems are written in Go, offer a microbenchmark suite, and can be benchmarked using the YCSB-TS tool. However, there was no suitable connector for VictoriaMetrics in the official YCSB-TS repository, so we implemented one based on the existing connectors for InfluxDB and Prometheus. Moreover, we also fixed some small issues in the YCSB-TS implementation. A fork with all necessary changes, including the new connector and all fixes, is available on GitHub.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Application Benchmark</ns0:head><ns0:p>Systems such as our studied TSDBs are used in different domains and contexts, resulting in different load profiles depending on the specific use case. We evaluated each TSDB in three different scenarios which are motivated in Section 4.2.1. The actual benchmark experiment is described in Section 4.2.2.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2.1'>Scenarios</ns0:head><ns0:p>Depending on the workload, the call graphs within an SUT may vary. To consider this effect in our evaluation, we generate three different workloads based on the following three scenarios for TSDBs.</ns0:p><ns0:p>Medical Monitoring: An intensive care unit monitors its patients through several sensors which forward the tagged and timestamped measurements to the TSDB. These values are requested and processed by an analyzer, which averages relevant values for each patient once per minute and scans for irregularities once per hour. In our workload configuration, we assume a new data item for every patient every two seconds and deal with 10 patients.</ns0:p><ns0:p>We convert this abstract scenario description into the following YCSB-TS workload: With an evaluation period of seven days, there are approximately three million values in the range of 0 to 300 that are inserted into the database in total. Half of them, about 1.5 million, are initially inserted during the load phase. Next, in the run phase, the remaining records are inserted and the queries are made. In this scenario, there are about 100,000 queries, which mostly contain AVG operations as well as 1,680 SCAN operations. Furthermore, the workload uses ten different tags to simulate different patients.</ns0:p><ns0:p>Smart Factory: In this scenario, a smart factory produces several goods with multiple machines. Whenever an item is finished, the machine controller submits the idle time during the manufacturing process as a timestamped entry to the TSDB, tagged with the kind of produced item. Furthermore, a monitoring tool queries the average and the total amount of produced items once per hour and the accumulated idle time at each quarter of an hour. Finally, there are several manual SCAN queries for produced items over a given period. Our evaluation scenario deals with five different products and ten machines, each of which assembles a new item every 10 seconds on average. Moreover, there are 60 SCAN queries per day on average.</ns0:p><ns0:p>The corresponding YCSB-TS workload covers a 31-day evaluation period during which approximately 2.6 million data records are inserted. Again, we split the records in half and insert the first part in the load phase and the second part in parallel to all other queries in the run phase. In this scenario, we execute about 6,000 queries in the run phase, which include SUM, SCAN, AVG, and COUNT operations (frequency in descending order). Furthermore, the workload uses five predefined tags to simulate the different products.</ns0:p><ns0:p>Wind Parks: Wind turbines in a wind park send information about their generated energy as timestamped and tagged items to the TSDB once per hour. At each quarter of an hour, a control center scans and counts the incoming data from 500 wind turbines in five different geographic regions. Moreover, it totals the produced energy for every hour.</ns0:p><ns0:p>Translated into a YCSB-TS workload with 365 days of evaluation time, this means about 4.4 million records to be inserted and five predefined tags for the respective regions. Again, we have split the records equally between the load and run phase. In addition, we run about 80,000 queries, split between SCAN, AVG, and SUM (frequency in descending order).</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2.2'>Experiments</ns0:head><ns0:p>Similar to <ns0:ref type='bibr' target='#b6'>(Bermbach et al., 2017a)</ns0:ref>, each experiment is divided into three phases: initialize, load, and run (see Figure <ns0:ref type='figure' target='#fig_0'>3</ns0:ref>). During the initialization phase, we create two AWS t2.medium EC2 instances (2 vCPUs, 4 GiB RAM) in the eu-west-1 region, one for the SUT and one for the benchmarking client. The initialization of the client is identical for all experiments: YCSB-TS is installed and configured on the benchmarking client instance. The initialization of the SUT starts with the installation of required software, e.g., Git, Go, and Docker. Next, we clone the SUT, revert to a fixed Git commit (see Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>), and instrument the source code to start the CPU profiling when running. Finally, we build the SUT and create an executable file.</ns0:p><ns0:p>During the load phase, we start the SUT, execute the load workload of the respective scenario using the benchmarking client, and preload the database. Then, we stop the SUT and keep the inserted data. This way we can clearly separate the call graphs of the following run phase from the rest of the experiment.</ns0:p><ns0:p>Afterwards, we restart the SUT for the run phase. Since the source code has been instrumented, a CPU profile is now created while the SUT runs, recording function calls by sampling. Next, we run the actual workload against the SUT using the benchmarking client and subsequently stop the SUT. This run phase of the experiments took between 40 minutes and 18 hours, depending on the workload and TSDB. Note that the actual benchmark runtime is irrelevant in this case (as long as it is sufficiently long) since we are only interested in the call graph. Finally, we export the generated CPU profile which we use to build the call graphs.</ns0:p><ns0:p>After running the application benchmark for all scenarios and TSDBs, we have six application benchmark call graphs, one for each combination of scenario and TSDB.</ns0:p></ns0:div>
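As an illustration, instrumenting a Go SUT to record such a CPU profile can look like the following minimal sketch using the standard runtime/pprof package; the file name and the runServer entry point are placeholders, not the actual instrumentation of the studied systems:

```go
package main

import (
	"log"
	"os"
	"runtime/pprof"
)

func main() {
	// Write sampled stack frames to a profile file; the call graph is
	// extracted from this file after the run.
	f, err := os.Create("cpu.pprof")
	if err != nil {
		log.Fatal(err)
	}
	if err := pprof.StartCPUProfile(f); err != nil {
		log.Fatal(err)
	}
	defer pprof.StopCPUProfile()

	runServer() // placeholder for the SUT's actual entry point
}

func runServer() { /* start the database server and block until shutdown */ }
```

For the microbenchmarks (Section 4.3), no such instrumentation is needed: `go test` can write a profile directly, e.g., via `go test -bench=. -benchtime=10s -cpuprofile=cpu.pprof`, and `go tool pprof` can then render the recorded profile as a call graph.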
<ns0:div><ns0:head n='4.3'>Microbenchmarks</ns0:head><ns0:p>To generate the call graphs for all microbenchmarks, we execute all microbenchmarks in both projects one after the other and extract the CPU profile for each microbenchmark separately. Moreover, we set the benchmark execution time to 10 seconds to reduce the likelihood that the profiler misses nodes (functions) due to the statistical sampling of stack frames. This means that each microbenchmark is executed multiple times until its total runtime reaches 10 seconds; the runtime is usually slightly higher than 10 seconds because the last execution starts before the 10-second deadline and ends afterwards. Finally, we transform the profile files of each microbenchmark into call graphs, which we use in our further analysis.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.4'>Determining and Quantifying Relevance</ns0:head><ns0:p>Based on the call graphs for all scenario workloads and microbenchmarks, we analyze the coverage of both to determine and quantify the practical relevance following Section 3.1. Figure <ns0:ref type='figure'>4</ns0:ref> shows the microbenchmark suite's coverage for each study object and scenario. For InfluxDB, the overall coverage ranges from 62.90% to 66.29% and the project-only coverage ranges from 40.43% to 41.25%, depending on the application scenario. For VictoriaMetrics, the overall coverage ranges from 43.50% to 46.74% and the project-only coverage from 35.62% to 40.43%. Furthermore, there is a set of common functions which are invoked in every scenario: for InfluxDB, these are 464 of 920 functions in total (50.43%), and for VictoriaMetrics 341 of 603 functions in total (56.55%). Table <ns0:ref type='table' target='#tab_6'>4</ns0:ref> shows the overlap details.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.5'>Removing Redundancies</ns0:head><ns0:p>Our first optimization, as defined in Algorithm 1, analyzes the existing coverage sets and removes redundancy from both microbenchmark suites by greedily adding microbenchmarks to a minimal suite which fulfills the same coverage criteria. Figure <ns0:ref type='figure'>6</ns0:ref> shows the step-by-step construction of this minimal set of microbenchmarks up to the maximum possible coverage.</ns0:p><ns0:p>For InfluxDB (Figure <ns0:ref type='figure'>6a</ns0:ref>), the first selected microbenchmark already covers more than 12% of each application benchmark scenario graph. Furthermore, the first four selected microbenchmarks are identical in all scenarios. Depending on the scenario, these already cover a total of 28% to 31% (with a maximum coverage of about 40% when using all microbenchmarks, see Table <ns0:ref type='table' target='#tab_5'>3</ns0:ref>). These four microbenchmarks alone therefore cover a large, practically relevant part of the source code. However, even if all microbenchmarks selected during minimization are chosen and the maximum possible coverage is achieved, the removal of redundancies remains very effective. Depending on the application scenario, the initial suite of 288 microbenchmarks from which we extracted call graphs was reduced to a suite of either 19, 25, or 27 microbenchmarks.</ns0:p><ns0:p>In general, we find similar results for VictoriaMetrics (Figure <ns0:ref type='figure'>6b</ns0:ref>). Already the first microbenchmark selected by our algorithm covers at least 17% of the application benchmark call graph in each scenario. For VictoriaMetrics, the first four selected microbenchmarks also have similar coverage sets; the only difference is the parametrization of one chosen microbenchmark. In total, these first four microbenchmarks cover 29% to 34% of the application benchmark call graph, depending on the scenario, and there is a maximum possible coverage between 35% and 40% when using the full existing microbenchmark suite. Again, the first four microbenchmarks are therefore particularly effective and already cover a large part of the application benchmark call graph. Moreover, even with the complete minimization and the maximum possible coverage, our algorithm significantly reduces the number of microbenchmarks: from 62 microbenchmarks down to 13 or 14, depending on the concrete application scenario.</ns0:p><ns0:p>Since each microbenchmark takes on average about the same amount of time (see Section 4.3), our minimal suite results in significant time savings when running the microbenchmark suite: it would take only about 10% of the original time for InfluxDB and about 23% for VictoriaMetrics. On the other hand, these drastic reductions also mean that many microbenchmarks in both projects evaluate the same functions. This can be useful under certain circumstances, e.g., if there is a performance degradation detected using the minimal benchmark suite and developers need to find the exact cause. However, given our goal of finding a minimal set of microbenchmarks to use as a smoke test in a CI/CD pipeline, these redundant microbenchmarks present an opportunity to drastically reduce the execution time without much loss of information.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.6'>Recommending Additional Microbenchmark Targets</ns0:head><ns0:p>Our second optimization, the recommendation, starts with the minimal microbenchmark suite from above and subsequently recommends functions to increase the coverage of the microbenchmark suite and application benchmark following Algorithm 2. Figure <ns0:ref type='figure' target='#fig_5'>7</ns0:ref> shows this step-by-step recommendation of functions, starting with the current coverage up to a 100% relevant microbenchmark suite.</ns0:p><ns0:p>For InfluxDB (Figure <ns0:ref type='figure' target='#fig_5'>7a</ns0:ref>), a microbenchmark for the first recommended function would increase the coverage by 28% to 31%, depending on the application scenario, and the first three recommended functions are identical for all scenarios: (i) executeQuery runs a query against the database and returns the results, (ii) ServeHTTP responds to HTTP requests, and (iii) storeStatistics writes statistics into the database. If each of these functions were evaluated by a microbenchmark in the same way as the application benchmark, i.e., resulting in the same calls of downstream functions and the same call graph, there would already be a total coverage of 90% to 94%. To achieve a 100% match, an additional 10 to 26 functions must be microbenchmarked, depending on the application scenario and always under the assumption that the microbenchmark calls the function in the same way as the application benchmark does.</ns0:p><ns0:p>In general, we find similar results for VictoriaMetrics (Figure <ns0:ref type='figure' target='#fig_5'>7b</ns0:ref>). A microbenchmark for the first recommended function alone would already increase the coverage by 39% to 51%, and microbenchmarking the first three recommended functions would increase the coverage to a total of 94% to 95%. Again, these three functions are recommended in all scenarios, only the ordering differs. The first recommended functions are all anonymous functions: (i) an HTTP handler function, (ii) a merging function, and (iii) a result-related function. To achieve 100% project-only coverage, 10 to 14 additional functions would have to be microbenchmarked, depending on the application scenario.</ns0:p><ns0:p>In summary, our results show that the microbenchmark suite can be made much more relevant to actual practice and usage with only a few additional microbenchmarks for key functions. In most cases, however, it will not be possible to convert the recommendations directly into suitable microbenchmarks (we discuss this point in Section 5). Nevertheless, we see these recommendations as a valid starting point for more thorough analysis.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>DISCUSSION</ns0:head><ns0:p>We propose an automated approach to analyze and improve microbenchmark suites. It can be applied to all application systems that allow the profiling of function calls and the subsequent creation of a call graph. This is particularly easy for projects written in the Go programming language as this functionality is part of the Go environment. Furthermore, our approach is beneficial for projects with a large code base where manual analysis would be too complex and costly. In total, we propose three methods for analyzing and optimizing existing microbenchmark suites, which can also provide guidance for creating new ones. Nevertheless, every method has its limits and should not be applied blindly.</ns0:p><ns0:p>Assuming that the application benchmark reflects a real production system or simulates a realistic situation, the resulting call graphs will capture this faithfully. Unfortunately, this is not always the case, because the design and implementation of a sound and relevant application benchmark has its own challenges and obstacles which we will not address here <ns0:ref type='bibr' target='#b11'>(Bermbach et al., 2017b)</ns0:ref>. Nevertheless, a well-designed application benchmark is capable of simulating different scenarios in realistic environments in order to identify weak points and to highlight strengths. Ultimately, however, for the discussion that follows, we must always be aware that the application benchmark will never be a perfect representation of real workloads. Trace-based workloads <ns0:ref type='bibr' target='#b6'>(Bermbach et al., 2017a)</ns0:ref> can help to introduce more realism.</ns0:p><ns0:p>Considering only function calls is imperfect but sufficient: Our approach relies on identifying the coverage of nodes in call graphs and thus on the coverage of function calls. Additional criteria such as</ns0:p></ns0:div>
path coverage, block coverage, line coverage, or the frequency of function executions are not considered and are subject to future research. We deliberately chose this simple yet effective method of coverage measurement:</ns0:p><ns0:p>(1) Applying detailed coverage metrics such as line coverage would deepen the analysis and check that every code line called by the application benchmark is called at least once by a microbenchmark. However, if the different paths in a function's source code are relevant for production and do not only catch corner cases, they should also be considered in the application benchmark and microbenchmark workload (e.g., if the internal function calls in the Medical Monitoring scenario differed for female and male patients, the respective benchmark workload should represent female and male patients with the same frequency as in production).</ns0:p><ns0:p>(2) As our current implementation relies on sampling, the probability that a function which is called only once or twice during the entire application benchmark or microbenchmark is called at the exact time a sample is taken is extremely low. Thus, the respective call graphs will usually only include practically relevant functions.</ns0:p><ns0:p>(3) We assume that all benchmarks adhere to benchmarking best practices. This includes both the application benchmark, which covers all relevant aspects, and the individual microbenchmarks, which each focus on individual aspects. This implies that if there is an important function, it will usually be covered by multiple microbenchmarks which each generate a unique call graph with individual function calls and which will therefore all be included in the optimized microbenchmark suite. Thus, there will still usually be multiple microbenchmarks which evaluate important functions. (4) Both base algorithms ((Chen and Lau, 1998) and <ns0:ref type='bibr' target='#b67'>(Rothermel et al., 1999)</ns0:ref>) are standard algorithms and have recently been shown to work well with modern software systems, e.g., <ns0:ref type='bibr' target='#b55'>(Luo et al., 2018)</ns0:ref>. We therefore assume that a relevant benchmark workload will generate a representative call graph and argue that a more detailed analysis of the call graph would not improve our approach significantly. The same applies to the microbenchmarks and their coverage sets with the application benchmark, where our approach will only work if the microbenchmark suite generates representative function invocations. Overall, the optimized suite serves as a simple and fast heuristic for detecting performance issues in a pre-production stage but it is, by definition, not capable of detecting all problems: there will be false positives and negatives. In practice, we would therefore suggest using the microbenchmark-based heuristic with every commit whereas the application benchmark is run periodically; how often is subject to future research.</ns0:p><ns0:p>The sampling rate affects the accuracy of the call graphs: The generation of the call graphs in our evaluation is based on statistical sampling of stack frames at specified intervals. Afterwards, the collected data is combined into the call graph. However, this carries the risk that, if the experiment is not run long enough, important calls might not be registered and thus will not appear in the call graph.
The required duration depends on the software project and on the sampling rate, i.e., at which frequency samples are taken. To account for this, we chose frequent sampling combined with a long benchmark duration in our experiments, which makes it unlikely that we have missed relevant function calls.</ns0:p><ns0:p>Our approach is transferable to other applications and domains: We have currently evaluated our approach with two TSDBs written in the Go programming language, but we see no major barriers to implementing our approach for applications written in other programming languages. There are several profiling tools for other programming languages, e.g., for Java or Python, so this approach is not limited to the Go programming language and is applicable to almost all software projects. Moreover, there are various other application domains where benchmarking can be applied, which we also discuss in Section 6. In this work, we primarily intend to present the approach and its resulting opportunities, e.g., for CI/CD pipelines. The transfer to other application domains and programming languages is subject to future research.</ns0:p><ns0:p>The practical relevance of a microbenchmark suite can be quantified quickly and accurately: Our approach can be used to determine and quantify the practical relevance of a microbenchmark suite based on a large baseline call graph (e.g., an application benchmark) and many smaller call graphs from the execution of the microbenchmark suite. On one hand, this allows us to determine and quantify the practical relevance of the current microbenchmark suite with respect to the actual usage: in our evaluation of two different TSDBs, we found that this is ~40% for both databases. On the other hand, this means that ~60% of the code parts required for the daily business are not covered by any microbenchmark, which highlights the need for additional microbenchmarks to detect and ultimately prevent performance problems in both study objects. It is important to note that the algorithm only includes identical nodes in the respective graphs; edges, i.e., which function calls which other function, are not considered here. This might lead to an effectively lower coverage if our algorithm selects a microbenchmark that only measures corner cases. To address this, it may be necessary to manually remove all microbenchmarks that do not adhere to benchmarking best practices before running our algorithm. In summary, we offer a quick way to approximate coverage and practical relevance of a microbenchmark suite in and for realistic scenarios.</ns0:p><ns0:p>A minimal microbenchmark suite with reduced redundancies can be used as a performance smoke test: Our first optimization to an existing microbenchmark suite, Algorithm 1, aims to find a minimal set of microbenchmarks which already cover a large part of an application benchmark, again based on the nodes in existing call graphs. Our evaluation has shown that a very small number of microbenchmarks is sufficient to cover a large part of the potential maximum coverage for both study objects. Furthermore, it has also shown that the number of microbenchmarks in a suite can still be significantly reduced, even if we want to achieve the maximum possible coverage. Translated into execution time, this removal of redundancies corresponds to savings of up to 90% in our scenarios, which offers a number of benefits for benchmarking in CI/CD pipelines.
A minimal microbenchmark suite could give developers a rough estimate of the performance impact of their current changes. This enables developers to run a quick performance test on each commit, or to quickly evaluate a new version before starting a more complex and cost-intensive application benchmark. In this setup, the application benchmark remains the gold standard to detect all performance problems while the less accurate optimized microbenchmark suite is a fast and easy-to-use performance check. Finally, it is important to note that the intention of our approach is not to remove 'unnecessary' microbenchmarks entirely but rather to define a new microbenchmark suite as a subset of the existing one which serves as a proxy for benchmarking the performance of the SUT. Although our evaluation also revealed that many microbenchmarks evaluate the same code and are therefore redundant, this redundancy is frequently desirable in other contexts (e.g., for detailed error analysis).</ns0:p><ns0:p>The recommendations cannot always be used directly: Our second optimization, Algorithm 2, recommends functions which should be microbenchmarked in order to cover a large additional part of the realistic application flow in the SUT. Our evaluation with two open-source TSDBs has shown that this is indeed possible and that already a small number of additional microbenchmarks could cover a large part of the application benchmark call graph. However, our evaluation also suggests that these microbenchmarks are not always easy to implement, as the recommended functions are often very generic and abstract. Our recommendations should therefore mostly be seen as a starting point for further manual investigation by expert application developers. Using their domain knowledge, developers can estimate which (sub)functions are called and what their distribution/ratio actually is. Furthermore, the application benchmark's call graph can also support this analysis as it offers insights into the frequency of invocation for all covered functions.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>RELATED WORK</ns0:head><ns0:p>Software performance engineering traditionally revolves around two general flavors: model-based and measurement-based. The context of our study falls into measurement-based software performance engineering, which deals with measuring certain performance metrics, e.g., latency, throughput, memory, or I/O, over time. Research on application benchmarking and microbenchmarking is related to our study, in particular work on reducing execution frequency or on generating new microbenchmarks.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.1'>Application Benchmarking</ns0:head><ns0:p>Related work in this area deals with the requirements for benchmarks in general, application-specific characteristics, and more effective benchmark execution. Furthermore, contributors expand on the analysis of problems and examine the influence of environmental factors on the benchmark run in more detail.</ns0:p><ns0:p>One of the earliest publications addressed general challenges such as testing objectives, workload characterization, and requirements <ns0:ref type='bibr' target='#b79'>(Weyuker and Vokolos, 2000)</ns0:ref>. These aspects were then refined and adapted to present needs and conditions in an ongoing process, e.g., <ns0:ref type='bibr' target='#b45'>(Huppler, 2009;</ns0:ref><ns0:ref type='bibr'>Bermbach et al., 2017b,a;</ns0:ref><ns0:ref type='bibr' target='#b30'>Folkerts et al., 2013)</ns0:ref>.</ns0:p><ns0:p>Current work focuses on application-specific benchmarks. To name a few, there are benchmarks which evaluate database or storage systems, e.g., (Bermbach et al., 2014; Cooper et al., 2010; Bermbach et al., 2017a; Kuhlenkamp et al., 2014; Müller et al., 2014; Pallas et al., 2017b,a; Pelkonen et al., 2015; Difallah et al., 2013), benchmark microservices <ns0:ref type='bibr' target='#b76'>(Villamizar et al., 2015;</ns0:ref><ns0:ref type='bibr'>Grambow et al., 2020a,b;</ns0:ref><ns0:ref type='bibr' target='#b74'>Ueda et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b28'>Do et al., 2017)</ns0:ref>, determine the quality of web APIs <ns0:ref type='bibr' target='#b5'>(Bermbach and</ns0:ref><ns0:ref type='bibr'>Wittern, 2016, 2020)</ns0:ref>, specifically tackle web sites <ns0:ref type='bibr' target='#b56'>(Menascé, 2002)</ns0:ref>, or evaluate other large-scale software systems, e.g., <ns0:ref type='bibr' target='#b47'>(Jiang and Hassan, 2015;</ns0:ref><ns0:ref type='bibr' target='#b39'>Hasenburg et al., 2020b;</ns0:ref><ns0:ref type='bibr'>Hasenburg and Bermbach, 2020)</ns0:ref>. Our approach can use all of these application benchmarks as a baseline. As long as a call graph can be generated from the respective SUT during the benchmark run, this graph can serve as input for our approach.</ns0:p><ns0:p>Other approaches aim to reduce the execution time for application benchmarks: AlGhamdi et al. <ns0:ref type='bibr'>(2016,</ns0:ref><ns0:ref type='bibr'>2020)</ns0:ref> proposed to stop the benchmark run when the system reaches a repetitive performance state, and He et al. (2019) devised a statistical approach based on kernel density estimation to stop once a benchmark is unlikely to produce a different result with more repetitions. Such approaches can only be combined with our analysis and optimization under certain conditions. The main concern here is rarely called functions, which might never be invoked if the benchmark run is terminated early.
If the determination of the call graph is based on sampling, as in our evaluation, the results could then be incomplete because relevant calls were not detected.</ns0:p><ns0:p>Many studies and approaches address the factors and conditions in cloud environments during benchmarks <ns0:ref type='bibr' target='#b13'>(Binnig et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b26'>Difallah et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b30'>Folkerts et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b48'>Kuhlenkamp et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b72'>Silva et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b65'>Rabl et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b70'>Schad et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b46'>Iosup et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b53'>Leitner and Cito, 2016;</ns0:ref><ns0:ref type='bibr' target='#b50'>Laaber et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b75'>Uta et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b0'>Abedi and Brecht, 2017;</ns0:ref><ns0:ref type='bibr' target='#b5'>Bermbach, 2017)</ns0:ref>. These studies are relevant when applying our approach: if the variance in the test environment is known and can be reduced to a minimum, this helps application engineers decide when and to what extent each benchmark type should be executed.</ns0:p><ns0:p>Finally, there are several studies that aim to identify (the root cause of) performance regressions <ns0:ref type='bibr' target='#b59'>(Nguyen et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b31'>Foo et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b23'>Daly et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b33'>Grambow et al., 2019b;</ns0:ref><ns0:ref type='bibr' target='#b78'>Waller et al., 2015)</ns0:ref> or examine the influence of environment factors on the system under test, such as the usage of Docker <ns0:ref type='bibr' target='#b32'>(Grambow et al., 2019a)</ns0:ref>. The first-mentioned approaches combine well with ours: if a performance problem is not detected even though the microbenchmark suite is optimized, these approaches can be used in the secondary application benchmark to support developers. Environmental parameters, in turn, must be taken into account to achieve a reliable and relevant result; if the execution of functions depends on specific environmental factors that differ between test and production environments, the outcome can be distorted.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.2'>Microbenchmarking</ns0:head><ns0:p>The second form of benchmarking that is the subject of this study is microbenchmarking, which has only recently gained more traction in research. <ns0:ref type='bibr' target='#b52'>Leitner and Bezemer (2017)</ns0:ref> and <ns0:ref type='bibr' target='#b73'>Stefan et al. (2017)</ns0:ref> empirically studied how microbenchmarks, sometimes also referred to as performance unit tests, are used in open-source Java projects and found that adoption is still limited. Others focused on creating performance-awareness through documentation <ns0:ref type='bibr' target='#b41'>(Horký et al., 2015)</ns0:ref> and removing the need for statistical knowledge through simple hypothesis-style, logical annotations (Bulej et al., 2012, 2017). Chen and Shang (2017) characterize code changes that introduce performance regressions and show that microbenchmarks are sensitive to performance changes. Damasceno <ns0:ref type='bibr' target='#b24'>Costa et al. (2019)</ns0:ref> study bad practices and anti-patterns in microbenchmark implementations. All these studies are complementary to ours as they focus on aspects of microbenchmarking that are related neither to time reduction nor to recommending functions as benchmark targets.</ns0:p><ns0:p>Laaber and <ns0:ref type='bibr'>Leitner (2018)</ns0:ref> are the first to study microbenchmarks written in Go and apply a mutation-testing-inspired technique to dynamically assess redundant benchmarks. Their idea is similar to ours: we use static call graphs to compute the microbenchmark coverage of application benchmark calls, whereas they compute redundancies between microbenchmarks of the same suite. <ns0:ref type='bibr' target='#b27'>Ding et al. (2020)</ns0:ref> study the usability of functional unit tests for performance testing and build a machine learning model to classify whether a unit test lends itself to performance testing. Our redundancy removal approach could augment their approach by filtering out unit tests (for performance) that lie on the hot path of an application benchmark.</ns0:p><ns0:p>To reduce the overall microbenchmark suite execution time, one might execute the microbenchmarks in parallel on cloud infrastructure. Recent work studied how and to which degree such an unreliable environment can be used <ns0:ref type='bibr' target='#b50'>(Laaber et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b14'>Bulej et al., 2020)</ns0:ref>. Similar to <ns0:ref type='bibr' target='#b40'>He et al. (2019)</ns0:ref>, Laaber et al. (2020) introduced dynamic reconfiguration to stop the execution when the result is stable in order to reduce execution time. Our approach to remove redundancies is an alternative way to reduce microbenchmark suite execution time.</ns0:p><ns0:p>Another large body of research is performance regression testing, which utilizes microbenchmarks between two commits to decide whether and what to test for performance. <ns0:ref type='bibr' target='#b44'>Huang et al. (2014</ns0:ref>) and Sandoval Alcocer et al. (2016, 2020) utilize models to assess whether a code commit introduces a regression in order to select versions that should be tested for performance. de Oliveira et al. (2017) and Alshoaibi et al. (2019) decide based on source code indicators which microbenchmarks to execute on every commit. <ns0:ref type='bibr' target='#b58'>Mostafa et al. (2017)</ns0:ref> rearrange microbenchmarks to execute earlier those that are more likely to expose performance changes. These studies focus on reducing the time of performance testing or on focusing on the relevant microbenchmarks/commits, which is similar in concept to our motivation. Our study, however, utilizes different granularity levels of performance tests, i.e., application benchmarks and microbenchmarks, to inform which microbenchmarks are more or less relevant.</ns0:p><ns0:p>Finally, synthesizing microbenchmarks could be a way to increase coverage of important parts of an application. These could, for instance, be identified by an application benchmark.
SpeedGun generates microbenchmarks for concurrent classes to expose concurrency-related performance bugs <ns0:ref type='bibr' target='#b63'>(Pradel et al., 2014)</ns0:ref>, and AutoJMH randomly generates microbenchmark workloads based on forward slicing and control flow graphs <ns0:ref type='bibr' target='#b66'>(Rodriguez-Cancio et al., 2016)</ns0:ref>. Both approaches are highly related to our paper as they propose solutions for benchmarks that do not exist yet. However, both require as input a class or a segment that shall be performance-tested. Our recommendation algorithm could provide this input.</ns0:p></ns0:div>
<ns0:div><ns0:head n='7'>CONCLUSION</ns0:head><ns0:p>Performance problems of an application should ideally be detected as soon as they occur. Unfortunately, it is often not possible to verify the performance of every source code modification with a complete application benchmark for time and cost reasons. Alternatively, much faster and less complex microbenchmarks of individual functions can be used to evaluate the performance of an application. However, their results are often less meaningful because they do not cover all parts of the source code that are relevant in production.</ns0:p><ns0:p>In this paper, we determine, quantify, and improve the practical relevance of microbenchmark suites based on the call graphs generated in the application during both benchmark types, and we suggest how the microbenchmark suite can be designed and used more effectively and efficiently. The central idea of our approach is that all functions of the source code that are called during an application benchmark are relevant for production use and should therefore also be covered by the faster and more lightweight microbenchmarks. To this end, we determine and quantify the coverage of common function calls between both benchmark types, suggest two methods of optimization, and illustrate how these can be leveraged to improve build pipelines: (1) by removing redundancies in the microbenchmark suite, which reduces the total runtime of the suite significantly; and (2) by recommending relevant target functions which are not covered by microbenchmarks yet to increase the practical relevance.</ns0:p><ns0:p>Our evaluation on two time series database systems shows that the number of microbenchmarks can be significantly reduced (by up to 90%) while maintaining the same coverage level, and that the practical relevance of a microbenchmark suite can be increased from around 40% to 100% with only a few additional microbenchmarks for both investigated software projects. This opens up a variety of application scenarios for CI/CD pipelines, e.g., the optimized microbenchmark suite might scan the application for performance problems after every code modification or commit while the more complex application benchmark is run only for major releases.</ns0:p><ns0:p>In future work, we plan to investigate whether such a build pipeline is capable of detecting and catching performance problems at an early stage. Furthermore, we want to examine whether a more detailed analysis of our coverage criteria on the path or line level of the source code is feasible and beneficial. Even though there are still some limitations, we think that our automated approach is very useful to support larger software projects in detecting performance problems effectively, in a cost-efficient way, and at an early stage.</ns0:p></ns0:div>
<ns0:div><ns0:head>19/23</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53568:1:1:NEW 8 Apr 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>RQ 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>How to increase the practical relevance within cost efficiency constraints? 2/23 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53568:1:1:NEW 8 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure2. The practical relevance of a microbenchmark suite can be quantified by by relating the number of covered functions and the total number of called functions during an application benchmark to each other.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>C</ns0:head><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53568:1:1:NEW 8 Apr 2021)Manuscript to be reviewedComputer ScienceAlgorithm 1: Removing redundancies in the microbenchmark suite.Input: C -coverage sets Result: minimalSet -Minimal set of microbenchmarks 1 minimalSet ← / 0 2 while |C| > 0 do 3</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>6</ns0:head><ns0:label /><ns0:figDesc>https://github.com/TSDBBench/YCSB-TS 8/23 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53568:1:1:NEW 8 Apr 2021)Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. After initialization, the SUT is filled with initial data and restarted for the actual experiment run to clearly separate the program flow.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Already microbenchmarks of the first three recommended functions could increase the project-only coverage up to 90% to 94% for InfluxDB and 94% to 95% for VictoriaMetrics.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2020:10:53568:1:1:NEW 8 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>et al. (2020) introduced dynamic reconfiguration to stop the execution when their result is stable in order to reduce execution time. Our approach to remove redundancies is an alternative approach to reduce microbenchmark suite execution time.Another large body of research is performance regression testing, which utilizes microbenchmarks between two commits to decide whether and what to test for performance.<ns0:ref type='bibr' target='#b44'>Huang et al. (2014</ns0:ref><ns0:ref type='bibr' target='#b68'>) and Sandoval Alcocer et al. (2016</ns0:ref>, 2020) utilize models to assess whether a code commit introduces a regression to select versions that should be tested for performance. de Oliveira et al. (2017) and Alshoaibi et al. (2019) decide based on source code indicators which microbenchmarks to execute on every commit.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Our Evaluation uses two open-source TSDBs written in Go as study objects.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Project</ns0:cell><ns0:cell>InfluxDB</ns0:cell><ns0:cell>VictoriaMetrics</ns0:cell></ns0:row><ns0:row><ns0:cell>GitHub URL</ns0:cell><ns0:cell cols='2'>influxdata/influxdb VictoriaMetrics/VictoriaMetrics</ns0:cell></ns0:row><ns0:row><ns0:cell>Branch / Release</ns0:cell><ns0:cell>1.7</ns0:cell><ns0:cell>v1.29.4</ns0:cell></ns0:row><ns0:row><ns0:cell>Commit</ns0:cell><ns0:cell>ff383cd</ns0:cell><ns0:cell>2ab4cea</ns0:cell></ns0:row><ns0:row><ns0:cell>Go Files</ns0:cell><ns0:cell>646</ns0:cell><ns0:cell>1,284</ns0:cell></ns0:row><ns0:row><ns0:cell>Lines Of Code (Go)</ns0:cell><ns0:cell>193,225</ns0:cell><ns0:cell>462,232</ns0:cell></ns0:row><ns0:row><ns0:cell>Contributers</ns0:cell><ns0:cell>407</ns0:cell><ns0:cell>32</ns0:cell></ns0:row><ns0:row><ns0:cell>Stars</ns0:cell><ns0:cell>ca. 19,100</ns0:cell><ns0:cell>2,500</ns0:cell></ns0:row><ns0:row><ns0:cell>Forks</ns0:cell><ns0:cell>ca. 2,700</ns0:cell><ns0:cell>185</ns0:cell></ns0:row><ns0:row><ns0:cell>Microbenchmarks in Project</ns0:cell><ns0:cell>347</ns0:cell><ns0:cell>65</ns0:cell></ns0:row><ns0:row><ns0:cell>Extracted Call Graphs</ns0:cell><ns0:cell>288</ns0:cell><ns0:cell>62</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>All workload files are available on GitHub 12 .</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>We configured an application benchmark to use three different workload profiles.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Scenario</ns0:cell><ns0:cell cols='4'>Medical Monitoring Smart Factory Wind Parks</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Load</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Records</ns0:cell><ns0:cell /><ns0:cell>1,512,000</ns0:cell><ns0:cell>1,339,200</ns0:cell><ns0:cell>2,190,000</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Run</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Insert</ns0:cell><ns0:cell /><ns0:cell>1,512,000</ns0:cell><ns0:cell>1,339,200</ns0:cell><ns0:cell>2,190,000</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Scan</ns0:cell><ns0:cell /><ns0:cell>1,680</ns0:cell><ns0:cell>1,860</ns0:cell><ns0:cell>35,040</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Avg</ns0:cell><ns0:cell /><ns0:cell>100,800</ns0:cell><ns0:cell>744</ns0:cell><ns0:cell>35,040</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Count</ns0:cell><ns0:cell /><ns0:cell>0</ns0:cell><ns0:cell>744</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Sum</ns0:cell><ns0:cell /><ns0:cell>0</ns0:cell><ns0:cell>2,976</ns0:cell><ns0:cell>8,760</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Total</ns0:cell><ns0:cell /><ns0:cell>1,614,480</ns0:cell><ns0:cell>1,345,524</ns0:cell><ns0:cell>2,268,840</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Other</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Duration</ns0:cell><ns0:cell /><ns0:cell>7 days</ns0:cell><ns0:cell>31 days</ns0:cell><ns0:cell>365 days</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Tags</ns0:cell><ns0:cell /><ns0:cell>10</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>Initialize Application Benchmark</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Run Workload</ns0:cell></ns0:row><ns0:row><ns0:cell>Start Instances</ns0:cell><ns0:cell>Clone SUT</ns0:cell><ns0:cell>Start SUT</ns0:cell><ns0:cell>Load SUT initial Data SUT with Populate</ns0:cell><ns0:cell>SUT Shutdown</ns0:cell><ns0:cell>Start SUT</ns0:cell><ns0:cell>Execute YCSB-TS Workload</ns0:cell></ns0:row><ns0:row><ns0:cell>Inject CPU Profiler</ns0:cell><ns0:cell>Build SUT</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Shutdown SUT</ns0:cell><ns0:cell>Extract CPU Profile</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>All microbenchmarks together form a significantly larger call graph than the application benchmark (number of nodes) but, however, these by far do not cover all functions called during the application benchmarks (coverage).</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Project</ns0:cell><ns0:cell>Scenario</ns0:cell><ns0:cell>Node Type</ns0:cell><ns0:cell cols='2'>Number of Nodes</ns0:cell><ns0:cell>Coverage</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>App</ns0:cell><ns0:cell>Micro</ns0:cell><ns0:cell>Abs.</ns0:cell><ns0:cell>Rel.</ns0:cell></ns0:row><ns0:row><ns0:cell>InfluxDB</ns0:cell><ns0:cell cols='2'>Medical Monitoring overall</ns0:cell><ns0:cell>1,838</ns0:cell><ns0:cell cols='2'>3,069 1,180 64.20%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>project-only</ns0:cell><ns0:cell>737</ns0:cell><ns0:cell>1,621</ns0:cell><ns0:cell>304 41.25%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Smart Factory</ns0:cell><ns0:cell>overall</ns0:cell><ns0:cell>1,504</ns0:cell><ns0:cell>3,069</ns0:cell><ns0:cell>997 66.29%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>project-only</ns0:cell><ns0:cell>517</ns0:cell><ns0:cell>1,621</ns0:cell><ns0:cell>209 40.43%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Wind Parks</ns0:cell><ns0:cell>overall</ns0:cell><ns0:cell>1,895</ns0:cell><ns0:cell cols='2'>3,069 1,192 62.90%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>project-only</ns0:cell><ns0:cell>778</ns0:cell><ns0:cell>1,621</ns0:cell><ns0:cell>318 40.87%</ns0:cell></ns0:row><ns0:row><ns0:cell>VictoriaMetrics</ns0:cell><ns0:cell cols='2'>Medical Monitoring overall</ns0:cell><ns0:cell>1,573</ns0:cell><ns0:cell>1,125</ns0:cell><ns0:cell>691 43.93%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>project-only</ns0:cell><ns0:cell>511</ns0:cell><ns0:cell>454</ns0:cell><ns0:cell>182 35.62%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Smart Factory</ns0:cell><ns0:cell>overall</ns0:cell><ns0:cell>1,238</ns0:cell><ns0:cell>1,125</ns0:cell><ns0:cell>591 47.74%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>project-only</ns0:cell><ns0:cell>371</ns0:cell><ns0:cell>454</ns0:cell><ns0:cell>150 40.43%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Wind Parks</ns0:cell><ns0:cell>overall</ns0:cell><ns0:cell>1,600</ns0:cell><ns0:cell>1,125</ns0:cell><ns0:cell>696 43.50%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>project-only</ns0:cell><ns0:cell>542</ns0:cell><ns0:cell>454</ns0:cell><ns0:cell>207 38.19%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Pair-wise Overlap Between Different Scenarios same characteristics in general. All scenarios trigger unique functions which are not covered by other scenarios, see Table 2. For both SUTs, the Smart Factory scenario generates the smallest unique set of project-only nodes (29 unique functions for InfluxDB and 4 unique ones for VictoriaMetrics) and the Wind Parks scenario the largest one (134 functions for InfluxDB and 77 for VictoriaMetrics).</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table><ns0:note>Table3shows the detailed coverage levels.As a next step, we also analyze the coverage sets of all application benchmark call graphs to evaluate to which degree the scenarios vary and generate different call graphs. Figures5a and 5bshow the application scenario coverage as Venn diagrams for InfluxDB and VictoriaMetrics using project nodes only. Both 11/23 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53568:1:1:NEW 8 Apr 2021)Manuscript to be reviewed</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head /><ns0:label /><ns0:figDesc>All scenarios generate an individual call graph. Some functions are exclusively called in one scenario, many are called in two or all three scenarios.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Medical Monitoring</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>Medical Monitoring</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Smart Factory</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Smart Factory</ns0:cell></ns0:row><ns0:row><ns0:cell>Wind Parks</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Wind Parks</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>(a) InfluxDB</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>(b) VictoriaMetrics</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>10% 40% 50% Figure 5. 0% 20% 30% Project-only Coverage</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>1</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell /><ns0:cell>17</ns0:cell><ns0:cell>19</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>23</ns0:cell><ns0:cell>25</ns0:cell><ns0:cell>27</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='5'>Number of Microbenchmarks</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='4'>Medical Monitoring</ns0:cell><ns0:cell cols='3'>Smart Factory</ns0:cell><ns0:cell cols='3'>Wind Parks</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>50%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>40%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Project-only Coverage</ns0:cell><ns0:cell>20% 30%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>10%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>0%</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>14</ns0:cell></ns0:row><ns0:row><ns0:cell 
/><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='5'>Number of Microbenchmarks</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='4'>Medical Monitoring</ns0:cell><ns0:cell cols='3'>Smart Factory</ns0:cell><ns0:cell cols='3'>Wind Parks</ns0:cell><ns0:cell /></ns0:row></ns0:table><ns0:note>12/23 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53568:1:1:NEW 8 Apr 2021) Manuscript to be reviewed Computer Science (a) InfluxDB (b) VictoriaMetrics Figure 6. Already the first four microbenchmarks selecting by Algorithm 1 cover 28% to 31% for InfluxDB and 29% to 34% for VictoriaMetrics of the respective application benchmark's call graph. 13/23 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53568:1:1:NEW 8 Apr 2021)</ns0:note></ns0:figure>
<ns0:note place='foot' n='7'>https://golang.org/pkg/runtime/pprof 8 https://www.influxdata.com 9 https://victoriametrics.com 10 https://github.com/TSDBBench/YCSB-TS 11 https://github.com/martingrambow/YCSB-TS 12 https://github.com/martingrambow/YCSB-TS/tree/master/workloads 9/23 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53568:1:1:NEW 8 Apr 2021)</ns0:note>
<ns0:note place='foot' n='23'>/23 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53568:1:1:NEW 8 Apr 2021)Manuscript to be reviewed Computer Science</ns0:note>
</ns0:body>
" | "Mar 10, 2021
Dear Editor and Reviewers,
we would like to thank you for your time and effort to provide us constructive feedback on our submission “Using application benchmark call graphs to quantify and improve the practical relevance of microbenchmark suites”. We have revised the manuscript and have considered all your comments.
Below, we describe how we addressed the raised concerns and explain what changes we made to improve our paper. To easier distinguish between your comments and our notes, our notes are in green.
Kind regards,
Martin Grambow, Christoph Laaber, Philipp Leitner, and David Bermbach.
Reviewer 1 (Edd Barrett)
Basic reporting
The submission describes a method by which to minimise a suite of microbenchmarks for use within a CI pipeline, where quick feedback is desirable. Given a reference 'application benchmark' and a suite of microbenchmarks, the method uses coverage metrics to boil down the microbenchmark suite into a (hopefully) representative set of bare-minimum microbenchmarks.
In addition, a method is also proposed for bulking up a too-small microbenchmark suite, by pointing out areas of code which are not yet covered.
The submission is well-written and the spelling and grammar is excellent. Referencing seems sufficient and I have no issues with the structure of the article.
The approach is well-explained and I feel that I have a decent grasp of what the authors are proposing.
Experimental design
Unfortunately, I don't think the design of the experiment is quite right.
It appears that the approach is underpinned by the assumption that microbenchmarks which cover the same functions are equal. From experience, it is not only code coverage that characterises a benchmark, but also how frequently certain parts of code are executed.
To frame this in the context of the paper's approach, suppose that a reference application benchmark AB spends 90% of its time in a tight loop calling a function F1. That means that benchmarking measurements of AB are very much coupled with the performance of F1. Inefficiencies in this function will be amplified because the function is called so frequently. If AB is truly a proxy for the production environment, then F1 had better be really well optimised!
Now suppose that there's a microbenchmark MB1 that executes (covers) exactly the same set of functions as AB, but which only calls F1 very infrequently. Suppose that in MB1 only 1% of runtime is spent in F1. The proposed method would select MB1 because, according to the paper's definition of 'practical relevance', this is the ideal microbenchmark: the coverage of both the AB and MB1 are identical. Yet performance measurements of MB1 are highly unlikely to discover inefficiencies in F1, since 99% of runtime is spent elsewhere and any performance irregularities caused by F1 are therefore lost in the noise. Thus MB1 is a poor proxy for AB and if F1 did have performance issues, MB1 would fail to find them, whereas running AB would find them immediately.
The application benchmark (if it is designed properly) always remains as the gold standard to detect all performance problems which are relevant for the daily use. The optimized microbenchmark suite, on the other hand, is a simple, cheap, and fast heuristic solution which covers all relevant parts of the software but is inherently not able to detect all issues, there will be false positives and false negatives.
For our approach, we assume that both benchmark types comply with the best practices in the respective discipline. We do not design or implement benchmarks, but work with existing benchmarks of several software projects and there might be corner cases which are not covered: A large software project usually contains hundreds or thousands of functions which have to be evaluated in benchmarks. While a well-designed application benchmark will cover a large portion of them, microbenchmarks will usually each cover a very small fraction and only evaluate a single aspect. A situation in which “a microbenchmark MB1 […] executes (covers) exactly the same set of functions as AB” is therefore only theoretically possible, but not realistic (especially in a large project) as it describes a very bad benchmark design for both the application benchmark as well as the microbenchmark suite.
A microbenchmark suite should, just like an application benchmark, find as many performance problems as possible before releasing a new version. However, because the individual benchmarks of a microbenchmark suite only evaluate individual functions and not the interaction of the various application components, it will never be able to detect all problems.
As each microbenchmark covers an individual aspect, each microbenchmark evaluates a different set of functions. If there is an important function F1 in the project, this function is usually covered by multiple microbenchmarks, e.g., MB1 covers F1, F2, and F3; MB 2 covers F1, F3, and F4. Our approach would take both MB1 and MB2 into the relevant suite because the sets differ and the probability that a problem will be detected in F1 increases with the number of microbenchmarks which evaluate F1.
We tried to clarify this in the paper.
The experiments conducted then measure the ability for the proposed approach to minmimise the microbenchmark suite into one with similar code coverage as the application benchmark. I'd argue that instead the evaluation should check whether the reduced benchmark suite has similar ability to detect the same performance problems as the application benchmark. At the very least, I think the evaluation should test whether code coverage is a good proxy for this (I doubt it is, but I'm willing to be proven wrong).
Each application benchmark and workload depend on the individual application and use case scenario. For checking whether the reduced suite can detect the same problems as the application benchmark (or whether code coverage is a good proxy), we would need detailed domain knowledge:
First, we would need real performance issues which are well documented, e.g., a list of commits which introduce a performance issue (incl. function and class name). Unfortunately, this data set does not exist for any of the evaluated projects or cannot be generated in any straightforward way.
Second, we would need detailed domain knowledge about real use case scenarios and the microbenchmark suite. While we have implemented three different scenarios as application benchmarks to the best of our knowledge, only the real project stakeholders can create realistic and relevant application benchmarks. Moreover, only developers which know relevant functions and source code parts can identify redundant microbenchmarks (aided by our approach) and implement missing ones.
Since we are not project insiders with domain knowledge for any of the evaluated projects, we cannot provide a more detailed evaluation at this time.
To fix this approach properly, the definition of practical relevance would need to be revised to take 'hotness' (frequency of execution) of functions into account. Perhaps function nodes in the call graph need some kind of weighting applied.
As described above, we think that an approach based only on code coverage is sufficient to detect many (not all) performance problems in advance, but there is another problem with the function “hotness” because of the sampling:
If the application benchmark is designed properly and executed for a sufficient long runtime, the resulting graph will reflect all relevant functions and it will also be possible to categorize functions based on some hotness criteria. The graphs for microbenchmarks which “only” run 10 seconds each, on the other hand, will only reliably find hot functions and might miss functions which are called only a few times due to the sampling. Moreover, as the microbenchmarks each usually focus on a single aspect, the function execution frequency in a microbenchmark might differ from the frequency of the same function in the application benchmark. Thus, it will be hard to compare and match under these conditions and we decided to not take the frequency into account, but always use the whole call graph (and not only hot functions).
We tried to clarify this in the paper.
Validity of the findings
I think the validity of the findings is limited due to the problems outlined in the previous section.
Comments for the Author
How do you deal with non-determinism in benchmarks, where different parts of the CFG are executed in subsequent executions? Presumably you'd union the CFGs of some 30 runs?
While individual executions of microbenchmarks and requests in application benchmarks potentially yield different call graphs, the final call graph used in our approach will cover all functions across the repeated executions and requests. Thus, the call graphs used are the union of a large number of potentially non-deterministic individual executions/requests. Each microbenchmarks was repeatedly run for at least ten seconds and the usual duration of one invocation is usually a few milliseconds – consequently, the call graph of one execution is the union of all invocations. For the application benchmark, on the other hand, the same principle applies because of the individual requests (see Table 2) which each potentially generate different inner calls, but these finally sum up to a deterministic call graph.
In Section 3.1 I wasn't sure where 6/16 comes from. Should it be 6/19?
We have updated the text, thanks for your comment.
'our proof of concept simply selects a random coverage set in case of a tie' -- This introduces non-determinism into your algorithm. Should you be repeating your experiments (say) 30 times and reporting some measure of variation then? You also mention sampling to get the CFG, which is another source of non-determinism.
Here, our text was misleading, we adjusted the text accordingly:
We introduced some secondary sorting criteria in our implementation (based on ordering in a Hashset which seems to be random, yet is deterministic) but did not encounter any tie situations in our study. Therefore, we believe that these ties are very rare in practice and do not warrant much implementation effort towards finding optimal strategies for ties.
Our approach is designed to work easily, without adaptions, builds on existing microbenchmarks, and advises developers which microbenchmarks are redundant or missing. The actual decision which benchmarks can be skipped, or which should be implemented, must always be made by a project insider. If there is a non-deterministic situation, it can always easily be detected by executing the respective benchmark or algorithm multiple times and a project insider can make an informed decision.
Regarding the sampling we refer to our previous answer that the non-determinism introduced by sampling is mitigated by the number of executions/requests.
'covers the application benchmark to the same extend' -> '... to the same extent'
We have updated the text, thanks for your comment.
'coverage of all covered function' -> 'coverage of all covered functions' or even better 'coverage of all executed functions'.
We have updated the text, thanks for your comment.
Algorithm 2: '{ A | a \in A \wedge ...}' should that be '{ a | a \in A ...}' (lower case 'a' in the set comprehension)?
We have updated the text, thanks for your comment.
Reviewer 2 (Muhammad Arshad Islam)
Basic reporting
Following are a few ambiguous statements that should be reviewed.
'Depending on the application scenario, the 288 microbenchmarks from which we extracted call graphs were reduced to either 19, 25, or 27.'
The context of private and business customers is not clear.
'e.g., if a function calls different other functions depending on whether or not a value corresponds to a business customer, the respective benchmark workload should represent private and business customers in the same distribution...'
'Thus, the determined coverage corresponds to a maximum value, and the actual value might be smaller. '
'Finally, synthesizing microbenchmarks could be a way to increase coverage of important parts of an application, as, for example, identified by an application benchmark.'
We have updated the respective sentences, thanks for your comments.
Experimental design
One of the objectives of the article is to remove the redundancies in micro-benchmark suites. Authors should address the payload aspect because behavior of the redundant benchmark may vary depending on the inherit nature of the payload.
For our approach, we assume that both benchmark types comply with the best practices in the respective benchmark class. We do not design or implement benchmarks, but work with existing benchmarks of several software projects. The payload design is part of the benchmark design in general and can only be done by project insiders. In particular, this means that any payload aspects that matter to the system under test should already be covered by the respective benchmarks.
Usually, each microbenchmark covers an individual aspect and frequently calls the respective function. A specialized microbenchmark, which calls a function with a special payload (besides the usual payload), would usually be implemented in a separate microbenchmark to avoid situation in which the microbenchmark’s results depend on the actual payload, i.e., there can be multiple microbenchmarks which evaluate the same function. Complex application benchmarks, on the other hand, are tightly coupled with the payload and we covered this aspect in our evaluation by designing three different use cases.
Moreover, as each microbenchmark covers an individual aspect, each microbenchmark evaluates a different set of functions. If there is an important function F1 in the project, this function is usually covered by multiple microbenchmarks, e.g., MB1 covers F1, F2, and F3; MB 2 covers F1, F3, and F4. Each of these microbenchmarks can be expected to use different payloads (if payload matters). Our approach would include both MB1 and MB2 into the relevant suite because the sets differ and the probability that a problem will be detected in F1 increases with the number of microbenchmarks which evaluate F1.
We tried to clarify this in the paper.
Call graph related approaches are usually used in software testing domain. How can the definition of SUT be satisfied if one function is called only once in the micro benchmark? Authors have agreed that benchmark cannot be a perfect representative of a real application but in this case, a system cannot be characterized as 'under stress' if a benchmark function is called only once. The authors do claim in the last section that benchmark relevance can be increased up to 100% that seems contradictory to previous statement.
Our approach is based on the sampling outcome of the profiling tool. The probability that a function that is called only once or twice during the entire benchmark is called at the exact time a sample is taken is extremely low. If the application benchmark is designed properly and executed for a sufficient long runtime, the resulting graph will reflect all relevant functions and will miss functions which are only called once or twice during the benchmark (but will find all functions which are executed multiple times and hence are relevant). The graphs for microbenchmarks which “only” run 10 seconds each, on the other hand, will only reliably find functions which are executed very frequently and there is a higher chance to miss functions which are called only a few times due to the sampling.
Furthermore, Microbenchmarks are usually not a stress test as it is hard to set a system under stress with a few hundreds or thousands of function invocations. Our approach is, to the best of our knowledge, the first one which tries to combine both application benchmarks and microbenchmarks to create synergies.
We tried to clarify this in the paper.
The paper has built the proposed technique using the two algorithms of Chen and Lau (1998) and Rothermel et al. (1999) that overlap with software testing domain. Considerable work has been done in the software testing domain to ensure comprehensive coverage. The authors should provide arguments that these relatively old techniques can achieve desired goals in all kinds of applications.
Both algorithms are standard algorithms for this domain and already have proven to achieve the desired goals. We added the publication 'How Do Static and Dynamic Test Case Prioritization Techniques Perform on Modern Software Systems? An Extensive Study on GitHub Projects' by Luo et al. (IEEE TSE '19) to our reference list and tried to clarify this in the paper. It shows that both algorithms can still keep up with modern approaches very well and, although they are somewhat older, still are state of the art.
Why is there a need for calling SortSet(C) in each iteration when C does no change during the later steps in Algo 1?
The sets in C change with every iteration (line 9-11) after the set with the largest coverage is added to the minimal set. Next, the covered functions must be removed from all remaining sets in C (line 9-11). Thus, the old sorted list might not be up to date and must be sorted again.
E.g.:
MB1 covers 4 functions: f1, f2, f3, f4
MB2 covers 3 functions: f2, f3, f5
MB3 covers 2 function: f5, f6
The algorithm would add MB1 to the minimal set and remove f1, f2, f3, and f4 from the sets. The old sorting (MB2 before MB3) must be updated because MB3 now covers more functions (f5 and f6) than MB2 (only f5).
How n will be determined representing the number of benchmarks to be recommended used in Algo 2?Can it not be determined heuristically?
The application benchmark (if it is designed properly) always remains as the gold standard to detect all performance problems which are relevant for the daily use. The optimized microbenchmark suite, on the other hand, is a simple, cheap, and fast heuristic solution which covers all relevant parts of the software problem but is by definition not able to detect all issues correctly, there will be false positives and false negatives.
The decision on what value to use for n in our heuristic depends on the individual use case and is a trade-off between accuracy (high n) and speed (low n). If the optimized microbenchmark suite should be as accurate as possible and cover all functions, n must be high, and the improved suite will run longer than a suite with fewer microbenchmarks (low n).
Only developers or project insiders can decide on n and it always depends on the concrete use case. Thus, we evaluated several values for n and showed the impact in our empirical evaluation. As an example, a “low n” suite could be run in the IDE for every build and a “high n” suite could be run on every git commit.
Figure 2 presents an example with 2 entry points. How efficient will be the proposed technique in case the redundancy is not at application level?
As application benchmarks evaluate the interactions of several components, multiple entry points are common. For microbenchmark which usually just evaluate a single function (which itself calls other functions to form the call graph), however, multiple entry points are very rare. Nevertheless, the approach would also work with multiple entry points for a microbenchmark.
Our evaluation identifies redundancies among microbenchmarks with respect to what they cover in the application benchmark. In general, however, it can also be applied using a profiling dump from a production system.
Figure 2 shows 19 nodes and its is not clear that how overall coverage comes out to be 6/16?
We have updated the text, thanks for your comment.
Validity of the findings
Various results have been presented that are related with the RQ2 however the the crucial aspect of 'how to improve execution efficiency' is not obvious from the results and discussion.
The runtime duration of the microbenchmark suite is reduced because fewer microbenchmarks are included while still covering all practically relevant functions.
We adapted RQ2.
The authors have suggested that the proposed approach is valid for application that allow profiling in the context of call graphs. Is there any estimate for the percentage of applications that have this characteristic?
There are many profilers available for all common programming languages. Despite that, it is always possible to implement a simple own profiling tool by instrumenting the source code with detailed logging statements which can be analyzed after a benchmark run to generate the call graphs. We used Go applications as Go already comes with a built-in profiling tool.
We tried to clarify this in the paper.
Authors should explore the reasons that all 3 applications behave very similarly in the context of Figure 7. It can be argued using Figure 7 that there is no need to include more than 5-6 benchmarks. Is this result dependent on the ordering of the inclusion of the benchmarks?
As described above, this decision depends on the use case and is trade-off between accuracy (high n) and speed (low n). Application benchmarks cover the core functionality of a software system. For our evaluation using time series databases, this core functionality is storing values and calling functions in HTTP interfaces (where the workload arrives). Thus, the difference between the different load profiles is quite small and reflects the particular characteristics of the respective workload, e.g., a special algorithm which is triggered because of some characteristics in the load. Thus, the first recommended benchmarks will usually be identical for all use cases as they reflect some core functionality and the final result does not depend on the inclusion order. Nevertheless, as the specific characteristics of a workload show up for later recommended ones, the inclusion order might have an impact on the final result.
To sum up: The order can have an impact on the order of the next chosen benchmark.
If 40% coverage is provided by the micro-benchmarks used in this experiment, how satisfied is the community that have been using these micro-benchmarks? Were they getting any contradicting results? If yes, are their any efforts to replace these benchmarks with better micro-benchmarks?
Both study objects are open source projects and it is unclear why/when the microbenchmark were created. Usually microbenchmarks are written for critical functions, after a performance issue, or to evaluate a new feature. Moreover, it is unclear when these microbenchmarks are used, e.g., are there some developers who regularly execute the suite on their own devices or is there a build pipeline which executes the suite for every build. Thus, it is hard to evaluate this point at this time, but your comment raises an interesting point for future research.
" | Here is a paper. Please give your review comments after reading it. |
108 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Due to the application of vital signs in expert systems, new approaches have emerged, and vital signals have been gaining space in biometrics. One of these signals is the electroencephalogram (EEG). The motor task a subject is performing, or even imagining, influences the pattern of brain waves and disturbs the acquired signal. In this work, biometrics with the EEG signal are explored from a cross-task perspective. Based on deep convolutional networks (CNN) and Squeeze-and-Excitation blocks, a novel method is developed to produce a deep EEG signal descriptor and to assess the impact of the motor task performed during acquisition on biometric verification. The Physionet EEG Motor Movement/Imagery Dataset, which contains 64-channel EEG recordings from 109 subjects performing different tasks, is used for method evaluation. Since the volume of data provided by the dataset is not large enough to effectively train a deep CNN model, a data augmentation technique is also proposed to achieve better performance. An evaluation protocol is proposed to assess the robustness regarding the number of EEG channels and to enforce train and test sets without individual overlap. A new state-of-the-art result is achieved for the cross-task scenario (EER of 0.1%), and the Squeeze-and-Excitation based networks overcome the simple CNN architecture in three out of four cross-individual scenarios.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Nowadays, different biometric modalities are being explored. They have been gradually replacing login/password systems, as they represent the future path in terms of digital security. Among them, one of the categories that has gained attention is the vital-signal-based biometric modalities, such as the Electrocardiogram (ECG) <ns0:ref type='bibr' target='#b23'>(Luz et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b35'>Silva et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b12'>Garcia et al., 2017)</ns0:ref> and the Electroencephalogram (EEG) <ns0:ref type='bibr' target='#b34'>(Schons et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b29'>Poulos et al., 1999;</ns0:ref><ns0:ref type='bibr' target='#b3'>Carrión-Ojeda et al., 2020)</ns0:ref>. The EEG, initially used for clinical purposes, has already been adopted by consumers and incorporated into portable devices for applications not directly related to medicine, as in the gaming industry and education.</ns0:p><ns0:p>Many authors have addressed the EEG signal as a biometric modality with traditional machine learning techniques <ns0:ref type='bibr' target='#b10'>(Fraschini et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b36'>Singh et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b44'>Yang et al., 2018)</ns0:ref>. Today, deep learning represents the state of the art for several computer vision and pattern recognition problems, and there are also works in the literature addressing deep-learning-based biometrics on EEG (<ns0:ref type='bibr' target='#b25'>Ma et al., 2015</ns0:ref>; <ns0:ref type='bibr' target='#b4'>Das et al., 2017</ns0:ref>; <ns0:ref type='bibr' target='#b5'>Das et al., 2018</ns0:ref>; <ns0:ref type='bibr' target='#b27'>Mao et al., 2017</ns0:ref>; <ns0:ref type='bibr' target='#b41'>Wang et al., 2019</ns0:ref>; <ns0:ref type='bibr' target='#b42'>Wilaiprasitporn et al., 2019</ns0:ref>). It is well known that methods based on deep learning, especially deep convolutional architectures, need large amounts of data; a workaround for this issue is to artificially create new samples, as done by data augmentation techniques.</ns0:p><ns0:p>In a preliminary work <ns0:ref type='bibr' target='#b34'>(Schons et al., 2017)</ns0:ref>, a data augmentation technique was proposed to increase the training data, improving the network performance and enabling the deep learning model to converge for the Physionet EEG Motor Movement/Imagery Dataset.</ns0:p><ns0:p>Previous works have shown evidence that the EEG retains biometric potential under the performance of different tasks <ns0:ref type='bibr' target='#b39'>(Vinothkumar et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b8'>DelPozo-Banos et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b19'>Kong et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b21'>Kumar et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b11'>Fraschini et al., 2019)</ns0:ref>. In this work, the EEG is explored as a biometric modality under this challenging multi-task perspective. In a real situation, a person can be in motion, or even imagining an action, thereby generating interference that is captured by the EEG signal. Thus, a deep feature descriptor approach is proposed in order to mitigate the impact of movement/imagery tasks on EEG biometrics. An evaluation of whether EEG biometrics benefit from any specific task, or from its nature (motor or imaginary), is conducted.</ns0:p>
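To make the augmentation idea concrete, a minimal sketch of overlap-based cropping is given below. It assumes a raw recording stored as a NumPy array of shape (n_channels, n_samples); the window length, stride, and function name are illustrative and not the exact parameters of (Schons et al., 2017).

```python
import numpy as np

def sliding_windows(signal, window_len, stride):
    """Cut one EEG recording of shape (n_channels, n_samples) into
    overlapping windows; a stride smaller than window_len yields
    overlapping crops, multiplying the number of training examples."""
    n_channels, n_samples = signal.shape
    windows = []
    for start in range(0, n_samples - window_len + 1, stride):
        windows.append(signal[:, start:start + window_len])
    return np.stack(windows)  # (n_windows, n_channels, window_len)

# Example: a 60 s recording at 160 Hz, 12 s windows, 90% overlap.
fs = 160
raw = np.random.randn(64, 60 * fs)            # placeholder for a real record
crops = sliding_windows(raw, window_len=12 * fs, stride=int(1.2 * fs))
print(crops.shape)                            # e.g., (41, 64, 1920)
```

The amount of overlap controls the trade-off between the number of generated samples and their redundancy.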
The relevance of the number of channels is also investigated, as well as a more robust model based on Squeeze-and-Excitation blocks <ns0:ref type='bibr' target='#b16'>(Hu et al., 2018)</ns0:ref>. Squeeze-and-Excitation blocks promote a channel-wise recalibration of feature responses; since the EEG signal can have up to 64 channels and each channel captures different nuances of brain activity, these blocks are a promising research path for this type of biometric modality.</ns0:p><ns0:p>This work extends the one presented at the 22nd CIARP (Iberoamerican Congress on Pattern Recognition) <ns0:ref type='bibr' target='#b34'>(Schons et al., 2017)</ns0:ref>. Here, the following improvements are described:</ns0:p><ns0:p>• It presents results for cross-task (one task is used to train and another to test), multi-task (two or more tasks are used to train and another to test), and cross-individual (the individuals used to train are not used to test) scenarios, extending the approach shown in <ns0:ref type='bibr' target='#b34'>(Schons et al., 2017)</ns0:ref> and considering motor task interference.</ns0:p><ns0:p>• It presents an extensive literature review.</ns0:p><ns0:p>• It explores Squeeze-and-Excitation blocks instead of conventional CNN blocks (a sketch of such a block is given at the end of this section).</ns0:p><ns0:p>• It provides a better description of the methodology/experimentation process and a detailed analysis regarding parameter tuning.</ns0:p><ns0:p>The contributions of this work are summarized as follows:</ns0:p><ns0:p>• A new state-of-the-art method for EEG-based biometric verification, evaluated in a protocol targeting a cross-task scenario.</ns0:p><ns0:p>• A data augmentation technique based on the overlap between signal crops used during the training phase.</ns0:p><ns0:p>• Competitive results with fewer EEG channels.</ns0:p>
<ns0:div><ns0:p>• An in-depth discussion about the implications of using multiple motor/imaginary tasks for biometrics with EEG.</ns0:p><ns0:p>The results reported in this work corroborate the feasibility of the EEG as a biometric modality, since evaluations carried out with different tasks in different runs yielded promising results. However, our experiments reveal that physical motor tasks (as opposed to imaginary ones) hinder identification, making the problem more challenging.</ns0:p><ns0:p>With more ubiquitous acquisition methods (as illustrated in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>) and the advancement of hardware accelerators for deep learning, there is large potential for its usage in real-life applications.</ns0:p><ns0:p>The remainder of this work is organized as follows. Related works are presented in the second section. In the third section, the proposed methodology is presented, while the experiments and results are described in the fourth section. Finally, the conclusions are pointed out in the fifth section.</ns0:p></ns0:div>
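As a reference for the Squeeze-and-Excitation blocks mentioned above, the sketch below shows a minimal 1-D variant in Keras/TensorFlow. The layer sizes, window length, and reduction ratio are illustrative assumptions and do not reproduce the exact architecture evaluated in this work.

```python
import tensorflow as tf
from tensorflow.keras import layers

def se_block_1d(x, reduction=16):
    """Squeeze-and-Excitation for 1-D feature maps (batch, time, channels):
    squeeze via global average pooling, excite via a two-layer bottleneck,
    then rescale each feature channel by its learned importance."""
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling1D()(x)                  # squeeze
    e = layers.Dense(channels // reduction, activation='relu')(s)
    e = layers.Dense(channels, activation='sigmoid')(e)     # excitation
    e = layers.Reshape((1, channels))(e)
    return layers.Multiply()([x, e])                        # channel-wise rescale

# Toy usage: one convolutional block followed by an SE block.
inp = layers.Input(shape=(1920, 64))     # 12 s window, 64 EEG channels
h = layers.Conv1D(96, kernel_size=11, activation='relu')(inp)
h = se_block_1d(h, reduction=16)
model = tf.keras.Model(inp, h)
```

The reduction ratio trades off the capacity of the excitation bottleneck against the number of extra parameters added per block.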
<ns0:div><ns0:head>DATASET AND RELATED WORKS</ns0:head><ns0:p>In this section, a popular benchmark dataset is presented: the EEG Motor Movement/Imagery dataset <ns0:ref type='bibr' target='#b10'>(Fraschini et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b44'>Yang et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b38'>Sun et al., 2019)</ns0:ref>, available in the Physionet Resource 1 . The EEG Motor Movement/Imagery dataset is one of the most comprehensive EEG signal datasets and also includes motor tasks during the acquisition sessions.</ns0:p><ns0:p>The majority of the works presented in this section use the EEG Motor Movement/Imagery dataset <ns0:ref type='bibr' target='#b14'>(Goldberger et al., 2000;</ns0:ref><ns0:ref type='bibr' target='#b33'>Schalk et al., 2004)</ns0:ref>. However, other datasets were also considered by authors in the literature; thus, the presented methods are categorized according to the signal type used in the evaluation: baseline task (single-task) and multi-task.</ns0:p></ns0:div>
<ns0:div><ns0:head>EEG Motor Movement/Imagery Dataset</ns0:head><ns0:p>The EEG Motor Movement/Imagery dataset is a popular open-access dataset that consists of over 1,500 recordings of EEG signals obtained from 109 volunteers, with one- or two-minute lengths. The EEG signal was obtained from 64 electrodes distributed over the scalp. The dataset contains a total of 109 subjects with 14 acquisition records each. Among the acquisition records, there are two motor and two imaginary task types, plus two baseline tasks.</ns0:p><ns0:p>The baseline tasks were captured in a controlled scenario in which the individual follows the command to keep the eyes open or closed at specific moments. Except for the baselines, all other tasks have three acquisition sessions.</ns0:p><ns0:p>Recording three sessions of each of the four motor/imaginary tasks results in 12 sessions: four distinct motor or imaginary task types (T1-T4), each one with three runs (R1-R3). The activities assigned to the tasks are:</ns0:p><ns0:p>• Task 1 (T1): On a screen, a target appears on the left or right side, and the subject is requested to open and close the corresponding fist on the same side until the target disappears. After that, the subject may relax.</ns0:p><ns0:p>• Task 2 (T2): On a screen, a target appears on the left or right side, and the subject is requested to imagine opening and closing the corresponding fist on the same side until the target disappears. After that, the subject may relax.</ns0:p><ns0:p>• Task 3 (T3): On a screen, a target appears at the top or bottom of the screen. If the target is on the top, the subject is asked to open and close both fists; if the target is on the bottom, the subject is requested to open and close both feet. This task runs until the target disappears. After that, the subject may relax.</ns0:p><ns0:p>• Task 4 (T4): On a screen, a target appears at the top or bottom of the screen. If the target is on the top, the subject is asked to imagine opening and closing both fists; if the target is on the bottom, the subject is requested to imagine opening and closing both feet. This task runs until the target disappears. After that, the subject may relax.</ns0:p></ns0:div>
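For illustration, the snippet below sketches how these runs can be fetched with the MNE-Python package, whose eegbci helper downloads the PhysioNet records as EDF files; the task-to-run mapping follows the dataset layout described above (runs 1-2 are the baselines, runs 3-14 hold the three repetitions of T1-T4).

```python
import mne
from mne.datasets import eegbci

# PhysioNet run numbers: 1-2 are the baselines (eyes open / eyes closed),
# the remaining twelve runs are three repetitions of tasks T1-T4.
RUNS = {
    'baseline_open': [1], 'baseline_closed': [2],
    'T1': [3, 7, 11],   # open/close left or right fist
    'T2': [4, 8, 12],   # imagine opening/closing left or right fist
    'T3': [5, 9, 13],   # open/close both fists or both feet
    'T4': [6, 10, 14],  # imagine opening/closing both fists or both feet
}

subject = 1
files = eegbci.load_data(subject, RUNS['T1'])   # downloads/caches the EDF files
raws = [mne.io.read_raw_edf(f, preload=True) for f in files]
print(raws[0].info)   # 64 EEG channels, nominally sampled at 160 Hz
```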
<ns0:div><ns0:head>Related Works</ns0:head><ns0:p>It is possible to summarize the related works evaluated on the EEG Motor Movement/Imagery dataset into two groups: (1) the ones focused on the baseline tasks, i.e., eyes-closed (EC) and eyes-open (EO) tasks, and (2) the ones focused on multi-task recognition. In the first group, the individuals are in a resting state with the eyes closed or opened <ns0:ref type='bibr' target='#b6'>(Del Pozo-Banos et al., 2014)</ns0:ref>. In contrast, in the second group, the individuals were requested to perform simple motor or imaginary tasks.</ns0:p><ns0:p>The work in <ns0:ref type='bibr' target='#b15'>(Gui et al., 2019)</ns0:ref> presents a survey on EEG-based biometrics. The authors reviewed the state of the art for the recognition task and concluded that the EEG may have some advantages over other biometric markers. One of the significant advantages of the EEG is that the user must agree and consent to the recording of the signal, which makes the theft of this kind of signal difficult. <ns0:ref type='bibr' target='#b15'>Gui et al. (2019)</ns0:ref> also explored all steps regarding the EEG signal, from acquisition to classification, and described several EEG datasets from the literature. Moreover, the authors identified open problems related to brain biometrics, such as multi-modality and the permanence of the EEG. According to the authors, the EEG signal has great potential as a biometric modality, both from the theoretical point of view and due to the results obtained in empirical experiments.</ns0:p><ns0:p><ns0:ref type='bibr' target='#b36'>Singh et al. (2015)</ns0:ref> proposed an approach to find discriminative characteristics described by unique patterns in the relations among cerebral regions. The authors preprocessed the data by applying a low-pass filter to reduce the noise and used a technique called Magnitude Squared Coherence, which relies on the phase constancy of the interaction between brain areas, to generate the signal descriptors. The classification was performed with the K-Nearest Neighbors (KNN) algorithm. The proposed approach evaluated 64, 10, 6, and 5 channels for EEG signal capturing and achieved a final identification accuracy of 100% with 64 electrodes. This experiment corroborates the hypothesis that the more channels/information employed, the better the result. <ns0:ref type='bibr' target='#b10'>Fraschini et al. (2015)</ns0:ref> evaluated a biometric verification approach using all the 64 electrodes available in the dataset. The proposed approach can be divided into four steps: (1) filter the raw data to allow a study of specific frequency bands of the cerebral network; (2) estimate the statistical independence in pairs among EEG time series; (3) create a weighted graph where each edge represents a functional connection between the corresponding electrodes on the head; and (4) characterize the cerebral functional organization with a centrality measure that quantifies the importance of each node in the graph. The result is a 64-length vector used for classification, with the results segmented by frequency band. The authors reported an EER of 4.4% using the gamma band (30-50 Hz) and observed that individual verification becomes more difficult as the frequency band gets slower.</ns0:p>
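As an illustration of the Magnitude Squared Coherence descriptor used by Singh et al. (2015), the sketch below computes the coherence between two electrode signals with SciPy and averages it over one frequency band; the signals, window length, and band are illustrative placeholders, not the original paper's configuration.

```python
import numpy as np
from scipy.signal import coherence

fs = 160                                 # sampling rate of the dataset
x = np.random.randn(60 * fs)             # placeholders for two electrode signals
y = np.random.randn(60 * fs)

# Cxy(f) = |Pxy|^2 / (Pxx * Pyy): values near 1 indicate a stable phase
# relation between two brain regions at frequency f.
f, cxy = coherence(x, y, fs=fs, nperseg=2 * fs)
alpha = cxy[(f >= 8) & (f <= 13)].mean()  # e.g., mean coherence in the alpha band
print(round(alpha, 3))
```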
<ns0:p>CNNs have also been evaluated for the EEG problem. <ns0:ref type='bibr' target='#b25'>Ma et al. (2015)</ns0:ref> created a CNN architecture with two convolution layers, two pooling layers, and one fully connected layer. However, the authors did not use all 109 individuals; instead, they evaluated their approach with 10 individuals for testing. The authors divided the 60 seconds of the EO and EC records into 55 one-second fragments for training and used the remaining five seconds for testing. The identification rate reported was in the range of 64% to 86%. <ns0:ref type='bibr' target='#b4'>Das et al. (2017)</ns0:ref> evaluated a CNN with four convolutional layers, two max-pooling layers, one rectified linear unit (ReLU), and a softmax-loss layer. The dataset used to perform the experiments was acquired from 40 subjects over two distinct sessions separated by a week in a visual stimuli environment. The correct recognition rate (CRR) reported was between 80.65% and 98.8% in the best scenario. <ns0:ref type='bibr' target='#b5'>Das et al. (2018)</ns0:ref> also performed EEG biometry with a CNN containing four convolutional layers, two max-pooling layers, a ReLU, and a softmax-loss layer. The dataset evaluated consists of 40 subjects performing imaginary arm and leg movements, collected in two sessions with an interval of two weeks. The authors achieved accuracy scores of 81.25% and 93% for rank-1 and rank-2, respectively. <ns0:ref type='bibr' target='#b27'>Mao et al. (2017)</ns0:ref> proposed another approach using a CNN. The authors used convolutional layers with ReLU and max-pooling, followed by fully connected layers with softmax. The evaluation was made on a dataset containing data from 100 subjects using 64 channels during a driving experiment. The CRR achieved was 97%. <ns0:ref type='bibr' target='#b26'>Maiorana and Campisi (2018)</ns0:ref> analyzed the discriminative characteristics of the EEG in a longitudinal setting, aiming to verify their permanence across time. The study utilized signals captured from 45 users in five to six sessions over approximately 36 weeks after the first EEG recording. The results showed that aging can damage the EEG traits, yet the authors could achieve an EER below 2% between data collections distant in time.</ns0:p>
<ns0:p><ns0:ref type='bibr' target='#b41'>Wang et al. (2019)</ns0:ref> evaluated individual identification in a cross-task scenario. The authors used all the 109 individuals of the EEG Motor Movement/Imagery Dataset and also created a dataset with 59 subjects performing attention and descriptive tasks. The proposed approach is based on a Graph Convolutional Neural Network (GCNN) that extracts discriminating features from EEG graphs created by the Phase Locking Value (PLV) algorithm. Using the same task to train and test, the authors reported average CRRs of 99.96% and 98.94% for the EEG Motor Movement/Imagery Dataset and the created dataset, respectively. The CRR degraded to 86.21% when the model was trained on resting-state data and tested on diverse states in the EEG Motor Movement/Imagery Dataset. However, mixing signals from different states for training the GCNN resulted in an average CRR of 99.98% on the Physionet dataset and 99.96% on the authors' own acquisition dataset.</ns0:p><ns0:p><ns0:ref type='bibr' target='#b44'>Yang et al. (2018)</ns0:ref> also evaluated the impact of motor/imaginary tasks on biometric identification and verification. The authors applied a discrete wavelet transform to each electrode and created the feature vector as the concatenation of the standard deviations of all electrodes. The vector created is used as the input to a Linear Discriminant Analysis (LDA) classifier, and a majority voting rule merges the classifier decisions of all time windows (a sketch of such a wavelet-based descriptor is given at the end of this section). In one of the experiments performed, the authors trained on tasks T1 + T2, tested on T1R2 using 9 electrodes, and reached an identification accuracy of 100% and an EER of 2.63%. They also analyzed the impact of using only 3 channels and achieved a maximum accuracy of 89% when training with T1R1 + T1R3 and testing with T1R2. In the verification scenario, the EER is approximately three times greater with the reduced number of electrodes.</ns0:p><ns0:p>To find the best set of electrodes for an identification problem, <ns0:ref type='bibr' target='#b0'>Alyasseri et al. (2020)</ns0:ref> used the EEG Motor Movement/Imagery Dataset as a single matrix containing the signals from all tasks performed by the 109 individuals. The authors propose a hybrid optimization technique to find the most relevant channels from which to extract discriminant characteristics, using the Flower Pollination Algorithm (FPA) and the β-hill-climbing algorithm (β-hc). Besides, the proposed approach presented a study of the best domain in which to extract the characteristics of an EEG signal, comparing the use of characteristics in the time domain, in the frequency domain, and in both simultaneously. The hybrid FPAβ-hc method obtained better results than the isolated FPA in most cases. The reported accuracy of 96.05% was achieved using a Support Vector Machine (SVM) classifier with the RBF kernel and characteristics in both the time and frequency domains simultaneously.</ns0:p><ns0:p>In the work proposed in <ns0:ref type='bibr' target='#b38'>(Sun et al., 2019)</ns0:ref>, the authors applied a hybrid convolutional and recurrent deep neural network with Long Short-Term Memory (LSTM) units for individual identification. The authors tested the use of 4, 16, 32, and 64 electrodes. Besides, instead of using 12-second segments, a 1-second segment was used, reaching an EER of 0.41% with 16 electrodes. The focus of their discussion is the trade-off between performance and the use of electrodes/recording time.
The authors weighed the loss in EER against the reduction in the number of electrodes and, according to them, 12-second segments are not feasible for real applications. Despite the outstanding results presented by the authors, it is worth highlighting that their protocol is different from the one shown in this work and in <ns0:ref type='bibr' target='#b44'>(Yang et al., 2018)</ns0:ref>.</ns0:p><ns0:p>The authors evaluated two scenarios different from the one presented in <ns0:ref type='bibr' target='#b44'>(Yang et al., 2018)</ns0:ref>. In the first scenario, the authors use 90% of the data to train/validate and 10% to test; that is, signals of the same individual are present in both training and testing. This scenario is more straightforward than the one proposed by <ns0:ref type='bibr' target='#b44'>Yang et al. (2018)</ns0:ref>. The reported results cannot be compared to those presented in <ns0:ref type='bibr' target='#b44'>(Yang et al., 2018)</ns0:ref>, since only a one-second signal is used and the evaluation protocol is different.</ns0:p><ns0:p>Some works in the literature do not use the EEG Motor Movement/Imagery dataset but present relevant results and alternative techniques that inspired the development of this work. The work of <ns0:ref type='bibr' target='#b9'>El-Fiqi et al. (2018)</ns0:ref> proposed a CNN with raw steady-state visual evoked potentials (SSVEPs) for individual identification and verification. In the SSVEP acquisition protocol, the subject focuses attention on a repetitive visual stimulus while the activity response signals are recorded. The study explores two SSVEP datasets with raw data of four and ten subjects, who were seated and focused on three groups of flashing LED stimuli blinking at 13Hz, 17Hz, and 21Hz. The signals were recorded at 256Hz by eight electrodes placed on the parietal-occipital area. The performance of three classical methods (support vector machine, random forest, and a shallow feed-forward network with one layer) with spectral features was compared against the proposed CNN method, which works directly with the raw signal. The deep learning-based approach achieved an average identification accuracy of 96.80% and an average verification accuracy of 98.34%. These results outperform those obtained by the classical methods. Besides, the proposed CNN needs no complex techniques for feature representation or extraction to achieve good results, making it feasible for real-time systems.</ns0:p></ns0:div>
<ns0:div><ns0:head>PROPOSED APPROACH AND EVALUATION SCENARIOS</ns0:head><ns0:p>In this section, the proposed method is presented and outlined as the pipeline shown in Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>. The method is evaluated on the EEG Motor Movement/Imagery Dataset <ns0:ref type='bibr' target='#b14'>(Goldberger et al., 2000;</ns0:ref><ns0:ref type='bibr' target='#b33'>Schalk et al., 2004)</ns0:ref>, under the scenarios evaluated in this work and presented in the Evaluation scenarios subsection. In order to execute the pipeline proposed in Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>, a preprocessing methodology is developed, along with the data augmentation protocol, the CNN model, the respective data representation, and the evaluation process. The steps presented in Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref> are described in detail in the next subsections.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data preprocessing</ns0:head><ns0:p>Biometrics, disease detection, and brain death detection are some of the many applications that can be built upon the EEG signal. Regardless of the application, preprocessing is required, since the signal is very susceptible to noise. Several approaches have been used in the literature for data processing, such as the Common Spatial Pattern (CSP) <ns0:ref type='bibr' target='#b45'>(Yong et al., 2008)</ns0:ref>, Wavelet transforms <ns0:ref type='bibr' target='#b22'>(Kumar et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b13'>Ghandeharion and Ahmadi-Noubari, 2009)</ns0:ref>, and Independent Component Analysis (ICA) <ns0:ref type='bibr' target='#b7'>(Delorme et al., 2007)</ns0:ref>.</ns0:p><ns0:p>For biometric purposes, band-pass FIR/IIR filters are popular preprocessing techniques. Since each frequency is related to a specific cerebral activity <ns0:ref type='bibr' target='#b1'>(Boubakeur et al., 2017)</ns0:ref>, filtering out unwanted frequency bands aids the detection process. In this work, a band-pass filter is applied, targeting only the intermediate frequency bands of the signal spectrum. Since the data acquired by the equipment used in the Physionet EEG Motor Movement/Imagery Dataset was sampled at 160Hz, it has well-defined components up to 80Hz. Besides, it is not usual to employ the full spectrum, since some frequency bands are more discriminant than others for biometry <ns0:ref type='bibr' target='#b10'>(Fraschini et al., 2015)</ns0:ref>. According to <ns0:ref type='bibr' target='#b43'>Yang and Deravi (2017)</ns0:ref>, signals related to EEG biometrics have higher energy in the frequency spectrum below 50Hz. In that manner, filters are built for the following bands: 01-50Hz, 10-30Hz, and the gamma band (30-50Hz), following the ones proposed by <ns0:ref type='bibr' target='#b10'>Fraschini et al. (2015)</ns0:ref>.</ns0:p><ns0:p>The division of the data into training and testing sets is defined according to the evaluated scenario. For the first one (baseline task evaluation), the EC data is used for testing. For the second scenario (multi-task evaluation), the data division proposed by Yang and Deravi (2017) is followed. In the third scenario (cross-task evaluation), proposed in this work, T1R2 is used for testing. Finally, in the fourth scenario (cross-individual evaluation), also proposed in this work, the EC and EO data are used for testing.</ns0:p></ns0:div>
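<ns0:p>To make the filtering step concrete, the snippet below sketches how the three band-pass filters could be implemented. It is a minimal illustration assuming SciPy; the Butterworth design, the filter order, and the zero-phase filtering are our assumptions, as the paper does not report these implementation details.</ns0:p>
```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 160  # sampling rate of the Physionet EEG Motor Movement/Imagery Dataset (Hz)

def bandpass(signal, low_hz, high_hz, fs=FS, order=4):
    """Zero-phase band-pass filter applied along the time axis.

    `signal` has shape (n_channels, n_samples). The filter order and the
    use of filtfilt are illustrative assumptions, not reported choices.
    """
    nyq = fs / 2.0
    b, a = butter(order, [low_hz / nyq, high_hz / nyq], btype="band")
    return filtfilt(b, a, signal, axis=-1)

# The three bands evaluated in this work, following Fraschini et al. (2015):
bands = {"01-50": (1, 50), "10-30": (10, 30), "30-50": (30, 50)}

eeg = np.random.randn(64, 60 * FS)  # toy stand-in for one 60-second recording
filtered = {name: bandpass(eeg, lo, hi) for name, (lo, hi) in bands.items()}
```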
<ns0:div><ns0:head>Data augmentation</ns0:head><ns0:p>The evaluation protocol in <ns0:ref type='bibr' target='#b10'>(Fraschini et al., 2015)</ns0:ref> proposes the division of the EEG data into 12-second segments (1,920 samples). Since the motor/imagery task recordings last 120 seconds, each task yields a total of 10 non-overlapping segments of 12 seconds. From each of the two 60-second baseline recording sessions (EO and EC), five segments are created, disregarding overlap. A total of only five or ten segments per individual is not enough for the CNN training to converge. A workaround for this issue is a data augmentation technique applied to the training set. The strategy consists of abundantly extracting overlapping segments from the EEG recordings. In this work, the impact of this overlap-based data augmentation technique is investigated. The technique creates new segments with a sliding window that moves by a constant step called the stride. Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref> shows an example of the data augmentation technique used.</ns0:p></ns0:div>
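<ns0:p>A minimal sketch of the sliding-window augmentation follows. The window and stride are expressed in samples (12 seconds at 160Hz gives a 1,920-sample window); keeping only fully contained windows is an assumption about the authors' convention.</ns0:p>
```python
import numpy as np

def sliding_windows(recording, window=1920, stride=20):
    """Extract overlapping segments from a (n_channels, n_samples) recording.

    Each step of `stride` samples produces a new training instance,
    as illustrated in Figure 3.
    """
    n = recording.shape[-1]
    starts = range(0, n - window + 1, stride)
    return np.stack([recording[:, s:s + window] for s in starts])

eo = np.random.randn(64, 60 * 160)   # toy 60-second EO recording
segments = sliding_windows(eo)        # shape: (n_segments, 64, 1920)
```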
<ns0:div><ns0:head>CNN training and data representation</ns0:head><ns0:p>After the data augmentation step, the data is split into two subsets: training and validation. The first 90% of the signals resulting from the data augmentation are used for training, and the remaining 10% for validation (model adjustment during training), as presented in Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>. It is worth highlighting that not necessarily 10% of the entire recording is used for validation, but 10% of the signals resulting from the data augmentation step. For example, if the data augmentation process outputs 500 segments, the first 450 segments (90%) are used for training and the last 50 segments (10%) for validation.</ns0:p><ns0:p>The first CNN architecture is composed of three convolution layers, each one followed by max-pooling, and four fully-connected layers with a dropout and a softmax layer. The convolution stride is set to one, and no padding is used. The main difference among the architectures investigated here is the number of convolutional layers. The second architecture is composed of five convolution layers, with larger receptive fields in the initial layers. Both architectures are presented in Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref>. Both architectures are evaluated according to the protocol proposed by <ns0:ref type='bibr' target='#b10'>Fraschini et al. (2015)</ns0:ref>, and thus the one that best fits the data can be determined.</ns0:p><ns0:p>Each architecture is trained for n epochs, in which n relies on the convergence of the training and validation error. The training process resembles a simple classification problem for an identification task (closed-gallery) in a biometric context, that is, a conventional supervised classification problem. Upon this fact, each sample provided to a CNN model is feed-forwarded and results in an output: the probability that the sample belongs to each of the classes.</ns0:p><ns0:p>However, for the verification mode, one needs a feature vector as the output of the network instead of a probability vector over classes. To address this requirement, the layers responsible for handling the classification (softmax, dropout, and FC4; see Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref>) are removed. The feature extraction uses the FC3 output, which can now be used as a feature vector. From this point, the trained model starts to operate as a deep feature descriptor. Once a 12-second segment is provided to the network, the output is a feature vector of the same size as the FC3 output. For architecture A, this size corresponds to 256 dimensions, while for architecture B, 4096 dimensions.</ns0:p></ns0:div>
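<ns0:p>As an illustration, the sketch below shows how architecture A and its conversion into a deep feature descriptor could look in Keras, the library named in the experimental setup. The kernel sizes and pooling strides follow the caption of Figure 5; the number of filters per layer and the widths of FC1 and FC2 are not fully specified in the text, so the values used here are placeholders rather than the authors' choices.</ns0:p>
```python
import tensorflow as tf
from tensorflow.keras import layers, models

N_SAMPLES, N_CHANNELS, N_CLASSES = 1920, 64, 109  # 12 s at 160 Hz, 64 electrodes

def build_architecture_a(n_filters=(96, 128, 256)):
    """Sketch of architecture A; n_filters and FC1/FC2 widths are placeholders."""
    inp = layers.Input(shape=(N_SAMPLES, N_CHANNELS))
    x = layers.Conv1D(n_filters[0], 11, strides=1, activation="relu", name="conv1")(inp)
    x = layers.MaxPooling1D(pool_size=2, strides=4, name="pool1")(x)
    x = layers.Conv1D(n_filters[1], 9, strides=1, activation="relu", name="conv2")(x)
    x = layers.MaxPooling1D(pool_size=2, strides=2, name="pool2")(x)
    x = layers.Conv1D(n_filters[2], 9, strides=1, activation="relu", name="conv3")(x)
    x = layers.MaxPooling1D(pool_size=2, strides=2, name="pool3")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(1024, activation="relu", name="fc1")(x)   # width assumed
    x = layers.Dense(512, activation="relu", name="fc2")(x)    # width assumed
    x = layers.Dense(256, activation="relu", name="fc3")(x)    # 256-d descriptor
    x = layers.Dropout(0.10)(x)                                 # 10% dropout
    out = layers.Dense(N_CLASSES, activation="softmax", name="fc4")(x)
    return models.Model(inp, out)

model = build_architecture_a()

# After training, drop dropout/FC4/softmax and use FC3 as a deep feature descriptor:
descriptor = models.Model(model.input, model.get_layer("fc3").output)
features = descriptor.predict(tf.random.normal((1, N_SAMPLES, N_CHANNELS)))  # (1, 256)
```
<ns0:p>Architecture B would follow the same pattern, stacking five convolutional blocks with the larger initial receptive fields (51x1, 17x1, 7x1, 7x1, 7x1) given in Figure 5 and ending in a 4096-dimensional FC3.</ns0:p>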
<ns0:div><ns0:head>Biometric verification</ns0:head><ns0:p>Different from the training data, to which data augmentation techniques were applied, there is no overlap in the testing data. That is, for the EC session, five segments of 12 seconds are extracted for each subject, while in the motor movement/imaginary tasks, ten segments are extracted, as shown in Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>. The verification is then performed by comparing pairs of feature vectors. Ideally, one should have small scores between vectors from the same class (genuine) and large scores between vectors from different subjects (impostor).</ns0:p><ns0:p>In biometric verification mode, the number of impostor pairs is much higher than the number of genuine ones. A genuine pair occurs when two feature vectors come from the same individual. In contrast, an impostor pair corresponds to two feature vectors from two different individuals. As the number of impostor pairs is higher than the genuine ones, the biometric verification mode simulates a spoofing attack on a system. Thus, the verification protocol is usually a challenging scenario.</ns0:p></ns0:div>
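<ns0:p>The pair-comparison step can be sketched as follows. The paper does not state which score is computed between two FC3 descriptors, so the Euclidean distance used here is an assumption.</ns0:p>
```python
import numpy as np
from itertools import combinations

def verification_scores(features, labels):
    """Compute pairwise scores and genuine/impostor flags.

    `features`: (n_segments, d) deep descriptors (e.g., FC3 outputs).
    `labels`:   (n_segments,) subject identifiers.
    Returns (scores, genuine), where genuine[k] is True for same-subject pairs.
    """
    scores, genuine = [], []
    for i, j in combinations(range(len(labels)), 2):
        scores.append(np.linalg.norm(features[i] - features[j]))  # assumed metric
        genuine.append(labels[i] == labels[j])
    return np.asarray(scores), np.asarray(genuine)
```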
<ns0:div><ns0:head>Evaluation scenarios</ns0:head><ns0:p>The first scenario adopted in this work is the one proposed by <ns0:ref type='bibr' target='#b10'>Fraschini et al. (2015)</ns0:ref>, referenced in this work as the baseline task evaluation. The evaluation protocol related to the first scenario ensures that the training data is acquired from the open-eye task, while the closed-eye task composes the testing data from the 109 individuals. It is worth highlighting that the training and testing data come from two different sessions. Therefore, the first studied scenario disregards the impact of the motor/imaginary tasks on the biometric verification. Contrasting with that, <ns0:ref type='bibr' target='#b44'>Yang et al. (2018)</ns0:ref> carried out experiments to evaluate such conditions, i.e., the impact of motor activity on the EEG signal for biometrics. Those experiments constitute the second scenario considered here, referenced as the multi-task evaluation. <ns0:ref type='bibr' target='#b44'>Yang et al. (2018)</ns0:ref> proposed three experimental protocols to evaluate EEG biometrics on the EEG Motor Movement/Imagery dataset <ns0:ref type='bibr' target='#b14'>(Goldberger et al., 2000;</ns0:ref><ns0:ref type='bibr' target='#b33'>Schalk et al., 2004)</ns0:ref>. The protocols are:</ns0:p><ns0:p>1. Protocol P1: aims to investigate the influence of some regions of the scalp for biometric purposes. Three regions are selected: the frontal lobe (F), with electrodes AF3, AFz, and AF4; the motor cortex (M), with electrodes C1, Cz, and C2; and the occipital lobe (O), with O1, Oz, and O2</ns0:p></ns0:div>
<ns0:div><ns0:p>electrodes. For each training, the signals from R1 and R3 of one task were selected, and R2 from the same task was used for testing.</ns0:p><ns0:p>2. Protocol P2: explores the impact of training on the first and third runs of the same motor/imaginary task and testing on the second run of all the remaining tasks and the baseline ones. Using only a subset of nine electrodes, 24 experiments were evaluated.</ns0:p><ns0:p>3. Protocol P3: aims to explore the impact of combining different motor and imaginary tasks for training. The authors reserve the second run of the first task (T1R2) to evaluate the performance of the training tasks. They start training with T1R1 and T1R3 separately and finish with the fusion of both with the other tasks and runs (all runs of T2, T3, and T4).</ns0:p><ns0:p>The protocols P1 and P2 use the second run of all tasks for testing. In protocol P3, only the task T1R2 is used for testing. Furthermore, all protocols are evaluated in both the biometric identification and verification scenarios.</ns0:p><ns0:p>The use of several data acquisition sessions/runs may be unfeasible for real biometric systems, since it is onerous for the user and, typically, the enrollment session occurs only the first time the user gets in touch with the system. Considering that, a third evaluation scenario is proposed, in which a new protocol to validate the proposed approach is evaluated, using one task from a single run (session) for training, referenced as the cross-task evaluation. Inspired by the third protocol (P3) proposed by <ns0:ref type='bibr' target='#b44'>Yang et al. (2018)</ns0:ref>, T1R2 is fixed as the testing data, and all the remaining tasks and runs are used for training. It is worth stressing that, different from <ns0:ref type='bibr' target='#b44'>Yang et al. (2018)</ns0:ref>, the remaining tasks and their respective runs are used separately in each training setup, instead of being combined. In that sense, training and testing on different task types should be a more challenging scenario compared with using the same task in both the training and test sets.</ns0:p><ns0:p>In all three presented scenarios, samples from the same individual are present in both the training and testing sets. It is well known that such evaluation schemes tend to favor the classifiers. To avoid this issue and to assess the robustness of the proposed approach, a fourth protocol is proposed, forcing an individual-oriented division between the training and test sets. That is, an individual used to train the model is not used to test/evaluate the approach. This scenario is called the cross-individual evaluation. Therefore, the Deep Learning models are trained with signals from the first 55 individuals of the Physionet database and tested with the last 54; a sketch of this individual-oriented split is given below. Due to the large number of possible combinations involving the 12 imaginary/motor tasks and the objective of this study, only the Eyes Open and Eyes Closed conditions were used to evaluate this fourth scenario.</ns0:p><ns0:p>Since the cross-individual evaluation is a more complex scenario and could benefit from a more robust model, the impact of using a state-of-the-art technique called Squeeze-and-Excitation Network <ns0:ref type='bibr' target='#b16'>(Hu et al., 2018)</ns0:ref> is evaluated. The next section details the Squeeze-and-Excitation Networks.</ns0:p></ns0:div>
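<ns0:p>The individual-oriented split of the fourth scenario can be expressed as below; the subject ordering is assumed to follow the dataset numbering (S001-S109).</ns0:p>
```python
import numpy as np

def cross_individual_split(features, subject_ids, n_train_subjects=55):
    """Split data so that no subject appears in both sets.

    Subjects 1..55 of the Physionet database go to training,
    the remaining 54 to testing, as in the cross-individual scenario.
    `features` and `subject_ids` are NumPy arrays of matching length.
    """
    subject_ids = np.asarray(subject_ids)
    train_mask = subject_ids <= n_train_subjects
    return (features[train_mask], subject_ids[train_mask],
            features[~train_mask], subject_ids[~train_mask])
```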
<ns0:div><ns0:head>Squeeze-and-Excitation Network</ns0:head><ns0:p>The Squeeze-and-Excitation (SE) block is an architectural unit first introduced by <ns0:ref type='bibr' target='#b16'>Hu et al. (2018)</ns0:ref> to capture the dependency between the convolution channels of a network. It has two main steps: Squeeze and Excitation.</ns0:p><ns0:p>The Squeeze step aggregates the global information of all the input channels, shrinking the spatial dimensions of a feature map U by using global average pooling. The Excitation step rescales each channel of the feature map U with weights obtained from the output of the previous step, modulating each channel.</ns0:p><ns0:p>One can easily obtain an SE network by stacking several SE blocks, replacing some components, or integrating SE blocks into an already known architecture, which is one of the great advantages of this method. In addition, in <ns0:ref type='bibr' target='#b16'>Hu et al. (2018)</ns0:ref> the authors demonstrated state-of-the-art performance across multiple datasets and applications using models based on SE blocks.</ns0:p><ns0:p>Toward this direction, the use of these units is investigated in the best model obtained in the other three evaluated scenarios. In order to determine the best SE network, the first step was to reproduce one experiment of the baseline task evaluation scenario (eyes open for training and eyes closed for testing) and train a series of SE models using all 109 individuals. The performance of five models obtained by merging SE blocks with the best architecture is investigated, as follows:</ns0:p><ns0:p>• SE Model 1: all convolution layers are replaced by SE blocks;</ns0:p><ns0:p>• SE Model 2: a single SE block is added after the input layer;</ns0:p><ns0:p>• SE Model 3: a single SE block is added before the fully connected layers, that is, after all convolution and pooling layers;</ns0:p><ns0:p>• SE Model 4: an SE block is added after the input layer and another before the fully connected layers;</ns0:p><ns0:p>• SE Model 5: an SE block is added after each convolution layer.</ns0:p><ns0:p>The most promising reduction factor (a hyper-parameter of the SE blocks) is also evaluated, analyzing six different values for all models.</ns0:p></ns0:div>
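<ns0:p>For concreteness, a minimal Keras sketch of a one-dimensional SE block follows: the squeeze step is the global average pooling and the excitation step is the usual bottleneck of two dense layers controlled by the reduction factor r, mirroring Hu et al. (2018). The exact wiring used in SE Models 1-5 is not spelled out in the text, so this should be read as illustrative.</ns0:p>
```python
from tensorflow.keras import layers

def se_block_1d(x, r=2):
    """Squeeze-and-Excitation block for 1D feature maps of shape (time, channels).

    Squeeze: global average pooling aggregates each channel into one value.
    Excitation: a bottleneck of width channels // r followed by a sigmoid gate
    produces per-channel weights that rescale the input feature map.
    """
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling1D()(x)                       # squeeze
    e = layers.Dense(max(channels // r, 1), activation="relu")(s)
    e = layers.Dense(channels, activation="sigmoid")(e)          # excitation
    e = layers.Reshape((1, channels))(e)
    return layers.Multiply()([x, e])                             # channel-wise rescale
```
<ns0:p>SE Model 2, for instance, would apply this block once right after the input layer, while SE Model 5 would apply it after each convolution layer.</ns0:p>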
<ns0:div><ns0:head>RESULTS AND DISCUSSION</ns0:head><ns0:p>This section presents and discusses the protocols, the experimental setup and the obtained results.</ns0:p></ns0:div>
<ns0:div><ns0:head>Experimental setup</ns0:head><ns0:p>All the CNN operations are conducted using the TensorFlow library, along with the high-level API Keras. The specifications of the experimental computational environment are 64GB of DDR4 RAM, an Intel(R) Core i7-5820K CPU 3.30GHz 12-core, and a GeForce GTX TITAN X GPU. The source code is available online.</ns0:p><ns0:p>The CNN architectures are evaluated with three band-pass filters for the baseline task: eyes open (EO) for training/validation and eyes closed (EC) for testing. First, the performance of the CNN architectures is evaluated on this simple task evaluation protocol by training for 20 epochs. Then, the effect of using different strides on the CNN performance is investigated, which also changes the data volume created by the data augmentation technique and the time to train the CNN model. The best stride parameter is then used for the remaining experiments.</ns0:p><ns0:p>Model weights are randomly initialized, and the optimization method used here is Stochastic Gradient Descent with a momentum coefficient of 0.9. A 10% dropout sub-sampling operation is placed before the softmax layer in order to minimize over-fitting. A learning rate sequence of lr = [0.01^2, 0.001^18] is used, in which the superscript represents the number of epochs trained with that learning rate. For the present experimentation, there is no improvement when training for more than 20 epochs. After the training and the removal of the last layers of the CNN, the model resembles an embedding model, or a deep feature descriptor, for a 12-second EEG input signal.</ns0:p><ns0:p>The data augmentation technique applied to the training data created a total of 384 samples for each individual from the EO data. For the motor (or imaginary) tasks (T1-T4), a total of 889 or 905 samples was extracted from each individual (signals with 123 and 125 seconds, respectively). In the verification mode, the data augmentation technique produces a total of 1,086 genuine pairs (intra-class) and 146,610 impostor pairs (inter-class) for the evaluation on the baseline tasks (EO-EC). A total of 4,809 intra-class pairs and 571,392 inter-class pairs were generated for the multi-task (Motor-Imaginary) evaluation. In that sense, the multi-task evaluation is more complex and computationally expensive than the simple task evaluation context.</ns0:p></ns0:div>
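<ns0:p>The optimization setup described above could be reproduced in Keras roughly as follows, reusing the model from the earlier architecture sketch. The stated hyper-parameters (SGD with momentum 0.9, learning rate 0.01 for two epochs and 0.001 for the remaining eighteen, 10% validation split) are taken from the text; the loss function is an assumption.</ns0:p>
```python
import tensorflow as tf

def lr_schedule(epoch, lr):
    # 0.01 for the first 2 epochs, then 0.001 for the remaining 18 (20 in total).
    return 0.01 if epoch < 2 else 0.001

optimizer = tf.keras.optimizers.SGD(momentum=0.9)
model.compile(optimizer=optimizer,
              loss="sparse_categorical_crossentropy",  # assumed loss
              metrics=["accuracy"])

# x_train: (n_segments, 1920, 64) augmented segments; y_train: subject indices.
# validation_split=0.1 holds out the last 10% of segments, as in Figure 4.
# model.fit(x_train, y_train, validation_split=0.1, epochs=20,
#           callbacks=[tf.keras.callbacks.LearningRateScheduler(lr_schedule)])
```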
<ns0:div><ns0:head>Baseline tasks evaluation</ns0:head><ns0:p>The results regarding the combination of CNN architectures and frequency bands are presented in Table <ns0:ref type='table' target='#tab_5'>2</ns0:ref>. Architecture B, represented in Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref> (b), reached an EER of 0.65% in the 30-50 Hz frequency band. Architecture A, depicted in Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref> (a), achieved the best overall figures (0.19% EER) for the gamma frequency band (30-50 Hz). The results in terms of decidability are also reported. The decidability is a measure of how separable two distributions are <ns0:ref type='bibr' target='#b30'>(Ratha et al., 2001)</ns0:ref>; in this case, it indicates how far the impostor scores are from the genuine scores. As one may see in Table <ns0:ref type='table' target='#tab_5'>2</ns0:ref>, architecture A performs better for all frequency bands, especially for the gamma frequency.</ns0:p><ns0:p>Both architectures A and B perform well for the 30-50 Hz band and worse for 01-50 Hz, but architecture A outperforms architecture B. One of the possible reasons is the filter sizes, which are smaller in the former and could capture more detail by extracting local features.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_6'>3</ns0:ref> presents the EER of the proposed approach against the work proposed by <ns0:ref type='bibr' target='#b10'>Fraschini et al. (2015)</ns0:ref>. As one can see, the proposed method, with the use of convolutional neural networks, significantly reduced the EER. The experiments are under the same scenario: all electrodes (64), all individuals (109), training on EO (eyes open), and testing on EC (eyes closed). Furthermore, as the test protocol used is the same, the comparison between both methods is valid. Despite the outstanding results reported by <ns0:ref type='bibr' target='#b38'>Sun et al. (2019)</ns0:ref>, their evaluated scenario is different from the one proposed in <ns0:ref type='bibr' target='#b10'>(Fraschini et al., 2015)</ns0:ref> and used here.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_12'>6</ns0:ref> shows the performance of architecture A through the DET curve over the three frequency bands evaluated. The curve related to the 30-50 Hz spectrum surpasses all other experiments described in the literature. Those results are in line with the ones presented in <ns0:ref type='bibr' target='#b10'>(Fraschini et al., 2015)</ns0:ref>, in which the 30-50 Hz frequency range overcomes the other frequencies for EEG-based biometrics. The results reported in this work differ from those reported by <ns0:ref type='bibr' target='#b10'>Fraschini et al. (2015)</ns0:ref> in the sense that the discrepancy among the other frequency ranges is more significant here.</ns0:p></ns0:div>
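<ns0:p>Both metrics reported in Table 2 can be computed from the genuine and impostor score sets produced in the verification step. The sketch below assumes distance-like scores (genuine pairs should score low) and uses the standard decidability index d' = |mu_g - mu_i| / sqrt((sigma_g^2 + sigma_i^2)/2).</ns0:p>
```python
import numpy as np

def eer_and_decidability(scores, genuine):
    """Equal Error Rate via a threshold sweep, plus the decidability index.

    `scores` are distances (genuine pairs should score low);
    `genuine` is a boolean array flagging same-subject pairs.
    """
    g, imp = scores[genuine], scores[~genuine]
    thresholds = np.sort(scores)
    frr = np.array([(g > t).mean() for t in thresholds])     # genuine rejected
    far = np.array([(imp <= t).mean() for t in thresholds])  # impostors accepted
    idx = np.argmin(np.abs(far - frr))
    eer = (far[idx] + frr[idx]) / 2
    decidability = abs(g.mean() - imp.mean()) / np.sqrt((g.var() + imp.var()) / 2)
    return eer, decidability
```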
<ns0:div><ns0:head>Stride evaluation</ns0:head><ns0:p>To evaluate the impact of different strides in the data augmentation, the first step used a stride of 20, which represents a 125-millisecond (ms) step in the time domain, to generate the samples for the baseline task (EO). Then, the step size was successively increased by 20 up to 200, and Architecture A was trained for each one. In that sense, the impact of overlaps from 125 ms to 1250 ms is analyzed. The baseline task EC, without overlapping, was reserved for evaluation (test) in all cases. Table <ns0:ref type='table' target='#tab_7'>4</ns0:ref> shows the EER for each stride. One may see from Table <ns0:ref type='table' target='#tab_7'>4</ns0:ref> that increasing the stride harms the performance of the proposed approach in the verification task. There is a trade-off in the stride size: the greater the stride, the smaller the number of samples generated and the greater the EER. Upon this fact, the smallest stride (20) was adopted for the remaining experiments.</ns0:p></ns0:div>
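<ns0:p>The sample count produced by each stride can be sanity-checked with the helper below. The exact boundary handling is implementation-dependent: with the floor convention shown here, a 60-second recording yields 385 windows at stride 20, while the paper reports 384 EO segments, suggesting a slightly different convention.</ns0:p>
```python
def n_windows(n_samples, window=1920, stride=20):
    """Number of overlapping windows a recording yields (floor convention)."""
    return (n_samples - window) // stride + 1

# 60-second baseline recording at 160 Hz:
print(n_windows(60 * 160, stride=20))   # 385 with this convention (paper: 384)
print(n_windows(60 * 160, stride=200))  # 39
```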
<ns0:div><ns0:head>Multi-task evaluation</ns0:head><ns0:p>The following experiments were based on the protocols proposed by <ns0:ref type='bibr' target='#b44'>Yang et al. (2018)</ns0:ref>, which were presented previously. The best CNN architecture and stride obtained in the above experiments were used to perform the experiments in the verification scenario.</ns0:p></ns0:div>
<ns0:div><ns0:head>Protocol P1: region/task pairing</ns0:head><ns0:p>Figure <ns0:ref type='figure'>7</ns0:ref> shows the location of the nine electrodes selected to perform this experiment.</ns0:p><ns0:p>Figure 7. Selected electrodes on the motor cortex, frontal, and occipital lobes. Based on <ns0:ref type='bibr' target='#b44'>(Yang et al., 2018)</ns0:ref>.</ns0:p><ns0:p>The proposed method is evaluated with signals from the three regions of the scalp (see Figure <ns0:ref type='figure'>7</ns0:ref>), separately and combined. The results show that the performance is better using all nine electrodes. The best result is reported for T1R1 + T1R3 as training and T1R2 as test (EER of 0.12%). Among the experiments carried out with only three electrodes, the best result is obtained with task four (T4) and the motor cortex electrodes (EER of 0.85%). The worst results are related to the frontal lobe. A comparative analysis can be made with the results obtained in <ns0:ref type='bibr' target='#b44'>(Yang et al., 2018)</ns0:ref>, which reached an EER of 7.83% when training and testing with the same task T1 using only the data from the occipital lobe. The same experiment performed here achieved 1.07% EER, showing that the proposed approach is robust even with a reduction in the number of channels.</ns0:p></ns0:div>
<ns0:div><ns0:head>Protocol P2: Mismatch training/testing test</ns0:head><ns0:p>After applying the methodology previously described with protocol P2, the results showed no improvement when the training and the test data come from the same task. The lowest EER, 0.08%, came from training with T2R1+T2R3 and testing with the baseline task EO. The proposed approach outperformed, in all scenarios, the best figures reported by <ns0:ref type='bibr' target='#b44'>Yang et al. (2018)</ns0:ref> when training with T1 and testing with T2.</ns0:p><ns0:p>It is worth emphasizing that the physical (non-imaginary) tasks T3 and T4 are the most difficult ones for the biometric problem. Even when training and testing with tasks of the same nature (see Table <ns0:ref type='table' target='#tab_8'>5</ns0:ref>), the results are worse. Our hypothesis is that, in these cases, noise from muscle movement can interfere with the signal acquisition.</ns0:p></ns0:div>
<ns0:div><ns0:head>Protocol P3: heterogeneous training</ns0:head><ns0:p>In this protocol, the samples of the individuals are organized in mini-batches of size 100 and shuffled. All the nine electrodes from Figure <ns0:ref type='figure'>7</ns0:ref> are used, and T1R2 is fixed as the test set for all experiments. Table <ns0:ref type='table' target='#tab_10'>6</ns0:ref> shows the tasks used for training and the EER obtained for each one.</ns0:p><ns0:p>As one may see in Table <ns0:ref type='table' target='#tab_10'>6</ns0:ref>, stacking up different tasks hinders the verification performance of the proposed approach. The worst case occurred in P3.11, which has the largest training set (EER of 1.91%). The best scenario occurs when the system is trained and evaluated with one run of the same task (EER of 0.1%). In some cases, accumulating tasks does not significantly affect performance.</ns0:p></ns0:div>
<ns0:div><ns0:head>Comparative analysis</ns0:head><ns0:p>Protocol 1 led to the same conclusion as <ns0:ref type='bibr' target='#b44'>Yang et al. (2018)</ns0:ref>: one cannot determine the most relevant electrode placement among the different regions of the scalp, but using all of them (all nine electrodes) enhances performance.</ns0:p><ns0:p>Protocol 2, both in this work and as presented in <ns0:ref type='bibr' target='#b44'>(Yang et al., 2018)</ns0:ref>, showed that using different tasks to train and test does not affect performance. However, the proposed approach does not improve with the accumulation of tasks for training, differently from what is reported by <ns0:ref type='bibr' target='#b44'>Yang et al. (2018)</ns0:ref>. Moreover, a lower EER was reported for all scenarios, including Protocol 3.</ns0:p><ns0:p>One may see that the proposed CNN method yields a significant improvement over the state-of-the-art method proposed by <ns0:ref type='bibr' target='#b44'>Yang et al. (2018)</ns0:ref>, considering that the experiments were conducted under the same scenario with nine electrodes (see Table <ns0:ref type='table' target='#tab_11'>7</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Cross-task evaluation</ns0:head><ns0:p>To evaluate the cross-task scenario, a new protocol based on <ns0:ref type='bibr' target='#b44'>(Yang et al., 2018)</ns0:ref> is proposed. Among the 12 sessions, T1R2 is selected for testing, and the remaining tasks (and runs) are used for training and validation. In that manner, eleven models are created to evaluate the T1R2 set. The training is conducted following the methodology described previously. Also, the impact of using only nine electrodes (channels) instead of the 64 electrodes available in the Physionet Dataset is analyzed.</ns0:p><ns0:p>From the results presented in Table <ns0:ref type='table' target='#tab_12'>8</ns0:ref>, one may see that the CNN converged for all training data, and the EER is lower than 0.39% for all of them. With 64 channels, the best scenario reaches 0.02% for T3R3, with an error of 0.16% on average. With nine channels, the lowest EER achieved was 0.06% for T1R3, and the average error was 0.15%.</ns0:p><ns0:p>From these results, five questions arise:</ns0:p><ns0:p>What is the impact of using different tasks from the same run in biometric verification? Despite the parity of the reported EERs, the model trained with task T2 (the same task as T1, but in the imaginary state) presented a small and similar EER with both nine channels and 64, outperforming the models trained with tasks T3-T4. Task T3 showed the smallest EER with nine channels, but it increased significantly with 64. In that sense, tasks of the same nature, even if imaginary, perform better than the other ones. Nevertheless, the proposed CNN still performs well for the different tasks.</ns0:p><ns0:p>What is the impact of using the same task but different runs in biometric verification? Among all experiments with 64 channels, the ones that use the same task from different runs had a good performance and a small variation. Upon this fact, one may affirm that EEG biometry benefits from data acquisition in which the same task is performed. Using nine electrodes, the results presented high variance, which may happen due to the reduction in the number of channels used to train the model.</ns0:p><ns0:p>What is the impact of using the same task, but imaginary, from different runs in biometric verification? Different from the experiments using the T1 task, the experiments engaging the T2 task show a noticeable variation with both nine and 64 electrodes. Despite this fact, imaginary tasks perform similarly to non-imaginary ones, but with more variance. Toward this direction, EEG biometry benefits from data acquisition in which the task performed is the same in training and testing, whether imaginary or not.</ns0:p></ns0:div>
<ns0:div><ns0:p>What is the impact of using different tasks (imaginary or not) in biometric verification? Different tasks, motor or imaginary, for training and testing did not substantially affect the performance of the system. The biggest errors came from training with T4, a task completely different from the one used for testing. On the other hand, training with a different motor task (T3) presented the same behavior as training with the same imaginary task: erratic, but satisfactory. In that manner, EEG biometrics may benefit from using tasks of the same nature (both motor) and, likewise, from using the same task (imaginary or not). One possible hypothesis is that the activated brain areas differ for each task and its respective nature. The electrodes capture this behavior, which directly influences the EEG curve. However, those changes are not substantial enough to strongly affect the biometric verification process.</ns0:p><ns0:p>What is the impact of the number of electrodes used in biometric verification? Using fewer electrodes for training and testing did not considerably affect the performance of the system. Among all the performed experiments, the best result, 0.02% EER when training with T3R3, occurred with 64 channels, and the worst occurred with nine channels, with an EER of 0.39%. This could indicate that EEG biometrics benefit from using more channels, but the trainings accomplished with T3R1, T4R1, T2R2, T3R3, and T1R3, almost half of them, showed better performance with only nine.</ns0:p><ns0:p>One hypothesis to justify this result is that the use of 64 electrodes carries redundancies and that nine are enough to capture the most important signals emitted during the performance of a task, motor or imaginary. Even in the cases where the EER was higher using fewer channels, the increase was not significant. In that sense, one may conclude that using 64 channels adds unnecessary financial and computational cost, since nine channels proved to be enough to achieve good results.</ns0:p></ns0:div>
<ns0:div><ns0:head>Cross-individual evaluation</ns0:head><ns0:p>Since the cross-individual evaluation is more challenging, it has more room for improvement. Thus, in this scenario, we evaluate the SE blocks. We also compare networks based on SE blocks against networks based on simple CNN blocks under the same conditions.</ns0:p><ns0:p>In total, 30 models were evaluated on the training data by changing the hyper-parameter r six times for each of the five SE models proposed. The reduction factor, given by r = 2^x, ranged from 1 to 32 (x = 0, 1, 2, 3, 4, 5). The r hyper-parameter controls the bottleneck of the excitation operation, since it divides the original number of channels. If r = 64, the 64 original channels would be reduced to a single intermediate channel, which could mix up the information of the original channels. The best results were achieved for SE Model 2 with r = 2, reaching an EER of 0.18%, and even better for SE Model 3 with r = 32, with an EER of 0.11%, both better than the results reported without the SE blocks. These two models are then used to evaluate the proposed cross-individual scenario.</ns0:p><ns0:p>EO 1 and EC 1 refer to the first half of the dataset (first 55 individuals) in the eyes-open and eyes-closed tasks, respectively, while EO 2 and EC 2 denote the second half. Table <ns0:ref type='table' target='#tab_13'>9</ns0:ref> shows the obtained results. One can observe that all reported results are close to those reported in the baseline task evaluation (EER = 0.19%). This endorses the robustness of the proposed approach.</ns0:p><ns0:p>SE Model 2 reached the best EER, 0.18%, when testing with a different set of individuals under the same condition (eyes open). When trained with EC 1, the best result (EER = 0.36%) occurred when testing with EC 2. SE Model 3 showed lower performance than SE Model 2 in most experiments. Thus, we believe the SE-block-based network is a promising approach as a feature extractor for EEG signals.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>In the present work, it was evaluated the use of deep descriptors to extract features from the EEG signal for biometric purposes. The main focus was on one particular aspect: what is the impact of a motor (or even imaginary) activity in the EEG biometric modality? To investigate this question, the proposed approach with the Physionet EEG Motor Movement/Imagery Dataset was evaluated.</ns0:p><ns0:p>The proposed deep learning-based method achieved outstanding figures, overcoming the state-of-theart methods even for the multi-task scenario. It is worth emphasizing that the proposed method is capable of performing well, even when trained on different tasks. For instance, it reaches an EER of 0.12% when trained with the task T4, corresponding to imagining moving the feet, and tested with the task T1, equivalent to performing the motor movement of closing the fists. One can conclude that the electrodes capture the motor (or imaginary) interference, however, those changes are not substantial enough to inhibit the biometric verification process. This work also evaluated the impact of the Squeeze-and-Excitation blocks for the EEG biometric problem. The Squeeze-and-Excitation blocks explore the interdependence between the channels of the signal, allowing better quality feature extraction and, therefore, better results.</ns0:p><ns0:p>Due to the nature of the proposed CNN-based approach, it allows simultaneous processing of all EEG channels, which is an advantage from a computational point of view. However, the use of multiple channels could turn the biometric modality impractical for real-world applications. Thus, the impact of using fewer channels (electrodes) is investigated, and a possible conclusion is that the use of all channels does not necessarily improve the result. The reported results showed that only 9 electrodes allowed to achieve competitive results, or even better, compared to the use of 64 electrodes in similar tasks.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Example of an EEG headband with few electrodes and no need for conductor gel.</ns0:figDesc><ns0:graphic coords='3,193.68,31.62,290.39,163.35' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>1</ns0:head><ns0:label /><ns0:figDesc>https://physionet.org/content/eegmmidb/1.0.0/</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Proposed pipeline.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Example of the proposed data augmentation based on overlapping with a step of two-second size. Each slide of the window produces a new training instance.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Data split proposed.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Architectures evaluated in this work. A) Architecture in which conv1, conv2, and conv3 have filter sizes equal to 11x1, 9x1, and 9x1, respectively, and stride equal to 1; pool1 has stride equal to 4, pool2 and pool3 have stride equal to 2, and the filter size is equal to 2 for all three. The padding is equal to zero for all convolutional and pooling layers. B) Architecture in which conv1, conv2, conv3, conv4, and conv5 have filter sizes equal to 51x1, 17x1, 7x1, 7x1, and 7x1, respectively, and stride equal to 1 for all five; pool1 to pool5 are max-pooling layers with filter size and stride equal to 2. The padding is equal to zero for all convolutional and pooling layers.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>2</ns0:head><ns0:label /><ns0:figDesc>https://github.com/ufopcsilab/EEG-Multitask</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. DET curves comparing all evaluated frequency bands with Architecture A.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Table 1 summarizes these related works. Related works for EEG-based biometric.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Work</ns0:cell><ns0:cell>Database</ns0:cell><ns0:cell>Classes</ns0:cell><ns0:cell>Acquisition</ns0:cell><ns0:cell>Approach</ns0:cell><ns0:cell>Channels</ns0:cell><ns0:cell>Result</ns0:cell></ns0:row><ns0:row><ns0:cell>Singh et al. (2015)</ns0:cell><ns0:cell>Physionet</ns0:cell><ns0:cell>109</ns0:cell><ns0:cell>Motor/Imaginary tasks</ns0:cell><ns0:cell>Magnitude Squared Coherence</ns0:cell><ns0:cell>64</ns0:cell><ns0:cell>Acc = 100%</ns0:cell></ns0:row><ns0:row><ns0:cell>Fraschini et al. (2015)</ns0:cell><ns0:cell>Physionet</ns0:cell><ns0:cell>109</ns0:cell><ns0:cell>Rest</ns0:cell><ns0:cell>Eigenvector</ns0:cell><ns0:cell>64</ns0:cell><ns0:cell>EER = 4.4%</ns0:cell></ns0:row><ns0:row><ns0:cell>Ma et al. (2015)</ns0:cell><ns0:cell>Physionet</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>Rest</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell>64</ns0:cell><ns0:cell>Acc = 88%</ns0:cell></ns0:row><ns0:row><ns0:cell>Das et al. (2017)</ns0:cell><ns0:cell>Own</ns0:cell><ns0:cell>40</ns0:cell><ns0:cell>Visual stimuli</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell>17</ns0:cell><ns0:cell>CRR = 98.8%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Imaginary</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Das et al. (2018)</ns0:cell><ns0:cell>Own</ns0:cell><ns0:cell>40</ns0:cell><ns0:cell>arms/legs</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell>17</ns0:cell><ns0:cell>CRR = 93%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>movement</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Mao et al. (2017)</ns0:cell><ns0:cell>BCIT</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>Driving car</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell>64</ns0:cell><ns0:cell>CRR = 97%</ns0:cell></ns0:row><ns0:row><ns0:cell>Maiorana and Campisi (2018)</ns0:cell><ns0:cell>Own</ns0:cell><ns0:cell>45</ns0:cell><ns0:cell>Sit</ns0:cell><ns0:cell>Hidden Markov models (HMMs)</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>EER < 2%</ns0:cell></ns0:row><ns0:row><ns0:cell>Wang et al. (2019)</ns0:cell><ns0:cell>Physionet</ns0:cell><ns0:cell>109</ns0:cell><ns0:cell>Motor/Imaginary tasks</ns0:cell><ns0:cell>GCNN + PLV</ns0:cell><ns0:cell>64</ns0:cell><ns0:cell>CRR = 99.98% FAR = 1.65%</ns0:cell></ns0:row><ns0:row><ns0:cell>Yang et al. (2018)</ns0:cell><ns0:cell>Physionet</ns0:cell><ns0:cell>108</ns0:cell><ns0:cell>Motor/Imaginary tasks</ns0:cell><ns0:cell>Discrete Wavelet Transform + LDA</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>Acc = 100% EER = 2.63%</ns0:cell></ns0:row><ns0:row><ns0:cell>Alyasseri et al. (2020)</ns0:cell><ns0:cell>Physionet</ns0:cell><ns0:cell>109</ns0:cell><ns0:cell>Motor/Imaginary tasks</ns0:cell><ns0:cell>FPA + β -hill</ns0:cell><ns0:cell>35</ns0:cell><ns0:cell>Acc = 96.05%</ns0:cell></ns0:row><ns0:row><ns0:cell>Sun et al. (2019)</ns0:cell><ns0:cell>Physionet</ns0:cell><ns0:cell>109</ns0:cell><ns0:cell>Motor/Imaginary tasks</ns0:cell><ns0:cell>1D-Conv. LSTM</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>EER = 0.41%</ns0:cell></ns0:row><ns0:row><ns0:cell>El-Fiqi et al. (2018)</ns0:cell><ns0:cell>SSVEP database</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>Visual stimulus</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>Verification Acc = 98.34%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>EER and decidability reported for the two proposed architectures. EER presented in percentage.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Frequency Band (Hz)</ns0:cell><ns0:cell>EER (%)</ns0:cell><ns0:cell>Decidability</ns0:cell><ns0:cell>Architecture</ns0:cell></ns0:row><ns0:row><ns0:cell>10-30</ns0:cell><ns0:cell>5.06</ns0:cell><ns0:cell>3.22</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>30-50</ns0:cell><ns0:cell>0.19</ns0:cell><ns0:cell>7.02</ns0:cell><ns0:cell>A</ns0:cell></ns0:row><ns0:row><ns0:cell>01-50</ns0:cell><ns0:cell>9.73</ns0:cell><ns0:cell>2.50</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>10-30</ns0:cell><ns0:cell>6.85</ns0:cell><ns0:cell>2.84</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>30-50</ns0:cell><ns0:cell>0.65</ns0:cell><ns0:cell>3.61</ns0:cell><ns0:cell>B</ns0:cell></ns0:row><ns0:row><ns0:cell>01-50</ns0:cell><ns0:cell>9.64</ns0:cell><ns0:cell>2.20</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>EER reported on EO-EC. EER presented in percentage. # Different evaluation protocol.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Reports</ns0:cell><ns0:cell>Approach</ns0:cell><ns0:cell>EER(%)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Fraschini et al. (2015) Eigenvector Centrality</ns0:cell><ns0:cell>4.40</ns0:cell></ns0:row><ns0:row><ns0:cell># Sun et al. (2019)</ns0:cell><ns0:cell>CNN + LSTM</ns0:cell><ns0:cell>0.41</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed Method</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell>0.19</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>EER obtained for different strides. EER presented in percentage.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Stride</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>40</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>80</ns0:cell><ns0:cell>100 120 140 160 180 200</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>EER 0.09 0.09 0.37 0.76 0.65 0.65 0.74 0.83 0.37 1.11</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>EER reported for mismatched training/testing, synthesizing the EER obtained for each experiment. EER presented in percentage.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Train \ Test</ns0:cell><ns0:cell>T1R2</ns0:cell><ns0:cell>T2R2</ns0:cell><ns0:cell>T3R2</ns0:cell><ns0:cell>T4R2</ns0:cell><ns0:cell>EO</ns0:cell><ns0:cell>EC</ns0:cell></ns0:row><ns0:row><ns0:cell>T1R1+T1R3</ns0:cell><ns0:cell>0.12</ns0:cell><ns0:cell>0.29</ns0:cell><ns0:cell>0.42</ns0:cell><ns0:cell>0.42</ns0:cell><ns0:cell>0.10</ns0:cell><ns0:cell>0.37</ns0:cell></ns0:row><ns0:row><ns0:cell>T2R1+T2R3</ns0:cell><ns0:cell>0.19</ns0:cell><ns0:cell>0.29</ns0:cell><ns0:cell>0.56</ns0:cell><ns0:cell>0.69</ns0:cell><ns0:cell>0.08</ns0:cell><ns0:cell>0.56</ns0:cell></ns0:row><ns0:row><ns0:cell>T3R1+T3R3</ns0:cell><ns0:cell>0.21</ns0:cell><ns0:cell>0.19</ns0:cell><ns0:cell>0.29</ns0:cell><ns0:cell>0.19</ns0:cell><ns0:cell>0.18</ns0:cell><ns0:cell>0.36</ns0:cell></ns0:row><ns0:row><ns0:cell>T4R1+T4R3</ns0:cell><ns0:cell>0.12</ns0:cell><ns0:cell>0.13</ns0:cell><ns0:cell>0.27</ns0:cell><ns0:cell>0.27</ns0:cell><ns0:cell>0.20</ns0:cell><ns0:cell>0.36</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>EER obtained by accumulating tasks/runs. EER presented in percentage.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Protocol</ns0:cell><ns0:cell>Train</ns0:cell><ns0:cell>Test</ns0:cell><ns0:cell>EER (%)</ns0:cell></ns0:row><ns0:row><ns0:cell>P3.1</ns0:cell><ns0:cell>T1R1</ns0:cell><ns0:cell>T1R2</ns0:cell><ns0:cell>0.10</ns0:cell></ns0:row><ns0:row><ns0:cell>P3.2</ns0:cell><ns0:cell>T1R3</ns0:cell><ns0:cell>T1R2</ns0:cell><ns0:cell>0.44</ns0:cell></ns0:row><ns0:row><ns0:cell>P3.3</ns0:cell><ns0:cell>T1R1, T1R3</ns0:cell><ns0:cell>T1R2</ns0:cell><ns0:cell>0.22</ns0:cell></ns0:row><ns0:row><ns0:cell>P3.4</ns0:cell><ns0:cell>T2R1, T2R2, T2R3</ns0:cell><ns0:cell>T1R2</ns0:cell><ns0:cell>0.25</ns0:cell></ns0:row><ns0:row><ns0:cell>P3.5</ns0:cell><ns0:cell>T1R1, T1R3, T2R1</ns0:cell><ns0:cell>T1R2</ns0:cell><ns0:cell>1.25</ns0:cell></ns0:row><ns0:row><ns0:cell>P3.6</ns0:cell><ns0:cell>T1R1, T2R1, T2R2, T2R3</ns0:cell><ns0:cell>T1R2</ns0:cell><ns0:cell>1.14</ns0:cell></ns0:row><ns0:row><ns0:cell>P3.7</ns0:cell><ns0:cell>T1R1, T1R3, T2R1, T2R2</ns0:cell><ns0:cell>T1R2</ns0:cell><ns0:cell>1.29</ns0:cell></ns0:row><ns0:row><ns0:cell>P3.8</ns0:cell><ns0:cell>T1R1, T1R3, T2R1, T2R2, T2R3</ns0:cell><ns0:cell>T1R2</ns0:cell><ns0:cell>1.27</ns0:cell></ns0:row><ns0:row><ns0:cell>P3.9</ns0:cell><ns0:cell>T1R1, T1R3, T2R1, T2R2, T2R3, T3R1</ns0:cell><ns0:cell>T1R2</ns0:cell><ns0:cell>1.58</ns0:cell></ns0:row><ns0:row><ns0:cell>P3.10</ns0:cell><ns0:cell>T1R1, T1R3, T2R1, T2R2, T2R3, T3R1, T4R1</ns0:cell><ns0:cell>T1R2</ns0:cell><ns0:cell>1.47</ns0:cell></ns0:row><ns0:row><ns0:cell>P3.11</ns0:cell><ns0:cell>T1R1, T1R3, T2R1, T2R2, T2R3, T3R1, T4R1, T4R2</ns0:cell><ns0:cell>T1R2</ns0:cell><ns0:cell>1.91</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>EER reported on T1R2, presented in percentage.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Reports</ns0:cell><ns0:cell>Train</ns0:cell><ns0:cell>Test</ns0:cell><ns0:cell>EER (%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Yang et al. (2018)</ns0:cell><ns0:cell>T1 & T2</ns0:cell><ns0:cell>T1R2</ns0:cell><ns0:cell>2.63</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed Approach</ns0:cell><ns0:cell>T1 & T2</ns0:cell><ns0:cell>T1R2</ns0:cell><ns0:cell>0.27</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>EER reported on T1R2. EER presented in percentage.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Test</ns0:cell><ns0:cell>Train</ns0:cell><ns0:cell>EER (%) 9 channels</ns0:cell><ns0:cell>EER (%) 64 channels</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>T1R1</ns0:cell><ns0:cell>0.21</ns0:cell><ns0:cell>0.17</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>T2R1</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell>0.15</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>T3R1</ns0:cell><ns0:cell>0.10</ns0:cell><ns0:cell>0.19</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>T4R1</ns0:cell><ns0:cell>0.09</ns0:cell><ns0:cell>0.19</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>T2R2</ns0:cell><ns0:cell>0.15</ns0:cell><ns0:cell>0.17</ns0:cell></ns0:row><ns0:row><ns0:cell>T1R2</ns0:cell><ns0:cell>T3R2</ns0:cell><ns0:cell>0.12</ns0:cell><ns0:cell>0.25</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>T4R2</ns0:cell><ns0:cell>0.39</ns0:cell><ns0:cell>0.26</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>T1R3</ns0:cell><ns0:cell>0.06</ns0:cell><ns0:cell>0.17</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>T2R3</ns0:cell><ns0:cell>0.12</ns0:cell><ns0:cell>0.04</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>T3R3</ns0:cell><ns0:cell>0.08</ns0:cell><ns0:cell>0.02</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>T4R3</ns0:cell><ns0:cell>0.27</ns0:cell><ns0:cell>0.06</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Results obtained with SE Model 2 with r=2 (SE2r2) and SE Model 3 with r=32 (SE3r32), both trained with signals from half of the individuals. EER presented in percentage.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Train</ns0:cell><ns0:cell>Test</ns0:cell><ns0:cell>CNN Arch. A EER(%)</ns0:cell><ns0:cell>SE2r2 EER(%)</ns0:cell><ns0:cell>SE3r32 EER(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>EO 1</ns0:cell><ns0:cell>EO 2</ns0:cell><ns0:cell>0.55</ns0:cell><ns0:cell>0.18</ns0:cell><ns0:cell>0.92</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EC 2</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>0.51</ns0:cell><ns0:cell>0.39</ns0:cell></ns0:row><ns0:row><ns0:cell>EC 1</ns0:cell><ns0:cell>EO 2</ns0:cell><ns0:cell>0.40</ns0:cell><ns0:cell>0.41</ns0:cell><ns0:cell>0.55</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EC 2</ns0:cell><ns0:cell>0.97</ns0:cell><ns0:cell>0.36</ns0:cell><ns0:cell>1.11</ns0:cell></ns0:row></ns0:table><ns0:note>SE Model 3 showed lower performance than SE Model 2 for most experiments. Thus, we believe the SE-block based network is a promising approach as a feature extractor for EEG signals.</ns0:note></ns0:figure>
</ns0:body>
" | "March 1, 2021
Gladston Moreira
Department of Computing
Federal University of Ouro Preto (UFOP)
35400-000, Ouro Preto, MG - Brazil
T +55 31 3559-1334
e-mail: gladston@ufop.edu.br
Dear Editors
PeerJ Computer Science
We would like to submit our revised manuscript entitled “A Deep Descriptor for Cross-tasking EEG-Based Recognition” for consideration for publication in PeerJ Computer Science. The manuscript was
carefully revised according to the comments and recommendations presented by the reviewers and editor,
and we hope you will find it suitable for publication in the journal.
Please find enclosed our revised work:
Paper title: A Deep Descriptor for Cross-tasking EEG-Based Recognition
Version: RQ1
Authors: Mariana R F Mota, Pedro H L Silva, Eduardo J S Luz, Gladston J P Moreira, Thiago Schons,
Lauro A G Moraes, David Menotti
Summary of Changes
The authors would like to thank the anonymous reviewers for their careful reading and reviewing of our
manuscript. Their contributions were essential to the improvement of our work.
The answers to all questions/comments raised by the reviewers are presented below. The main changes can
be summarized as follows:
• We propose a new protocol to evaluate the proposed approach. This new protocol uses half of the
Physionet database for training (the first 55 individuals) and the other half for testing (the last 54
individuals). An individual used for training is never used for testing, and vice-versa.
• Evaluation of the impact of adding Squeeze-and-Excitation blocks to the CNN architecture, since
they promote a channel-wise feature response.
The detailed responses to each point raised by the reviewers are provided below. We believe the current
version of the manuscript was substantially improved after the feedback in this round of reviews, and hope
that you will consider it for publication in the journal.
Response to Reviewer #1
Basic reporting
The topic of this paper is interesting and can be accepted in PeerJ after the following corrections are made:
Comment #1: What are the most challenging tasks when applying deep learning to EEG-based recognition? This
must be well addressed and mentioned in the Introduction section.
Answer:
We believe this question is of paramount importance, and we thank the reviewer for pointing this out. Our
experiments reveal that physical motor tasks (not imaginary) are the most challenging, that is, the ones that
hinder the biometric task the most. We addressed this discussion in the manuscript and mentioned it in the
Introduction section.
Comment #2: The section “PROPOSED APPROACH AND EVALUATION SCENARIOS” is not explained well
and needs to be rewritten in a more scientific way. Also, some steps are not explained; for example, Figure 3
shows a “feature extraction” step, but there are no details about it.
Answer:
The entire “Proposed Approach and Evaluation Scenarios” section was revised and the missing pieces of information
were included. The “feature extraction” step is now described in the subsection “CNN
training and data representation”.
Comment #3: The authors used the EER measure to evaluate the proposed method. Did you try other
measures?
Answer:
The EER measure was chosen for comparison with the literature. We have also included another metric, the
decidability index, in our analysis, since it is a good index for assessing the quality of a feature representation.
Comment #4: What are the limitations of the proposed method?
Answer:
The main limitation, in our opinion, concerns the small amount of data used for the task (a 12-second signal
window), although this is part of the reference protocol proposed by Fraschini et al. [1]. We now state this
limitation in the manuscript. We thank the reviewer for pointing out this problem.
Experimental design
Comment #1: More experiments are required.
Answer:
We agree with the reviewer and included new experiments, in particular to assess the impact of
training models on datasets without individual overlap.
Comment #2: The authors should use the newest EEG datasets.
Answer:
Due to the size of the paper and the number of evaluated scenarios, we have limited the experiments to
use the well-known Physionet EEG Motor Movement/Imagery Dataset. Our objective with this work was to
explore new architectures, aiming at state-of-the-art results.
Validity of the findings
Comment #1: More experiments with different validation measures are required.
Answer:
We included new experiments in the work and also evaluated with another metric (the decidability index).
Please see Table 2.
Comment #2: The proposed method must be compared with the state of the art to prove that it has good
performance.
Answer:
We added a comparison to the state of the art for the “Baseline task evaluation” [1] and for the “Multi-task
evaluation” scenario [2]. To the best of our knowledge, those works report the state-of-the-art results for the
Physionet EEG Motor Movement/Imagery Dataset. In addition, in our opinion, the protocols proposed in
[1] and [2] are robust, favor the reproducibility of the results, and thus allow a fair comparison.
[1] Fraschini, M., Hillebrand, A., Demuru, M., Didaci, L., and Marcialis, G. L. (2015). An EEG-based
biometric system using eigenvector centrality in resting state brain networks. IEEE Signal Processing Letters,
22(6):666–670.
[2] Yang, S., Deravi, F., and Hoque, S. (2018). Task sensitivity in eeg biometric recognition. Pattern Analysis
and Applications, 21(1):105–117.
Response to Reviewer #2
Basic reporting
Comment #1: Most contexts are overclaiming. For example, ”We show that the proposed method is
robust, even when trained and evaluated on different motor tasks and fewer electrodes (nine channels). Due
to deep learning hardware accelerators, we claim that our proposal is suitable to be embedded in a real-world
application.” Subjects performed different actions, but they are 1) motor-related tasks, 2) from one single day
with a one-time recording setup. I would recommend the author avoid overclaiming.
Answer:
We agree with the reviewer and removed the overclaimed statements.
Comment #2: This recent review article would help the author improve the Introduction on either general
EEG applications or EEG-based biometrics https://ieeexplore.ieee.org/document/8945233.
Answer:
We agree with the reviewer and included the EEG applications presented in the suggested review article in
the Introduction section.
Comment #3: One important work in the literature is missing. This is very recent work from late 2020.
There are over 30 citations so far. https://ieeexplore.ieee.org/abstract/document/8745473. It is a deep
learning approach for EEG-based biometrics.
Answer:
We agree with the reviewer and we cited this work among the deep learning approaches.
Comment #4: Make the contribution list concise and address my comment in 1.
Answer:
We made the contribution list concise, avoiding overclaiming.
Comment #5: Figure 2 is not informative. The author can remove it.
Answer:
We agree with the reviewer and removed it.
Comment #6: Related works should be in a table form. The current form is complicated to read and to
follow.
Answer:
We agree with the reviewer and summarized the related works in a table.
Experimental design
Comment #1: Remove ”source: the author” from Figure 3, 5, ....
Answer:
We agree with the reviewer and removed it.
Comment #2: I do not agree that data segmentation with overlapping is equal to data augmentation.
Answer:
Overlapping is a way to increase the amount of training data; that is, this approach augments the data. It is
not a complex technique, but it achieves the goal of increasing the data, as expected from a data
augmentation technique.
Comment #3: Validation is not explained clearly, e.g., whether k-fold cross-validation is used.
Answer:
The validation follows the protocols proposed in [1] and [2]. We explain it in the “Evaluation
Scenarios” section.
Comment #4: The author must compare their own proposed model to the previous works by reusing their
code on the same data processing or by reproducing the code from their journal papers. It is not a fair
comparison at this moment.
Answer:
The comparisons for the “Baseline tasks evaluation” and “Multi-task evaluation” follow the same evaluation
protocols proposed in [1] and [2]. Our experiments use precisely the same data to train and test. The only
difference is the pre-processing applied to the training data (data augmentation). Since the test data
is the same, we believe the direct comparison is fair. The “Cross-task evaluation” and “Cross-individual
evaluation” scenarios use different sets, which is why we only compare against our own models.
Validity of the findings
Comment #1: There is no statistical testing.
Answer:
Since the neural networks used in deep learning methods are optimized through stochastic gradient descent,
training is a stochastic process. We agree with the reviewer that we should run the tests multiple times, with different
seeds, and then report the mean and standard deviation and thus use a statistical test. However, this practice is not
common in the deep learning field, due to the enormous computational cost the models usually require. In
our specific case, unfortunately, we would need over 60 days to run the experiments, e.g., 30 times
each.
Comment #2: There is a slight novelty in using EEG from the different motor-related actions, which the
author called multi-tasks. But, I would suggest the author make the deep learning model more novel. Or the
author may include some more research questions to make this work solid. The novelty of the current form
might not be enough.
Answer:
We agree with the reviewer. In this version of the manuscript, we explored Squeeze-and-Excitation
blocks along with our network to add more novelty. Squeeze-and-Excitation blocks promote a channel-
wise feature response, which could be interesting for EEG applications since the signal has a large number of
channels. We also explored the impact of cross-individual evaluation.
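For concreteness, a minimal sketch of a Squeeze-and-Excitation block (Hu et al., 2018) over 1-D feature
maps is given below. The Keras framework and layer configuration are illustrative assumptions rather than
the manuscript's exact implementation; r is the reduction ratio, so r=2 and r=32 correspond to the SE2r2
and SE3r32 settings of Table 9.

    from tensorflow.keras import layers

    def se_block(x, r=2):
        """Squeeze-and-Excitation over a (batch, time, channels) tensor.

        Sketch only: squeeze to per-channel statistics, pass them through a
        bottleneck MLP, and reweight each channel by the learned gate.
        """
        channels = x.shape[-1]
        s = layers.GlobalAveragePooling1D()(x)                 # squeeze
        s = layers.Dense(channels // r, activation='relu')(s)  # reduce by ratio r
        s = layers.Dense(channels, activation='sigmoid')(s)    # channel-wise gates
        s = layers.Reshape((1, channels))(s)
        return layers.Multiply()([x, s])                       # excitation: reweight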
Comments for the author
Comment #1: Good luck with your chances on a revision.
Answer:
We thank the reviewer for their valuable contributions. We have tried our best to address all comments.
[1] Fraschini, M., Hillebrand, A., Demuru, M., Didaci, L., and Marcialis, G. L. (2015). An EEG-based
biometric system using eigenvector centrality in resting state brain networks. IEEE Signal Processing Letters,
22(6):666–670.
[2] Yang, S., Deravi, F., and Hoque, S. (2018). Task sensitivity in eeg biometric recognition. Pattern Analysis
and Applications, 21(1):105–117.
Response to Reviewer #3
Basic reporting
The authors propose a biometric recognition system relying on EEG data, specifically addressing the issue
of cross-task recognition.
Comment #1: This topic is indeed relevant yet not novel. The authors do not cite several recent relevant
studies, such as
- Del Pozo-Banos et al, Evidence of a Task-Independent Neural Signature in the Spectral Shape of the
Electroencephalogram, 2018;
- Vinothkumar et al, Task-Independent EEG based Subject Identification using Auditory Stimulus, 2018
- Kong et al, Task-Independent EEG Identification via Low-rank matrix decomposition, 2018;
- Fraschini et al, Robustness of functional connectivity metrics for EEG-based personal identification over
task-induced intra-class and inter-class variations, 2019;
- Kumar et al, Subspace techniques for task independent EEG person identification, 2019;
Answer:
We agree with the reviewer and we included the studies in the manuscript.
Comment #2: The novelty of the performed study with respect to the aforementioned works should have
been deeply detailed. In more detail, several of the aforementioned studies use CNNs to extract features from
EEG, further affecting the contribution of the present manuscript.
Answer:
We highlight the main contributions of the proposed manuscript as a list in the introduction section. In
particular, this work explores, for the first time, the Squeeze-and-Excitation block for the task.
Experimental design
Comment #1: There is a severe flaw in the employed experimental design. The authors use CNNs to
extract discriminative features from EEG data. The employed CNNs are trained over the same subjects then
used for testing. Such conditions are impossible to replicate in real life: they would imply any potential
malicious subject to be available during the enrolment of a user. The obtained equal error rates are therefore
achieved in improper conditions. A proper verification test should be carried out over subjects which have not
been considered during the training of the employed network.
Answer:
We agree with the reviewer and included more experiments. We proposed a novel protocol in which half
of the subjects are used for training and the remaining for testing. We deeply thank the reviewer for
this comment. This new evaluation allowed us to reach new findings.
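A minimal sketch of that split is given below; the variable names are illustrative, but the subject counts
match the protocol described above (the first 55 Physionet individuals for training, the last 54 for testing).

    import numpy as np

    subject_ids = np.arange(1, 110)          # the 109 Physionet individuals
    train_subjects = set(subject_ids[:55])   # first 55 individuals -> training
    test_subjects = set(subject_ids[55:])    # last 54 individuals  -> testing

    # Records are routed by subject, so no individual appears in both sets.
    assert not train_subjects & test_subjects
    # train_set = [rec for rec in records if rec.subject in train_subjects]
    # test_set  = [rec for rec in records if rec.subject in test_subjects]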
Validity of the findings
Comment #1: A major issue of the paper regards the employed database. It is widely known that the
PhysioNet database has been acquired performing a single recording session for each subject. Although several
tasks are performed by each subject, they are carried out during the same session. The acquired data are
extremely dependent on the session-specific conditions (for instance, the EEG recording device is never taken off
by the subjects during the whole session). The obtained findings are therefore not reliable. It is recommended
to perform tests on databases collected, for each subject, during different days.
Answer:
We agree with the reviewer and introduced the “Cross-individual evaluation” experiment in which individuals
present in the training data are not used in the testing phase and vice-versa. We believe that this evaluation
alleviates the aforementioned issue.
Comments for the author
Comment #1: Given that CNNs have already been used for EEG-based biometric recognition (also considering cross-task conditions), a discussion about the novelty of the proposed approach has to be given.
Moreover, a performance comparison between the effectiveness of the proposed approach and the literature
ones should be included.
Answer:
We included a comparison to the state of the art for the “Baseline Task Evaluation” [1] and for the “Multi-task evaluation” scenario [2], and improved the literature discussion.
[1] Fraschini, M., Hillebrand, A., Demuru, M., Didaci, L., and Marcialis, G. L. (2015). An EEG-based
biometric system using eigenvector centrality in resting state brain networks. IEEE Signal Processing Letters,
22(6):666–670.
[2] Yang, S., Deravi, F., and Hoque, S. (2018). Task sensitivity in eeg biometric recognition. Pattern Analysis
and Applications, 21(1):105–117.
" | Here is a paper. Please give your review comments after reading it. |
109 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Probabilistic Programming allows for automatic Bayesian inference on user-defined probabilistic models. Recent advances in Markov chain Monte Carlo (MCMC) sampling allow inference on increasingly complex models. This class of MCMC, known as Hamliltonian Monte Carlo, requires gradient information which is often not readily available. PyMC3 is a new open source Probabilistic Programming framework written in Python that uses Theano to compute gradients via automatic differentiation as well as compile probabilistic programs on-the-fly to C for increased speed. Contrary to other Probabilistic Programming languages, PyMC3 allows model specification directly in Python code. The lack of a domain specific language allows for great flexibility and direct interaction with the model. This paper is a tutorial-style introduction to this software package.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Probabilistic programming (PP) allows for flexible specification and fitting of Bayesian statistical models. PyMC3 is a new, open-source PP framework with an intuitive and readable, yet powerful, syntax that is close to the natural syntax statisticians use to describe models. It features nextgeneration Markov chain Monte Carlo (MCMC) sampling algorithms such as the No-U-Turn Sampler (NUTS) <ns0:ref type='bibr' target='#b5'>(Homan and Gelman, 2014)</ns0:ref>, a self-tuning variant of Hamiltonian Monte Carlo (HMC) <ns0:ref type='bibr' target='#b3'>(Duane et al., 1987)</ns0:ref>. This class of samplers works well on high dimensional and complex posterior distributions and allows many complex models to be fit without specialized knowledge about fitting algorithms. HMC and NUTS take advantage of gradient information from the likelihood to achieve much faster convergence than traditional sampling methods, especially for larger models. NUTS also has several self-tuning strategies for adaptively setting the tuneable parameters of Hamiltonian Monte Carlo, which means specialized knowledge about how the algorithms work is not required. PyMC3, Stan <ns0:ref type='bibr'>(Team, 2015)</ns0:ref>, and the LaplacesDemon package for R are currently the only PP packages to offer HMC.</ns0:p><ns0:p>A number of probabilistic programming languages and systems have emerged over the past 2-3 decades. One of the earliest to enjoy widespread usage was the BUGS language <ns0:ref type='bibr' target='#b10'>(Spiegelhalter et al., 1995)</ns0:ref>, which allows for the easy specification of Bayesian models, and fitting them via Markov chain Monte Carlo methods. Newer, more expressive languages have allowed for the creation of factor graphs and probabilistic graphical models. Each of these systems are domainspecific languages built on top of existing low-level languages; notable examples include Church <ns0:ref type='bibr' target='#b4'>(Goodman et al., 2012)</ns0:ref> (derived from Scheme), Anglican <ns0:ref type='bibr' target='#b12'>(Wood et al., 2014)</ns0:ref> (integrated with Clojure and compiled with a Java Virtual Machine), Venture <ns0:ref type='bibr' target='#b8'>(Mansinghka et al., 2014)</ns0:ref> (built from C++), Infer.NET <ns0:ref type='bibr' target='#b9'>(Minka et al., 2010)</ns0:ref> (built upon the .NET framework), Figaro (embedded into Scala), WebPPL (embedded into JavaScript), Picture (embedded into Julia), and Quicksand (embedded into Lua).</ns0:p><ns0:p>Probabilistic programming in Python <ns0:ref type='bibr'>Van Rossum and Drake Jr (2000)</ns0:ref> confers a number of advantages including multi-platform compatibility, an expressive yet clean and readable syntax, easy integration with other scientific libraries, and extensibility via C, C++, Fortran or Cython <ns0:ref type='bibr' target='#b1'>(Behnel et al., 2011)</ns0:ref>. These features make it straightforward to write and use custom statistical distributions, samplers and transformation functions, as required by Bayesian analysis. While most of PyMC3's user-facing features are written in pure Python, it leverages Theano <ns0:ref type='bibr' target='#b2'>(Bergstra et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b0'>Bastien et al., 2012)</ns0:ref> to transparently transcode models to C and compile them to machine code, thereby boosting performance. 
Theano is a library that allows expressions to be defined using generalized vector data structures called tensors, which are tightly integrated with the popular NumPy ndarray data structure, and similarly allow for broadcasting and advanced indexing, just as NumPy arrays do. Theano also automatically optimizes the likelihood's computational graph for speed and provides simple GPU integration.</ns0:p><ns0:p>Here, we present a primer on the use of PyMC3 for solving general Bayesian statistical inference and prediction problems. We will first describe basic PyMC3 usage, including installation, data creation, model definition, model fitting and posterior analysis. We will then employ two case studies to illustrate how to define and fit more sophisticated models. Finally we will show how PyMC3 can be extended and discuss more advanced features, such as the Generalized Linear Models (GLM) subpackage, custom distributions, custom transformations and alternative storage backends.</ns0:p></ns0:div>
<ns0:div><ns0:head>INSTALLATION</ns0:head><ns0:p>Running PyMC3 requires a working Python interpreter <ns0:ref type='bibr'>(Van Rossum and Drake Jr, 2000)</ns0:ref>, either version 2.7 (or more recent) or 3.4 (or more recent); we recommend that new users install version 3.4. A complete Python installation for Mac OSX, Linux and Windows can most easily be obtained by downloading and installing the free Anaconda Python Distribution by ContinuumIO.</ns0:p><ns0:p>PyMC3 can be installed using 'pip': pip install git+https://github.com/pymc-devs/pymc3</ns0:p><ns0:p>PyMC3 depends on several third-party Python packages which will be automatically installed when installing via pip. The four required dependencies are: Theano, NumPy, SciPy, and Matplotlib. To take full advantage of PyMC3, the optional dependencies Pandas and Patsy should also be installed.</ns0:p></ns0:div>
<ns0:div><ns0:head>pip install patsy pandas</ns0:head><ns0:p>The source code for PyMC3 is hosted on GitHub at https://github.com/pymc-devs/pymc3 and is distributed under the liberal Apache License 2.0. On the GitHub site, users may also report bugs and other issues, as well as contribute code to the project, which we actively encourage. Comprehensive documentation is readily available at http://pymc-devs.github.io/pymc3/.</ns0:p></ns0:div>
<ns0:div><ns0:head>A MOTIVATING EXAMPLE: LINEAR REGRESSION</ns0:head><ns0:p>To introduce model definition, fitting and posterior analysis, we first consider a simple Bayesian linear regression model with normal priors on the parameters. We are interested in predicting outcomes Y as normally-distributed observations with an expected value µ that is a linear function of two predictor variables, X 1 and X 2 .</ns0:p><ns0:formula xml:id='formula_0'>Y ∼ N (µ, σ 2 ) µ = α + β 1 X 1 + β 2 X 2 2/20</ns0:formula><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2015:09:6618:1:1:NEW 27 Jan 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>where α is the intercept, and β i is the coefficient for covariate X i , while σ represents the observation or measurement error. We will apply zero-mean normal priors with variance of 10 to both regression coefficients, which corresponds to weak information regarding the true parameter values. Since variances must be positive, we will also choose a half-normal distribution (normal distribution bounded below at zero) as the prior for σ. α ∼ N (0, 10)</ns0:p><ns0:formula xml:id='formula_1'>β i ∼ N (0, 10) σ ∼ |N (0, 1)|</ns0:formula></ns0:div>
<ns0:div><ns0:head>Generating data</ns0:head><ns0:p>We can simulate some data from this model using NumPy's random module, and then use PyMC3 to try to recover the corresponding parameters. The following code implements this simulation, and the resulting data are shown in figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>Model Specification</ns0:head><ns0:p>Specifying this model in PyMC3 is straightforward because the syntax is similar to the statistical notation. For the most part, each line of Python code corresponds to a line in the model notation above. First, we import the components we will need from PyMC.</ns0:p><ns0:p>from pymc3 import Model, Normal, HalfNormal</ns0:p><ns0:p>The following code implements the model in PyMC: with basic_model:</ns0:p><ns0:p>This creates a context manager, with our basic model as the context, that includes all statements until the indented block ends. This means all PyMC3 objects introduced in the indented code block below the with statement are added to the model behind the scenes. Absent this context manager idiom, we would be forced to manually associate each of the variables with basic model as they are created, which would result in more verbose code. If you try to create a new random variable outside of a model context manger, it will raise an error since there is no obvious model for the variable to be added to.</ns0:p><ns0:p>The first three statements in the context manager create stochastic random variables with Normal prior distributions for the regression coefficients, and a half-normal distribution for the standard deviation of the observations, σ. alpha = Normal( alpha , mu=0, sd=10) beta = Normal( beta , mu=0, sd=10, shape=2) sigma = HalfNormal( sigma , sd=1)</ns0:p><ns0:p>These are stochastic because their values are partly determined by its parents in the dependency graph of random variables, which for priors are simple constants, and are partly random, according to the specified probability distribution.</ns0:p><ns0:p>The Normal constructor creates a normal random variable to use as a prior. The first argument for random variable constructors is always the name of the variable, which should almost always match the name of the Python variable being assigned to, since it can be used to retrieve the variable from the model when summarizing output. The remaining required arguments for a stochastic object are the parameters, which in the case of the normal distribution are the mean mu and the standard deviation sd, which we assign hyperparameter values for the model. In general, a distribution's parameters are values that determine the location, shape or scale of the random variable, depending on the parameterization of the distribution. Most commonly used distributions, such as Beta, Exponential, Categorical, Gamma, Binomial and others, are available as PyMC3 objects, and do not need to be manually coded by the user.</ns0:p><ns0:p>The beta variable has an additional shape argument to denote it as a vector-valued parameter of size 2. The shape argument is available for all distributions and specifies the length or shape of the random variable; when unspecified, it defaults to a value of one (i.e. a scalar). It can be an integer to specify an array, or a tuple to specify a multidimensional array. For example, shape=(5,7) makes random variable that takes a 5 by 7 matrix as its value.</ns0:p><ns0:p>Detailed notes about distributions, sampling methods and other PyMC3 functions are available via the help function. 
Having defined the priors, the next statement creates the expected value mu of the outcomes, specifying the linear relationship:</ns0:p><ns0:formula xml:id='formula_2'>mu = alpha + beta[0]*X1 + beta[1]*X2</ns0:formula><ns0:p>This creates a deterministic random variable, which implies that its value is completely determined by its parents' values. That is, there is no uncertainty in the variable beyond that which is inherent in the parents' values. Here, mu is just the sum of the intercept alpha and the two products of the coefficients in beta and the predictor variables, whatever their current values may be.</ns0:p></ns0:div>
<ns0:div><ns0:head>5/20</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table'>2015:09:6618:1:1:NEW 27 Jan 2016)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>PyMC3 random variables and data can be arbitrarily added, subtracted, divided, or multiplied together, as well as indexed (extracting a subset of values) to create new random variables. Many common mathematical functions like sum, sin, exp and linear algebra functions like dot (for inner product) and inv (for inverse) are also provided. Applying operators and functions to PyMC3 objects results in tremendous model expressivity.</ns0:p><ns0:p>The final line of the model defines Y obs, the sampling distribution of the response data.</ns0:p><ns0:p>Y_obs = Normal <ns0:ref type='bibr'>( Y_obs , mu=mu, sd=sigma, observed=Y)</ns0:ref> This is a special case of a stochastic variable that we call an observed stochastic, and it is the data likelihood of the model. It is identical to a standard stochastic, except that its observed argument, which passes the data to the variable, indicates that the values for this variable were observed, and should not be changed by any fitting algorithm applied to the model. The data can be passed in the form of either a numpy.ndarray or pandas.DataFrame object.</ns0:p><ns0:p>Notice that, unlike the prior distributions, the parameters for the normal distribution of Y obs are not fixed values, but rather are the deterministic object mu and the stochastic sigma. This creates parent-child relationships between the likelihood and these two variables, as part of the directed acyclic graph of the model.</ns0:p></ns0:div>
<ns0:div><ns0:head>Model fitting</ns0:head><ns0:p>Having completely specified our model, the next step is to obtain posterior estimates for the unknown variables in the model. Ideally, we could derive the posterior estimates analytically, but for most non-trivial models, this is not feasible. We will consider two approaches, whose appropriateness depends on the structure of the model and the goals of the analysis: finding the maximum a posteriori (MAP) point using optimization methods, and computing summaries based on samples drawn from the posterior distribution using MCMC sampling methods.</ns0:p></ns0:div>
<ns0:div><ns0:head>Maximum a posteriori methods</ns0:head><ns0:p>The maximum a posteriori (MAP) estimate for a model, is the mode of the posterior distribution and is generally found using numerical optimization methods. This is often fast and easy to do, but only gives a point estimate for the parameters and can be misleading if the mode isn't representative of the distribution. PyMC3 provides this functionality with the find MAP function.</ns0:p><ns0:p>Below we find the MAP for our original model. The MAP is returned as a parameter point, which is always represented by a Python dictionary of variable names to NumPy arrays of parameter values. It is important to note that the MAP estimate is not always reasonable, especially if the mode is at an extreme. This can be a subtle issue; with high dimensional posteriors, one can have areas of extremely high density but low total probability because the volume is very small. This will often occur in hierarchical models with the variance parameter for the random effect. If the individual group means are all the same, the posterior will have near infinite density if the scale parameter for the group means is almost zero, even though the probability of such a small scale parameter will be small since the group means must be extremely close together.</ns0:p><ns0:p>Also, most techniques for finding the MAP estimate only find a local optimium (which is often good enough), and can therefore fail badly for multimodal posteriors if the different modes are meaningfully different.</ns0:p></ns0:div>
<ns0:div><ns0:head>Sampling methods</ns0:head><ns0:p>Though finding the MAP is a fast and easy way of obtaining parameter estimates of well-behaved models, it is limited because there is no associated estimate of uncertainty produced with the MAP estimates. Instead, a simulation-based approach such as MCMC can be used to obtain a Markov chain of values that, given the satisfaction of certain conditions, are indistinguishable from samples from the posterior distribution.</ns0:p><ns0:p>To conduct MCMC sampling to generate posterior samples in PyMC3, we specify a step method object that corresponds to a single iteration of a particular MCMC algorithm, such as Metropolis, Slice sampling, or the No-U-Turn Sampler (NUTS). PyMC3's step methods submodule contains the following samplers: NUTS, Metropolis, Slice, HamiltonianMC, and BinaryMetropolis.</ns0:p></ns0:div>
<ns0:div><ns0:head>Gradient-based sampling methods</ns0:head><ns0:p>PyMC3 implements several standard sampling algorithms, such as adaptive Metropolis-Hastings and adaptive slice sampling, but PyMC3's most capable step method is the No-U-Turn Sampler. NUTS is especially useful for sampling from models that have many continuous parameters, a situation where older MCMC algorithms work very slowly. It takes advantage of information about where regions of higher probability are, based on the gradient of the log posterior-density. This helps it achieve dramatically faster convergence on large problems than traditional sampling methods achieve. PyMC3 relies on Theano to analytically compute model gradients via automatic differentiation of the posterior density. NUTS also has several self-tuning strategies for adaptively setting the tunable parameters of Hamiltonian Monte Carlo. For random variables that are undifferentiable (namely, discrete variables) NUTS cannot be used, but it may still be used on the differentiable variables in a model that contains undifferentiable variables.</ns0:p><ns0:p>NUTS requires a scaling matrix parameter, which is analogous to the variance parameter for the jump proposal distribution in Metropolis-Hastings, although NUTS uses it somewhat differently. The matrix gives an approximate shape of the posterior distribution, so that NUTS does not make jumps that are too large in some directions and too small in other directions. It is important to set this scaling parameter to a reasonable value to facilitate efficient sampling. This is especially true for models that have many unobserved stochastic random variables or models with highly non-normal posterior distributions. Poor scaling parameters will slow down NUTS significantly, sometimes almost stopping it completely. A reasonable starting point for sampling can also be important for efficient sampling, but not as often.</ns0:p></ns0:div>
<ns0:div><ns0:head>7/20</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2015:09:6618:1:1:NEW 27 Jan 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Fortunately, NUTS can often make good guesses for the scaling parameters. If you pass a point in parameter space (as a dictionary of variable names to parameter values, the same format as returned by find MAP) to NUTS, it will look at the local curvature of the log posterior-density (the diagonal of the Hessian matrix) at that point to guess values for a good scaling vector, which can result in a good value. The MAP estimate is often a good point to use to initiate sampling. It is also possible to supply your own vector or scaling matrix to NUTS. Additionally, the find hessian or find hessian diag functions can be used to modify a Hessian at a specific point to be used as the scaling matrix or vector.</ns0:p><ns0:p>Here, we will use NUTS to sample 2000 draws from the posterior using the MAP as the starting and scaling point. Sampling must also be performed inside the context of the model. The sample function runs the step method(s) passed to it for the given number of iterations and returns a Trace object containing the samples collected, in the order they were collected.</ns0:p><ns0:p>The trace object can be queried in a similar way to a dict containing a map from variable names to numpy.arrays. The first dimension of the array is the sampling index and the later dimensions match the shape of the variable. We can extract the last 5 values for the alpha variable as follows trace[ alpha ] <ns0:ref type='bibr'>[-5:]</ns0:ref> array([ 0.98134501, 1.04901676, 1.03638451, 0.88261935, 0.95910723])</ns0:p></ns0:div>
<ns0:div><ns0:head>Posterior analysis</ns0:head><ns0:p>PyMC3 provides plotting and summarization functions for inspecting the sampling output. A simple posterior plot can be created using traceplot.</ns0:p><ns0:p>from pymc3 import traceplot traceplot(trace)</ns0:p><ns0:p>The left column consists of a smoothed histogram (using kernel density estimation) of the marginal posteriors of each stochastic random variable while the right column contains the samples of the Markov chain plotted in sequential order. The beta variable, being vector-valued, produces two histograms and two sample traces, corresponding to both predictor coefficients. <ns0:ref type='figure'>-----------------------------------------------------------------</ns0:ref> <ns0:ref type='figure'>-------------|==============|==============|--------------|</ns0:ref> 0.523 0.865 1.024 1.200 1.501</ns0:p></ns0:div>
<ns0:div><ns0:head>CASE STUDY 1: STOCHASTIC VOLATILITY</ns0:head><ns0:p>We present a case study of stochastic volatility, time varying stock market volatility, to illustrate PyMC3's capability for addressing more realistic problems. The distribution of market returns is highly non-normal, which makes sampling the volatilities significantly more difficult. This example has 400+ parameters so using older sampling algorithms like Metropolis-Hastings would be inefficient, generating highly auto-correlated samples with a low effective sample size. Instead, we use NUTS, which is dramatically more efficient.</ns0:p></ns0:div>
<ns0:div><ns0:head>9/20</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2015:09:6618:1:1:NEW 27 Jan 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The Model</ns0:p><ns0:p>Asset prices have time-varying volatility (variance of day over day returns). In some periods, returns are highly variable, while in others they are very stable. Stochastic volatility models address this with a latent volatility variable, which is allowed to change over time. The following model is similar to the one described in the NUTS paper (Hoffman 2014, p. 21).</ns0:p><ns0:p>σ ∼ exp(50) ν ∼ exp(.1)</ns0:p><ns0:formula xml:id='formula_3'>s i ∼ N (s i−1 , σ −2 ) log(y i ) ∼ T (ν, 0, exp(−2s i ))</ns0:formula><ns0:p>Here, y is the response variable, a daily return series which we model with a Student-T distribution having an unknown degrees of freedom parameter, and a scale parameter determined by a latent process s. The individual s i are the individual daily log volatilities in the latent log volatility process.</ns0:p></ns0:div>
<ns0:div><ns0:head>The Data</ns0:head><ns0:p>Our data consist of daily returns of the S&P 500 during the 2008 financial crisis.</ns0:p><ns0:p>import pandas as pd returns = pd.read_csv( data/SP500.csv , index_col=0, parse_dates=True) See Figure <ns0:ref type='figure' target='#fig_6'>3</ns0:ref> for a plot of the daily returns data. As can be seen, stock market volatility increased remarkably during the 2008 financial crisis. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>Model Implementation</ns0:head><ns0:p>As with the linear regression example, implementing the model in PyMC3 mirrors its statistical specification. This model employs several new distributions: the Exponential distribution for the ν and σ priors, the Student-T (StudentT) distribution for distribution of returns, and the GaussianRandomWalk for the prior for the latent volatilities.</ns0:p><ns0:p>In PyMC3, variables with positive support like Exponential are transformed with a log transform, making sampling more robust. Behind the scenes, the variable is transformed to the unconstrained space (named 'variableName log') and added to the model for sampling. In this model this happens behind the scenes for both the degrees of freedom, nu, and the scale parameter for the volatility process, sigma, since they both have exponential priors. Variables with priors that are constrained on both sides, like Beta or Uniform, are also transformed to be unconstrained, here with a log odds transform.</ns0:p><ns0:p>Although (unlike model specification in PyMC2) we do not typically provide starting points for variables at the model specification stage, it is possible to provide an initial value for any distribution (called a 'test value' in Theano) using the testval argument. This overrides the default test value for the distribution (usually the mean, median or mode of the distribution), and is most often useful if some values are invalid and we want to ensure we select a valid one. The test values for the distributions are also used as a starting point for sampling and optimization by default, though this is easily overriden.</ns0:p><ns0:p>The vector of latent volatilities s is given a prior distribution by a GaussianRandomWalk object.</ns0:p><ns0:p>As its name suggests, GaussianRandomWalk is a vector-valued distribution where the values of the vector form a random normal walk of length n, as specified by the shape argument. The scale of the innovations of the random walk, sigma, is specified in terms of the precision of the normally distributed innovations and can be a scalar or vector. Notice that we transform the log volatility process s into the volatility process by exp(-2*s).</ns0:p><ns0:p>Here, exp is a Theano function, rather than the corresponding function in NumPy; Theano provides a large subset of the mathematical functions that NumPy does.</ns0:p><ns0:p>Also note that we have declared the Model name sp500 model in the first occurrence of the context manager, rather than splitting it into two lines, as we did for the first example.</ns0:p></ns0:div>
<ns0:div><ns0:head>Fitting</ns0:head><ns0:p>Before we draw samples from the posterior, it is prudent to find a decent starting value, by which we mean a point of relatively high probability. For this model, the full maximum a posteriori (MAP) point over all variables is degenerate and has infinite density. But, if we fix log sigma and nu it is no longer degenerate, so we find the MAP with respect only to the volatility process s keeping log sigma and nu constant at their default values (remember that we set testval=.1</ns0:p></ns0:div>
<ns0:div><ns0:head>11/20</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2015:09:6618:1:1:NEW 27 Jan 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>for sigma). We use the Limited-memory BFGS (L-BFGS) optimizer, which is provided by the scipy.optimize package, as it is more efficient for high dimensional functions; this model includes 400 stochastic random variables (mostly from s).</ns0:p><ns0:p>As a sampling strategy, we execute a short initial run to locate a volume of high probability, then start again at the new starting point to obtain a sample that can be used for inference. Notice that the call to sample includes an optional njobs=2 argument, which enables the parallel sampling of 4 chains (assuming that we have 2 processors available).</ns0:p><ns0:p>We can check our samples by looking at the traceplot for nu and sigma; each parallel chain will be plotted within the same set of axes (Figure <ns0:ref type='figure' target='#fig_9'>4</ns0:ref>). Finally we plot the distribution of volatility paths by plotting many of our sampled volatility paths on the same graph (Figure <ns0:ref type='figure' target='#fig_10'>5</ns0:ref>). Each is rendered partially transparent (via the alpha argument in Matplotlib's plot function) so the regions where many paths overlap are shaded more darkly. As you can see, the model correctly infers the increase in volatility during the 2008 financial crash.</ns0:p><ns0:p>It is worth emphasizing the complexity of this model due to its high dimensionality and dependency-structure in the random walk distribution. NUTS as implemented in PyMC3, however, correctly infers the posterior distribution with ease.</ns0:p></ns0:div>
<ns0:div><ns0:head>CASE STUDY 2: COAL MINING DISASTERS</ns0:head><ns0:p>This case study implements a change-point model for a time series of recorded coal mining disasters in the UK from 1851 to 1962 <ns0:ref type='bibr' target='#b6'>(Jarrett, 1979)</ns0:ref>. The annual number of disasters is thought to have been affected by changes in safety regulations during this period. We have also included a pair of years with missing data, identified as missing by a NumPy MaskedArray using -999 as a sentinel value.</ns0:p><ns0:p>Our objective is to estimate when the change occurred, in the presence of missing data, using multiple step methods to allow us to fit a model that includes both discrete and continuous random variables. disaster_data = np.ma.masked_values <ns0:ref type='bibr'>([4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6, 3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5, 2, 2, 3, 4</ns0:ref>, 2, 1, 3, -999, 2, 1, 1, 1, 1, 3, 0, 0, 1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2, 3, 3, 1, -999, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1], value=-999) year = np.arange <ns0:ref type='bibr'>(1851,</ns0:ref><ns0:ref type='bibr'>1962)</ns0:ref> plot(year, disaster_data, o , markersize=8); ylabel('Disaster count') xlabel('Year') Counts of disasters in the time series is thought to follow a Poisson process, with a relatively large rate parameter in the early part of the time series, and a smaller rate in the later part. The Bayesian approach to such a problem is to treat the change point as an unknown quantity in the model, and assign it a prior distribution, which we update to a posterior using the evidence in the dataset.</ns0:p><ns0:p>In our model,</ns0:p><ns0:formula xml:id='formula_4'>D t ∼ Pois(r t ) r t = l, if t < s e, if t ≥ s s ∼ Unif(t l , t h ) e ∼ exp(1) l ∼ exp(1)</ns0:formula><ns0:p>the parameters are defined as follows:</ns0:p><ns0:p>• D t : The number of disasters in year t</ns0:p><ns0:p>• r t : The rate parameter of the Poisson distribution of disasters in year t.</ns0:p><ns0:p>• s: The year in which the rate parameter changes (the switchpoint).</ns0:p><ns0:p>• e: The rate parameter before the switchpoint s.</ns0:p><ns0:p>• l: The rate parameter after the switchpoint s. The conditional statement is realized using the Theano function switch, which uses the first argument to select either of the next two arguments.</ns0:p><ns0:p>Missing values are handled concisely by passing a MaskedArray or a pandas.DataFrame with NaN values to the observed argument when creating an observed stochastic random variable. From this, PyMC3 automatically creates another random variable, disasters.missing values, which treats the missing values as unobserved stochastic nodes. All we need to do to handle the missing values is ensure we assign a step method to this random variable.</ns0:p><ns0:p>Unfortunately, because they are discrete variables and thus have no meaningful gradient, we cannot use NUTS for sampling either switchpoint or the missing disaster observations. Instead, we will sample using a Metroplis step method, which implements self-tuning Metropolis-Hastings, because it is designed to handle discrete values.</ns0:p><ns0:p>Here, the sample function receives a list containing both the NUTS and Metropolis samplers, and sampling proceeds by first applying step1 then step2 at each iteration. 
[-----------------100%-----------------] 10000 of 10000 complete in 6.9 sec</ns0:p><ns0:p>In the trace plot (figure <ns0:ref type='figure' target='#fig_13'>7</ns0:ref>) we can see that there is about a 10 year span that's plausible for a significant change in safety, but a 5 year span that contains most of the probability mass. The distribution is jagged because of the jumpy relationship between the year switch-point and the likelihood and not due to sampling error. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>PYMC3 FEATURES Arbitrary deterministic variables</ns0:head><ns0:p>Due to its reliance on Theano, PyMC3 provides many mathematical functions and operators for transforming random variables into new random variables. However, the library of functions in Theano is not exhaustive, therefore PyMC3 provides functionality for creating arbitrary Theano functions in pure Python, and including these functions in PyMC3 models. This is supported with the as op function decorator. Theano requires the types of the inputs and outputs of a function to be declared, which are specified for as op by itypes for inputs and otypes for outputs. An important drawback of this approach is that it is not possible for Theano to inspect these functions in order to compute the gradient required for the Hamiltonian-based samplers. Therefore, it is not possible to use the HMC or NUTS samplers for a model that uses such an operator. However, it is possible to add a gradient if we inherit from theano.Op instead of using as op.</ns0:p></ns0:div>
<ns0:div><ns0:head>Arbitrary distributions</ns0:head><ns0:p>The library of statistical distributions in PyMC3, though large, is not exhaustive, but PyMC allows for the creation of user-defined probability distributions. For simple statistical distributions, the DensityDist function takes as an argument any function that calculates a log-probability log(p(x)). This function may employ other parent random variables in its calculation. Here is an example inspired by a blog post by <ns0:ref type='bibr' target='#b11'>VanderPlas (2014)</ns0:ref>, where Jeffreys priors are used to specify priors that are invariant to transformation. In the case of simple linear regression, these are:</ns0:p><ns0:formula xml:id='formula_5'>β ∝ (1 + β 2 ) 3/2 σ ∝ 1 σ</ns0:formula><ns0:p>The logarithms of these functions can be specified as the argument to DensityDist and inserted into the model. # Create likelihood like = Normal( y_est , mu=alpha + beta * X, sd=eps, observed=Y)</ns0:p><ns0:p>For more complex distributions, one can create a subclass of Continuous or Discrete and provide the custom logp function, as required. This is how the built-in distributions in PyMC3 are specified. As an example, fields like psychology and astrophysics have complex likelihood functions for a particular process that may require numerical approximation. In these cases, it is impossible to write the function in terms of predefined Theano operators and we must use a custom Theano operator using as op or inheriting from theano.Op.</ns0:p><ns0:p>Implementing the beta variable above as a Continuous subclass is shown below, along with a sub-function using the as op decorator, though this is not strictly necessary. </ns0:p></ns0:div>
<ns0:div><ns0:head>Generalized Linear Models</ns0:head><ns0:p>The generalized linear model (GLM) is a class of flexible models that is widely used to estimate regression relationships between a single outcome variable and one or multiple predictors. Because these models are so common, PyMC3 offers a glm submodule that allows flexible creation of simple GLMs with an intuitive R-like syntax that is implemented via the patsy module.</ns0:p><ns0:p>The glm submodule requires data to be included as a pandas DataFrame. Hence, for our linear regression example: Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The error distribution, if not specified via the family argument, is assumed to be normal. In the case of logistic regression, this can be modified by passing in a Binomial family object. Models specified via glm can be sampled using the same sample function as standard PyMC3 models.</ns0:p></ns0:div>
<ns0:div><ns0:head>Backends</ns0:head><ns0:p>PyMC3 has support for different ways to store samples from MCMC simulation, called backends. These include storing output in-memory, in text files, or in a SQLite database. By default, an in-memory ndarray is used but for very large models run for a long time, this can exceed the available RAM, and cause failure. Specifying a SQLite backend, for example, as the trace argument to sample will instead result in samples being saved to a database that is initialized automatically by the model. </ns0:p></ns0:div>
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>Probabilistic programming is an emerging paradigm in statistical learning, of which Bayesian modeling is an important sub-discipline. The signature characteristics of probabilistic programmingspecifying variables as probability distributions and conditioning variables on other variables and on observations-makes it a powerful tool for building models in a variety of settings, and over a range of model complexity. Accompanying the rise of probabilistic programming has been a burst of innovation in fitting methods for Bayesian models that represent notable improvement over existing MCMC methods. Yet, despite this expansion, there are few software packages available that have kept pace with the methodological innovation, and still fewer that allow non-expert users to implement models.</ns0:p><ns0:p>PyMC3 provides a probabilistic programming platform for quantitative researchers to implement statistical models flexibly and succinctly. A large library of statistical distributions and several pre-defined fitting algorithms allows users to focus on the scientific problem at hand, rather than the implementation details of Bayesian modeling. The choice of Python as a development</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Simulated regression data.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>#</ns0:head><ns0:label /><ns0:figDesc>Priors for unknown model parameters alpha = Normal( alpha , mu=0, sd=10) beta = Normal( beta , mu=0, sd=10, shape=2) sigma = HalfNormal( sigma , sd=1) # Expected value of outcome mu = alpha + beta[0]*X1 + beta[1]*X2 # Likelihood (sampling distribution) of observations Y_obs = Normal( Y_obs , mu=mu, sd=sigma, observed=Y) The first line, basic_model = Model() creates a new Model object which is a container for the model random variables. Following instantiation of the model, the subsequent specification of the model components is performed inside a with statement:</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>By default, find MAP uses the Broyden-Fletcher-Goldfarb-Shanno (BFGS) optimization algorithm to find the maximum of the log-posterior but also allows selection of other optimization algorithms from the scipy.optimize module. For example, below we use Powell's method to find the MAP. array(1.0175522109423465), beta : array([ 1.51426782, 0.03520891]), sigma_log : array(0.11815106849951475)}</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>values via MAP start = find_MAP(fmin=optimize.fmin_powell) # instantiate sampler step = NUTS(scaling=start) # draw 2000 posterior samples trace = sample(2000, step, start=start) [-----------------100%-----------------] 2000 of 2000 complete in 4.6 sec</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>For a tabular summary, the summary function provides a text-based output of common posterior statistics:</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>FFigure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Historical daily returns of the S&P500 during the 2008 financial crisis.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>from pymc3 import Exponential, StudentT, exp, Deterministic from pymc3.distributions.timeseries import GaussianRandomWalk with Model() as sp500_model: nu = Exponential( nu , 1./10, testval=5.) sigma = Exponential( sigma , 1./.02, testval=.1) s = GaussianRandomWalk( s , sigma**-2, shape=len(returns)) volatility_process = Deterministic( volatility_process , exp(-2*s)) r = StudentT( r , nu, lam=1/volatility_process, observed=returns[ S&P500 ])</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>trace[-1] gives us the last point in the sampling trace. NUTS will recalculate the scaling parameters based on the new point, and in this case it leads to faster sampling due to better scaling.

import scipy

with sp500_model:
    start = find_MAP(vars=[s], fmin=scipy.optimize.fmin_l_bfgs_b)
    step = NUTS(scaling=start)
    trace = sample(100, step, progressbar=False)

    # Start next run at the last sampled position.
    step = NUTS(scaling=trace[-1], gamma=.25)
    trace = sample(2000, step, start=trace[-1], progressbar=False, njobs=2)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 4.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Posterior samples of degrees of freedom (nu) and scale (sigma) parameters of the stochastic volatility model. Each plotted line represents a single independent chain sampled in parallel.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 5.</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>fig, ax = plt.subplots(figsize=(15, 8))
returns.plot(ax=ax)
ax.plot(returns.index, 1/np.exp(trace['s', ::30].T), 'r', alpha=.03)
ax.set(title='volatility_process', xlabel='time', ylabel='volatility')
ax.legend(['S&P500', 'stochastic volatility process'])</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 6.</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Recorded counts of coal mining disasters in the UK, 1851-1962.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 7.</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Posterior distributions and traces from disasters change point model.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc>import theano.tensor as T
from theano.compile.ops import as_op

@as_op(itypes=[T.lscalar], otypes=[T.lscalar])
def crazy_modulo3(value):
    if value > 0:
        return value % 3
    else:
        return (-value + 1) % 3

with Model() as model_deterministic:
    a = Poisson('a', 1)
    b = crazy_modulo3(a)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head /><ns0:label /><ns0:figDesc>import theano.tensor as T
from pymc3 import DensityDist, Uniform

with Model() as model:
    alpha = Uniform('intercept', -100, 100)

    # Create custom densities
    beta = DensityDist('beta', lambda value: -1.5 * T.log(1 + value**2), testval=0)
    eps = DensityDist('eps', lambda value: -T.log(T.abs_(value)), testval=1)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head /><ns0:label /><ns0:figDesc>import numpy as np
import theano.tensor as T
from theano.compile.ops import as_op
from pymc3.distributions import Continuous

class Beta(Continuous):
    def __init__(self, mu, *args, **kwargs):
        super(Beta, self).__init__(*args, **kwargs)
        self.mu = mu
        self.mode = mu

    def logp(self, value):
        mu = self.mu
        return beta_logp(value - mu)

@as_op(itypes=[T.dscalar], otypes=[T.dscalar])
def beta_logp(value):
    return -1.5 * np.log(1 + (value)**2)

with Model() as model:
    beta = Beta('slope', mu=0, testval=0)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head /><ns0:label /><ns0:figDesc># Convert X and Y to a pandas DataFrame
import pandas
df = pandas.DataFrame({'x1': X1, 'x2': X2, 'y': Y})

The model can then be very concisely specified in one line of code.

from pymc3.glm import glm

with Model() as model_glm:
    glm('y ~ x1 + x2', df)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_18'><ns0:head /><ns0:label /><ns0:figDesc>from pymc3.glm.families import Binomial

df_logistic = pandas.DataFrame({'x1': X1, 'x2': X2, 'y': Y > 0})

with Model() as model_glm_logistic:
    glm('y ~ x1 + x2', df_logistic, family=Binomial())</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_19'><ns0:head /><ns0:label /><ns0:figDesc>[-----------------100%-----------------] 5000 of 5000 complete in 2.0 sec

A secondary advantage to using an on-disk backend is the portability of model output, as the stored trace can then later (e.g. in another session) be re-loaded using the load function:

from pymc3.backends.sqlite import load

with basic_model:
    trace_loaded = load('logistic_trace.sqlite')</ns0:figDesc></ns0:figure>
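<ns0:note>For completeness, a minimal sketch of how such a trace could have been written to disk in the first place, assuming the SQLite backend from pymc3.backends; the model name, sample count, and the previously instantiated step method are illustrative assumptions, not the paper's exact session:

from pymc3 import sample
from pymc3.backends import SQLite

with model_glm_logistic:
    backend = SQLite('logistic_trace.sqlite')  # on-disk trace storage
    trace = sample(5000, step, trace=backend)  # draws are streamed to the database</ns0:note>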
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>Figure 2. Kernel density estimates and simulated trace for each variable in the linear regression model.

from pymc3 import summary
summary(trace['alpha'])

alpha:

  Mean             SD               MC Error         95% HPD interval
  -------------------------------------------------------------------</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>This model introduces discrete variables with the Poisson likelihood and a discrete-uniform prior on the change-point s. Our implementation of the rate variable is as a conditional deterministic variable, where its value is conditioned on the current value of s.

from pymc3 import DiscreteUniform, Exponential, Poisson, switch

with Model() as disaster_model:
    switchpoint = DiscreteUniform('switchpoint', lower=year.min(),
                                  upper=year.max(), testval=1900)

    # Priors for pre- and post-switch rates of the number of disasters
    early_rate = Exponential('early_rate', 1)
    late_rate = Exponential('late_rate', 1)

    # Allocate appropriate Poisson rates to years before and after the switchpoint
    rate = switch(switchpoint >= year, early_rate, late_rate)

    disasters = Poisson('disasters', rate, observed=disaster_data)</ns0:figDesc><ns0:note>• t_l, t_h: The lower and upper boundaries of year t.</ns0:note></ns0:figure>
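<ns0:note>Because switchpoint is discrete, gradient-based samplers such as NUTS cannot update it. A minimal sketch of one way to sample this model, assuming Metropolis for the discrete variable; the step assignment is illustrative, not the paper's exact choice:

from pymc3 import Metropolis, NUTS, sample

with disaster_model:
    step1 = NUTS([early_rate, late_rate])  # gradient-based step for the continuous rates
    step2 = Metropolis([switchpoint])      # non-gradient step for the discrete change-point
    trace = sample(10000, step=[step1, step2])</ns0:note>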
</ns0:body>
" | "Dear Dr Elkan,
We would like to resubmit a revised version of “Probabilistic Programming in Python using PyMC3” for publication in PeerJ. The first submission has received favorable reviews from both reviewers with minor requests for improvement. We believe we have addressed all issues. Below we respond to each reviewer comment point-by-point.
This article is well-written. It is clear, understandable, and self-contained. The prose is tight; the examples useful, and the figures tidy.
We thank the reviewer for the encouraging words.
I only found one error, which is on page 14: r_t should equal 'e' if t>=s, not 'r'.
This oversight has been corrected.
Comments for the author
In my opinion, the PyMC3 package should not be considered a 'probabilistic programming' language; rather, it should be considered an API for constructing graphical models.
I draw a clear distinction between the two; I reserve the phrase 'probabilistic programming' for systems which are 'Turing-complete', in the sense that they can model nonparametric distributions, recursive distributions, programs that can write programs, higher-order functions, and the inclusion of arbitrary (stateless) deterministic functions in the middle of probabilistic models. As far as I can tell, PyMC3 does not support any of these.
PyMC3 seems much more comparable to, say, BUGS or BNT than to, say, Church or IBAL. I would therefore not call it 'probabilistic programming' at all.
I would strongly encourage the authors to change the title and introduction to reflect this, to help keep the terminology consistent throughout the community.
We appreciate the feedback but want to point out that Probabilistic Programming is already being used to describe frameworks similar to PyMC3. For example, on the Stan front-page (http://mc-stan.org/) it reads “Stan is a probabilistic programming language”. PyMC3 can do almost everything that Stan can and should thus be allowed to call itself probabilistic programming as well. Cam Davidson-Pilon’s recent book Bayesian Methods for Hackers: Probabilistic Programming and Bayesian Inference implements its probabilistic models entirely in Python using PyMC. Wikipedia also includes packages like WinBUGS, for which PyMC is a replacement, in their article on probabilistic programming (https://en.wikipedia.org/wiki/Probabilistic_programming_language). Moreover, PyMC3 could certainly add features like nonparametric and recursive distributions to make it “Turing-complete” in an upcoming version so this is not a fundamental design limitation in our approach. In fact, some of these features might already be possible via custom Theano density functions.
Reviewer 2 (Anonymous)
Basic reporting
Key prior work is not mentioned or contextualized. For example, Figaro is a Scala-embedded probabilistic programming system that exposes customizable inference strategies; WebPPL is another embedded language (embedded into JavaScript); Picture is another embedded language (into Julia); and Quicksand is another (into Lua).
We apologize for the oversight and have now added a paragraph to the introduction (highlighted in bold) that mentions prior work.
Figure 6 is of very low resolution and needs to be redone.
We have converted all graphics to vector format.
Sincerely,
John Salvatier,
Thomas Wiecki, &
Christopher Fonnesbeck
" | Here is a paper. Please give your review comments after reading it. |
110 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The goal of this research is to develop and implement a highly effective deep learning model for detecting in this paper, we propose an ensemble of Convolutional Neural Network (CNN) based on EfficientNet, named ECOVNet, to detect COVID-19 from chest X-rays. To make the proposed model more robust, we have used one of the largest open-access chest X-ray data sets named COVIDx containing three classes-COVID-19, normal, and pneumonia. For feature extraction, we have applied an effective CNN structure, namely EfficientNet, with ImageNet pre-training weights. The generated features are transferred into custom fine-tuned top layers followed by a set of model snapshots. The predictions of the model snapshots (which are created during a single training) are consolidated through two ensemble strategies, i.e., hard ensemble and soft ensemble, to enhance classification performance. In addition, a visualization technique is incorporated to highlight areas that distinguish classes, thereby enhancing the understanding of primal components related to COVID-19. The results of our empirical evaluations show that the proposed ECOVNet outperforms the state-of-the-art approaches and significantly improves detection performance with 100% recall for COVID-19 and overall accuracy of 96.07%. We believe that ECOVNet can enhance the detection of COVID-19 disease, and thus, propel towards a fully automated and efficacious COVID-19 detection system.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Corona virus disease 2019 (COVID-19) is a contagious disease that was caused by the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2). The disease was first detected in Wuhan City, Hubei Province, China in December 2019, and was related to contact with a seafood wholesale market and quickly spread to all parts of the world <ns0:ref type='bibr'>(World Health Organization, 2020)</ns0:ref>. The World Health Organization (WHO) promulgated the outbreak of the COVID-19 pandemic on March 11, 2020. This perilous virus has not only overwhelmed the world, but also affected millions of lives (World Health Organization, 2020). To limit the spread of this infection, all infected countries strive to cover many strategies such as encourage people to maintain social distancing as well as lead hygienic life, enhance the infection screening system through multi-functional testing, seek mass vaccination to reduce the pandemic ahead of time, etc. The reverse transcriptase-polymerase chain reaction (RT-PCR) is a modular diagnosis method, however, it has certain limitations, such as the accurate detection of suspect patients causes delay since the testing procedures inevitably preserve the strict necessity of conditions at the clinical laboratory <ns0:ref type='bibr' target='#b56'>(Zheng et al., 2020)</ns0:ref> and false-negative results may lead to greater impact in the prevention and control of the disease <ns0:ref type='bibr' target='#b9'>(Fang et al., 2020)</ns0:ref>.</ns0:p><ns0:p>To make up for the shortcomings of RT-PCR testing, researchers around the world are seeking to promote a fast and reliable diagnostic method to detect COVID-19 infection. The WHO and Wuhan University Zhongnan Hospital respectively issued quick guides (World Health Organization, 2020; Jin PeerJ Comput. Sci. reviewing PDF | (CS-2021:01:57178:1:2:NEW 17 Mar 2021)</ns0:p><ns0:p>Manuscript to be reviewed <ns0:ref type='bibr'>et al., 2020b)</ns0:ref>, suggesting that in addition to detecting clinical symptoms, chest imaging can also be used to evaluate the disease to diagnose and treat COVID-19. In <ns0:ref type='bibr' target='#b42'>(Rubin et al., 2020)</ns0:ref>, the authors have contributed a prolific guideline for medical practitioners to use chest radiography and computed tomography (CT) to screen and assess the disease progression of COVID-19 cases. Although CT scans have higher sensitivity, it also has some drawbacks, such as high cost and the need for high doses of radiation during screening, which exposes pregnant women and children to greater radiation risks <ns0:ref type='bibr' target='#b8'>(Davies et al., 2011)</ns0:ref>. On the other hand, diagnosis based on chest X-ray appears to be a propitious solution for COVID-19 detection and treatment. <ns0:ref type='bibr' target='#b34'>Ng et al. (2020)</ns0:ref> remarked that COVID-19 infection pulmonary manifestation is immensely delineated by chest X-ray images.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>The purpose of this study is to ameliorate the accuracy of COVID-19 detection system from chest Xray images. In this context, we contemplate a CNN-based architecture since it is illustrious for its topnotch recognition performance in image classification or detection. For medical image analysis, higher detection accuracy along with crucial findings is a top aspiration, and in current years, CNN based architectures are comprehensively featured the critical findings related to medical imaging. 
In order to achieve the defined purpose, this paper presents a novel CNN based architecture called ECOVNet, exploiting the cutting-edge EfficientNet <ns0:ref type='bibr'>(Tan and Le, 2019)</ns0:ref> family of CNN models together with ensemble strategies. The pipeline of the proposed architecture commences with the data augmentation approach, then optimizes and fine-tunes the pre-trained EfficientNet models, creating respective model's snapshots. After that, generated model snapshots are integrated into an ensemble, i.e., soft ensemble and hard ensemble, to make predictions.</ns0:p><ns0:p>The motivation for using EfficientNet is that they are known for their high accuracy, while being smaller and faster than the best existing CNN architectures. Moreover, an ensemble technique has proven to be effective in predicting since it produces a lower error rate compared with the prediction of a single model. The use of ensemble techniques on different CNN models has proven to be an effective technique for image-based diagnosis and biomedical research <ns0:ref type='bibr' target='#b23'>(Kumar et al., 2017)</ns0:ref>. Owing to the limited number of COVID-19 images currently available, diagnosing COVID-19 infection is more challenging, thereby investing with a visual explainable approach is applied for further analysis. In this regard, we use a Gradient-based Class Activation Mapping algorithm, i.e., Grad-CAM <ns0:ref type='bibr' target='#b45'>(Selvaraju et al., 2017)</ns0:ref>, providing explanations of the predictions and identifying relevant features associated with COVID-19 infection. The key contributions of this paper are as follows:</ns0:p><ns0:p>• We propose a novel CNN based architecture that includes pre-trained EfficientNet for feature extraction and model snapshots to detect COVID-19 from chest X-rays.</ns0:p><ns0:p>• Taking into account the following assumption, the decisions of multiple radiologists are considered in the final prediction, we propose an ensemble in the proposed architecture to make predictions, thus making a credible and fair evaluation of the system.</ns0:p><ns0:p>• We visualize a class activation map through Grad-CAM to explain the prediction as well as to identify the critical regions in the chest X-ray.</ns0:p><ns0:p>• We present an empirical evaluation of our model comparing with state-of-the-art models to appraise the effectiveness of the proposed architecture in detecting COVID-19.</ns0:p><ns0:p>The remainder of the paper is arranged as follows. Section 2 discusses related work. Section 3 explains the details of the data set and presents ECOVNet architecture. The results of our empirical evaluation is presented in Section 4. Finally, Section 5 concludes the paper and highlights the future work.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>RELATED WORKS</ns0:head><ns0:p>The impressive improvements achieved through CNN technology have attracted attention of computational medical imaging researchers to study the potential of CNN in medical images obtained through CT, Magnetic resonance imaging (MRI), and X-rays. Many researchers have explored CNN technology in an effective way to classify <ns0:ref type='bibr' target='#b19'>(Kaur and Gandhi, 2020)</ns0:ref> and segment <ns0:ref type='bibr' target='#b55'>(Zhang et al., 2015)</ns0:ref> MR brain images, and have obtained the best performance. What's more, pneumonia detection on chest X-rays with CNN is a state-of-the-art technology with historical prospects for image diagnosis systems <ns0:ref type='bibr' target='#b39'>(Rajpurkar et al., 2017)</ns0:ref>. Owing to the need to identify COVID-19 infections faster, the latest application areas of CNN-based AI systems are booming, which can speed up the analysis of various medical images. <ns0:ref type='bibr' target='#b36'>(Ozturk et al., 2020)</ns0:ref> proposed for the automatic detection of COVID-19 using chest X-ray images where the proposed method carried out two types of classification, one for binary classification (such as COVID-19 and No-Findings) and another for multi-class (such as COVID-19, No-Findings and pneumonia) classification. Finally, the authors provided an intuitive explanation through the heat map, so it can assist the radiologist to find the affected area on the chest X-ray. Another research <ns0:ref type='bibr' target='#b18'>(Karim et al., 2020)</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>METHODOLOGY</ns0:head><ns0:p>In this section, we briefly discuss our approach. First, we precede the benchmark data set and data augmentation strategy used in the proposed architecture. Next, we outline the proposed ECOVNet architecture, including network construction using a pre-trained EfficientNet and training methods, and then model ensemble strategies. Finally, to make disease detection more acceptable, we integrate decision visualizations to highlight pivotal facts with visual markers. </ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Dataset</ns0:head><ns0:p>In this sub-section, we concisely inaugurate the benchmark data set, named COVIDx <ns0:ref type='bibr' target='#b51'>(Wang et al., 2020)</ns0:ref>, that we used in our experiment. The dataset comprises three categories of images -COVID-19, normal and pneumonia, with total number of 13, 914 images for training and 1, 579 for testing (accessed on July 17, 2020). To generate the COVIDx, the authors <ns0:ref type='bibr' target='#b51'>(Wang et al., 2020)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science • Radiological Society of North America (RSNA) Pneumonia Detection Challenge dataset (RSNA, 2019) -normal and non-COVID-19 pneumonia cases.</ns0:p><ns0:p>• COVID-19 radiography database <ns0:ref type='bibr'>(Chowdhury et al., 2020)</ns0:ref> -only COVID-19 cases.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Data Augmentation</ns0:head><ns0:p>Data augmentation is usually performed during the training process to expand the training set. As long as the semantic information of an image is preserved, the transformation can be used for data augmentation.</ns0:p><ns0:p>Using data augmentation, the performance of the model can be improved by solving the problem of overfitting. Although the CNN model has properties such as partial translation-invariant, augmentation strategies (i.e., translated images) can often considerably enhance generalization capabilities <ns0:ref type='bibr' target='#b10'>(Goodfellow et al., 2016)</ns0:ref>. Data augmentation strategies provide various alternatives, each of which has the advantage of interpreting images in multiple ways to present important features, thereby improving the performance of the model. We have considered the following transformations: horizontal flip, rotation, shear, and zoom for augmentation.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3'>ECOVNet Architecture</ns0:head><ns0:p>Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> shows a graphical presentation of the proposed ECOVNet architecture using a pre-trained Effi-cientNet. After augmenting the COVIDx dataset, we used pre-trained EfficientNet <ns0:ref type='bibr'>(Tan and Le, 2019)</ns0:ref> as a feature extractor. This step ensures that the pre-trained EfficientNet can extract and learn useful chest X-ray features, and can generalize it well. Indeed, EfficientNets are an order of models that are obtained from a base model, i.e., EfficientNet-B0. In the Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>, we depicted our proposed architecture using EfficientNet-B0 for the sake of brevity, however, during the experimental evaluation, we have also considered other five EfficientNets B1 to B5. The output features from the pre-trained EfficientNet fed to our proposed custom top layers through two fully connected layers, which are respectively integrated with batch normalization, activation, and dropout. We generated several snapshots in a training session, and then combined their predictions with an ensemble prediction. At the same time, the visualization approach, which can qualitatively analyze the relationship between input examples and model predictions, was incorporated into the following part of the proposed model.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3.1'>Pre-trained Efficientnet Feature Extraction</ns0:head><ns0:p>EfficientNets are a series of models (namely EfficientNet-B0 to B7) that are derived from the baseline network (often called EfficientNet-B0) by scale it up. By adopting a compound scaling method in all dimensions of the network, i.e., width, depth, and resolution, EfficientNets have pulled attention due to its supremacy in prediction performance. The intuition of using compound scaling is that scaling any dimension of the network (such as width, depth, or image resolution) can increase accuracy, but for larger models, the accuracy gain will decrease. To scale the dimensions of the network systematically, compound scaling uses a compound coefficient that controls how many more resources are functional for model scaling, and the dimensions are scaled by the compound coefficient in the following way <ns0:ref type='bibr'>(Tan and Le, 2019)</ns0:ref>: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_0'>depth: d = α φ width: w = β φ resolution: r = γ φ s.t. α.β 2 .γ 2 ≈ 2 α ≥ 1, β ≥ 1, γ ≥ 1 (1)</ns0:formula></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>where φ is the compound coefficient, and α, β , and γ are the scaling coefficients of each dimension that can be fixed by a grid search. After determining the scaling coefficients, they are applied to the baseline network (EfficientNet-B0) for scaling to obtain the desired target model size. For instance, in the case of EfficientNet-B0, when φ = 1 is set, the optimal values are yielded using a grid search, i.e., α = 1.2, β = 1.1, and γ = 1.15, under the constraint of α.β 2 .γ 2 ≈ 2 <ns0:ref type='bibr'>(Tan and Le, 2019)</ns0:ref>.</ns0:p><ns0:p>The feature extraction of the EfficientNet-B0 baseline architecture is comprised of the several mobile inverted bottleneck convolution (MBConv) <ns0:ref type='bibr' target='#b43'>(Sandler et al., 2018;</ns0:ref><ns0:ref type='bibr'>Tan et al., 2019)</ns0:ref> blocks with built-in squeeze-and-excitation (SE) <ns0:ref type='bibr' target='#b12'>(Hu et al., 2018)</ns0:ref>, batch normalization, and swish activation <ns0:ref type='bibr' target='#b40'>(Ramachandran et al., 2017)</ns0:ref> as integrated into EfficientNet. Table <ns0:ref type='table' target='#tab_5'>2</ns0:ref> shows the detailed information of each layer of the EfficientNet-B0 baseline network. EfficientNet-B0 consists of 16 MBConv blocks varying in several aspects, for instance, kernel size, feature maps expansion phase, reduction ratio, etc. A complete workflow of the MBConv1, k3 × 3 and MBConv6, k3 × 3 blocks are shown in Fig. <ns0:ref type='figure'>2</ns0:ref>. Both MBConv1, k3 × 3 and MBConv6, k3 × 3 use depthwise convolution, which integrates a kernel size of 3 × 3 with the stride size of s. In these two blocks, batch normalization, activation, and convolution with a kernel size of 1 × 1 are integrated. The skip connection and a dropout layer are also incorporated in MBConv6, k3 × 3, but this is not the case with MBConv1, k3 × 3. Furthermore, in the case of the extended feature map, MBConv6, k3 × 3 is six times that of MBConv1, k3 × 3, and the same is true for the reduction rate in the SE block, that is, for MBConv1, k3 × 3 and MBConv6, k3 × 3, r is fixed to 4 and 24, respectively. Note that, MBConv6, k5 × 5 performs the identical operations as MBConv6, k3 × 3, but MBConv6, k5 × 5 applies a kernel size of 5 × 5, while a kernel size of 3 × 3 is used by MBConv6, k3 × 3. The feature extraction process of the proposed ECOVNet architecture applying pre-trained ImageNet weights is executed after the image augmentation process, as it is presented in Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3.2'>Classifier</ns0:head><ns0:p>After the feature extraction process, a customized top layer is used, which works as the classifier shown in and <ns0:ref type='bibr' target='#b15'>Szegedy, 2015)</ns0:ref>. It makes the optimization process smoother, resulting in a more predictable and stable gradient behavior, thereby speeding up training <ns0:ref type='bibr' target='#b44'>(Santurkar et al., 2018)</ns0:ref>. In this study, in a case of activation function, we have preferred Swish which is defined as <ns0:ref type='bibr' target='#b40'>(Ramachandran et al., 2017)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_1'>f (x) = x • σ (x)<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where σ (x) = (1 + exp(−x)) −1 is the sigmoid function. Comparison with other activation functions Swish consistently outperforming others including Rectified Linear Unit (ReLU) <ns0:ref type='bibr' target='#b31'>(Nair and Hinton, 2010)</ns0:ref>, which is the most successful and widely-used activation function, on deep networks applied to a variety of challenging fields including image classification and machine translation. Swish has many characteristics, such as one-sided boundedness at zero, smoothness, and non-monotonicity, which play an important role in improving it <ns0:ref type='bibr' target='#b40'>(Ramachandran et al., 2017)</ns0:ref>. After performing the activation operation, we integrated a Dropout <ns0:ref type='bibr' target='#b48'>(Srivastava et al., 2014)</ns0:ref> layer, which is one of the preeminent regularization methods to reduce overfitting and make better predictions. This layer can randomly drop certain FC layer nodes, which means removing all randomly selected nodes, along with all its incoming and outgoing weights. The number of randomly selected nodes drop in each layer is obtained with a probability p independent of other layers, where p can be chosen by using either a validation set or a random estimate (i.e., p = 0.5). In this study, we maintained a dropout size of 0.3. Next, the classification layer used the softmax activation function to render the activation from the previous FC layers into a class score to determine the class of the input chest X-ray image as COVID-19, normal, and pneumonia. The softmax activation function is defined in the following way:</ns0:p><ns0:formula xml:id='formula_2'>s(y i ) = e y i ∑ C j=1 e y j (<ns0:label>3</ns0:label></ns0:formula><ns0:formula xml:id='formula_3'>)</ns0:formula><ns0:p>where C is the total number of classes. This normalization limits the output sum to 1, so the softmax output s(y i ) can be interpreted as the probability that the input belongs to the i class. In the training process, Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>we apply the categorical cross-entropy loss function, which uses the softmax activation function in the classification layer to measure the loss between the true probability of the category and the probability of the predicted category. The categorical cross-entropy loss function is defined as</ns0:p><ns0:formula xml:id='formula_4'>l = − N ∑ n=1</ns0:formula><ns0:p>log( e y i,n ∑ C j=1 e y j,n</ns0:p><ns0:p>).</ns0:p><ns0:p>(4)</ns0:p><ns0:p>The total number of input samples is denoted as N, and C is the total number of classes, that is, C = 3 in our case.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3.3'>Model Snapshots and Ensemble Prediction</ns0:head><ns0:p>The main concept of building model snapshots is to train one model with constantly reducing the learning rate to attain a local minimum and save a snapshot of the current model's weight. Later, it is necessary to actively increase the learning rate to retreat from the current local minimum requirements. This process continues repeatedly until it completes cycles. One of the prominent methods for creating model snapshots for CNN is to collect multiple models during a single training run with cyclic cosine annealing <ns0:ref type='bibr' target='#b13'>(Huang et al., 2017a)</ns0:ref>. The cyclic cosine annealing method starts from the initial learning rate, then gradually decreases to the minimum, and then rapidly increases. The learning rate of cyclic cosine annealing in each epoch is defined as: Ensemble through model snapshots is more effective than a structure based on a single model only. Therefore, compared with the prediction of a single model, the ensemble prediction reduces the generalization error, thereby improving the prediction performance. We have experimented with two ensemble strategies, i.e., hard ensemble and soft ensemble, to consolidate the predictions of snapshots model to classify chest X-ray images as COVID-19 or normal or pneumonia. Both hard ensemble and soft ensemble use the last m (m ≤ M) model's softmax outputs since these models have a tendency to have the lowest test error. We also consider class weights to obtain a softmax score before applying the ensemble. Let O i (x) is the softmax score of the test sample x of the i-th snapshot model. Using hard ensemble, the prediction of the i-th snapshot model is defined as</ns0:p><ns0:formula xml:id='formula_5'>α(t) = α 0 2 (cos( πmod(t − 1, ⌈T /M⌉) ⌈T /M⌉ ) + 1)<ns0:label>(5</ns0:label></ns0:formula><ns0:formula xml:id='formula_6'>H i = argmax x O i (x).<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>The final ensemble constrains to aggregate the votes of the classification labels (i.e., COVID-19, normal, and pneumonia) in the other snapshot models and predict the category with the most votes. On the other hand, the output of the soft ensemble includes averaging the predicted probabilities of class labels in the last m snapshots model defined as</ns0:p><ns0:formula xml:id='formula_7'>S = 1 m m−1 ∑ i=0 O M−i (x).<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>Finally, the class label with the highest probability is used for the prediction. The creation of model snapshots and ensemble predictions are integrated at the end of the proposed architecture, as shown in Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>. </ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3.4'>Hyper-Parameters Adjustment</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.3.5'>Visual Explanations using Grad-CAM</ns0:head><ns0:p>Although the CNN-based modular architecture provides encouraging recognition performance for image classification, there are still several issues where it is challenging to reveal why and how to produce such impressive results. Due to its black-box nature, it is sometimes contrary to apply it in a medical</ns0:p><ns0:p>diagnosis system where we need an interpretable system, i.e., visualization as well as an accurate diagnosis.</ns0:p><ns0:p>Despite it has certain challenges, researchers are still endeavoring to seek for an efficient visualization technique since it can contribute the most critical key facts in the health-care system into focus, assist medical practitioners to distinguish correlations and patterns in imaging, and perform data analysis more efficacious. In the field of detecting COVID-19 through chest X-rays, some early studies focused on visualizing the behavior of CNN models to distinguish between different categories (such as COVID-19, normal, and pneumonia), so they can produce explanatory models. In our proposed model, we applied a gradient-based approach named Grad-CAM <ns0:ref type='bibr' target='#b45'>(Selvaraju et al., 2017)</ns0:ref>, which measures the gradients of features maps in the final convolution layer on a CNN model for a target image, to foreground the critical regions that are class-discriminating saliency maps. In Grad-CAM, gradients that are flowing back to the final convolutional layer in a CNN model are globally averaged to calculate the target class weights of each filter. Grad-CAM heat-map is a combination of weighted feature maps, followed by a ReLU activation. The class-discriminative saliency map L c for the target image class c is defined as follows <ns0:ref type='bibr' target='#b45'>(Selvaraju et al., 2017)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_8'>L c i, j = ReLU( ∑ k w c k A k i, j ),<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>where A k i, j denotes the activation map for the k-th filter at a spatial location (i, j), and ReLU captures the positive features of the target class. The target class weights of k-th filter is computed as:</ns0:p><ns0:formula xml:id='formula_9'>w c k = 1 Z ∑ i ∑ j ∂Y c ∂ A k i, j ,<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>where Y c is the probability of classifying the target category as c, and the total number of pixels in the activation map is denoted as Z. The Grad-CAM visualization of each model snapshot is incorporated at the end edge of the proposed architecture, as displayed in Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>EXPERIMENTS AND RESULTS</ns0:head><ns0:p>In this section, we evaluate the classification performance of our proposed ECOVNet and compare it's performance with the state-of-the-art methods. We consider several experimental settings to analyze the robustness of the ECOVNet model. All our programs are written in Python, and the software pile is composed of Keras with TensorFlow and scikit-learn. The source code and models are publicly available in github 1 .</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Dataset and Parameter Settings</ns0:head><ns0:p>In this section, we introduce the distribution of the benchmark data set and the model parameters generated in the experiment. We used COVIDx <ns0:ref type='bibr' target='#b51'>(Wang et al., 2020)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Evaluation Metrics</ns0:head><ns0:p>In order to evaluate the performance of the proposed method, we considered the following evaluation metrics: accuracy, precision, recall, F1-score, confidence interval (CI), receiver operating characteristic (ROC) curve and area under the curve (AUC). The definitions of accuracy, precision, recall and F1 score are as follows:</ns0:p><ns0:formula xml:id='formula_10'>Accuracy = TP+TN Total Samples (10) Precision = TP TP+FP (11) Recall = TP TP+FN (12) F1 = 2 × Precision × Recall Precision + Recall (<ns0:label>13</ns0:label></ns0:formula><ns0:formula xml:id='formula_11'>)</ns0:formula><ns0:p>where TP stands for true positive, while TN, FP, and FN stand for true negative, false positive, and false negative, respectively. Since the benchmark data set is not balanced, F1 score may be a more substantial evaluation metric Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3'>Prediction Performance</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_11'>5</ns0:ref> reports prediction performances of the proposed ECOVNet without using ensemble. The results</ns0:p><ns0:p>show that ECOVNet with EfficientNet-B5 pre-trained weights outperforms over other base models both for the case of images with augmentation and without augmentation. It reflects the fact that feature extraction using an optimized model that considers three aspects, namely higher depth and width, and a broader image resolution, can capture more and finer details, thereby improving classification accuracy.</ns0:p><ns0:p>Without augmentation, under the condition of without ensemble, ECOVNet's accuracy reaches 96.26%, and its performance is lower for with augmentation, reaching 94.68% accuracy. We calculated accuracy with 95% CI. A tight range of CI means higher precision, while the wide range of CI indicates the opposite.</ns0:p><ns0:p>As we can see, the CI interval is in a narrow range for the case of no augmentation, and the CI range is wider for the case of augmentation. Furthermore, Fig. <ns0:ref type='figure' target='#fig_5'>3</ns0:ref> shows the training loss of ECOVNet considering EfficientNet-B5. <ns0:ref type='table' target='#tab_13'>6</ns0:ref> and Table <ns0:ref type='table' target='#tab_14'>7</ns0:ref> show the classification results using ensembles for without augmentation and with augmentation, respectively.</ns0:p><ns0:p>As shown in Table <ns0:ref type='table' target='#tab_13'>6</ns0:ref>, in handling COVID-19 cases, the ensemble methods are significantly better than the no ensemble method. More specifically, the recall hits its maximum value 100%, and to a large extent, this result demonstrates the robustness of our proposed architecture. Furthermore, for COVID-19 detection, soft ensemble seems to be the preferred method due to its recall and F1-score 100% and 96.15%, respectively. In the soft ensemble, the average softmax score of each category affects the direction of the desired result, thus the performance of the soft ensemble is better than the hard ensemble. Owing to the uneven distribution of the test set, an F1-score may be more reliable than an accuracy.</ns0:p><ns0:p>For augmentation, we see that the ensemble methods present better results than the no ensemble (see Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Table <ns0:ref type='table' target='#tab_14'>7</ns0:ref>). When comparing between two ensemble methods, we see that hard ensemble outperforms soft ensemble with a significant margin in the case of precision and F1-score, but an exception is that the soft ensemble is slightly better than the hard ensemble while recall is taken into account. Moreover, in the case of overall accuracy, the hard ensemble shows better detection performance than the soft ensemble.</ns0:p><ns0:p>It can also be clearly seen from Table <ns0:ref type='table' target='#tab_13'>6</ns0:ref> and Table <ns0:ref type='table' target='#tab_14'>7</ns0:ref> that for COVID-19 cases, the precision of the hard ensemble method is better than the soft ensemble method in terms of augmentation and no augmentation.</ns0:p><ns0:p>Finally, we also observe that the confidence interval range is small for no augmentation strategy (Table <ns0:ref type='table' target='#tab_13'>6</ns0:ref>) compared to augmentation strategy (Table <ns0:ref type='table' target='#tab_14'>7</ns0:ref>). In Fig. 
<ns0:ref type='figure'>4</ns0:ref>, the proposed ECOVNet (Base models B0-B5) is aimed at the precision, recall, and F1-score of the soft ensemble of test data considering the COVID-19 cases. When comparing the precision of ECOVNet, we have seen that ECOVNet-B4 (Base model B4) shows significantly better performance than other base models. However, in terms of recall, as we consider more in-depth base models, the value gradually increases. The same is true for F1-scores as well except with a slight decrease of 0.5% from ECOVNet-B5 to ECOVNet-B4.</ns0:p><ns0:p>It is often useful to analyze the ROC curve to reflect the classification performance of the model since the ROC curve gives a summary of the trade-off between the true positive rate and the false positive rate of a model that takes into account different probability thresholds. In Fig. <ns0:ref type='figure'>5</ns0:ref>, the ROC curves show the micro and macro average and class-wise AUC scores obtained by the proposed ECOVNet, where each curve refers to the ROC curve of an individual model snapshot. The AUC scores of all categories are consistent, indicating that the prediction of the proposed model is stable. However, the AUC scores in the third and fourth snapshots are better than other snapshots. As it is evident from Fig. <ns0:ref type='figure'>5</ns0:ref> that the area under the curve of all classes is relatively similar, but COVID-19's AUC is higher than other classes, i.e., 1. Furthermore, Fig. <ns0:ref type='figure' target='#fig_6'>6</ns0:ref> shows the confusion matrices of the proposed ECOVNet considering the base model of EfiicientNet-B5. In Fig. <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>, it is clear that for COVID-19, the ensemble methods provide better results than those without ensemble. These methods provide results that are 3% − 4% better than without ensemble. However, ECOVNet has the ability to detect normal and pneumonia chest X-rays, whether in </ns0:p></ns0:div>
<ns0:div><ns0:head n='4.4'>Comparison between ECOVNet and the other models</ns0:head><ns0:p>When comparing with other methods, we considered whether the existing method had regarded one of the following factors: used ImageNet weights, applied ensemble methods and liked us used the COVIDx dataset. Table <ns0:ref type='table' target='#tab_18'>8</ns0:ref> shows the comparison between our proposed ECOVNet method and the state-of-the-art methods for detecting COVID-19 from chest X-rays. It shows that our proposed method outperforms all the existing methods in terms of precision, recall and accuracy. COVID-Net <ns0:ref type='bibr' target='#b51'>(Wang et al., 2020)</ns0:ref>, EfficientNet-B3 <ns0:ref type='bibr' target='#b25'>(Luz et al., 2020)</ns0:ref> and DeepCOVIDExplainer <ns0:ref type='bibr' target='#b18'>(Karim et al., 2020)</ns0:ref>) used ImageNet weights and the COVIDx data set. When comparing with COVID-Net, we observe that our proposed method is better, with a recall rate greater than or equal to 6%. Compared with ours, another method called EfficientNet-B3 shows higher precision, but the recall rate and accuracy lag behind. Using a total of 1, 125 images containing COVID-19, normal and pneumonia, a method called DarkCovidNet can achieve 87.02% accuracy. The proposed ECOVNet is superior to DarkCovidNet in recall rate and accuracy.</ns0:p><ns0:p>Another method named DeepCOVIDExplainer achieves an accuracy of 96.77% that is comparable compared with ours. However, our proposed method is significantly better in terms of precision and recall. In addition, the recall rate of CoroNet <ns0:ref type='bibr' target='#b21'>(Khan et al., 2020)</ns0:ref> is the same as our proposed ECOVNet, which is 100%, although the accuracy of this method is about 1% behind. Moreover, CovXNet <ns0:ref type='bibr' target='#b26'>(Mahmud et al., 2020)</ns0:ref> In the confusion matrices, the predicted labels, such as COVID-19, Normal, and Pneumonia, are marked as 0, 1 and 2, respectively. pool of generated features, and a classification layer. On the other hand, the proposed ECOVNet has considered the use of transfer learning in combination with fine-tuning steps and ensemble methods, so significant improvements can be achieved. In Table <ns0:ref type='table' target='#tab_19'>9</ns0:ref>, we have observed that our proposed method shows better results than EfficientNet.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.5'>Visualization using Grad-CAM</ns0:head><ns0:p>We applied the Grad-CAM visual interpretation method to visually depict the salient areas where ECOV-Net emphasizes the classification decision for a given chest X-ray image. Accurate and definitive salient region detection is crucial for the analysis of classification decisions as well as for assuring the trustworthiness of the results. In order to locate the salient area, the feature weights with various illuminations related to feature importance are used to create a two-dimensional heat map and superimpose it on a Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>given input image. Fig. <ns0:ref type='figure' target='#fig_7'>7</ns0:ref> shows the visualization results of locating Grad-CAM using ECOVNet for each model snapshots. This salient area locates the area of each category area in the lung that has been identified when a given image is classified as COVID-19 or normal or pneumonia. As shown in Fig. <ns0:ref type='figure' target='#fig_7'>7</ns0:ref>, for COVID-19, a ground-glass opacity (GGO) occurs along with some consolidation, thereby partially covering the markings of the lungs. Hence, it leads to lung inflammation in both the upper and lower zones of the lung. When examining the heat maps generated from the COVID-19 chest X-ray, it can be distinguished that the heat maps created from snapshot 2 and snapshot 3 points to the salient area (such as GGO). However, in the case of the normal chest X-ray, no lung inflammation is observed, so there is no significant area, thereby easily distinguishable from COVID-19 and pneumonia. As well, it can be observed from the chest X-ray for pneumonia is that there are GGOs in the middle and lower parts of the lungs. The heat maps generated for the pneumonia chest X-ray are localized in the salient regions with GGO, but for the 4th snapshot model, it appears to fail to identify the salient regions as the heat map highlights outside the lung. Accordingly, we believe that the proposed ECOVNet provides sufficient information about the inherent causes of the COVID-19 disease through an intuitive heat map, and this type of heat map can help AI-based systems interpret the classification results.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>CONCLUSION AND FUTURE WORK</ns0:head><ns0:p>In this paper, we proposed a novel modular architecture ECOVNet based on CNN, which can effectively detect COVID-19 with the class activation maps from one of the largest publicly available chest X-ray data set, i.e., COVIDx. In this work, a highly effective CNN structure (such as the EfficientNet base model with ImageNet pre-trained weights) is used as feature extractors, while fine-tuned pre-trained weights are considered for related COVID-19 detection tasks. Also, ensemble predictions improve performance by exploiting the predictions obtained from the proposed ECOVNet model snapshots. The results of our empirical evaluations show that the soft ensemble of the proposed ECOVNet model snapshots outperforms the other state-of-the-art methods. Finally, we performed a visualization study to locate significant areas in the chest X-ray through the class activation map for classifying the chest X-ray into its expected category.</ns0:p><ns0:p>Thus, we believe that our findings could make a useful contribution to the detection of COVID-19 infection and the widespread acceptance of automated applications in medical practice. While this work contributes to reduce the effort of health professional's radiological assessment, our future plan is to lead this work to design a fully-functional application using guidelines of the design</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Graphical representation of the proposed ECOVNet architecture</ns0:figDesc><ns0:graphic coords='5,349.59,537.27,106.91,69.24' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>•</ns0:head><ns0:label /><ns0:figDesc>COVID-19 Image Data Collection (Cohen et al., 2020) -non-COVID-19 pneumonia and COVID-19 cases are taken from this repository. • COVID-19 Chest X-ray Dataset initiative (Chung, 2020b) -only COVID-19 cases are taken from this repository. • ActualMed COVID-19 Chest X-ray Dataset Initiative (Chung, 2020a) -taken only COVID-19 cases.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Fig. 1 .Figure 2 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 2. The basic building block of EfficientNet-B0. All MBConv blocks take the height, width, and channel of h, w, and c as input. C is the output channel of the two MBConv blocks. (Note that, MBConv= Mobile Inverted Bottleneck Convolution, DW Conv= Depth-wise Convolution, SE= Squeeze-Excitation, Conv= Convolution)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>)where α(t) is the learning rate at epoch t, α 0 is the initial learning rate, T is the total number of training iterations and M is the number of cycles. The weight at the bottom of each cycle is regarded as the weight of the snapshot model. The following learning rate cycle uses these weights, but allows the learning algorithm to converge to different solutions, thereby generating diverse snapshots model. After completing M cycles of training, we get M model snapshots s 1 ...s M , each of which will be utilized in the ensemble prediction.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Fine</ns0:head><ns0:label /><ns0:figDesc>-tuned hyper-parameters have a great impact on the performance of the model because they directly govern the training of the model. What's more, fine-tuned parameters can avoid overfitting and form a generalized model. Since we have dealt with an unbalanced data set, the proposed architecture may have a 8/19 PeerJ Comput. Sci. reviewing PDF | (CS-2021:01:57178:1:2:NEW 17 Mar 2021)Manuscript to be reviewed Computer Science huge possibility to confront the problem of overfitting. In order to solve the problem of overfitting, we use L1L2 weight decay regularization with coefficients 1e−5 and 1e−3 in FC layers. Next, dropout is another successful regularization technique that has been integrated into the proposed architecture, especially in FC layers with p = 0.3, to suppress overfitting. In the experiments on the proposed architecture, we have explored the Adam optimizer<ns0:ref type='bibr' target='#b22'>(Kingma and Ba, 2014)</ns0:ref>, which can converge faster. When creating snapshots, we set the number of epochs to 25, the minimum batch size to 8, the initial learning rate to 1e−4, and the number of cycles to 5, thus providing 5 snapshots for each model, on which we build up the ensemble prediction.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Loss curve of ECOVNet (Base model EfficientNet-B5) during training</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Confusion matrices of the proposed ECOVNet considering EfficientNet-B5 as a base model.In the confusion matrices, the predicted labels, such as COVID-19, Normal, and Pneumonia, are marked as 0, 1 and 2, respectively.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Grad-CAM visualization for i th snapshot model of the proposed ECOVNet considering the base model EfficientNet-B5.</ns0:figDesc><ns0:graphic coords='16,161.68,583.13,58.79,58.71' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Overview of CNN based architectures for detecting COVID-19 from chest X-rays</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table><ns0:note>segmentation using X-rays and CT images is presented in<ns0:ref type='bibr' target='#b46'>(Shoeibi et al., 2020)</ns0:ref>. Due to the need to interpret chest CT images faster,<ns0:ref type='bibr' target='#b16'>Jin et al. (2020a)</ns0:ref> proposed an AI system based on deep learning that can speed up the analysis of chest CTs to detect COVID-19 and validated using a large multi-class data sets. A new CNN architecture named COVID-Net and a large chest x-ray benchmark data set (COVIDx) have introduced in Wang et al. (2020). The proposed COVID-Net obtained the best test accuracy of 93.3%, and studied how COVID-Net uses an interpretability method to predict. Luz et al. (2020) proposed a new deep learning framework that extends the EfficientNet (Tan and Le, 2019) series, which is well known for its excellent prediction performance and fewer computational steps. Their experimental evaluation showed noteworthy classification performance, especially in COVID-19 cases. A CNN model called DarkCovidNet</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>Karim et al. (2020) proposed an explainable CNN-based method, built on a neural ensemble technique followed by highlighting of class-discriminating regions, named DeepCOVIDExplainer, for automatic detection of COVID-19 cases from chest X-ray images. Khan et al. (2020) proposed a model named CoroNet that used the Xception architecture pre-trained on the ImageNet data set, trained it on a benchmark created from two publicly available data sets, and carried out two different classification performance measurements, i.e., three and four classes, with classification accuracies of 95% and 89.6%, respectively. In another work, Mahmud et al. (2020) proposed a CNN-based model called CovXNet, which uses depthwise dilated convolution. At first, the model is trained with non-COVID-19 pneumonia images, and the acquired learning is then transferred, with some additional fine-tuning layers, by training again on a smaller number of chest X-rays related to COVID-19 and other pneumonia cases. As features are extracted from different resolutions of X-rays, a stacking algorithm is used in the prediction process, and for multi-class classification the accuracy of CovXNet is 90.3%. An advanced custom CNN architecture, COVID-Net (Wang et al., 2020), was implemented and tested using a large COVID-19 benchmark, but due to its large number of parameters, the computational overhead of this model is high. Another CNN-based modular architecture, named PDCOVIDNet, was proposed by Chowdhury et al. (2020); it consists of a parallel stack of multi-layer filter blocks in cascade with a classification and visualization block. The authors demonstrated the effectiveness of the model compared with a number of well-known CNN architectures, reporting precision and recall of 96.58% and 96.59%, respectively. Table 1 shows an overview of some CNN-based architectures for detecting COVID-19 from chest X-rays. Most of the existing work, as discussed above, makes prediction decisions based on the output of a single model; only a few methods (such as Karim et al. (2020) and Mahmud et al. (2020)) used an ensemble. The key benefit of an ensemble is that it can reduce prediction errors, thus making the model more versatile. In (Karim et al., 2020), the authors used an ensemble of heterogeneous models, i.e., VGG19 <ns0:ref type='bibr' target='#b47'>(Simonyan and Zisserman, 2015)</ns0:ref>, ResNet18 <ns0:ref type='bibr' target='#b11'>(He et al., 2016)</ns0:ref>, and DenseNet161 <ns0:ref type='bibr' target='#b14'>(Huang et al., 2017b)</ns0:ref>, but this approach has two flaws: firstly, each model requires a separate training session, and secondly, an individual model suffers from training many parameters. Another method, Mahmud et al. (2020), used an ensemble on a single model with various image resolutions; for each image resolution it creates a separate model and stacks them for prediction, which incurs a significant computational overhead. To address the aforementioned problems, we use a lightweight but effective model, EfficientNet, since it is 8.4 times smaller and 6.1 times faster than the best existing CNN (Tan and Le, 2019). Also, we force large changes in model weights through the recursive learning rate, create model snapshots in the same training run, and further apply the ensemble to make the proposed architecture more robust, thereby achieving a higher detection rate compared to other methods.</ns0:figDesc></ns0:figure>
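A compact sketch of the two ensemble rules used by ECOVNet (Eqs. 6-7 of the paper), assuming the softmax outputs of the m selected snapshots have been stacked into one array after loading each snapshot's saved weights:

```python
import numpy as np

def soft_ensemble(snapshot_scores):
    """Average the snapshots' softmax scores (Eq. 7), then take the argmax."""
    return np.mean(snapshot_scores, axis=0).argmax(axis=-1)

def hard_ensemble(snapshot_scores, n_classes=3):
    """Majority vote over each snapshot's argmax prediction (Eq. 6)."""
    votes = np.argmax(snapshot_scores, axis=-1)  # shape: (m_snapshots, n_samples)
    return np.apply_along_axis(
        lambda v: np.bincount(v, minlength=n_classes).argmax(), 0, votes)

# snapshot_scores: shape (m_snapshots, n_samples, 3), collected via model.predict.
```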
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>EfficientNet-B0 baseline network layers outline</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Stage</ns0:cell><ns0:cell>Operator</ns0:cell><ns0:cell cols='3'>Resolution #Output Feature Maps #Layers</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>Conv 3 × 3</ns0:cell><ns0:cell>224 × 224</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>MBConv1, k3 × 3</ns0:cell><ns0:cell>112 × 112</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>MBConv6, k3 × 3</ns0:cell><ns0:cell>112 × 112</ns0:cell><ns0:cell>24</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>MBConv6, k5 × 5</ns0:cell><ns0:cell>56 × 56</ns0:cell><ns0:cell>40</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>MBConv6, k3 × 3</ns0:cell><ns0:cell>28 × 28</ns0:cell><ns0:cell>80</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>MBConv6, k5 × 5</ns0:cell><ns0:cell>14 × 14</ns0:cell><ns0:cell>112</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>MBConv6, k5 × 5</ns0:cell><ns0:cell>14 × 14</ns0:cell><ns0:cell>192</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>MBConv6, k3 × 3</ns0:cell><ns0:cell>7 × 7</ns0:cell><ns0:cell>320</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>Conv 1 × 1 & Pooling & FC</ns0:cell><ns0:cell>7 × 7</ns0:cell><ns0:cell>1280</ns0:cell><ns0:cell>1</ns0:cell></ns0:row></ns0:table><ns0:note>Instead of random initialization of the network weights, we instantiate ImageNet pre-trained weights in the EfficientNet model, thereby accelerating the training process. Transferring pre-trained ImageNet weights has proven highly effective in the field of image analysis, since ImageNet comprises more than 14 million images covering a wide variety of classes. The rationale for using pre-trained weights is that the imported model already has sufficient knowledge of the broader aspects of the image domain. As has been shown in several studies<ns0:ref type='bibr' target='#b37'>(Rajaraman et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b32'>Narin et al., 2020)</ns0:ref>, using pre-trained ImageNet weights in state-of-the-art CNN models remains beneficial even when the problem area (namely COVID-19 detection) is considerably distinct from the one in which the original weights were obtained. The optimization process fine-tunes the initial pre-trained weights in the new training phase so that the pre-trained model fits a specific problem domain, such as COVID-19 detection.</ns0:note></ns0:figure>
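As a minimal sketch (assuming TensorFlow 2.3 or later, where EfficientNet ships with keras.applications), the pre-trained feature extractor outlined in Table 2 can be instantiated as follows:

```python
import tensorflow as tf

# EfficientNet-B0 feature extractor (Table 2, stages 1-9) with ImageNet weights;
# the 1000-class ImageNet head is dropped so a custom top can be attached.
base = tf.keras.applications.EfficientNetB0(
    include_top=False,
    weights="imagenet",
    input_shape=(224, 224, 3))  # B0 input resolution, cf. Table 4
base.trainable = True           # fine-tune the pre-trained weights on chest X-rays
```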
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Image partition of the training, validation, and testing sets. The entire image distribution of training, validation, and testing is shown in Table 3. In our experiment, we use EfficientNet B0 to B5 as base models. However, the input image resolution is different for each base model, and it increases from B0 to B5. As the image resolution increases, the model needs more layers to capture finer-grained patterns, thereby increasing the number of parameters in the model. Table 4 displays the input image resolution for each base model and the total number of parameters generated during training.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='4'>Category COVID-19 Normal Pneumonia</ns0:cell><ns0:cell>Total</ns0:cell></ns0:row><ns0:row><ns0:cell>Training</ns0:cell><ns0:cell>441</ns0:cell><ns0:cell>7,170</ns0:cell><ns0:cell>4,914</ns0:cell><ns0:cell>12,525</ns0:cell></ns0:row><ns0:row><ns0:cell>Validation</ns0:cell><ns0:cell>48</ns0:cell><ns0:cell>796</ns0:cell><ns0:cell>545</ns0:cell><ns0:cell>1,389</ns0:cell></ns0:row><ns0:row><ns0:cell>Testing</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>885</ns0:cell><ns0:cell>594</ns0:cell><ns0:cell>1,579</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>We further split the training set into training and validation with a ratio of 9:1, and used the original test set as provided.</ns0:cell></ns0:row></ns0:table></ns0:figure>
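The 9:1 training/validation split mentioned above could be reproduced along these lines; stratifying by class label is our assumption to keep class proportions comparable across splits, and the image_paths/labels lists below are toy placeholders for the COVIDx training file list.

```python
from sklearn.model_selection import train_test_split

# Toy placeholders standing in for the COVIDx training file list and labels.
image_paths = [f"img_{i}.png" for i in range(100)]
labels = [i % 3 for i in range(100)]  # 0: COVID-19, 1: Normal, 2: Pneumonia

# 9:1 split of the training set into training and validation (Table 3);
# stratification is an assumption, preserving class proportions in both parts.
train_paths, val_paths, train_labels, val_labels = train_test_split(
    image_paths, labels, test_size=0.1, stratify=labels, random_state=42)
```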
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Image resolution and total number of parameters of ECOVNet considering the base models of EfficientNet (B0 to B5)</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Base Model</ns0:cell><ns0:cell cols='2'>Image Resolution Parameter Size (ECOVNet)</ns0:cell></ns0:row><ns0:row><ns0:cell>EfficientNet-B0</ns0:cell><ns0:cell>224 × 224</ns0:cell><ns0:cell>4,978,847</ns0:cell></ns0:row><ns0:row><ns0:cell>EfficientNet-B1</ns0:cell><ns0:cell>240 × 240</ns0:cell><ns0:cell>7,504,515</ns0:cell></ns0:row><ns0:row><ns0:cell>EfficientNet-B2</ns0:cell><ns0:cell>260 × 260</ns0:cell><ns0:cell>8,763,893</ns0:cell></ns0:row><ns0:row><ns0:cell>EfficientNet-B3</ns0:cell><ns0:cell>360 × 360</ns0:cell><ns0:cell>11,844,907</ns0:cell></ns0:row><ns0:row><ns0:cell>EfficientNet-B4</ns0:cell><ns0:cell>380 × 380</ns0:cell><ns0:cell>18,867,291</ns0:cell></ns0:row><ns0:row><ns0:cell>EfficientNet-B5</ns0:cell><ns0:cell>456 × 456</ns0:cell><ns0:cell>29,839,091</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Prediction performance of the proposed ECOVNet without using an ensemble. Bold indicates that the method has statistically better performance than the other methods.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell>Pre-trained Weight</ns0:cell><ns0:cell>Precision(%)</ns0:cell><ns0:cell>Recall(%)</ns0:cell><ns0:cell>F1-score(%)</ns0:cell><ns0:cell>Accuracy(%)(95% CI)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EfficientNet-B0</ns0:cell><ns0:cell>93.27</ns0:cell><ns0:cell>93.29</ns0:cell><ns0:cell>93.27</ns0:cell><ns0:cell>93.29 ± 1.23</ns0:cell></ns0:row><ns0:row><ns0:cell>ECOVNet (Without Augmentation)</ns0:cell><ns0:cell>EfficientNet-B1 EfficientNet-B2 EfficientNet-B3 EfficientNet-B4</ns0:cell><ns0:cell>94.28 93.24 95.56 95.52</ns0:cell><ns0:cell>94.30 93.03 95.57 95.50</ns0:cell><ns0:cell>94.26 93.08 95.56 95.50</ns0:cell><ns0:cell>94.30 ± 1.14 93.03 ± 1.26 95.57 ± 1.01 95.50 ± 1.02</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EfficientNet-B5</ns0:cell><ns0:cell>96.28</ns0:cell><ns0:cell>96.26</ns0:cell><ns0:cell>96.26</ns0:cell><ns0:cell>96.26 ± 0.94</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EfficientNet-B0</ns0:cell><ns0:cell>91.71</ns0:cell><ns0:cell>74.10</ns0:cell><ns0:cell>79.72</ns0:cell><ns0:cell>74.10 ± 2.16</ns0:cell></ns0:row><ns0:row><ns0:cell>ECOVNet (With Augmentation)</ns0:cell><ns0:cell>EfficientNet-B1 EfficientNet-B2 EfficientNet-B3 EfficientNet-B4</ns0:cell><ns0:cell>91.02 93.60 92.60 94.32</ns0:cell><ns0:cell>86.19 93.10 90.25 93.73</ns0:cell><ns0:cell>87.67 93.24 90.92 93.89</ns0:cell><ns0:cell>86.19 ± 1.70 93.10 ± 1.25 90.25 ± 1.46 93.73 ± 1.20</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EfficientNet-B5</ns0:cell><ns0:cell>94.79</ns0:cell><ns0:cell>94.68</ns0:cell><ns0:cell>94.70</ns0:cell><ns0:cell>94.68 ± 1.11</ns0:cell></ns0:row></ns0:table></ns0:figure>
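The 95% CIs in Table 5 are consistent with the standard normal approximation for a proportion; the following sketch (our assumption about how the intervals were computed) reproduces the ±0.94 reported for EfficientNet-B5.

```python
import math

def accuracy_ci(acc, n, z=1.96):
    """Half-width of the normal-approximation 95% confidence interval
    for an accuracy `acc` measured on `n` test samples."""
    return z * math.sqrt(acc * (1 - acc) / n)

# 96.26% accuracy on the 1,579-image test set:
print(f"96.26 ± {100 * accuracy_ci(0.9626, 1579):.2f}")  # -> 96.26 ± 0.94
```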
<ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Class-wise classification results of ECOVNet (Base model EfficientNet-B5) without augmentation. Bold indicates that the method has statistically better performance than the other methods for COVID-19.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell>Class</ns0:cell><ns0:cell>Precision(%)</ns0:cell><ns0:cell>Recall(%)</ns0:cell><ns0:cell>F1-score(%)</ns0:cell><ns0:cell>Accuracy(%)(95% CI)</ns0:cell></ns0:row><ns0:row><ns0:cell>ECOVNet (Without Ensemble)</ns0:cell><ns0:cell>COVID-19 Normal Pneumonia</ns0:cell><ns0:cell>91.43 97.07 95.91</ns0:cell><ns0:cell>96.00 97.29 94.78</ns0:cell><ns0:cell>93.66 97.18 95.34</ns0:cell><ns0:cell>96.26 ± 0.94</ns0:cell></ns0:row><ns0:row><ns0:cell>ECOVNet (Hard Ensemble)</ns0:cell><ns0:cell>COVID-19 Normal Pneumonia</ns0:cell><ns0:cell>94.17 97.05 94.95</ns0:cell><ns0:cell>97.00 96.72 94.95</ns0:cell><ns0:cell>95.57 96.89 94.95</ns0:cell><ns0:cell>96.07 ± 0.96</ns0:cell></ns0:row><ns0:row><ns0:cell>ECOVNet (Soft Ensemble)</ns0:cell><ns0:cell>COVID-19 Normal Pneumonia</ns0:cell><ns0:cell>92.59 97.05 95.25</ns0:cell><ns0:cell>100 96.61 94.61</ns0:cell><ns0:cell>96.15 96.83 94.93</ns0:cell><ns0:cell>96.07 ± 0.96</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_14'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Class-wise classification results of ECOVNet (Base model EfficientNet-B5) with augmentation. Bold indicates that the method has statistically better performance than the other methods for COVID-19.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell>Class</ns0:cell><ns0:cell cols='4'>Precision(%) Recall(%) F1-score(%) Accuracy(%)(95% CI)</ns0:cell></ns0:row><ns0:row><ns0:cell>ECOVNet (Without Ensemble)</ns0:cell><ns0:cell>COVID-19 Normal Pneumonia</ns0:cell><ns0:cell>87.62 97.31 97.31</ns0:cell><ns0:cell>92.00 94.12 95.96</ns0:cell><ns0:cell>89.76 95.69 94.06</ns0:cell><ns0:cell>94.68 ± 1.11</ns0:cell></ns0:row><ns0:row><ns0:cell>ECOVNet (Hard Ensemble)</ns0:cell><ns0:cell>COVID-19 Normal Pneumonia</ns0:cell><ns0:cell>90.29 97.35 93.76</ns0:cell><ns0:cell>93.00 95.37 96.13</ns0:cell><ns0:cell>91.63 96.35 94.93</ns0:cell><ns0:cell>95.50 ± 1.02</ns0:cell></ns0:row><ns0:row><ns0:cell>ECOVNet (Soft Ensemble)</ns0:cell><ns0:cell>COVID-19 Normal Pneumonia</ns0:cell><ns0:cell>85.45 97.67 93.43</ns0:cell><ns0:cell>94.00 94.92 95.79</ns0:cell><ns0:cell>89.52 96.28 94.60</ns0:cell><ns0:cell>95.19 ± 1.06</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_16'><ns0:head /><ns0:label /><ns0:figDesc>Figure 5. ROC curves of model snapshots of the proposed ECOVNet considering the EfficientNet-B5 base model. Panels (a)-(e) show the ROC curves (False Positive Rate vs. True Positive Rate) of snapshot models 1-5, with the following AUCs. Snapshot 1: micro-average 0.9924, macro-average 0.9971, COVID-19 0.9996, Normal 0.9970, Pneumonia 0.9918. Snapshot 2: micro-average 0.9897, macro-average 0.9943, COVID-19 0.9998, Normal 0.9935, Pneumonia 0.9845. Snapshot 3: micro-average 0.9915, macro-average 0.9954, COVID-19 1.0000, Normal 0.9955, Pneumonia 0.9881. Snapshot 4: micro-average 0.9959, macro-average 0.9974, COVID-19 1.0000, Normal 0.9975, Pneumonia 0.9924. Snapshot 5: micro-average 0.9911, macro-average 0.9947, COVID-19 0.9998, Normal 0.9941, Pneumonia 0.9877. The accompanying row-normalized confusion matrices (rows: true label; columns: predicted label; classes ordered COVID-19, Normal, Pneumonia) are (a) soft ensemble: [1.00, 0.00, 0.00], [0.00, 0.97, 0.03], [0.01, 0.04, 0.95]; (b) hard ensemble: [0.97, 0.00, 0.03], [0.00, 0.97, 0.03], [0.01, 0.04, 0.95]; (c) no ensemble: [0.96, 0.01, 0.03], [0.00, 0.97, 0.03], [0.01, 0.04, 0.95].</ns0:figDesc><ns0:note>applies an ensemble method and a transfer learning scheme from non-COVID chest X-rays, while retaining training and testing data sets other than COVIDx. Our proposed ECOVNet outperforms CovXNet in all evaluation measures. As we have observed from the empirical evaluation, for the test data set, the proposed method shows the same classification accuracy for different combinations of the soft and hard ensembles. When comparing the results of the soft and hard ensembles, we observed that the soft ensemble showed impressive results when classifying COVID-19, with 100% recall. We have also conducted experiments on EfficientNet, which mainly consists of feature extraction, a global average</ns0:note></ns0:figure>
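The per-class and micro-average ROC/AUC values summarized above can be computed with scikit-learn as in the following sketch, assuming y_true holds the integer test labels and y_score the (n_samples, 3) softmax outputs of one snapshot model:

```python
import numpy as np
from sklearn.metrics import auc, roc_curve
from sklearn.preprocessing import label_binarize

def roc_summary(y_true, y_score, n_classes=3):
    """Per-class AUCs plus the micro-average over all (sample, class) pairs."""
    y_bin = label_binarize(y_true, classes=list(range(n_classes)))
    aucs = {c: auc(*roc_curve(y_bin[:, c], y_score[:, c])[:2])
            for c in range(n_classes)}
    fpr, tpr, _ = roc_curve(y_bin.ravel(), np.asarray(y_score).ravel())  # micro-average
    aucs["micro"] = auc(fpr, tpr)
    return aucs
```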
<ns0:figure type='table' xml:id='tab_18'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Comparison of the proposed ECOVNet with other state-of-the-art methods on COVID-19 detection. Bold indicates that the method has statistically better performance than the other methods.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell>Total chest X-rays</ns0:cell><ns0:cell>Precision(%) (COVID-19)</ns0:cell><ns0:cell>Recall(%) (COVID-19)</ns0:cell><ns0:cell>Accuracy(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>COVID-Net (Wang et al., 2020)</ns0:cell><ns0:cell>573 COVID-19, 8,066 Normal, 5,559 Pneumonia</ns0:cell><ns0:cell>98.90</ns0:cell><ns0:cell>91.00</ns0:cell><ns0:cell>93.30</ns0:cell></ns0:row><ns0:row><ns0:cell>EfficientNet-B3 (Luz et al., 2020)</ns0:cell><ns0:cell>183 COVID-19, 8,066 Normal, 5,521 Pneumonia</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>96.80</ns0:cell><ns0:cell>93.90</ns0:cell></ns0:row><ns0:row><ns0:cell>DarkCovidNet (Ozturk et al., 2020)</ns0:cell><ns0:cell>125 COVID-19, 500 Normal, 500 Pneumonia</ns0:cell><ns0:cell>97.87</ns0:cell><ns0:cell>80.70</ns0:cell><ns0:cell>87.02</ns0:cell></ns0:row><ns0:row><ns0:cell>DeepCOVIDExplainer (Karim et al., 2020)</ns0:cell><ns0:cell>259 COVID-19, 8,066 Normal, 8,614 Pneumonia</ns0:cell><ns0:cell>89.61</ns0:cell><ns0:cell>81.17</ns0:cell><ns0:cell>96.77</ns0:cell></ns0:row><ns0:row><ns0:cell>CoroNet (Khan et al., 2020)</ns0:cell><ns0:cell>284 COVID-19, 327 Viral Pneumonia, 310 Normal</ns0:cell><ns0:cell>96.66</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>95.00</ns0:cell></ns0:row><ns0:row><ns0:cell>CovXNet (Mahmud et al., 2020)</ns0:cell><ns0:cell>305 COVID-19, 305 Viral Pneumonia, 305 Bacterial Pneumonia, 305 Normal</ns0:cell><ns0:cell>91.89</ns0:cell><ns0:cell>85.00</ns0:cell><ns0:cell>90.30</ns0:cell></ns0:row><ns0:row><ns0:cell>ECOVNet-Hard Ensemble (Proposed)</ns0:cell><ns0:cell>589 COVID-19, 8,851 Normal, 6,053 Pneumonia</ns0:cell><ns0:cell>94.17</ns0:cell><ns0:cell>97.00</ns0:cell><ns0:cell>96.07</ns0:cell></ns0:row><ns0:row><ns0:cell>ECOVNet-Soft Ensemble (Proposed)</ns0:cell><ns0:cell>589 COVID-19, 8,851 Normal, 6,053 Pneumonia</ns0:cell><ns0:cell>92.59</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>96.07</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_19'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Comparison of the proposed ECOVNet with other CNN architectures on COVID-19 detection. Bold indicates that the method has statistically better performance than the other methods.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell>Precision(%) (COVID-19)</ns0:cell><ns0:cell>Recall(%) (COVID-19)</ns0:cell><ns0:cell>Accuracy(%)</ns0:cell></ns0:row><ns0:row><ns0:cell>EfficientNet-B5 (without augmentation)</ns0:cell><ns0:cell>94.12</ns0:cell><ns0:cell>96.00</ns0:cell><ns0:cell>95.76</ns0:cell></ns0:row><ns0:row><ns0:cell>EfficientNet-B5 (with augmentation)</ns0:cell><ns0:cell>84.55</ns0:cell><ns0:cell>93.00</ns0:cell><ns0:cell>95.00</ns0:cell></ns0:row><ns0:row><ns0:cell>ECOVNet-Hard Ensemble (Proposed)</ns0:cell><ns0:cell>94.17</ns0:cell><ns0:cell>97.00</ns0:cell><ns0:cell>96.07</ns0:cell></ns0:row><ns0:row><ns0:cell>ECOVNet-Soft Ensemble (Proposed)</ns0:cell><ns0:cell>92.59</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>96.07</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Dear Editor,
Thank you for allowing a resubmission of our manuscript, with an opportunity to address the
reviewers’ comments.
We are uploading (a) our point-by-point response to the comments below (response to
reviewers), (b) an updated manuscript with blue highlighting newly added content and red
showing removed content, and (c) a clean, updated manuscript without highlights (PDF
main document).
Best regards,
Chowdhury et al.
Reviewer #1: Comments #1: Some related works are not mentioned, for example:
https://doi.org/10.1038/s41467-020-18685-1
Response: We acknowledge the comment and have added the mentioned paper to the “Related
Works” section.
Reviewer #1: Comments #2: It's a little misleading to use an unnatural balanced testing
dataset. It also makes it hard to compare with other methods unless they are using the same
strategy. So I would suggest the authors to remove everything related to the balanced dataset
and report results on the original dataset.
Response: We acknowledge the comment and revised the “Experiments and Results” section
by removing all experimental results related to the balanced test set.
Reviewer #2 Comment #1: The literature references about work related to Covid are sufficient,
but there are no references for the vast amount of deep learning work in other closely related
areas. There are also many references about well known concepts like ensemble methods.
Response: We acknowledge the comment and revised the manuscript by including references
and discussing the research in other closely related areas such as ensemble and deep learning
techniques.
Reviewer #2 Comment #2:The research cannot be said to be original. There is no identified
knowledge gap being filled. Applying ensemble methods is not sufficient to meet this
requirement.
Response: Thanks for the comment. We argue that the contribution of this paper lies in
designing and developing a highly effective deep learning architecture for detecting COVID-19
from chest X-rays. Towards this goal, we have analyzed the existing work and identified its
limitations in the related work section. We proposed a novel EfficientNet-based ensemble
architecture for detecting COVID-19 and evaluated it using the largest publicly available
COVID-19 dataset. We further compared the efficacy of our architecture with state-of-the-art
models. Please refer to the Introduction section for the complete list of our contributions.
Reviewer #2 Comment #3:The method applies a stock EfficientNet with the final layers
changed from 1000 classes (ImageNet) to 3 classes. Two fully connected layers are added, but
there is no comparison included to show that these contribute to performance.
Response: We acknowledge the comment. A comparison is added in the revised manuscript as
asked (please see Table 9).
Reviewer #2 Comment #4:The size of the validation and test sets is too small with 90% of the
data used for training. There are only a very small number of positive Covid cases in the
validation and test sets.
Response: We thank the reviewer for the comment. In our experiment, we used the benchmark
dataset COVIDx, which, to the best of our knowledge, is the largest publicly available COVID-19
dataset. The dataset comes with a training and a testing set. We further split the training
set into training and validation with a ratio of 9:1, and we used the original test set as
provided.
Reviewer #2 Comment #5:The work does not answer any research question, as it applies
stock methods with little to no modification to a very small dataset. The paper claims a very
minor modification to a stock EfficientNet as a novel CNN architecture. As such, it is not clear
what conclusions can be made from the presented research.
Response: Thanks for the comment. The dataset we used was, to the best of our knowledge,
the largest publicly available dataset at that time. We argue that the paper makes a significant
contribution by proposing a novel EfficientNet-based ensemble architecture for detecting
COVID-19 with 100% sensitivity/recall, which we believe is a remarkable outcome towards
developing an AI-based COVID-19 detection system.
" | Here is a paper. Please give your review comments after reading it. |
111 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The goal of this research is to develop and implement a highly effective deep learning model for detecting in this paper, we propose an ensemble of Convolutional Neural Network (CNN) based on EfficientNet, named ECOVNet, to detect COVID-19 from chest X-rays. To make the proposed model more robust, we have used one of the largest open-access chest X-ray data sets named COVIDx containing three classes-COVID-19, normal, and pneumonia. For feature extraction, we have applied an effective CNN structure, namely EfficientNet, with ImageNet pre-training weights. The generated features are transferred into custom fine-tuned top layers followed by a set of model snapshots. The predictions of the model snapshots (which are created during a single training) are consolidated through two ensemble strategies, i.e., hard ensemble and soft ensemble, to enhance classification performance. In addition, a visualization technique is incorporated to highlight areas that distinguish classes, thereby enhancing the understanding of primal components related to COVID-19. The results of our empirical evaluations show that the proposed ECOVNet outperforms the state-of-the-art approaches and significantly improves detection performance with 100% recall for COVID-19 and overall accuracy of 96.07%. We believe that ECOVNet can enhance the detection of COVID-19 disease, and thus, propel towards a fully automated and efficacious COVID-19 detection system.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Corona virus disease 2019 (COVID-19) is a contagious disease that was caused by the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2). The disease was first detected in Wuhan City, Hubei Province, China in December 2019, and was related to contact with a seafood wholesale market and quickly spread to all parts of the world <ns0:ref type='bibr'>(World Health Organization, 2020)</ns0:ref>. The World Health Organization (WHO) promulgated the outbreak of the COVID-19 pandemic on March 11, 2020. This perilous virus has not only overwhelmed the world, but also affected millions of lives (World Health Organization, 2020). To limit the spread of this infection, all infected countries strive to cover many strategies such as encourage people to maintain social distancing as well as lead hygienic life, enhance the infection screening system through multi-functional testing, seek mass vaccination to reduce the pandemic ahead of time, etc. The reverse transcriptase-polymerase chain reaction (RT-PCR) is a modular diagnosis method, however, it has certain limitations, such as the accurate detection of suspect patients causes delay since the testing procedures inevitably preserve the strict necessity of conditions at the clinical laboratory <ns0:ref type='bibr' target='#b56'>(Zheng et al., 2020)</ns0:ref> and false-negative results may lead to greater impact in the prevention and control of the disease <ns0:ref type='bibr' target='#b9'>(Fang et al., 2020)</ns0:ref>.</ns0:p><ns0:p>To make up for the shortcomings of RT-PCR testing, researchers around the world are seeking to promote a fast and reliable diagnostic method to detect COVID-19 infection. The WHO and Wuhan University Zhongnan Hospital respectively issued quick guides (World Health Organization, 2020; Jin Manuscript to be reviewed <ns0:ref type='bibr'>et al., 2020b)</ns0:ref>, suggesting that in addition to detecting clinical symptoms, chest imaging can also be used to evaluate the disease to diagnose and treat COVID-19. In <ns0:ref type='bibr' target='#b42'>(Rubin et al., 2020)</ns0:ref>, the authors have contributed a prolific guideline for medical practitioners to use chest radiography and computed tomography (CT) to screen and assess the disease progression of COVID-19 cases. Although CT scans have higher sensitivity, it also has some drawbacks, such as high cost and the need for high doses of radiation during screening, which exposes pregnant women and children to greater radiation risks <ns0:ref type='bibr' target='#b8'>(Davies et al., 2011)</ns0:ref>. On the other hand, diagnosis based on chest X-ray appears to be a propitious solution for COVID-19 detection and treatment. <ns0:ref type='bibr' target='#b34'>Ng et al. (2020)</ns0:ref> remarked that COVID-19 infection pulmonary manifestation is immensely delineated by chest X-ray images.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>The purpose of this study is to ameliorate the accuracy of COVID-19 detection system from chest Xray images. In this context, we contemplate a CNN-based architecture since it is illustrious for its topnotch recognition performance in image classification or detection. For medical image analysis, higher detection accuracy along with crucial findings is a top aspiration, and in current years, CNN based architectures are comprehensively featured the critical findings related to medical imaging. 
In order to achieve the defined purpose, this paper presents a novel CNN based architecture called ECOVNet, exploiting the cutting-edge EfficientNet <ns0:ref type='bibr'>(Tan and Le, 2019)</ns0:ref> family of CNN models together with ensemble strategies. The pipeline of the proposed architecture commences with the data augmentation approach, then optimizes and fine-tunes the pre-trained EfficientNet models, creating respective model's snapshots. After that, generated model snapshots are integrated into an ensemble, i.e., soft ensemble and hard ensemble, to make predictions.</ns0:p><ns0:p>The motivation for using EfficientNet is that they are known for their high accuracy, while being smaller and faster than the best existing CNN architectures. Moreover, an ensemble technique has proven to be effective in predicting since it produces a lower error rate compared with the prediction of a single model. The use of ensemble techniques on different CNN models has proven to be an effective technique for image-based diagnosis and biomedical research <ns0:ref type='bibr' target='#b23'>(Kumar et al., 2017)</ns0:ref>. Owing to the limited number of COVID-19 images currently available, diagnosing COVID-19 infection is more challenging, thereby investing with a visual explainable approach is applied for further analysis. In this regard, we use a Gradient-based Class Activation Mapping algorithm, i.e., Grad-CAM <ns0:ref type='bibr' target='#b45'>(Selvaraju et al., 2017)</ns0:ref>, providing explanations of the predictions and identifying relevant features associated with COVID-19 infection. The key contributions of this paper are as follows:</ns0:p><ns0:p>• We propose a novel CNN based architecture that includes pre-trained EfficientNet for feature extraction and model snapshots to detect COVID-19 from chest X-rays.</ns0:p><ns0:p>• Taking into account the following assumption, the decisions of multiple radiologists are considered in the final prediction, we propose an ensemble in the proposed architecture to make predictions, thus making a credible and fair evaluation of the system.</ns0:p><ns0:p>• We visualize a class activation map through Grad-CAM to explain the prediction as well as to identify the critical regions in the chest X-ray.</ns0:p><ns0:p>• We present an empirical evaluation of our model comparing with state-of-the-art models to appraise the effectiveness of the proposed architecture in detecting COVID-19.</ns0:p><ns0:p>The remainder of the paper is arranged as follows. Section 2 discusses related work. Section 3 explains the details of the data set and presents ECOVNet architecture. The results of our empirical evaluation is presented in Section 4. Finally, Section 5 concludes the paper and highlights the future work.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>RELATED WORKS</ns0:head><ns0:p>The impressive improvements achieved through CNN technology have attracted attention of computational medical imaging researchers to study the potential of CNN in medical images obtained through CT, Magnetic resonance imaging (MRI), and X-rays. Many researchers have explored CNN technology in an effective way to classify <ns0:ref type='bibr' target='#b19'>(Kaur and Gandhi, 2020)</ns0:ref> and segment <ns0:ref type='bibr' target='#b55'>(Zhang et al., 2015)</ns0:ref> MR brain images, and have obtained the best performance. What's more, pneumonia detection on chest X-rays with CNN is a state-of-the-art technology with historical prospects for image diagnosis systems <ns0:ref type='bibr' target='#b39'>(Rajpurkar et al., 2017)</ns0:ref>. Owing to the need to identify COVID-19 infections faster, the latest application areas of CNN-based AI systems are booming, which can speed up the analysis of various medical images. <ns0:ref type='bibr' target='#b36'>(Ozturk et al., 2020)</ns0:ref> proposed for the automatic detection of COVID-19 using chest X-ray images where the proposed method carried out two types of classification, one for binary classification (such as COVID-19 and No-Findings) and another for multi-class (such as COVID-19, No-Findings and pneumonia) classification. Finally, the authors provided an intuitive explanation through the heat map, so it can assist the radiologist to find the affected area on the chest X-ray. Another research <ns0:ref type='bibr' target='#b18'>(Karim et al., 2020)</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>METHODOLOGY</ns0:head><ns0:p>In this section, we briefly discuss our approach. First, we precede the benchmark data set and data augmentation strategy used in the proposed architecture. Next, we outline the proposed ECOVNet architecture, including network construction using a pre-trained EfficientNet and training methods, and then model ensemble strategies. Finally, to make disease detection more acceptable, we integrate decision visualizations to highlight pivotal facts with visual markers. </ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Dataset</ns0:head><ns0:p>In this sub-section, we concisely inaugurate the benchmark data set, named COVIDx <ns0:ref type='bibr' target='#b51'>(Wang et al., 2020)</ns0:ref>, that we used in our experiment. The dataset comprises three categories of images -COVID-19, normal and pneumonia, with total number of 13, 914 images for training and 1, 579 for testing (accessed on July 17, 2020). To generate the COVIDx, the authors <ns0:ref type='bibr' target='#b51'>(Wang et al., 2020)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science • Radiological Society of North America (RSNA) Pneumonia Detection Challenge dataset (RSNA, 2019) -normal and non-COVID-19 pneumonia cases.</ns0:p><ns0:p>• COVID-19 radiography database <ns0:ref type='bibr'>(Chowdhury et al., 2020)</ns0:ref> -only COVID-19 cases.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Data Augmentation</ns0:head><ns0:p>Data augmentation is usually performed during the training process to expand the training set. As long as the semantic information of an image is preserved, the transformation can be used for data augmentation.</ns0:p><ns0:p>Using data augmentation, the performance of the model can be improved by solving the problem of overfitting. Although the CNN model has properties such as partial translation-invariant, augmentation strategies (i.e., translated images) can often considerably enhance generalization capabilities <ns0:ref type='bibr' target='#b10'>(Goodfellow et al., 2016)</ns0:ref>. Data augmentation strategies provide various alternatives, each of which has the advantage of interpreting images in multiple ways to present important features, thereby improving the performance of the model. We have considered the following transformations: horizontal flip, rotation, shear, and zoom for augmentation.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3'>ECOVNet Architecture</ns0:head><ns0:p>Fig. <ns0:ref type='figure' target='#fig_2'>1</ns0:ref> shows a graphical presentation of the proposed ECOVNet architecture using a pre-trained Effi-cientNet. After augmenting the COVIDx dataset, we used pre-trained EfficientNet <ns0:ref type='bibr'>(Tan and Le, 2019)</ns0:ref> as a feature extractor. This step ensures that the pre-trained EfficientNet can extract and learn useful chest X-ray features, and can generalize it well. Indeed, EfficientNets are an order of models that are obtained from a base model, i.e., EfficientNet-B0. In the Fig. <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>, we depicted our proposed architecture using EfficientNet-B0 for the sake of brevity, however, during the experimental evaluation, we have also considered other five EfficientNets B1 to B5. The output features from the pre-trained EfficientNet fed to our proposed custom top layers through two fully connected layers, which are respectively integrated with batch normalization, activation, and dropout. We generated several snapshots in a training session, and then combined their predictions with an ensemble prediction. At the same time, the visualization approach, which can qualitatively analyze the relationship between input examples and model predictions, was incorporated into the following part of the proposed model.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3.1'>Pre-trained Efficientnet Feature Extraction</ns0:head><ns0:p>EfficientNets are a series of models (namely EfficientNet-B0 to B7) that are derived from the baseline network (often called EfficientNet-B0) by scale it up. By adopting a compound scaling method in all dimensions of the network, i.e., width, depth, and resolution, EfficientNets have pulled attention due to its supremacy in prediction performance. The intuition of using compound scaling is that scaling any dimension of the network (such as width, depth, or image resolution) can increase accuracy, but for larger models, the accuracy gain will decrease. To scale the dimensions of the network systematically, compound scaling uses a compound coefficient that controls how many more resources are functional for model scaling, and the dimensions are scaled by the compound coefficient in the following way <ns0:ref type='bibr'>(Tan and Le, 2019)</ns0:ref>: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_0'>depth: d = α φ width: w = β φ resolution: r = γ φ s.t. α.β 2 .γ 2 ≈ 2 α ≥ 1, β ≥ 1, γ ≥ 1 (1)</ns0:formula></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>where φ is the compound coefficient, and α, β , and γ are the scaling coefficients of each dimension that can be fixed by a grid search. After determining the scaling coefficients, they are applied to the baseline network (EfficientNet-B0) for scaling to obtain the desired target model size. For instance, in the case of EfficientNet-B0, when φ = 1 is set, the optimal values are yielded using a grid search, i.e., α = 1.2, β = 1.1, and γ = 1.15, under the constraint of α.β 2 .γ 2 ≈ 2 <ns0:ref type='bibr'>(Tan and Le, 2019)</ns0:ref>.</ns0:p><ns0:p>The feature extraction of the EfficientNet-B0 baseline architecture is comprised of the several mobile inverted bottleneck convolution (MBConv) <ns0:ref type='bibr' target='#b43'>(Sandler et al., 2018;</ns0:ref><ns0:ref type='bibr'>Tan et al., 2019)</ns0:ref> blocks with built-in squeeze-and-excitation (SE) <ns0:ref type='bibr' target='#b12'>(Hu et al., 2018)</ns0:ref>, batch normalization, and swish activation <ns0:ref type='bibr' target='#b40'>(Ramachandran et al., 2017)</ns0:ref> as integrated into EfficientNet. Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref> shows the detailed information of each layer of the EfficientNet-B0 baseline network. EfficientNet-B0 consists of 16 MBConv blocks varying in several aspects, for instance, kernel size, feature maps expansion phase, reduction ratio, etc. A complete workflow of the MBConv1, k3 × 3 and MBConv6, k3 × 3 blocks are shown in Fig. <ns0:ref type='figure'>2</ns0:ref>. Both MBConv1, k3 × 3 and MBConv6, k3 × 3 use depthwise convolution, which integrates a kernel size of 3 × 3 with the stride size of s. In these two blocks, batch normalization, activation, and convolution with a kernel size of 1 × 1 are integrated. The skip connection and a dropout layer are also incorporated in MBConv6, k3 × 3, but this is not the case with MBConv1, k3 × 3. Furthermore, in the case of the extended feature map, MBConv6, k3 × 3 is six times that of MBConv1, k3 × 3, and the same is true for the reduction rate in the SE block, that is, for MBConv1, k3 × 3 and MBConv6, k3 × 3, r is fixed to 4 and 24, respectively. Note that, MBConv6, k5 × 5 performs the identical operations as MBConv6, k3 × 3, but MBConv6, k5 × 5 applies a kernel size of 5 × 5, while a kernel size of 3 × 3 is used by MBConv6, k3 × 3. The feature extraction process of the proposed ECOVNet architecture applying pre-trained ImageNet weights is executed after the image augmentation process, as it is presented in Fig. <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3.2'>Classifier</ns0:head><ns0:p>After the feature extraction process, a customized top layer is used, which works as the classifier shown in and <ns0:ref type='bibr' target='#b15'>Szegedy, 2015)</ns0:ref>. It makes the optimization process smoother, resulting in a more predictable and stable gradient behavior, thereby speeding up training <ns0:ref type='bibr' target='#b44'>(Santurkar et al., 2018)</ns0:ref>. In this study, in a case of activation function, we have preferred Swish which is defined as <ns0:ref type='bibr' target='#b40'>(Ramachandran et al., 2017)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_1'>f (x) = x • σ (x)<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where σ (x) = (1 + exp(−x)) −1 is the sigmoid function. Comparison with other activation functions Swish consistently outperforming others including Rectified Linear Unit (ReLU) <ns0:ref type='bibr' target='#b31'>(Nair and Hinton, 2010)</ns0:ref>, which is the most successful and widely-used activation function, on deep networks applied to a variety of challenging fields including image classification and machine translation. Swish has many characteristics, such as one-sided boundedness at zero, smoothness, and non-monotonicity, which play an important role in improving it <ns0:ref type='bibr' target='#b40'>(Ramachandran et al., 2017)</ns0:ref>. After performing the activation operation, we integrated a Dropout <ns0:ref type='bibr' target='#b48'>(Srivastava et al., 2014)</ns0:ref> layer, which is one of the preeminent regularization methods to reduce overfitting and make better predictions. This layer can randomly drop certain FC layer nodes, which means removing all randomly selected nodes, along with all its incoming and outgoing weights. The number of randomly selected nodes drop in each layer is obtained with a probability p independent of other layers, where p can be chosen by using either a validation set or a random estimate (i.e., p = 0.5). In this study, we maintained a dropout size of 0.3. Next, the classification layer used the softmax activation function to render the activation from the previous FC layers into a class score to determine the class of the input chest X-ray image as COVID-19, normal, and pneumonia. The softmax activation function is defined in the following way:</ns0:p><ns0:formula xml:id='formula_2'>s(y i ) = e y i ∑ C j=1 e y j (<ns0:label>3</ns0:label></ns0:formula><ns0:formula xml:id='formula_3'>)</ns0:formula><ns0:p>where C is the total number of classes. This normalization limits the output sum to 1, so the softmax output s(y i ) can be interpreted as the probability that the input belongs to the i class. In the training process, Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>we apply the categorical cross-entropy loss function, which uses the softmax activation function in the classification layer to measure the loss between the true probability of the category and the probability of the predicted category. The categorical cross-entropy loss function is defined as</ns0:p><ns0:formula xml:id='formula_4'>l = − N ∑ n=1</ns0:formula><ns0:p>log( e y i,n ∑ C j=1 e y j,n</ns0:p><ns0:p>).</ns0:p><ns0:p>(4)</ns0:p><ns0:p>The total number of input samples is denoted as N, and C is the total number of classes, that is, C = 3 in our case.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3.3'>Model Snapshots and Ensemble Prediction</ns0:head><ns0:p>The main concept of building model snapshots is to train one model with constantly reducing the learning rate to attain a local minimum and save a snapshot of the current model's weight. Later, it is necessary to actively increase the learning rate to retreat from the current local minimum requirements. This process continues repeatedly until it completes cycles. One of the prominent methods for creating model snapshots for CNN is to collect multiple models during a single training run with cyclic cosine annealing <ns0:ref type='bibr' target='#b13'>(Huang et al., 2017a)</ns0:ref>. The cyclic cosine annealing method starts from the initial learning rate, then gradually decreases to the minimum, and then rapidly increases. The learning rate of cyclic cosine annealing in each epoch is defined as: Ensemble through model snapshots is more effective than a structure based on a single model only. Therefore, compared with the prediction of a single model, the ensemble prediction reduces the generalization error, thereby improving the prediction performance. We have experimented with two ensemble strategies, i.e., hard ensemble and soft ensemble, to consolidate the predictions of snapshots model to classify chest X-ray images as COVID-19 or normal or pneumonia. Both hard ensemble and soft ensemble use the last m (m ≤ M) model's softmax outputs since these models have a tendency to have the lowest test error. We also consider class weights to obtain a softmax score before applying the ensemble. Let O i (x) is the softmax score of the test sample x of the i-th snapshot model. Using hard ensemble, the prediction of the i-th snapshot model is defined as</ns0:p><ns0:formula xml:id='formula_5'>α(t) = α 0 2 (cos( πmod(t − 1, ⌈T /M⌉) ⌈T /M⌉ ) + 1)<ns0:label>(5</ns0:label></ns0:formula><ns0:formula xml:id='formula_6'>H i = argmax x O i (x).<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>The final ensemble constrains to aggregate the votes of the classification labels (i.e., COVID-19, normal, and pneumonia) in the other snapshot models and predict the category with the most votes. On the other hand, the output of the soft ensemble includes averaging the predicted probabilities of class labels in the last m snapshots model defined as</ns0:p><ns0:formula xml:id='formula_7'>S = 1 m m−1 ∑ i=0 O M−i (x).<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>Finally, the class label with the highest probability is used for the prediction. The creation of model snapshots and ensemble predictions are integrated at the end of the proposed architecture, as shown in Fig. <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>. </ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3.4'>Hyper-Parameters Adjustment</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.3.5'>Visual Explanations using Grad-CAM</ns0:head><ns0:p>Although the CNN-based modular architecture provides encouraging recognition performance for image classification, there are still several issues where it is challenging to reveal why and how to produce such impressive results. Due to its black-box nature, it is sometimes contrary to apply it in a medical</ns0:p><ns0:p>diagnosis system where we need an interpretable system, i.e., visualization as well as an accurate diagnosis.</ns0:p><ns0:p>Despite it has certain challenges, researchers are still endeavoring to seek for an efficient visualization technique since it can contribute the most critical key facts in the health-care system into focus, assist medical practitioners to distinguish correlations and patterns in imaging, and perform data analysis more efficacious. In the field of detecting COVID-19 through chest X-rays, some early studies focused on visualizing the behavior of CNN models to distinguish between different categories (such as COVID-19, normal, and pneumonia), so they can produce explanatory models. In our proposed model, we applied a gradient-based approach named Grad-CAM <ns0:ref type='bibr' target='#b45'>(Selvaraju et al., 2017)</ns0:ref>, which measures the gradients of features maps in the final convolution layer on a CNN model for a target image, to foreground the critical regions that are class-discriminating saliency maps. In Grad-CAM, gradients that are flowing back to the final convolutional layer in a CNN model are globally averaged to calculate the target class weights of each filter. Grad-CAM heat-map is a combination of weighted feature maps, followed by a ReLU activation. The class-discriminative saliency map L c for the target image class c is defined as follows <ns0:ref type='bibr' target='#b45'>(Selvaraju et al., 2017)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_8'>L c i, j = ReLU( ∑ k w c k A k i, j ),<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>where A k i, j denotes the activation map for the k-th filter at a spatial location (i, j), and ReLU captures the positive features of the target class. The target class weights of k-th filter is computed as:</ns0:p><ns0:formula xml:id='formula_9'>w c k = 1 Z ∑ i ∑ j ∂Y c ∂ A k i, j ,<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>where Y c is the probability of classifying the target category as c, and the total number of pixels in the activation map is denoted as Z. The Grad-CAM visualization of each model snapshot is incorporated at the end edge of the proposed architecture, as displayed in Fig. <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>EXPERIMENTS AND RESULTS</ns0:head><ns0:p>In this section, we evaluate the classification performance of our proposed ECOVNet and compare its performance with the state-of-the-art methods. We consider several experimental settings to analyze the robustness of the ECOVNet model. All our programs are written in Python, and the software stack is composed of Keras with TensorFlow and scikit-learn. The source code and models are publicly available on GitHub 1 .</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Dataset and Parameter Settings</ns0:head><ns0:p>In this section, we introduce the distribution of the benchmark data set and the model parameters generated in the experiment. We used the COVIDx <ns0:ref type='bibr' target='#b51'>(Wang et al., 2020)</ns0:ref> dataset.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Evaluation Metrics</ns0:head><ns0:p>In order to evaluate the performance of the proposed method, we considered the following evaluation metrics: accuracy, precision, recall, F1-score, confidence interval (CI), receiver operating characteristic (ROC) curve, and area under the curve (AUC). The definitions of accuracy, precision, recall, and F1-score are as follows:</ns0:p><ns0:formula xml:id='formula_10'>Accuracy = (TP + TN) / Total Samples<ns0:label>(10)</ns0:label></ns0:formula><ns0:formula xml:id='formula_11'>Precision = TP / (TP + FP)<ns0:label>(11)</ns0:label></ns0:formula><ns0:formula xml:id='formula_12'>Recall = TP / (TP + FN)<ns0:label>(12)</ns0:label></ns0:formula><ns0:formula xml:id='formula_13'>F1 = 2 × (Precision × Recall) / (Precision + Recall)<ns0:label>(13)</ns0:label></ns0:formula><ns0:p>where TP stands for true positive, while TN, FP, and FN stand for true negative, false positive, and false negative, respectively. Since the benchmark data set is not balanced, the F1-score may be a more suitable evaluation metric than accuracy.</ns0:p></ns0:div>
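For concreteness, a small sketch of these metrics with scikit-learn follows; the weighted averaging and the normal-approximation 95% CI are assumptions chosen as common defaults, not confirmed details of the paper's evaluation scripts.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def evaluate(y_true, y_pred, z=1.96):
    """Accuracy, precision, recall, F1, and a normal-approximation 95% CI on accuracy."""
    acc = accuracy_score(y_true, y_pred)
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average='weighted')        # 'weighted' is an assumed choice
    half_width = z * np.sqrt(acc * (1 - acc) / len(y_true))
    return {'accuracy': acc, 'precision': prec, 'recall': rec,
            'f1': f1, 'ci_95': (acc - half_width, acc + half_width)}
```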
<ns0:div><ns0:head n='4.3'>Prediction Performance</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_9'>5</ns0:ref> reports the prediction performance of the proposed ECOVNet without using an ensemble. The results show that ECOVNet with EfficientNet-B5 pre-trained weights outperforms the other base models both for images with augmentation and without augmentation. It reflects the fact that feature extraction using an optimized model that considers three aspects, namely higher depth, higher width, and a broader image resolution, can capture more and finer details, thereby improving classification accuracy. Without augmentation and without an ensemble, ECOVNet's accuracy reaches 96.26%, whereas with augmentation its performance is lower, reaching 94.68% accuracy. We calculated accuracy with a 95% CI. A tight range of CI means higher precision, while a wide range of CI indicates the opposite. As we can see, the CI is in a narrow range for the case of no augmentation, and the CI range is wider for the case of augmentation. Furthermore, Fig. <ns0:ref type='figure' target='#fig_8'>3</ns0:ref> shows the training loss of ECOVNet considering EfficientNet-B5. Table <ns0:ref type='table' target='#tab_11'>6</ns0:ref> and Table <ns0:ref type='table' target='#tab_12'>7</ns0:ref> show the classification results using ensembles without augmentation and with augmentation, respectively.</ns0:p><ns0:p>As shown in Table <ns0:ref type='table' target='#tab_11'>6</ns0:ref>, in handling COVID-19 cases, the ensemble methods are significantly better than the no-ensemble method. More specifically, the recall hits its maximum value of 100%, and to a large extent, this result demonstrates the robustness of our proposed architecture. Furthermore, for COVID-19 detection, the soft ensemble seems to be the preferred method due to its recall and F1-score of 100% and 96.15%, respectively. In the soft ensemble, the average softmax score of each category affects the direction of the desired result, thus the performance of the soft ensemble is better than that of the hard ensemble. Owing to the uneven distribution of the test set, the F1-score may be more reliable than accuracy.</ns0:p><ns0:p>For augmentation, we see that the ensemble methods present better results than no ensemble (see Table <ns0:ref type='table' target='#tab_12'>7</ns0:ref>). When comparing the two ensemble methods, we see that the hard ensemble outperforms the soft ensemble by a significant margin in precision and F1-score, with the exception that the soft ensemble is slightly better than the hard ensemble when recall is taken into account. Moreover, in the case of overall accuracy, the hard ensemble shows better detection performance than the soft ensemble. It can also be clearly seen from Table <ns0:ref type='table' target='#tab_11'>6</ns0:ref> and Table <ns0:ref type='table' target='#tab_12'>7</ns0:ref> that for COVID-19 cases, the precision of the hard ensemble method is better than that of the soft ensemble method, both with and without augmentation. Finally, we also observe that the confidence interval range is small for the no-augmentation strategy (Table <ns0:ref type='table' target='#tab_11'>6</ns0:ref>) compared to the augmentation strategy (Table <ns0:ref type='table' target='#tab_12'>7</ns0:ref>). In Fig.
<ns0:ref type='figure'>4</ns0:ref>, we report the precision, recall, and F1-score of the soft ensemble on the test data for the COVID-19 cases, for the proposed ECOVNet with base models B0-B5. When comparing the precision of ECOVNet, we see that ECOVNet-B4 (base model B4) shows significantly better performance than the other base models. However, in terms of recall, the value gradually increases as we consider more in-depth base models. The same is true for the F1-score, except for a slight decrease of 0.5% from ECOVNet-B5 to ECOVNet-B4.</ns0:p><ns0:p>It is often useful to analyze the ROC curve to reflect the classification performance of the model, since the ROC curve summarizes the trade-off between the true positive rate and the false positive rate of a model across different probability thresholds. In Fig. <ns0:ref type='figure'>5</ns0:ref>, the ROC curves show the micro and macro averages and the class-wise AUC scores obtained by the proposed ECOVNet, where each curve refers to the ROC curve of an individual model snapshot. The AUC scores of all categories are consistent, indicating that the prediction of the proposed model is stable. However, the AUC scores of the third and fourth snapshots are better than those of the other snapshots. As is evident from Fig. <ns0:ref type='figure'>5</ns0:ref>, the areas under the curve of all classes are relatively similar, but COVID-19's AUC, i.e., 1.0, is higher than that of the other classes. Furthermore, Fig. <ns0:ref type='figure' target='#fig_9'>6</ns0:ref> shows the confusion matrices of the proposed ECOVNet considering the base model EfficientNet-B5. In Fig. <ns0:ref type='figure' target='#fig_9'>6</ns0:ref>, it is clear that for COVID-19, the ensemble methods provide better results than those without an ensemble, by a margin of 3% − 4%. However, ECOVNet detects normal and pneumonia chest X-rays with the same performance whether or not an ensemble is used. When comparing normal and pneumonia, the detection rate for normal cases is slightly higher than that for pneumonia.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.4'>Comparison between ECOVNet and the other models</ns0:head><ns0:p>When comparing with other methods, we considered whether the existing method had regarded one of the following factors: use of ImageNet weights, application of ensemble methods, and use of the COVIDx dataset. Table <ns0:ref type='table' target='#tab_16'>8</ns0:ref> shows the comparison between our proposed ECOVNet method and the state-of-the-art methods for detecting COVID-19 from chest X-rays. We compare our proposed method with COVID-Net 2 , EfficientNet-B3 3 , DarkCovidNet 4 and CoroNet 5 . All these previous methods have released either a trained model on our training dataset (such as COVID-Net) or their source code. We compare the precision, recall, and accuracy of our proposed method with the existing methods using the same training and test dataset (see Table <ns0:ref type='table' target='#tab_6'>3</ns0:ref>) derived from COVIDx 2 .</ns0:p><ns0:p>While comparing with COVID-Net <ns0:ref type='bibr' target='#b51'>(Wang et al., 2020)</ns0:ref>, we observe that the recall of our proposed method is 6% higher than that of COVID-Net. Another method, EfficientNet-B3 <ns0:ref type='bibr' target='#b25'>(Luz et al., 2020)</ns0:ref>, shows higher precision than ours, but its recall and accuracy lag behind. A method called DarkCovidNet <ns0:ref type='bibr' target='#b36'>(Ozturk et al., 2020)</ns0:ref> achieves a precision of 96.00%, which is higher than our precision; however, the proposed ECOVNet is superior to DarkCovidNet in recall and accuracy. The precision, recall, and accuracy of CoroNet <ns0:ref type='bibr' target='#b21'>(Khan et al., 2020)</ns0:ref> are significantly lower than ours. As we have observed from the empirical evaluation, for the test data set, the proposed method shows the same classification accuracy for different combinations of soft and hard ensembles. When comparing the results of the soft and hard ensembles, we observed that the soft ensemble showed impressive results when classifying COVID-19, with 100% recall. We have also conducted experiments on EfficientNet, which mainly consists of feature extraction, a global average pool of the generated features, and a classification layer. On the other hand, the proposed ECOVNet considers the use of transfer learning in combination with fine-tuning steps and ensemble methods, so significant improvements can be achieved. In Table <ns0:ref type='table' target='#tab_17'>9</ns0:ref>, we have observed that our proposed method shows better results than EfficientNet.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.5'>Visualization using Grad-CAM</ns0:head><ns0:p>We applied the Grad-CAM visual interpretation method to visually depict the salient areas that ECOVNet emphasizes when making a classification decision for a given chest X-ray image. Accurate and definitive salient region detection is crucial for the analysis of classification decisions as well as for assuring the trustworthiness of the results. In order to locate the salient area, the feature weights with various illuminations related to feature importance are used to create a two-dimensional heat map, which is superimposed on the given input image. Fig. <ns0:ref type='figure' target='#fig_10'>7</ns0:ref> shows the Grad-CAM localization results using ECOVNet for each model snapshot. The salient areas locate the lung regions identified for each category when a given image is classified as COVID-19, normal, or pneumonia. As shown in Fig. <ns0:ref type='figure' target='#fig_10'>7</ns0:ref>, for COVID-19, a ground-glass opacity (GGO) occurs along with some consolidation, thereby partially covering the markings of the lungs. Hence, it leads to lung inflammation in both the upper and lower zones of the lung. When examining the heat maps generated from the COVID-19 chest X-ray, it can be seen that the heat maps created from snapshot 2 and snapshot 3 point to the salient area (such as the GGO). However, in the case of the normal chest X-ray, no lung inflammation is observed, so there is no significant area, making it easily distinguishable from COVID-19 and pneumonia. Likewise, it can be observed from the pneumonia chest X-ray that there are GGOs in the middle and lower parts of the lungs. The heat maps generated for the pneumonia chest X-ray are localized in the salient regions with GGO, but the 4th snapshot model appears to fail to identify the salient regions, as its heat map highlights areas outside the lung. Accordingly, we believe that the proposed ECOVNet provides sufficient information about the inherent manifestations of the COVID-19 disease through an intuitive heat map, and this type of heat map can help AI-based systems interpret the classification results.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>CONCLUSION AND FUTURE WORK</ns0:head><ns0:p>In this paper, we proposed a novel CNN-based modular architecture, ECOVNet, which can effectively detect COVID-19, together with class activation maps, from one of the largest publicly available chest X-ray data sets, i.e., COVIDx. In this work, a highly effective CNN structure (the EfficientNet base model with ImageNet pre-trained weights) is used as the feature extractor, while fine-tuned pre-trained weights are considered for the related COVID-19 detection task. Also, ensemble predictions improve performance by exploiting the predictions obtained from the proposed ECOVNet model snapshots. The results of our empirical evaluations show that the soft ensemble of the proposed ECOVNet model snapshots outperforms the other state-of-the-art methods. Finally, we performed a visualization study to locate significant areas in the chest X-ray through the class activation map for classifying the chest X-ray into its expected category. Thus, we believe that our findings could make a useful contribution to the detection of COVID-19 infection and to the widespread acceptance of automated applications in medical practice. While this work contributes to reducing the effort of health professionals' radiological assessment, our future plan is to extend this work to design a fully-functional application using guidelines of the design research paradigm <ns0:ref type='bibr' target='#b28'>(Miah and Gammack, 2014;</ns0:ref><ns0:ref type='bibr' target='#b27'>Miah, 2008)</ns0:ref>. Such a modern methodological lens could offer further directions both for developing innovative clinical solutions and for contributing associated knowledge to the body of relevant literature.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>An in-depth survey of the application of CNN technology in COVID-19 detection and automatic lung segmentation using X-rays and CT images is presented in (Shoeibi et al., 2020).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Graphical representation of the proposed ECOVNet architecture</ns0:figDesc><ns0:graphic coords='5,349.59,537.27,106.91,69.24' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>•</ns0:head><ns0:label /><ns0:figDesc>COVID-19 Image Data Collection (Cohen et al., 2020) - non-COVID-19 pneumonia and COVID-19 cases are taken from this repository. • COVID-19 Chest X-ray Dataset Initiative (Chung, 2020b) - only COVID-19 cases are taken from this repository. • ActualMed COVID-19 Chest X-ray Dataset Initiative (Chung, 2020a) - only COVID-19 cases are taken from this repository.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The basic building block of EfficientNet-B0. All MBConv blocks take the height, width, and channel of h, w, and c as input. C is the output channel of the two MBConv blocks. (Note that MBConv = Mobile Inverted Bottleneck Convolution, DW Conv = Depth-wise Convolution, SE = Squeeze-Excitation, Conv = Convolution)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Loss curve of ECOVNet (Base model EfficientNet-B5) during training</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Confusion matrices of the proposed ECOVNet considering EfficientNet-B5 as a base model.In the confusion matrices, the predicted labels, such as COVID-19, Normal, and Pneumonia, are marked as 0, 1 and 2, respectively.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Grad-CAM visualization for i th snapshot model of the proposed ECOVNet considering the base model EfficientNet-B5.</ns0:figDesc><ns0:graphic coords='16,161.68,573.05,58.79,58.71' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Overview of CNN based architectures for detecting COVID-19 from chest X-rays</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell>Data Source</ns0:cell><ns0:cell>Architecture</ns0:cell><ns0:cell cols='3'>Pre-trained Weight Ensemble Visualization</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Wang et al. (2020) COVIDx (Wang et al.,</ns0:cell><ns0:cell>COVID-Net</ns0:cell><ns0:cell>ImageNet</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>GSInquire</ns0:cell><ns0:cell>(Lin</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>2020)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>et al., 2019)</ns0:cell></ns0:row><ns0:row><ns0:cell>Luz et al. (2020)</ns0:cell><ns0:cell>COVIDx</ns0:cell><ns0:cell>EfficientNet</ns0:cell><ns0:cell>ImageNet</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>Heat Map</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Ozturk et al. (2020) Cohen et al. (2020),</ns0:cell><ns0:cell>DarkCovidNet</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>Grad-CAM</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>NIH Chest X-ray dataset</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>(Wang et al., 2017)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>Karim et al. (2020) COVIDx</ns0:cell><ns0:cell>VGG, ResNet, DenseNet</ns0:cell><ns0:cell>ImageNet</ns0:cell><ns0:cell>Yes</ns0:cell><ns0:cell>Grad-CAM,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Grad-CAM++</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>(Chattopadhay</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>et al., 2018), LRP</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>(Bach et al., 2015)</ns0:cell></ns0:row><ns0:row><ns0:cell>Khan et al. (2020)</ns0:cell><ns0:cell>Cohen et al. (2020), P.</ns0:cell><ns0:cell>Xception</ns0:cell><ns0:cell>ImageNet</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>No</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Mooney (Mooney, 2017)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Mahmud et al.</ns0:cell><ns0:cell>Mendeley Data, V2 (Ker-</ns0:cell><ns0:cell>CovXNet</ns0:cell><ns0:cell>Non-COVID</ns0:cell><ns0:cell>Yes</ns0:cell><ns0:cell>Grad-CAM</ns0:cell></ns0:row><ns0:row><ns0:cell>(2020)</ns0:cell><ns0:cell>many et al., 2018), 305</ns0:cell><ns0:cell /><ns0:cell>X-rays</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>COVID-19 images</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Chowdhury et al.</ns0:cell><ns0:cell>COVID-19 Radiography</ns0:cell><ns0:cell>PDCOVIDNet</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell cols='2'>Grad-CAM, Grad-</ns0:cell></ns0:row><ns0:row><ns0:cell>(2020)</ns0:cell><ns0:cell>Database (Chowdhury</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>CAM++</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>et al., 2020)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>segmentation using X-rays and CT images is presented in<ns0:ref type='bibr' target='#b46'>(Shoeibi et al., 2020)</ns0:ref>. 
Due to the need to interpret chest CT images faster, <ns0:ref type='bibr' target='#b16'>Jin et al. (2020a)</ns0:ref> proposed an AI system based on deep learning that can speed up the analysis of chest CTs to detect COVID-19, and validated it using large multi-class data sets. A new CNN architecture named COVID-Net and a large chest X-ray benchmark data set (COVIDx) were introduced in Wang et al. (2020). The proposed COVID-Net obtained a best test accuracy of 93.3%, and the authors studied how COVID-Net predicts using an interpretability method. Luz et al. (2020) proposed a new deep learning framework that extends the EfficientNet (Tan and Le, 2019) series, which is well known for its excellent prediction performance and fewer computational steps. Their experimental evaluation showed noteworthy classification performance, especially in COVID-19 cases. A CNN model called DarkCovidNet</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>Karim et al. (2020) proposed an explainable CNN-based method, named DeepCOVIDExplainer, that adjusts a neural ensemble technique followed by highlighting class-discriminating regions for the automatic detection of COVID-19 cases from chest X-ray images. Khan et al. (2020) proposed a model named CoroNet that used the Xception architecture pre-trained on the ImageNet dataset and trained on their benchmark created from two publicly available data sets, and carried out two different classification performance measurements, i.e., three and four classes, with classification accuracies of 95% and 89.6%, respectively. In another work, Mahmud et al. (2020) proposed a CNN-based model called CovXNet, which uses depthwise dilated convolution. At first, the model was trained with non-COVID-19 pneumonia images, and the acquired learning was then transferred, with some additional fine-tuning layers, by training again with a smaller number of chest X-rays related to COVID-19 and other pneumonia cases. As features are extracted from different resolutions of X-rays, a stacking algorithm is used in the prediction process, and for multi-class classification the accuracy of CovXNet is 90.3%. An advanced custom CNN architecture, COVID-Net (Wang et al., 2020), was implemented and tested using a large COVID-19 benchmark, but due to the large number of parameters, the computational overhead of this model is high. Another CNN-based modular architecture, named PDCOVIDNet, was proposed by Chowdhury et al. (2020); it consists of a parallel stack of multi-layer filter blocks in a cascade with a classification and visualization block. The authors claimed the effectiveness of the model compared with a number of well-known CNN architectures and showed precision and recall of 96.58% and 96.59%, respectively. Table 1 shows an overview of some CNN-based architectures for detecting COVID-19 from chest X-rays. Most of the existing work, as discussed above, makes prediction decisions based on the output of a single model; only a few methods (such as Karim et al. (2020) and Mahmud et al. (2020)) used an ensemble. The key benefit of the ensemble is that it can reduce prediction errors, thus making the model more versatile. In (Karim et al., 2020), the authors used an ensemble of heterogeneous models, i.e., VGG19 (Simonyan and Zisserman, 2015), ResNet18 (He et al., 2016), and DenseNet161 (Huang et al., 2017b), but this has two flaws. Firstly, each model requires a separate training session, and secondly, an individual model suffers from training many parameters. Another method, Mahmud et al. (2020), used an ensemble on a single model with various image resolutions; for each image resolution, it creates a separate model and stacks it for prediction, which incurs a significant computational overhead. To address the aforementioned problems, we use a lightweight but effective model, EfficientNet, since it is 8.4 times smaller and 6.1 times faster than the best existing CNN (Tan and Le, 2019). Also, we force large changes in model weights through the recursive learning rate, create model snapshots in the same training run, and further apply the ensemble to make the proposed architecture more robust, thereby achieving a higher detection rate compared to other methods.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>EfficientNet-B0 baseline network layers outline</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Stage</ns0:cell><ns0:cell>Operator</ns0:cell><ns0:cell cols='3'>Resolution #Output Feature Maps #Layers</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>Conv 3 × 3</ns0:cell><ns0:cell>224 × 224</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>MBConv1, k3 × 3</ns0:cell><ns0:cell>112 × 112</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>MBConv6, k3 × 3</ns0:cell><ns0:cell>112 × 112</ns0:cell><ns0:cell>24</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>MBConv6, k5 × 5</ns0:cell><ns0:cell>56 × 56</ns0:cell><ns0:cell>40</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>MBConv6, k3 × 3</ns0:cell><ns0:cell>28 × 28</ns0:cell><ns0:cell>80</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>MBConv6, k5 × 5</ns0:cell><ns0:cell>14 × 14</ns0:cell><ns0:cell>112</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>MBConv6, k5 × 5</ns0:cell><ns0:cell>14 × 14</ns0:cell><ns0:cell>192</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>MBConv6, k3 × 3</ns0:cell><ns0:cell>7 × 7</ns0:cell><ns0:cell>320</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>Conv 1 × 1 & Pooling & FC</ns0:cell><ns0:cell>7 × 7</ns0:cell><ns0:cell>1280</ns0:cell><ns0:cell>1</ns0:cell></ns0:row></ns0:table><ns0:note>Instead of randomly initializing the network weights, we instantiate ImageNet's pre-trained weights in the EfficientNet model, thereby accelerating the training process. Transferring the pre-trained weights of ImageNet has performed a great feat in the field of image analysis, since ImageNet comprises more than 14 million images covering eclectic classes. The rationale for using pre-trained weights is that the imported model already has sufficient knowledge of the broader aspects of the image domain. As has been manifested in several studies <ns0:ref type='bibr' target='#b37'>(Rajaraman et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b32'>Narin et al., 2020)</ns0:ref>, using pre-trained ImageNet weights in state-of-the-art CNN models remains effective even when the problem area (namely COVID-19 detection) is considerably distinct from the one in which the original weights were obtained. The optimization process will fine-tune the initial pre-trained weights in the new training phase so that we can fit the pre-trained model to a specific problem domain, such as COVID-19 detection.</ns0:note></ns0:figure>
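A minimal sketch of this weight-transfer step with tf.keras.applications is shown below; the input resolution follows Table 4 for B0, and retraining all layers is one plausible fine-tuning choice rather than a confirmed detail of the released code.

```python
import tensorflow as tf

# Instantiate EfficientNet-B0 with transferred ImageNet weights as a feature
# extractor; tf.keras.applications ships EfficientNetB0 through EfficientNetB7.
base = tf.keras.applications.EfficientNetB0(
    include_top=False,             # drop the 1000-class ImageNet head
    weights='imagenet',            # pre-trained ImageNet weights
    input_shape=(224, 224, 3))     # B0 resolution, as in Table 4
base.trainable = True              # allow fine-tuning of the transferred weights
```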
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Image partition of the training, validation, and testing sets. The entire image distribution of training, validation, and testing is shown in Table 3. In our experiment, we use EfficientNet B0 to B5 as base models. However, the input image resolution size is different for each base model, and the size increases from B0 to B5. As the image resolution size increases, the model needs more layers to capture the finer-grained patterns, thereby increasing the parameter size of the model. Table 4 displays a list of input image resolution sizes for each base model and the total number of parameters generated during training. We further split the original training set into training and validation with a ratio of 9:1, and we used the original test set as provided.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='4'>Category COVID-19 Normal Pneumonia</ns0:cell><ns0:cell>Total</ns0:cell></ns0:row><ns0:row><ns0:cell>Training</ns0:cell><ns0:cell>441</ns0:cell><ns0:cell>7, 170</ns0:cell><ns0:cell>4, 914</ns0:cell><ns0:cell>12, 525</ns0:cell></ns0:row><ns0:row><ns0:cell>Validation</ns0:cell><ns0:cell>48</ns0:cell><ns0:cell>796</ns0:cell><ns0:cell>545</ns0:cell><ns0:cell>1, 389</ns0:cell></ns0:row><ns0:row><ns0:cell>Testing</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>885</ns0:cell><ns0:cell>594</ns0:cell><ns0:cell>1, 579</ns0:cell></ns0:row></ns0:table></ns0:figure>
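The 9:1 split can be reproduced with scikit-learn as sketched below; the file names are placeholders, the class counts come from Table 3, and the stratification and random seed are illustrative assumptions rather than confirmed details.

```python
from sklearn.model_selection import train_test_split

# Placeholder file list and labels matching the Table 3 training counts
# (441 COVID-19, 7,170 normal, 4,914 pneumonia; 12,525 in total).
image_paths = ['img_%05d.png' % i for i in range(12525)]
labels = [0] * 441 + [1] * 7170 + [2] * 4914

# 9:1 stratified split into training and validation subsets.
train_x, val_x, train_y, val_y = train_test_split(
    image_paths, labels, test_size=0.1, stratify=labels, random_state=42)
```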
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Image resolution and total number of parameters of ECOVNet considering the base models of EfficientNet (B0 to B5)</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Base Model</ns0:cell><ns0:cell cols='2'>Image Resolution Parameter Size (ECOVNet)</ns0:cell></ns0:row><ns0:row><ns0:cell>EfficientNet-B0</ns0:cell><ns0:cell>224 × 224</ns0:cell><ns0:cell>4, 978, 847</ns0:cell></ns0:row><ns0:row><ns0:cell>EfficientNet-B1</ns0:cell><ns0:cell>240 × 240</ns0:cell><ns0:cell>7, 504, 515</ns0:cell></ns0:row><ns0:row><ns0:cell>EfficientNet-B2</ns0:cell><ns0:cell>260 × 260</ns0:cell><ns0:cell>8, 763, 893</ns0:cell></ns0:row><ns0:row><ns0:cell>EfficientNet-B3</ns0:cell><ns0:cell>360 × 360</ns0:cell><ns0:cell>11, 844, 907</ns0:cell></ns0:row><ns0:row><ns0:cell>EfficientNet-B4</ns0:cell><ns0:cell>380 × 380</ns0:cell><ns0:cell>18, 867, 291</ns0:cell></ns0:row><ns0:row><ns0:cell>EfficientNet-B5</ns0:cell><ns0:cell>456 × 456</ns0:cell><ns0:cell>29, 839, 091</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Prediction performance of proposed ECOVNet without using ensemble Bold indicates that the method has statistically better performance than other methods.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell>Pre-trained Weight</ns0:cell><ns0:cell>Precision(%)</ns0:cell><ns0:cell>Recall(%)</ns0:cell><ns0:cell>F1-score(%)</ns0:cell><ns0:cell>Accuracy(%)(95% CI)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EfficientNet-B0</ns0:cell><ns0:cell>93.27</ns0:cell><ns0:cell>93.29</ns0:cell><ns0:cell>93.27</ns0:cell><ns0:cell>93.29 ± 1.23</ns0:cell></ns0:row><ns0:row><ns0:cell>ECOVNet (Without Augmentation)</ns0:cell><ns0:cell>EfficientNet-B1 EfficientNet-B2 EfficientNet-B3 EfficientNet-B4</ns0:cell><ns0:cell>94.28 93.24 95.56 95.52</ns0:cell><ns0:cell>94.30 93.03 95.57 95.50</ns0:cell><ns0:cell>94.26 93.08 95.56 95.50</ns0:cell><ns0:cell>94.30 ± 1.14 93.03 ± 1.26 95.57 ± 1.01 95.50 ± 1.02</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EfficientNet-B5</ns0:cell><ns0:cell>96.28</ns0:cell><ns0:cell>96.26</ns0:cell><ns0:cell>96.26</ns0:cell><ns0:cell>96.26 ± 0.94</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EfficientNet-B0</ns0:cell><ns0:cell>91.71</ns0:cell><ns0:cell>74.10</ns0:cell><ns0:cell>79.72</ns0:cell><ns0:cell>74.10 ± 2.16</ns0:cell></ns0:row><ns0:row><ns0:cell>ECOVNet (With Augmentation)</ns0:cell><ns0:cell>EfficientNet-B1 EfficientNet-B2 EfficientNet-B3 EfficientNet-B4</ns0:cell><ns0:cell>91.02 93.60 92.60 94.32</ns0:cell><ns0:cell>86.19 93.10 90.25 93.73</ns0:cell><ns0:cell>87.67 93.24 90.92 93.89</ns0:cell><ns0:cell>86.19 ± 1.70 93.10 ± 1.25 90.25 ± 1.46 93.73 ± 1.20</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EfficientNet-B5</ns0:cell><ns0:cell>94.79</ns0:cell><ns0:cell>94.68</ns0:cell><ns0:cell>94.70</ns0:cell><ns0:cell>94.68 ± 1.11</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Class-wise classification results of ECOVNet (Base model EfficientNet-B5) without augmentation Bold indicates that the method has statistically better performance than other methods for COVID-19.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell>Class</ns0:cell><ns0:cell>Precision (%)</ns0:cell><ns0:cell>Recall (%)</ns0:cell><ns0:cell>F1-score (%)</ns0:cell><ns0:cell>Accuracy (%)(95% CI)</ns0:cell></ns0:row><ns0:row><ns0:cell>ECOVNet (Without Ensemble)</ns0:cell><ns0:cell>COVID-19 Normal Pneumonia</ns0:cell><ns0:cell>91.43 97.07 95.91</ns0:cell><ns0:cell>96.00 97.29 94.78</ns0:cell><ns0:cell>93.66 97.18 95.34</ns0:cell><ns0:cell>96.26 ± 0.94</ns0:cell></ns0:row><ns0:row><ns0:cell>ECOVNet (Hard Ensemble)</ns0:cell><ns0:cell>COVID-19 Normal Pneumonia</ns0:cell><ns0:cell>94.17 97.05 94.95</ns0:cell><ns0:cell>97.00 96.72 94.95</ns0:cell><ns0:cell>95.57 96.89 94.95</ns0:cell><ns0:cell>96.07 ± 0.96</ns0:cell></ns0:row><ns0:row><ns0:cell>ECOVNet (Soft Ensemble)</ns0:cell><ns0:cell>COVID-19 Normal Pneumonia</ns0:cell><ns0:cell>92.59 97.05 95.25</ns0:cell><ns0:cell>100 96.61 94.61</ns0:cell><ns0:cell>96.15 96.83 94.93</ns0:cell><ns0:cell>96.07 ± 0.96</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Class-wise classification results of ECOVNet (Base model EfficientNet-B5) with augmentation Bold indicates that the method has statistically better performance than other methods for COVID-19.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell>Class</ns0:cell><ns0:cell cols='4'>Precision(%) Recall(%) F1-score(%) Accuracy(%)(95% CI)</ns0:cell></ns0:row><ns0:row><ns0:cell>ECOVNet (Without Ensemble)</ns0:cell><ns0:cell>COVID-19 Normal Pneumonia</ns0:cell><ns0:cell>87.62 97.31 97.31</ns0:cell><ns0:cell>92.00 94.12 95.96</ns0:cell><ns0:cell>89.76 95.69 94.06</ns0:cell><ns0:cell>94.68 ± 1.11</ns0:cell></ns0:row><ns0:row><ns0:cell>ECOVNet (Hard Ensemble)</ns0:cell><ns0:cell>COVID-19 Normal Pneumonia</ns0:cell><ns0:cell>90.29 97.35 93.76</ns0:cell><ns0:cell>93.00 95.37 96.13</ns0:cell><ns0:cell>91.63 96.35 94.93</ns0:cell><ns0:cell>95.50 ± 1.02</ns0:cell></ns0:row><ns0:row><ns0:cell>ECOVNet (Soft Ensemble)</ns0:cell><ns0:cell>COVID-19 Normal Pneumonia</ns0:cell><ns0:cell>85.45 97.67 93.43</ns0:cell><ns0:cell>94.00 94.92 95.79</ns0:cell><ns0:cell>89.52 96.28 94.60</ns0:cell><ns0:cell>95.19 ± 1.06</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_14'><ns0:head /><ns0:label /><ns0:figDesc>Figure 5. ROC curves of model snapshots of the proposed ECOVNet considering the EfficientNet-B5 base model. (a) Snapshot model 1: micro-average AUC 0.9924, macro-average 0.9971; COVID-19 0.9996, Normal 0.9970, Pneumonia 0.9918. (b) Snapshot model 2: micro-average 0.9897, macro-average 0.9943; COVID-19 0.9998, Normal 0.9935, Pneumonia 0.9845. (c) Snapshot model 3: micro-average 0.9915, macro-average 0.9954; COVID-19 1.0000, Normal 0.9955, Pneumonia 0.9881. (d) Snapshot model 4: micro-average 0.9959, macro-average 0.9974; COVID-19 1.0000, Normal 0.9975, Pneumonia 0.9924. (e) Snapshot model 5: micro-average 0.9911, macro-average 0.9947; COVID-19 0.9998, Normal 0.9941, Pneumonia 0.9877. The accompanying confusion-matrix panels (see Figure 6) report, for true labels 0 (COVID-19), 1 (Normal), and 2 (Pneumonia), diagonal values of 1.00/0.97/0.95 for the soft ensemble, 0.97/0.97/0.95 for the hard ensemble, and 0.96/0.97/0.95 with no ensemble.</ns0:figDesc><ns0:note>2 https://github.com/lindawangg/COVID-Net 3 https://github.com/ufopcsilab/EfficientNet-C19 4 https://github.com/muhammedtalo/COVID-19 5 https://github.com/drkhan107/CoroNet</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_16'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Comparison of the proposed ECOVNet with other state-of-the-art methods on COVID-19 detection Bold indicates that the method has statistically better performance than other methods.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell>Precision (%) (COVID-19)</ns0:cell><ns0:cell>Recall (%) (COVID-19)</ns0:cell><ns0:cell>Accuracy (%)</ns0:cell></ns0:row><ns0:row><ns0:cell>COVID-Net (Wang et al., 2020)</ns0:cell><ns0:cell>92.80</ns0:cell><ns0:cell>91.00</ns0:cell><ns0:cell>93.92</ns0:cell></ns0:row><ns0:row><ns0:cell>EfficientNet-B3 (Luz et al., 2020)</ns0:cell><ns0:cell>95.29</ns0:cell><ns0:cell>81.00</ns0:cell><ns0:cell>94.49</ns0:cell></ns0:row><ns0:row><ns0:cell>DarkCovidNet (Ozturk et al., 2020)</ns0:cell><ns0:cell>96.00</ns0:cell><ns0:cell>88.00</ns0:cell><ns0:cell>92.00</ns0:cell></ns0:row><ns0:row><ns0:cell>CoroNet (Khan et al., 2020)</ns0:cell><ns0:cell>91.00</ns0:cell><ns0:cell>87.00</ns0:cell><ns0:cell>88.00</ns0:cell></ns0:row><ns0:row><ns0:cell>ECOVNet-Hard Ensemble (Proposed)</ns0:cell><ns0:cell>94.17</ns0:cell><ns0:cell>97.00</ns0:cell><ns0:cell>96.07</ns0:cell></ns0:row><ns0:row><ns0:cell>ECOVNet-Soft Ensemble (Proposed)</ns0:cell><ns0:cell>92.59</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>96.07</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_17'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Comparison of the proposed ECOVNet with other CNN architectures on COVID-19 detection Bold indicates that the method has statistically better performance than other methods.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell>Precision (%) (COVID-19)</ns0:cell><ns0:cell>Recall (%) (COVID-19)</ns0:cell><ns0:cell>Accuracy (%)</ns0:cell></ns0:row><ns0:row><ns0:cell>EfficientNet-B5 (Without augmentation)</ns0:cell><ns0:cell>94.12</ns0:cell><ns0:cell>96.00</ns0:cell><ns0:cell>95.76</ns0:cell></ns0:row><ns0:row><ns0:cell>EfficientNet-B5 (With augmentation)</ns0:cell><ns0:cell>84.55</ns0:cell><ns0:cell>93.00</ns0:cell><ns0:cell>95.00</ns0:cell></ns0:row><ns0:row><ns0:cell>ECOVNet-Hard Ensemble (Proposed)</ns0:cell><ns0:cell>94.17</ns0:cell><ns0:cell>97.00</ns0:cell><ns0:cell>96.07</ns0:cell></ns0:row><ns0:row><ns0:cell>ECOVNet-Soft Ensemble (Proposed)</ns0:cell><ns0:cell>92.59</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>96.07</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Dear Editor,
Thank you for allowing a resubmission of our manuscript, with an opportunity to address the
reviewers’ comments.
We are uploading (a) our point-by-point response to the comments below (response to
reviewers), (b) an updated manuscript with blue highlighting newly added content and red
showing removed content, and (c) a clean, updated manuscript without highlights (PDF
main document).
Best regards,
Chowdhury et al.
Revision #2: Reviewer #1: Comments #1: Table 8 compared the proposed method with
other existing methods so its accuracy is very important. I haven't checked all numbers
but at least for DeepCOVIDExplainer the number in Table 8 does not match the one I
found in either https://ieeexplore.ieee.org/document/9313304 or
https://arxiv.org/pdf/2004.04582.pdf. Perhaps the authors of DeepCOVIDExplainer have
updated their numbers, but it is still better for the current work to cite numbers from the
peer-reviewed version (the IEEE version) or at least the latest pre-print version instead
of the old pre-print version. Please check the numbers of other methods too to make
sure all numbers are current and accurate.
Response: We thank the reviewer for the comment. We reported values from an older version of
the arXiv paper (https://arxiv.org/pdf/2004.04582v2.pdf), which is why the values do not match. Now, we
have cited the peer-reviewed published version (the IEEE version). In addition, we have revised
the Table 8 by comparing our proposed method with other existing methods (including
DeepCOVIDExplainer (https://github.com/rezacsedu/DeepCOVIDExplainer)) on the same
dataset derived from COVIDx (see Table 3) to address the reviewer's second comment. The
results we obtained using the DeepCOVIDExplainer authors’ provided code for our dataset are
very different from the results in their published paper
(https://ieeexplore.ieee.org/document/9313304). We tested their code in a variety of ways, and
found the recall of COVID-19 cases is between 28% and 34%, which is highly inconsistent with
the results in their published paper. Thus, in our revised version, we have removed
DeepCOVIDExplainer from the comparison in Table 8.
Revision #2: Reviewer #1: Comments #2: It is not explained in the text why COVID-Net,
EfficientNet-B3, DeepCOVIDExplainer, and the proposed method all use the COVIDx dataset
but have different dataset size in Table 8. It would be much better if the authors could compare
the proposed method with other methods on the same dataset, at least for methods that use
COVIDx, if possible.
Response: We thank the reviewer for the comment. The COVID-Net, EfficientNet-B3,
DeepCOVIDExplainer and our proposed method all use the COVIDx dataset, but have different
dataset sizes. One of the reasons is that the COVIDx github repository
(https://github.com/lindawangg/COVID-Net) is updated regularly, which results in different data
sizes. Since the github repository updates the image source files on a regular basis, it only
tracks the current version of the train and test sets. In our current version, we compare our
proposed method with COVID-Net (https://github.com/lindawangg/COVID-Net), EfficientNet-B3
(https://github.com/ufopcsilab/EfficientNet-C19), DarkCovidNet
(https://github.com/muhammedtalo/COVID-19), and CoroNet
(https://github.com/drkhan107/CoroNet). All these previous methods have released either a
trained model on our training dataset (such as COVID-Net) or their source code, and in this
revised version we use the same train and test dataset (see Table 3) derived from COVIDx
(https://github.com/lindawangg/COVID-Net) to compare the precision, recall and accuracy of our
proposed method with the existing methods. Accordingly, we have rewritten Section 4.4 and
revised Table 8.
" | Here is a paper. Please give your review comments after reading it. |
112 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Robot navigation allows mobile robots to navigate among obstacles without hitting them and to reach the specified goal point. In addition to preventing collisions, it is also essential for mobile robots to sense and maintain an appropriate battery power level at all times to avoid failures and non-fulfillment of their scheduled tasks. Therefore, selecting the proper time to recharge the batteries is a crucial aspect of navigation algorithm design for the robot's prolonged autonomous operation. In this paper, a machine learning algorithm is used to ensure extended robot autonomy, based on a reinforcement learning method combined with a fuzzy inference system. The proposal enables a mobile robot to learn whether to continue through its path toward the destination or to modify its course on the fly, if necessary, to proceed toward the battery charging station, based on its current state. The proposal exhibits flexible behavior in choosing an action that allows a robot to move from a starting point to a destination while guaranteeing battery charge availability. This paper shows the results obtained using an approach with thirty-six states and a reduced version with twenty states. The conducted simulations show that the robot requires fewer training epochs to achieve ten consecutive successes in the fifteen proposed scenarios than traditional reinforcement learning methods exhibit. Moreover, in four scenarios, the robot ends up with a battery level above 80%, which is higher than the results obtained with two deterministic methods.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Autonomous mobile robots are getting a great deal of attention due to their adoption in many areas such as space exploration, search and rescue tasks, inspection and maintenance operations, and agricultural, domestic, security, and defense tasks, among many others. They are generally composed of three main modules that allow them to complete their job. The first module integrates all the mechanisms and elements that materialize the robot's locomotion system, including the pneumatic, electromechanical, electrical, and electronic components. The second module deals with the environment's data acquisition based on sensors and the software needed. The third module is the robot's brain, i.e., this module is responsible for data processing, robot control, and navigation.</ns0:p><ns0:p>Mobile robot navigation involves any programmed activity that allows the robot to move from its current position to a destination point <ns0:ref type='bibr' target='#b5'>(Huskić and Zell, 2019)</ns0:ref>, with path planning and obstacle avoidance being the main tasks performed during this process. Navigation algorithms commonly use artificial intelligence (AI) to efficiently accomplish their current mission during obstacle avoidance maneuvers.</ns0:p><ns0:p>Various AI approaches are employed to deal with the complex decision-making problems faced during mobile robot navigation, such as fuzzy inference systems (FIS), neural networks (NN), genetic algorithms (GA), A* algorithms, and the artificial potential field method (APF) <ns0:ref type='bibr' target='#b3'>(Gul et al., 2019)</ns0:ref>. For these navigation approaches, where path planning and obstacle avoidance are sub-tasks commonly solved during navigation time, the employed methodologies assume that the environment corresponds to static scenarios, with obstacles and destinations that do not change over time, or dynamic scenarios, with obstacles and destinations changing over time. Among the different methods reported in the literature, the APF method demonstrates a suitable way of generating paths to guide mobile robots from their initial position to their destination. This method involves using a simple set of equations, easily programmable under limited computing platforms like the mobile robot's embedded processors, rendering an adequate reactive response to the environment (a minimal sketch is given below).</ns0:p><ns0:p>As far as decision-making is concerned, several variants are employed involving the use of different methodologies that help in the decision process. When the decision process implies reaching a predefined goal with obstacle avoidance capability, fuzzy logic or bioinspired methodologies can be used for a robot to decide between avoiding an obstacle, following a wall to the right, or following a wall to the left <ns0:ref type='bibr' target='#b23'>(Zapata-Cortes et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b6'>Khedher. et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b21'>Yang et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b18'>Wang et al., 2020)</ns0:ref>. Within the decision-making process, it is possible to increase precision in parameters such as the speed or the robot's rotation angle, improving the executed movements in narrow paths <ns0:ref type='bibr' target='#b14'>(Teja S. and Alami, 2020)</ns0:ref>.</ns0:p>
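As an illustration of how compact such a controller can be, the following is a minimal sketch of the classical Khatib-style APF force computation; the gains `k_att`, `k_rep` and influence radius `rho0` are illustrative values, not those used in any cited work.

```python
import numpy as np

def apf_force(q, q_goal, obstacles, k_att=1.0, k_rep=100.0, rho0=2.0):
    """Resultant APF force at robot position q (2D numpy arrays)."""
    force = k_att * (q_goal - q)                 # attractive term pulls toward the goal
    for q_obs in obstacles:
        diff = q - q_obs
        rho = np.linalg.norm(diff)
        if 0 < rho <= rho0:                      # repulsion only inside the influence radius
            force += k_rep * (1.0 / rho - 1.0 / rho0) / rho**3 * diff
    return force
```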
However, these navigation methodologies fail to consider battery recharging, which is vital to keep a robot working for prolonged periods. Since robots use batteries to power their motors and the navigation system, the robot's battery must supply adequate voltage and current levels for the robot to complete its tasks without any unexpected energy interruption due to a low battery condition. This problem is known as the autonomous recharging problem.</ns0:p><ns0:p>In recent years, the number of applications using mobile robots has been increasing, and linked with this growth is the demand for a wider autonomous operation margin. The autonomous recharging problem (ARP) is receiving attention for effectively planning and coordinating when, where, and how to recharge robots so as to maximize operational efficiency, improving the robot's autonomy. The ARP comprises all the actions required to decide the moment, the destination, and the way to recharge the robot's battery to maximize operational efficiency <ns0:ref type='bibr' target='#b16'>(Tomy et al., 2020)</ns0:ref>. The ARP faces two main problems: how to proceed to the battery charging station, and the precise moment to deviate from the main path toward the final destination. The first deals with designing and testing the hardware and software employed to help the robot maneuver to the charging station. The second involves deciding when the robot should go to the charging station <ns0:ref type='bibr' target='#b2'>(de Lucca Siqueira et al., 2016)</ns0:ref>. This work focuses only on the second problem. This proposal introduces a novel approach to solving the ARP, based on machine learning, to determine the pertinent moment for the robot to recharge its battery. The proposed system consists of a path planning module and a decision-making module. The decision-making module uses the fuzzy Q-learning (FQL) method for the robot to decide among going to the destination point, heading to the battery charging station, or remaining static. The paper's main contribution lies in a reactive navigation scheme that allows a robot to learn, based on trial and error, to make decisions that fulfill its tasks with a suitable battery level.</ns0:p><ns0:p>The following sections describe the related work, the analyzed methods, and the proposed approach.</ns0:p></ns0:div>
<ns0:div><ns0:head>Related Work</ns0:head><ns0:p>One of the classical methods used to address the path planning problem is the APF method, which bases its foundations on attractive and repulsive forces. The APF method has been widely applied in static real-time path planning. Several works found in the literature addressing the APF method make slight modifications to Khatib's model, introduced in 1985, to improve its performance and avoid falling into a local minimum state. For example, the work of Hosseini <ns0:ref type='bibr' target='#b4'>Rostami et al. (2019)</ns0:ref>, which uses the APF method in a dynamic environment, showed the robot's obstacle avoidance capability while moving towards its target. <ns0:ref type='bibr' target='#b9'>Matoui et al. (2017)</ns0:ref> implemented the APF method combined with a set of equations to operate mobile robots cooperatively under a decentralized architecture. Alternatively, implementing this method combined with fuzzy logic <ns0:ref type='bibr' target='#b17'>(Tuazon et al., 2016)</ns0:ref> or reinforcement learning (RL) <ns0:ref type='bibr' target='#b8'>(Liu et al., 2017)</ns0:ref> is another approach employed for robot navigation. Other alternatives solve path planning using NN <ns0:ref type='bibr' target='#b19'>(Wei et al., 2019)</ns0:ref> or FQL <ns0:ref type='bibr' target='#b7'>(Lachekhab et al., 2019)</ns0:ref>. Some strategies that have arisen to solve the ARP focus on when and where to recharge the battery. The traditional charging method refers to moving automatically to a charging station when the battery charge is below a certain threshold. It is a simple strategy to solve the ARP, but with little flexibility; i.e., the robot could be about to reach the goal, but instead it heads off to charge the battery by considering only a charge</ns0:p></ns0:div>
<ns0:div><ns0:p>level threshold as the only criterion. To overcome this disadvantage, <ns0:ref type='bibr' target='#b10'>Rappaport and Bettstetter (2017)</ns0:ref> proposed an adaptive threshold using five policies to select a charging point (CP). The first policy consists of selecting the closest CP, the second is to select a free CP, the third is to choose a CP when there are no more possibilities to explore, the fourth consists of staying as long as possible before moving to the next one, and the last policy refers to sending information to other robots to avoid redundancies in exploration.</ns0:p><ns0:p>Based on a fuzzy inference system, <ns0:ref type='bibr' target='#b2'>de Lucca Siqueira et al. (2016)</ns0:ref> propose a solution for the first sub-problem. It considers three crisp input variables for the fuzzy mapping rules: the battery level, the distance to the charging station, and the distance to the destination point. Under this approach, if a warning state is set due to a sensed low battery level, and the destination point is closer than the charging station, the robot will head towards the target, and it will not turn off. An alternative way of resolving the ARP is the scheduling strategy of mobile chargers (MC). The MCs improve the battery charging process while the robots perform a task, avoiding task execution failure due to a low battery level and prioritizing task completion. It is necessary to establish a minimum number of MCs and a charging sequence algorithm to design an operative scheduling strategy. Some methods used in designing a scheduling strategy are linear programming, nonlinear programming, clustering analysis, the TSP algorithm, queuing theory, and other mathematical methods. In this way, <ns0:ref type='bibr' target='#b1'>Cheng et al. (2021)</ns0:ref> proposed a mesh model to cope with the limited movements of the robot and the MCs. Considering the distances between the MCs and the robot and the time spent at the charging station, they developed a minimum-encounter-time scheduling algorithm to solve the recharging problem for a robot that executes a priority task. Meanwhile, <ns0:ref type='bibr' target='#b16'>Tomy et al. (2020)</ns0:ref> proposed a recharging strategy based on a learning process to schedule visits to the charging station. The main contribution reported was the robot's capability to predict high-value tasks assigned for execution. Therefore, the robot can find the specific moment to schedule a visit to a charging station when it is less busy and does not have essential tasks to execute. This proposal could offer more flexibility compared to a system based on rules only.</ns0:p><ns0:p>Another solution to the ARP is task planning. The main idea is that after solving a task, the robot goes to a charging station and recharges its battery. In this way, <ns0:ref type='bibr' target='#b10'>Rappaport and Bettstetter (2017)</ns0:ref> proposed a method where a robot has a sequence of tasks, and it has to segment the sequence to plan when to recharge. In a second stage, multiple robots were coordinated to recharge using a maximum bipartite graph matching method. Similarly, <ns0:ref type='bibr' target='#b20'>Xu et al. 
(2019)</ns0:ref> describe a method to solve the ARP with multiple robots capable of planning regular visits to the charging station as part of a sequence of tasks they must follow in a collaborative environment.</ns0:p></ns0:div>
<ns0:div><ns0:head>Methods</ns0:head><ns0:p>This section summarizes the methods used in this work for the navigation approach, starting with the artificial potential field method, followed by the classical reinforcement learning methods: Q-learning (QL) and the State-Action-Reward-State-Action (SARSA) algorithm. The section ends with a description of the fuzzy Q-learning (FQL) method implemented in the decision-making module.</ns0:p></ns0:div>
<ns0:div><ns0:head>The artificial potential field method</ns0:head><ns0:p>The APF method provides a simple and effective motion planning method for practical purposes. Under this method, the attractive force is computed according to equation (1):</ns0:p><ns0:formula xml:id='formula_0'>F attr (q, g) = −ξ p(q, g)<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where p(q, g) is the Euclidean distance between the robot's position, denoted by q, and the destination point's position g. The term ξ is defined as the attractive factor. The repulsive force, on the other hand, is calculated with the aid of equation (2).</ns0:p><ns0:formula xml:id='formula_1'>F rep (q) = η(1/p(q, q oj) − 1/p oj) × (1/p²(q, q oj)) × ∇p(q, q oj), if p(q, q oj) < p oj; η(−1/p oj), if p(q, q oj) = p oj; 0, otherwise<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where p(q, q oj) is the distance between the robot's position and the j-th obstacle's position q oj. The obstacle radius threshold is denoted by p oj. The term η is the repulsive factor. Furthermore, the resultant force (3) is the sum of the attractive and repulsive forces.</ns0:p></ns0:div>
<ns0:div><ns0:formula>F res = F attr (q, g) + F rep (q)<ns0:label>(3)</ns0:label></ns0:formula></ns0:div>
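<ns0:div><ns0:p>As an illustration, a minimal Python sketch of equations (1)–(3) follows. It is written for this text, not taken from the authors' released code; the vector form of the attractive force (magnitude ξ·p(q, g), directed toward the goal) is our interpretation of the scalar equation (1), and the function names are ours.</ns0:p><ns0:p>
import math

XI = 2.3    # attractive factor (value used later in the path planning module)
ETA = 61.5  # repulsive factor

def attractive_force(q, g, xi=XI):
    # Eq. (1): pull toward the goal g with magnitude xi * distance(q, g),
    # written in vector form so that the direction is (g - q).
    return (xi * (g[0] - q[0]), xi * (g[1] - q[1]))

def repulsive_force(q, obstacles, rho0, eta=ETA):
    # Eq. (2): sum of repulsive contributions of obstacles closer than rho0.
    fx = fy = 0.0
    for qo in obstacles:
        d = math.hypot(q[0] - qo[0], q[1] - qo[1])
        if 0.0 < d < rho0:
            mag = eta * (1.0 / d - 1.0 / rho0) / (d ** 2)
            # unit vector pointing away from the obstacle
            fx += mag * (q[0] - qo[0]) / d
            fy += mag * (q[1] - qo[1]) / d
    return (fx, fy)

def resultant_force(q, g, obstacles, rho0):
    # Eq. (3): vector sum of attraction and repulsion.
    fa = attractive_force(q, g)
    fr = repulsive_force(q, obstacles, rho0)
    return (fa[0] + fr[0], fa[1] + fr[1])
</ns0:p></ns0:div>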
<ns0:div><ns0:head>Classical reinforcement learning methods</ns0:head><ns0:p>Reinforcement learning (RL) is a machine learning paradigm whose main elements are the agents, the states, the actions, the rewards, and the punishments. In general, RL problems involve learning what to do and how to map situations to actions to maximize a numerical reward signal when the learning agent does not know what steps to take. The agent involved in the learning process must discover which actions give the best reward based on a trial and error process <ns0:ref type='bibr' target='#b13'>(Sutton and Barto, 2018)</ns0:ref>. Essentially, RL involves closed-loop problems, because the learning system's actions influence its later inputs. Different methods use this paradigm; among the classic methods are QL and SARSA. QL provides learning agents with the ability to act optimally in a Markovian domain by experiencing the consequences of their actions, i.e., the agents learn through rewards and punishments <ns0:ref type='bibr' target='#b13'>(Sutton and Barto, 2018)</ns0:ref>. QL is a method based on estimating the value of the Q function of the states and actions, and it can be defined as shown in equation (<ns0:ref type='formula' target='#formula_2'>4</ns0:ref>), where Q(S t , A t ) is the expected sum of the discounted reward for performing action a in state s, α is the learning rate, γ is the discounting factor, S t is the current state, S t+1 is the new state, R t+1 is the reward in the new state, A t is the action in S t , and a is the action with the best q-value in S t .</ns0:p><ns0:formula xml:id='formula_2'>Q(S t , A t ) ← Q(S t , A t ) + α[R t+1 + γ max a Q(S t+1 , a) − Q(S t , A t )]<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>The method called SARSA results from a modification made to QL. The main difference between these algorithms is that SARSA does not use the max operator of the Q function during the rule update, as shown in equation (<ns0:ref type='formula' target='#formula_3'>5</ns0:ref>) <ns0:ref type='bibr' target='#b13'>(Sutton and Barto, 2018)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_3'>Q(S t , A t ) ← Q(S t , A t ) + α[R t+1 + γQ(S t+1 , A t+1 ) − Q(S t , A t )]<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>The fuzzy Q-learning method, FQL <ns0:ref type='bibr' target='#b0'>(Anam et al., 2009)</ns0:ref>, is an extension of fuzzy inference systems (FIS) <ns0:ref type='bibr' target='#b12'>(Ross, 2010)</ns0:ref>, where the fuzzy rules define the learning agent's states. At the first step of the process, crisp input variables are converted into fuzzy inputs through input membership functions. Next, rule evaluation is conducted; any convenient fuzzy t-norm can be used at this stage, a common choice being the MIN operator. Only one fuzzy rule, i.e., the i-th fuzzy rule, r i , determines the learning agent's state, with i ∈ [1, N], where N denotes the total number of fuzzy rules. Each rule has an associated numerical value, α i , called the rule's strength, defining the degree to which the agent is in a certain state. This value α i allows the agent to choose an action from the set of all possible actions A; the j-th possible action in the i-th rule is called a(i, j), and its corresponding q-value is q(i, j). Therefore, fuzzy rules can be formulated as:</ns0:p><ns0:p>If x is S i then a(i, 1) with q(i, 1) or ... 
or a(i, j) with q(i, j)</ns0:p><ns0:p>FQL's primary goal implies that the learning agent finds the best solution for each rule, i.e., the action with the highest q-value. These values are stored in a table of i × j dimensions containing the q-values; the table dimension corresponds to the number of fuzzy rules times the total number of actions. A learning policy selects the right action based on the quality of a state-action pair, following equation (<ns0:ref type='formula' target='#formula_4'>6</ns0:ref>), where V(x, a) is the quality value of the states, i* corresponds to the optimal action index, i.e., the action index with the highest q-value, and x is the current state. On the other hand, the exploration-exploitation probability is given by ε = 10/(10 + T), where T corresponds to the step number, as assumed in this work.</ns0:p><ns0:formula xml:id='formula_4'>V (x, a) = (∑_{i=1}^{N} α i (x) × q(i, i*)) / (∑_{i=1}^{N} α i (x))<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>The functions in (7) allow obtaining the inferred action and its corresponding q-value, where x is the input value in state i, a is the inferred action, i o is the inferred action index, α i is the strength of the rule, and N is a positive number, N ∈ N+, which corresponds to the total number of rules.</ns0:p></ns0:div>
<ns0:div><ns0:formula xml:id='formula_5'>a(x) = (∑_{i=1}^{N} α i × a(i, i o )) / (∑_{i=1}^{N} α i (x)); Q(x, a) = (∑_{i=1}^{N} α i × q(i, i o )) / (∑_{i=1}^{N} α i (x))<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>Also, it is necessary to calculate an eligibility value, e(i, j), during the q-value updating phase by means of equation (<ns0:ref type='formula' target='#formula_6'>8</ns0:ref>),</ns0:p><ns0:formula xml:id='formula_6'>e(i, j) = λγ e(i, j) + α i (x) / ∑_{i=1}^{N} α i (x), if j = i o ; λγ e(i, j), otherwise<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>where j is the selected action, γ is the discount factor, 0 ≤ γ ≤ 1, and λ is the decay parameter in the range [0, 1). Next, equation (<ns0:ref type='formula' target='#formula_8'>9</ns0:ref>) is used for the ∆Q calculation, where r corresponds to the reward.</ns0:p><ns0:formula xml:id='formula_8'>∆Q = r + γV (x, a) − Q(x, a)<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>And finally, equation (<ns0:ref type='formula' target='#formula_9'>10</ns0:ref>) updates the q-value, where ε is a small number, ε ∈ (0, 1), which affects the learning rate,</ns0:p><ns0:formula xml:id='formula_9'>∆q(i, j) = ε × ∆Q × e(i, j)<ns0:label>(10)</ns0:label></ns0:formula></ns0:div>
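<ns0:div><ns0:p>To make equations (4)–(10) concrete, a minimal Python sketch of the tabular Q-learning update and the FQL update step follows. This is our illustration, not the authors' released implementation; γ = 0.5 and ε = 0.01 come from the values stated later in this paper, while λ = 0.9 and all function names are our assumptions.</ns0:p><ns0:p>
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.5):
    # Eq. (4): tabular Q-learning; SARSA (eq. 5) would use Q[s_next, a_next]
    # instead of the max over actions.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

def state_value(q, strengths):
    # Eq. (6): strength-weighted average of the best q-value of each rule.
    best = q.max(axis=1)                         # q(i, i*)
    return float((strengths * best).sum() / strengths.sum())

def fql_update(q, e, strengths, chosen, r, v_next, q_prev,
               gamma=0.5, lam=0.9, eps=0.01):
    # q:         (N, A) q-value table; e: eligibility traces, same shape.
    # strengths: alpha_i(x) of the N rules; chosen: selected action per rule.
    norm = strengths.sum()
    e *= lam * gamma                             # eq. (8), "other case" decay
    for i in range(q.shape[0]):
        e[i, chosen[i]] += strengths[i] / norm   # eq. (8), j == i_o
    dq = r + gamma * v_next - q_prev             # eq. (9)
    q += eps * dq * e                            # eq. (10)
</ns0:p></ns0:div>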
<ns0:div><ns0:head>Proposed hardware</ns0:head><ns0:p>The documented results come from navigation simulations only; testing on a physical robot is left for future work. However, this section presents the hardware proposal developed to carry out the navigation proposal and explains some technical details to be considered during the implementation. The proposed hardware, shown in Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>, is the King Spider robot from the BIOLOID Premium kit of ROBOTIS <ns0:ref type='bibr'>(ROBOTIS, 2019)</ns0:ref>. This robot is configured with six legs, driven by a total of 18 servo-motors, with 3 degrees of freedom per leg. Each actuator uses the AX-12A servo-motor, which has mobility from 0° to 360° and is equipped with a serial interface to establish communication with the processing unit, a Raspberry Pi running the Raspbian Buster operating system. The board has an I²C interface for energy monitoring and a pair of GPIOs to connect the ultrasonic sensor. The INA260 high-accuracy current and power monitor was selected to measure the battery charge level and the power consumption. For detecting obstacles, the HC-SR04 ultrasonic sensor was chosen, which allows detecting objects at a distance of 40 cm. If the kinematics has to be adapted to a particular robot configuration, the software provided in this paper would have to be modified; the simplest way would be to call a function that interprets the coordinates within the control module to obtain the required parameters. One of the variables that could be affected if the robotic configuration changes is the battery charge estimation, as other robotic platforms could use batteries with different features. Therefore, the system's behavior could change according to the discharge range. This approach uses modules separated from the program's main body. The input values come from functions that update and normalize the battery level and the distances, avoiding significant modifications to the proposal when implementing it on other hardware.</ns0:p></ns0:div>
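<ns0:div><ns0:p>As an illustration of how a measured bus voltage could be mapped to the battery percentage used by the decision module, the following sketch linearly interpolates over a discharge curve like the one in Fig. 2. The breakpoint table below is hypothetical, written for this text; real values would come from characterizing the robot's battery.</ns0:p><ns0:p>
# Hypothetical discharge-curve breakpoints (voltage, percent); these are
# placeholders, not measured data from the paper.
CURVE = [(12.6, 100.0), (11.8, 60.0), (11.1, 20.0), (10.5, 0.0)]

def battery_percent(voltage):
    # Piecewise-linear interpolation over the discharge curve.
    if voltage >= CURVE[0][0]:
        return 100.0
    if voltage <= CURVE[-1][0]:
        return 0.0
    for (v_hi, p_hi), (v_lo, p_lo) in zip(CURVE, CURVE[1:]):
        if v_lo <= voltage <= v_hi:
            t = (voltage - v_lo) / (v_hi - v_lo)
            return p_lo + t * (p_hi - p_lo)
    return 0.0
</ns0:p></ns0:div>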
<ns0:div><ns0:head>BACKGROUND</ns0:head><ns0:p>The proposed navigation system comprises three main modules: one for the control of the robot, one for path planning, and one for decision making. This section begins with a description of the path planning module, which uses the APF method, and then continues with a description of the decision-making module.</ns0:p></ns0:div>
<ns0:div><ns0:head>Path planning module</ns0:head><ns0:p>This module implements the APF method, and it considers operation under environmentally controlled conditions. This work employs a static environment with one battery charging station, obstacles, and a defined destination point. The workspace is a 10 × 10 grid, and each space of the grid is equivalent to one step of the robot. The obstacles were arbitrarily distributed in the workspace to define diverse scenarios. For this module, the attraction factor used in equation (<ns0:ref type='formula' target='#formula_0'>1</ns0:ref>) has a value of 2.3, while the repulsion factor in equation (<ns0:ref type='formula'>2</ns0:ref>) is 61.5. The robot's movements were limited to left, right, forward, and backward. An object's position is equivalent to a coordinate pair (x, y), whereas the robot's position is equivalent to the coordinate pair (x robot , y robot ); when the robot experiences a position change, its coordinates increase or decrease by one unit. Algorithm 1 shows the steps followed by this module for the path generation; a sketch of this descent is given after this paragraph.</ns0:p><ns0:p>Algorithm 1 handles the term neighborhood, which refers to the coordinate pairs (x robot , y robot + 1), (x robot , y robot − 1), (x robot + 1, y robot ), and (x robot − 1, y robot ). When the path generation concludes, the result is a list of the robot's coordinates to reach the destination. This module requires the robot's information, i.e., the robot's position, the destination, and the obstacles' positions.</ns0:p></ns0:div>
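<ns0:div><ns0:p>A minimal, self-contained Python sketch of Algorithm 1, interpreted as a greedy descent over the APF potential on the four-connected grid, might look as follows. This is our illustration under stated assumptions (the obstacle radius rho0 = 2.0 is a placeholder), not the released code, which is available at the repository cited in the Results section.</ns0:p><ns0:p>
import math

def potential(q, goal, obstacles, xi=2.3, eta=61.5, rho0=2.0):
    # Total APF potential at q: attractive quadratic term plus the
    # repulsive terms of obstacles closer than rho0.
    d_goal = math.hypot(q[0] - goal[0], q[1] - goal[1])
    u = 0.5 * xi * d_goal ** 2
    for qo in obstacles:
        d = math.hypot(q[0] - qo[0], q[1] - qo[1])
        if 0.0 < d < rho0:
            u += 0.5 * eta * (1.0 / d - 1.0 / rho0) ** 2
    return u

def generate_path(start, goal, obstacles, grid=10, max_steps=200):
    # Greedy descent: move to the free four-connected neighbor with the
    # lowest potential until the goal is reached or the robot is trapped.
    path, q = [start], start
    for _ in range(max_steps):
        if q == goal:
            break
        neighborhood = [(q[0], q[1] + 1), (q[0], q[1] - 1),
                        (q[0] + 1, q[1]), (q[0] - 1, q[1])]
        free = [n for n in neighborhood if n not in obstacles
                and 0 <= n[0] < grid and 0 <= n[1] < grid]
        if not free:
            break  # trapped: the local minimum case discussed later
        q = min(free, key=lambda n: potential(n, goal, obstacles))
        path.append(q)
    return path
</ns0:p></ns0:div>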
<ns0:div><ns0:head>Decision-making module</ns0:head><ns0:p>With the proposed decision-making module, we demonstrate that the mobile robot can appropriately choose between heading to the goal, going to the battery charging station, or remaining static. This module comprises three main sections: fuzzification of the input variables, fuzzy rule evaluation, and selection of the action to execute. Figure <ns0:ref type='figure' target='#fig_4'>4</ns0:ref> depicts an overall view of the proposed system's architecture.</ns0:p><ns0:p>Given that a robot is powered by a battery and travels from a starting point to a destination point, the battery loses electric charge during displacement. Hence, the battery voltage level (BL) is an input to the fuzzy inference system. Fuzzy rules can be formulated, using common sense, as follows. If the battery level is low and a charging station is nearby, the robot could choose to go to the charging station to recharge the battery. Even so, if the robot is far from the charging station but close to the destination point, reaching the destination point would be prioritized instead of changing course to the charging station. Hence, the distance between the robot and the destination is the second input, and the distance between the robot and the charging station is the third input. Using DRD in equation (11), the distance values are normalized to the interval [0, 100], where DRD is the distance between the robot and the destination, CDD is the current distance to the goal, and IDD is the initial distance to the goal. Similarly, the system normalizes the distance between the robot and the charging station with the equation for DRC in (11), where DRC is the distance between the robot and the charging station, CDC is the current distance to the charging station, and IDC is the initial distance to the charging station. Meanwhile, the battery level value corresponds to a percentage value between 0 and 100%.</ns0:p></ns0:div>
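<ns0:div><ns0:p>Equation (11) itself appears only in the original figure, so the following sketch encodes an assumed form: the ratio of current to initial distance, scaled to [0, 100]. The function name and the exact formula are our assumptions.</ns0:p><ns0:p>
def normalize_distance(current, initial):
    # Assumed form of eq. (11): percentage of the initial distance
    # remaining, clipped to [0, 100].
    if initial <= 0:
        return 0.0
    return max(0.0, min(100.0, 100.0 * current / initial))

# Fuzzy inputs: drd = normalize_distance(cdd, idd)
#               drc = normalize_distance(cdc, idc)
</ns0:p></ns0:div>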
<ns0:div><ns0:p>The first input variable, BL, has four fuzzy sets, labeled Empty, Very Low, Low, and Full. The second input variable, DRD, has three fuzzy sets defined as Close, Near, and Far. Finally, the third input variable, DRC, has three fuzzy sets: Close, Near, and Far. Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref> depicts the three input variables with their corresponding triangle-type fuzzy sets. By considering all the possible combinations in the rules' antecedents, the fuzzy system could operate with a total of 36 fuzzy rules, herein named FQL-36. Each rule is formulated to perform a specific robot action (RA) in response to the observed inputs. Actions denoted by a1, a2, and a3 are associated with a numerical value q(i, j), where the index i corresponds to the i-th fuzzy rule, and j is the index of the j-th action. Action index 1 refers to going to the destination point, action index 2 refers to going to the charging station, and action index 3 means remaining static. The action selection is made with Algorithm 2. Likewise, the fuzzy actions are implemented by three output singleton-type fuzzy sets, as depicted in Fig. <ns0:ref type='figure' target='#fig_7'>6</ns0:ref>. The exploration-exploitation policy described in the FQL section serves to make the decision and choose an action. Expression (12) gives the reward function used; the discount factor and the learning rate are equal to 0.5 and 0.01, respectively. Given that some rules are redundant and that the fuzzy inference system may not fire any of them, some fuzzy rules are not considered, reducing the fuzzy rule database from 36 to 20, herein named</ns0:p><ns0:formula xml:id='formula_10'>r = +10 if</ns0:formula></ns0:div>
<ns0:div><ns0:p>FQL-20. The reduced set of fuzzy rules is then defined as shown in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>, where AC corresponds to any case, and the symbol | refers to the logic function OR.</ns0:p></ns0:div>
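<ns0:div><ns0:p>As an illustration of the fuzzification step feeding these rules, the following sketch computes triangular memberships and a MIN-combined rule strength. The set breakpoints are placeholders written for this text, since the exact parameters appear only in Fig. 5.</ns0:p><ns0:p>
def tri(x, a, b, c):
    # Triangular membership; a == b or b == c give shoulder-shaped sets.
    if x < a or x > c:
        return 0.0
    if x <= b:
        return 1.0 if b == a else (x - a) / (b - a)
    return 1.0 if c == b else (c - x) / (c - b)

# Placeholder fuzzy sets (breakpoints are illustrative, not from the paper).
BL_SETS = {'Empty': (0, 0, 15), 'Very Low': (5, 20, 35),
           'Low': (25, 45, 65), 'Full': (55, 100, 100)}
DIST_SETS = {'Close': (0, 0, 35), 'Near': (25, 50, 75), 'Far': (65, 100, 100)}

def rule_strength(bl, drd, drc, bl_label, drd_label, drc_label):
    # Strength alpha_i of one rule: MIN t-norm over its three antecedents.
    return min(tri(bl, *BL_SETS[bl_label]),
               tri(drd, *DIST_SETS[drd_label]),
               tri(drc, *DIST_SETS[drc_label]))
</ns0:p></ns0:div>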
<ns0:div><ns0:head>Navigation system proposal</ns0:head><ns0:p>The modules described so far integrate the robot's navigation system. The system plans two paths: one to conduct the robot to the goal, and another to proceed to the battery charging station. If the robot encounters an obstacle during its movement, it re-plans its path to the current destination. If the robot's action changes between going to the goal and going to the charging station, or vice versa, it plans a new course if necessary. These changes depend on the results of the decision-making module. The steps in Algorithm 3 describe the navigation proposal.</ns0:p></ns0:div>
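<ns0:div><ns0:p>A compact Python sketch of the loop in Algorithm 3 follows, again as our illustration of the published pseudocode rather than the released implementation. It reuses generate_path from the earlier sketch, and select_action stands in for Algorithm 2.</ns0:p><ns0:p>
def navigate(start, goal, station, obstacles, select_action, max_steps=500):
    # Follow Algorithm 3: keep a planned path and switch destination
    # whenever the decision-making module changes the selected action.
    pos, destiny = start, goal
    path = generate_path(pos, destiny, obstacles)
    for _ in range(max_steps):
        if pos == destiny:
            break                        # reached goal (or station)
        action = select_action(pos)      # Algorithm 2: 1=a1, 2=a2, 3=a3
        next_destiny = {1: goal, 2: station, 3: pos}[action]
        if next_destiny != destiny:      # decision changed: new course
            destiny = next_destiny
            path = generate_path(pos, destiny, obstacles)
        if len(path) > 1:
            pos = path[1]                # advance one grid step
            path = generate_path(pos, destiny, obstacles)  # plan on the move
    return pos
</ns0:p></ns0:div>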
<ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>This section introduces the simulated results obtained from this navigation proposal. The system simulation with 36 fuzzy rules is denoted as FQL-36, and the reduced proposal with only 20 rules is referred to as FQL-20. The proposal implementation was programmed using Python 3.8 with the PyCharm Community Edition IDE on a computer with an Ubuntu 20.04 operating system. The source code designed for this research can be obtained at https://github.com/ElizBth/Reactive-Navigation-Under-a-Fuzzy-Rules-Based-Scheme-and-Reinforcement-Learning. This proposal can be implemented from scratch by employing the equations related to the APF and fuzzy Q-learning sections, following Algorithms 1, 2, and 3 of the Background Section. Several methods serve to compare this proposal's performance. The first is a simple two-threshold method whose input is the battery level: if the battery level is above the first threshold of 40%, the selected action is a1; if it is above the second threshold of 20%, the selected action is a2; otherwise, the selected action is a3. Secondly, since the FQL method results from combining the QL method with a FIS, these methods are a good starting point to compare performance against the introduced proposal. Like the FQL proposal, this work uses two FIS, denoted as FIS-36 and FIS-20. The last method considered is the SARSA method, which is similar to the QL method. The inputs of these methods correspond to the battery level and the distances to the destination and the charging station; additionally, the RL methods use the reward function expressed in (12). Figure <ns0:ref type='figure' target='#fig_9'>7a</ns0:ref> shows the planned number of steps to reach the goal for the 15 scenarios proposed in Fig. <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>.</ns0:p></ns0:div>
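<ns0:div><ns0:p>The threshold baseline described above is simple enough to state in a few lines; a minimal sketch of our own, mirroring the two-threshold description, is:</ns0:p><ns0:p>
def threshold_policy(battery_level):
    # Two-threshold baseline: a1 = go to goal, a2 = go to charging
    # station, a3 = remain static.
    if battery_level > 40.0:
        return 1   # a1
    if battery_level > 20.0:
        return 2   # a2
    return 3       # a3
</ns0:p></ns0:div>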
<ns0:div><ns0:head>Path planning simulations</ns0:head></ns0:div>
<ns0:div><ns0:p>The number of movements the robot must complete to reach the destination or the charging station, starting from the departure point, is similar in all cases, resulting in planned paths of 14 positions on average. Figure <ns0:ref type='figure' target='#fig_9'>7b</ns0:ref> shows the module's computing time, measured from parameter initialization until it finds a collision-free path. From this graph, the average time consumed to generate the path in each scenario was between 3.2 and 3.8 ms. The navigation system's first planned paths serve only during the early navigation stages under each scenario, since the system is prone to change direction along the path; the new computations take the current position as a reference to calculate a new path.</ns0:p></ns0:div>
<ns0:div><ns0:head>System simulations</ns0:head><ns0:p>The variables used to model the system behavior are the accumulated reward and the number of steps taken to complete the robot's task in each scenario. The simulations start with the battery at the maximum charge level. Figure <ns0:ref type='figure' target='#fig_10'>8</ns0:ref> shows the accumulated reward obtained in each scenario after reaching ten task successes. Comparing the performance of FQL-36 and FQL-20, the reward does not change significantly in either case. There are some slight variations in the chosen actions, which may have been random or greedy. Observing the accumulated reward, similar behavior appears only when the tenth success is reached. Figure <ns0:ref type='figure' target='#fig_11'>9</ns0:ref> shows the accumulated reward in this attempt with QL, SARSA, FQL-36, and FQL-20.</ns0:p><ns0:p>However, there is a more significant difference in behavior when comparing with QL and SARSA; this indicates that there were more cases where the simulation was close to the destination or the charging station, and since more steps were executed, it is logical to see a larger reward than with FQL-36 and FQL-20. To confirm this behavior, one should observe the number of steps executed during the simulation and the number of epochs it took to reach the destination.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_12'>10</ns0:ref> shows the number of epochs required by the QL, SARSA, FQL-36, and FQL-20 methods. Here, FQL-20 presents better results than the other methods since, in 80% of the scenarios, it requires only ten epochs to reach the ten successes, followed by FQL-36, while the other two methods take more epochs to reach the goal. Given these results, this paper's proposal already presents an advantage over classical RL methods by requiring fewer epochs for the system to learn which decisions to make to complete its goal. Figure <ns0:ref type='figure' target='#fig_12'>10</ns0:ref> does not include the threshold-based method or the FIS, as they do not require a learning stage.</ns0:p></ns0:div>
<ns0:div><ns0:p>Figure <ns0:ref type='figure' target='#fig_13'>11</ns0:ref> shows the average number of steps involved in reaching the goal, averaged over the ten successes obtained by using either of the systems driven by 36 or 20 fuzzy rules. The results show that a difference of only three steps was obtained in two scenarios. It is also interesting to verify that the system's overall behavior is better with the base of 20 fuzzy rules. Therefore, the simplicity in the design of the FIS agrees with the effectiveness of the results. Figure <ns0:ref type='figure' target='#fig_14'>12</ns0:ref> shows the comparison of the number of steps executed to obtain the tenth success with the methods mentioned above. As with the epochs, the number of steps performed with the QL and SARSA methods was higher than with the FQL-36 and FQL-20 proposals. However, even after this number of training epochs, none of them matches the number of steps executed by the deterministic methods. The FQL-36 shows the fewest deviations towards the charging station, followed by the FQL-20, while the other two methods present a higher number of deviations than this proposal. Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref> shows the total deviations computed in each scenario.</ns0:p><ns0:p>Likewise, Fig. <ns0:ref type='figure' target='#fig_15'>13</ns0:ref> shows the distribution of the action selection made during the simulation, where action a3 was chosen the fewest times, while the action of going to the destination was</ns0:p></ns0:div>
<ns0:div><ns0:p>the one chosen the most times. With these graphs, it is possible to observe how this action selection matters, given that it takes more steps to reach the destination. Besides, Fig. <ns0:ref type='figure' target='#fig_16'>14</ns0:ref> shows the remaining battery level at the end of the simulations in each of the scenarios.</ns0:p><ns0:p>The threshold method and the FIS end up with very similar battery levels. These values correspond to the maximum remaining battery levels attainable in each scenario, since these are the methods that perform the fewest steps to reach the destination.</ns0:p><ns0:p>A possible event occurs when the robot changes its direction to the battery charging station and then continues on its way to its destination. This behavior happens with the FQL-20 method in scenarios 6, 7, 11, and 14. Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref> shows that the FQL-20 system deviated from its destination along 24, 22, 21, and 37 steps, respectively. There is a particular advantage in letting the system learn which decisions to make, since this makes it more flexible, unlike a deterministic method, which could be more rigid.</ns0:p><ns0:p>The execution time is a variable that helps to analyze the proposal's performance. Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref> shows the execution time taken by the simulation to complete the task of going to the goal successfully ten times. The threshold method and the FIS finished with the shortest delay, employing less than 20 ms, while the QL and SARSA methods take the longest, exceeding 1 min and, in one case, even reaching 65 min. The proposal made in this paper has an intermediate execution time of up to 19 s.</ns0:p><ns0:p>One of the reasons why the FQL proposal is advantageous is the reduced number of states that a robot can take during navigation, which is reflected in the memory usage, this number being smaller than for SARSA and QL, as shown in Table <ns0:ref type='table' target='#tab_5'>4</ns0:ref>. This proposal only involves 20 or 36 states, while the QL and SARSA methods, with the same number of inputs, involve 1,030,301 states. Table <ns0:ref type='table' target='#tab_6'>5</ns0:ref> shows the memory usage of each method compared to this article's proposal, obtained with the aid of the Python profiler and measured throughout the simulated navigation process.</ns0:p></ns0:div>
<ns0:div><ns0:p>During the development of this work, a validation process was used during the execution of the simulations. The validation procedure consisted of the following: first, after the execution of a new step, it was validated whether the coordinates of the simulated robot were at the destination; if so, the task was terminated. Then it was validated whether the robot had reached the charging station; if so, the battery charge was restored to 100%, and the simulation continued. Finally, with the robot's coordinates and the obstacles, it was verified whether the robot had collided; if so, the simulation indicated a failure and was restarted; if not, the simulation continued. On average, per scenario, the time taken for validation was 74 ms.</ns0:p></ns0:div>
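<ns0:div><ns0:p>That per-step validation is straightforward to express in code; a minimal sketch of the checks described above (our illustration, with hypothetical names) is:</ns0:p><ns0:p>
def validate_step(pos, goal, station, obstacles, battery):
    # Returns (status, battery) after one simulated step.
    if pos == goal:
        return 'done', battery      # task completed
    if pos == station:
        return 'charging', 100.0    # battery restored to 100%
    if pos in obstacles:
        return 'failure', battery   # collision: restart the episode
    return 'running', battery
</ns0:p></ns0:div>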
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>The proposed navigation system enables a mobile robot to move from a starting point to a destination point, or to decide to change its course to the battery charging station, under a fuzzy rule-based scheme combined with a reinforcement learning system. The proposed path planning module enables a robot to move in partially known environments using a scheme where the robot knows and senses obstacles, planning the path while moving. This module's use provides the system with a reactive behavior in the face of environmental conditions, and thanks to its simplicity, the implementation is painless in embedded systems.</ns0:p><ns0:p>The reactive method introduced in this work allows finding a short path to the destination, although not in all cases, since the system can avoid passing through a cluster of obstacles and take a safer route. Nonetheless, the method has its limitations. Under certain environmental conditions, the system falls into the local minimum problem, which causes it to get stuck at a point on the map and not reach the destination. Usually, this occurs when there are groups of obstacles in the form of a fence, and the robot ends up obstructed. Besides, being a global navigation method, the system does not learn, and it must know the destination point, which limits navigation in unknown terrain to a certain degree. In future work, other hybrid methodologies should be explored to avoid falling into the local minimum problem and handle this behavior.</ns0:p><ns0:p>The resulting proposal for decision making based on FQL, alongside the path planning module, yielded a navigation system with a certain level of autonomy while handling the ARP, given that the proposed system learns which actions to execute, on a trial and error basis, as a function of the input variables, namely the battery level and the distances between the robot and the possible destination points. The tasks chosen within the decision-making module of this proposal were three. Their importance lies in the fact that the first action corresponds to the displacement to a destination, one of the primary tasks performed by a mobile robot, which a human would expect the robot to execute most of the time. Likewise, as the battery charge level was paramount to completing the main task, the second action was to go to a battery charging station. With this, the decision module would seem to be complete. However, a situation could arise in which the robot has a very low battery level and reaches neither the destination nor the charging station. This paper considers the third action because a robot should be suspended or shut down to avoid causing damage to its electrical/electronic system. An expert could easily define when to take a particular action; what is striking about this work, though, lies in observing the actions selected autonomously by a learning agent. During the simulation, it was possible to observe the decision-making process and how it improved during the training, until the agent selected the actions that helped it complete the journey to the destination. The worst result could be that the robot always remained static; however, the obtained results show that the robot learned that this was not the essential task, and it reached the destination. 
Nevertheless, such behavior did not occur in the results observed under the proposed scenarios.</ns0:p><ns0:p>During the simulations, the proposed system's behavior does not improve on the number of steps required to reach the destination compared with the threshold method and the FIS, but a significant improvement is observed compared to the QL and SARSA methods. This improvement is present in the number of steps executed and in the number of training epochs required to achieve ten consecutive successes. During the decision-making stage, the actions that guided the robot to the destination were selected faster than with the classic RL methods. Although the classic RL methods resulted in a higher accumulated reward, this did not imply that they solved the task in less time; the FQL proposal, which accumulated less reward than those methods, managed to complete the task in a shorter time. The execution times needed to complete the route to the destination were longer than the times taken by the threshold method and the FIS; however, it should be noted that these methods do not learn, which makes them inflexible compared to the proposal that uses FQL.</ns0:p><ns0:p>It is worth highlighting the result of the FQL-20 proposal, which manages to complete the route to the destination with a higher battery level than the threshold method and the FIS. This result is due to the system's flexibility when it decides to stop at the battery charging station and then continue on its way to its destination. This behavior could be an advantage as long as there are no execution timing restrictions. On the other hand, the proposal made in this article equates its memory usage to that of the threshold method and the FIS, which dramatically improves on the RL methods. This result shows the system's simplicity in memory usage and its implementation viability on a limited embedded platform like the Raspberry Pi.</ns0:p><ns0:p>The navigation proposal improved the action selection phase, as observed through simulations, compared with the classical reinforcement learning methods QL and SARSA. The system prioritized and learned to move to a battery charging station when a battery charge reduction was detected over time, or to go to a predetermined destination. The system's learning depends on the established rules and on the reward function that assigns the prizes and punishments. The reduction in the number of fuzzy rules employed led to an improvement in completing the assigned navigation tasks.</ns0:p><ns0:p>For further work, the conducted simulations will be translated into real testing under the considered scenarios through the use of the hexapod-type robot proposed in the Introduction Section. Eventually, the system will be implemented in dynamic and less structured environments, to take advantage of the six-legged robot's adaptability and verify that the proposal solves the autonomous recharging problem in a mobile robot. It is expected that the computing performance on a limited embedded platform will not degrade drastically.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>The proposed system allows a mobile robot to move reactively in an environment with dispersed static obstacles, and the robot can decide to change its way to the charging station as soon as it considers it necessary, or to remain static under a critical state.</ns0:p><ns0:p>System simplification results from using a scheme based on fuzzy rules, which allows an expert to limit the number of states into which the robot may fall, dramatically reducing its complexity compared to the classic reinforcement learning scheme that takes each sensed voltage level, or charge percentage point, as a system state and therefore consumes more memory space than this proposal. Hence, a rule-based system is advantageous when working with systems with limited computing capabilities.</ns0:p><ns0:p>With this in mind, the following observations arise. The proposed system demonstrates better results than the QL and SARSA methods when observing the number of epochs it takes to train the system and the number of steps executed with that amount of training epochs. The time the proposed system takes to complete the task is longer than with the deterministic methods; however, it presents improvements compared to the SARSA and QL methods. It occupies fewer states and requires less memory than the classic RL methods, which is a great advantage considering that it equals the memory used by the threshold-based method and the FIS.</ns0:p><ns0:p>According to the application, the proposal's flexibility of going to charge the battery and then heading to the destination can be advantageous as long as time is not the fundamental factor for fulfilling the tasks. Although this work does not address the energy consumption improvement problem, we instead try to guarantee the robot's battery availability to complete its task, this being an interesting research topic to develop in future work.</ns0:p><ns0:p>Finally, when combining the decision-making module based on the FQL method with a path planning module, the decision-making is critical, since the generation of a route to the destination usually does not consider the robot's energy levels. In this paper's case, a simple planning method was used that does not have a learning curve and, with the elements of the environment, generates a path from the resulting forces; it does not consider the battery charge level. However, it would be attractive to explore in future work the use of a route planning module that applies deep learning techniques, and its implementation in embedded systems.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>The King Spider robot, equipped with: 3 DOF per leg, I²C, GPIOs, INA260, and HC-SR04 ultrasonic sensor.</ns0:figDesc><ns0:graphic coords='6,250.49,501.33,196.06,145.22' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Battery discharge curve.</ns0:figDesc><ns0:graphic coords='7,237.43,113.86,222.18,100.80' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Path planning in diverse scenarios. Figures A to O correspond to the scenarios used for the simulations showing the routes generated, in each scenario, from the starting point to the battery charging station and destination.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. FQL architecture for decision making.</ns0:figDesc><ns0:graphic coords='9,169.98,359.52,357.08,346.79' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Algorithm 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Action selection. r pos ← insert R position; d pos ← insert D position; bcs pos ← insert BCS position; obs pos list ← initialize an empty list; state ← get current rule; action ← select an action; output ← compute with function a(x) in eq. (7); q ← compute the q value with function q(x) in eq. (7); new state ← get the current rule; reward ← get the reward from (12); states value ← compute with eq. (6); ∆Q ← compute with eq. (9); eligibility ← get value from eq. (8); new q ← compute the new q with eq. (10); Update the Q value in the q-table; return the action;</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Input variables and fuzzy sets. (A) shows the sets for the BL input; (B) the sets for the DRD input; and (C) the sets for the DRC input.</ns0:figDesc><ns0:graphic coords='10,147.64,386.90,401.76,115.83' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Robot's action (RA), output variable with singleton-type fuzzy sets defined. Fuzzy singleton a1 is the action for heading to the goal, a2 refers to going to the charging station, and action a3 means to remain static.</ns0:figDesc><ns0:graphic coords='11,287.62,63.78,121.80,106.85' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Algorithm 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Navigation. r pos ← initial position; d pos ← insert D position; bcs pos ← insert BCS position; Initialize obs pos list; destiny ← d pos; Generate the MF's and rules; D path, BCS path ← generate the paths; while r pos != destiny do action ← get from action selection; next destiny ← update the destiny; if next destiny != destiny then Update the path to next destiny; destiny ← next destiny; Update next r pos; if there is an obstacle in next r pos then Set obstacle in obs pos list; Update the D path and BCS path; else Move to new position; Update r pos;</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Number of positions computed to reach the destination or the charging station from the point of departure (A) and computing time lag observed for path planning (B).</ns0:figDesc><ns0:graphic coords='12,152.74,576.02,391.56,97.81' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Accumulated reward obtained at each scenario under simulated test after reaching ten task successes.</ns0:figDesc><ns0:graphic coords='13,147.09,260.08,402.87,101.40' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Accumulated reward on tenth success.</ns0:figDesc><ns0:graphic coords='13,154.11,421.89,388.83,103.12' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. Number of epochs needed to reach the ten successes.</ns0:figDesc><ns0:graphic coords='14,156.37,63.78,384.31,103.12' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11. Number of simulated steps involved in reaching the goal by using any of the systems driven by 36 or 20 fuzzy rules.</ns0:figDesc><ns0:graphic coords='14,149.74,206.48,397.57,101.40' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12. Number of steps executed on the 10th success.</ns0:figDesc><ns0:graphic coords='14,148.96,363.41,399.13,138.45' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure 13 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Figure 13. Distribution of action selection for each scenario concerning the approaches for (A) Threshold, (B) FIS, (C) QL, (D) SARSA, (E) FQL-36 and, (F) FQL-20.</ns0:figDesc><ns0:graphic coords='15,153.72,209.74,389.60,199.45' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>Figure 14 .</ns0:head><ns0:label>14</ns0:label><ns0:figDesc>Figure 14. Remaining battery level at the end of the simulations in each scenario and according to the evaluated approaches.</ns0:figDesc><ns0:graphic coords='16,155.71,63.78,385.63,134.94' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='8,191.02,363.72,315.00,318.67' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>The twenty rules of the reduced FIS used in the FQL-20.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Rule</ns0:cell><ns0:cell>BL</ns0:cell><ns0:cell cols='2'>DRD DRC</ns0:cell><ns0:cell>Output</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>Full</ns0:cell><ns0:cell>AC</ns0:cell><ns0:cell>AC</ns0:cell><ns0:cell>action=a1|a2|a3 with q value=q(1,1)|q(1,2)|q(1,3)</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>Low</ns0:cell><ns0:cell>Far</ns0:cell><ns0:cell>Far</ns0:cell><ns0:cell>action=a1|a2|a3 with q value=q(2,1)|q(2,2)|q(2,3)</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>Low</ns0:cell><ns0:cell>Far</ns0:cell><ns0:cell cols='2'>Near action=a1|a2|a3 with q value=q(3,1)|q(3,2)|q(3,3)</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>Low</ns0:cell><ns0:cell>Far</ns0:cell><ns0:cell cols='2'>Close action=a1|a2|a3 with q value=q(4,1)|q(4,2)|q(4,3)</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>Low</ns0:cell><ns0:cell>Near</ns0:cell><ns0:cell>Far</ns0:cell><ns0:cell>action=a1|a2|a3 with q value=q(5,1)|q(5,2)|q(5,3)</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>Low</ns0:cell><ns0:cell>Near</ns0:cell><ns0:cell cols='2'>Near action=a1|a2|a3 with q value=q(6,1)|q(6,2)|q(6,3)</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>Low</ns0:cell><ns0:cell cols='3'>Near Close action=a1|a2|a3 with q value=q(7,1)|q(7,2)|q(7,3)</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>Low</ns0:cell><ns0:cell>Close</ns0:cell><ns0:cell>Far</ns0:cell><ns0:cell>action=a1|a2|a3 with q value=q(8,1)|q(8,2)|q(8,3)</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>Low</ns0:cell><ns0:cell cols='3'>Close Near action=a1|a2|a3 with q value=q(9,1)|q(9,2)|q(9,3)</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>Low</ns0:cell><ns0:cell cols='3'>Close Close action=a1|a2|a3 with q value=q(10,1)|q(10,2)|q(10,3)</ns0:cell></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell>Very Low</ns0:cell><ns0:cell>Far</ns0:cell><ns0:cell>Far</ns0:cell><ns0:cell>action=a1|a2|a3 with q value=q(11,1)|q(11,2)|q(11,3)</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell>Very Low</ns0:cell><ns0:cell>Far</ns0:cell><ns0:cell cols='2'>Near action=a1|a2|a3 with q value=q(12,1)|q(12,2)|q(12,3)</ns0:cell></ns0:row><ns0:row><ns0:cell>13</ns0:cell><ns0:cell>Very Low</ns0:cell><ns0:cell>Far</ns0:cell><ns0:cell cols='2'>Close action=a1|a2|a3 with q value=q(13,1)|q(13,2)|q(13,3)</ns0:cell></ns0:row><ns0:row><ns0:cell>14</ns0:cell><ns0:cell cols='2'>Very Low Near</ns0:cell><ns0:cell>Far</ns0:cell><ns0:cell>action=a1|a2|a3 with q value=q(14,1)|q(14,2)|q(14,3)</ns0:cell></ns0:row><ns0:row><ns0:cell>15</ns0:cell><ns0:cell cols='2'>Very Low Near</ns0:cell><ns0:cell cols='2'>Near action=a1|a2|a3 with q value=q(15,1)|q(15,2)|q(15,3)</ns0:cell></ns0:row><ns0:row><ns0:cell>16</ns0:cell><ns0:cell cols='4'>Very Low Near Close action=a1|a2|a3 with q value=q(16,1)|q(16,2)|q(16,3)</ns0:cell></ns0:row><ns0:row><ns0:cell>17</ns0:cell><ns0:cell cols='2'>Very Low Close</ns0:cell><ns0:cell>Far</ns0:cell><ns0:cell>action=a1|a2|a3 with q value=q(17,1)|q(17,2)|q(17,3)</ns0:cell></ns0:row><ns0:row><ns0:cell>18</ns0:cell><ns0:cell cols='4'>Very Low Close Near action=a1|a2|a3 with q value=q(18,1)|q(18,2)|q(18,3)</ns0:cell></ns0:row><ns0:row><ns0:cell>19</ns0:cell><ns0:cell cols='4'>Very Low Close Close action=a1|a2|a3 with q 
value=q(19,1)|q(19,2)|q(19,3)</ns0:cell></ns0:row><ns0:row><ns0:cell>20</ns0:cell><ns0:cell>Empty</ns0:cell><ns0:cell>AC</ns0:cell><ns0:cell>AC</ns0:cell><ns0:cell>action=a1|a2|a3 with q value=q(20,1)|q(20,2)|q(20,3)</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Number of registered deviations per scenario and method.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method/Scen.</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell cols='4'>10 11 12 13 14 15</ns0:cell></ns0:row><ns0:row><ns0:cell>QL</ns0:cell><ns0:cell>19</ns0:cell><ns0:cell cols='11'>76 324 35 154 17 191 84 234 16 9 25 69</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>28</ns0:cell></ns0:row><ns0:row><ns0:cell>SARSA</ns0:cell><ns0:cell cols='8'>187 134 104 12 54 14 110 9</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell cols='4'>59 91 11 27 208 38</ns0:cell></ns0:row><ns0:row><ns0:cell>FQL-36</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell cols='3'>11 10 10 1</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>FQL-20</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='2'>24 22</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell cols='2'>4 21 2</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>37</ns0:cell><ns0:cell>4</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Simulation execution time to complete the task of going to the goal successfully ten times, for each scenario and according to the evaluated approach.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Threshold</ns0:cell><ns0:cell>FIS-36</ns0:cell><ns0:cell>FIS-20</ns0:cell><ns0:cell>QL</ns0:cell><ns0:cell>SARSA</ns0:cell><ns0:cell>FQL-36</ns0:cell><ns0:cell>FQL-20</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(ms)</ns0:cell><ns0:cell>(ms)</ns0:cell><ns0:cell>(ms)</ns0:cell><ns0:cell>(min)</ns0:cell><ns0:cell>(min)</ns0:cell><ns0:cell>(s)</ns0:cell><ns0:cell>(s)</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>17.60</ns0:cell><ns0:cell>18.42</ns0:cell><ns0:cell>17.96</ns0:cell><ns0:cell>3.53</ns0:cell><ns0:cell>1.93</ns0:cell><ns0:cell>0.36</ns0:cell><ns0:cell>0.26</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>7.91</ns0:cell><ns0:cell>17.87</ns0:cell><ns0:cell>17.65</ns0:cell><ns0:cell>4.85</ns0:cell><ns0:cell>2.91</ns0:cell><ns0:cell>0.33</ns0:cell><ns0:cell>0.35</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>8.00</ns0:cell><ns0:cell>8.52</ns0:cell><ns0:cell>8.48</ns0:cell><ns0:cell>4.42</ns0:cell><ns0:cell>1.26</ns0:cell><ns0:cell>0.26</ns0:cell><ns0:cell>0.26</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>7.63</ns0:cell><ns0:cell>8.29</ns0:cell><ns0:cell>8.39</ns0:cell><ns0:cell>2.83</ns0:cell><ns0:cell>1.29</ns0:cell><ns0:cell>0.26</ns0:cell><ns0:cell>0.26</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>7.73</ns0:cell><ns0:cell>8.36</ns0:cell><ns0:cell>8.21</ns0:cell><ns0:cell>9.16</ns0:cell><ns0:cell>3.99</ns0:cell><ns0:cell>0.26</ns0:cell><ns0:cell>0.23</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>7.75</ns0:cell><ns0:cell>18.16</ns0:cell><ns0:cell>17.75</ns0:cell><ns0:cell>6.75</ns0:cell><ns0:cell>3.24</ns0:cell><ns0:cell>0.83</ns0:cell><ns0:cell>0.51</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>7.92</ns0:cell><ns0:cell>19.42</ns0:cell><ns0:cell>35.61</ns0:cell><ns0:cell>6.01</ns0:cell><ns0:cell>2.29</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.40</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>7.87</ns0:cell><ns0:cell>8.32</ns0:cell><ns0:cell>10.44</ns0:cell><ns0:cell>2.97</ns0:cell><ns0:cell>1.64</ns0:cell><ns0:cell>2.15</ns0:cell><ns0:cell>0.28</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>7.76</ns0:cell><ns0:cell>8.38</ns0:cell><ns0:cell>8.36</ns0:cell><ns0:cell>3.72</ns0:cell><ns0:cell>1.76</ns0:cell><ns0:cell>0.44</ns0:cell><ns0:cell>0.29</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>8.14</ns0:cell><ns0:cell>9.61</ns0:cell><ns0:cell>8.28</ns0:cell><ns0:cell>65.38</ns0:cell><ns0:cell>33.63</ns0:cell><ns0:cell>0.90</ns0:cell><ns0:cell>0.30</ns0:cell></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell>8.15</ns0:cell><ns0:cell>8.78</ns0:cell><ns0:cell>8.81</ns0:cell><ns0:cell>2.61</ns0:cell><ns0:cell>1.15</ns0:cell><ns0:cell>1.19</ns0:cell><ns0:cell>0.43</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell>7.80</ns0:cell><ns0:cell>8.26</ns0:cell><ns0:cell>8.30</ns0:cell><ns0:cell>28.91</ns0:cell><ns0:cell>15.64</ns0:cell><ns0:cell>18.70</ns0:cell><ns0:cell>0.27</ns0:cell></ns0:row><ns0:row><ns0:cell>13</ns0:cell><ns0:cell>8.46</ns0:cell><ns0:cell>8.32</ns0:cell><ns0:cell>8.41</ns0:cell><ns0:cell>5.94</ns0:cell><ns0:cell>2.36</ns0:cell><ns0:cell>19.76</ns0:cell><ns0:cell>0.26</ns0:cell></ns0:row><ns0:row><ns0:cell>14
</ns0:cell><ns0:cell>15.31</ns0:cell><ns0:cell>8.26</ns0:cell><ns0:cell>8.37</ns0:cell><ns0:cell>1.56</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>18.47</ns0:cell><ns0:cell>0.34</ns0:cell></ns0:row><ns0:row><ns0:cell>15</ns0:cell><ns0:cell>8.44</ns0:cell><ns0:cell>8.12</ns0:cell><ns0:cell>8.64</ns0:cell><ns0:cell>2.40</ns0:cell><ns0:cell>2.04</ns0:cell><ns0:cell>0.60</ns0:cell><ns0:cell>0.30</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Number of states or rules for each method.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell cols='3'>Threshold FIS-36 FIS-20</ns0:cell><ns0:cell>QL</ns0:cell><ns0:cell>SARSA</ns0:cell><ns0:cell cols='2'>FQL-36 FQL-20</ns0:cell></ns0:row><ns0:row><ns0:cell>Rules/States</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>36</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell cols='2'>1,030,301 1,030,301</ns0:cell><ns0:cell>36</ns0:cell><ns0:cell>20</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Amount of memory occupied by each method</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell cols='2'>Threshold FIS approach</ns0:cell><ns0:cell>QL</ns0:cell><ns0:cell cols='2'>SARSA FQL approach</ns0:cell></ns0:row><ns0:row><ns0:cell>Memory (MB)</ns0:cell><ns0:cell>48.9</ns0:cell><ns0:cell>49.2</ns0:cell><ns0:cell>121.1</ns0:cell><ns0:cell>121.1</ns0:cell><ns0:cell>49.2</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>of environmental conditions, and thanks to its simplicity, the implementation is painless in embedded</ns0:cell></ns0:row><ns0:row><ns0:cell>systems.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Centro de Investigación en Computación
Instituto Politécnico Nacional
Av. Juan de Dios Bátiz S/N,
Nueva Industrial Vallejo, Gustavo A. Madero,
07738, Ciudad de México, CDMX
https://www.cic.ipn.mx/index.php/es/
April 7th, 2021
Dear Editors,
The authors would like to thank the editor-in-chief, the associate editor and all the anonymous
reviewers for their constructive comments and positive support. We provide below a detailed account
of the changes that we have made in response to the comments that the reviewers have raised. We have
marked the corresponding changes in the new version. In addition, we have carefully polished the
presentation.
In particular, all the code can be found at https://github.com/ElizBth/Reactive-Navigation-Under-a-Fuzzy-Rules-Based-Scheme-and-Reinforcement-Learning. We have included links throughout the paper to the appropriate code repositories.
We hope that the new version of the paper now satisfies the requirements for publication in PeerJ.
MSc. Elizabeth López Lozada
On behalf of all authors.
Manuscript: “Reactive navigation under a fuzzy rules-based scheme and reinforcement learning for
mobile robots”
Comments by reviewer 1: Dear reviewer, thank you for your valuable comments. Below are the
responses and changes made in accordance with your comments. Please note that the changes have
been highlighted in yellow in the document.
Comment 1: The structure of this paper is not consistent to the objective of the work. In the
Introduction section, there should be more focus on the decision-making techniques rather than path
finding.
Answer:
Regarding this comment, it is worth mentioning that in the Introduction we have added, in lines 66-76, more information about recent works focused on decision-making techniques. In the related work, we have also added, in lines 139-155, information about proposals focused on the autonomous recharging problem.
Comment 2: There are many grammatical errors.
Answer:
Thanks for pointing this out. We have carefully revised the manuscript and corrected the
grammatical and spelling errors found throughout the entire document.
Comment 3: Focus on the writing on what you want to achieve in the results.
Answer:
Thank you for your valuable comment. We have added more information about decision-making techniques and recent proposals on the autonomous recharging problem. We have also improved the Background, Results, Discussion (in lines 577-582), and Conclusion sections in order to focus on the problem that we are addressing and the obtained results.
Comments by reviewer 2: Dear reviewer, thank you for your valuable comments. Below are the
responses and changes made in accordance with your comments. Please note that the changes have
been highlighted in green in the document.
Comment 1: This paper has a lot of typing and grammatical errors. The English should be properly
reviewed.
Answer:
Thanks for pointing this out. We have revised the manuscript and corrected the
grammatical and spelling errors found throughout the entire document.
Comment 2: Abstract is very fuzzy, and poor written.
Answer:
Dear reviewer, thanks for this valuable comment. We have revised the abstract and rewritten it to improve the writing and make it clear and concise. The new version of the abstract can be found in lines 15-29.
Comment 3: Paper main contribution is not clear.
Answer:
Thank you for pointing this out. Regarding this comment, the paper's main contribution lies in a reactive navigation scheme that allows a robot to learn, by trial and error, to make decisions to fulfill its tasks while maintaining a suitable battery level. We have clarified and added the main contribution to the manuscript in lines 95-97.
Comment 4: Authors should give more details about implementation, validation procedure, validation
time, computational complexity…
Answer:
Dear reviewer, thanks for your constructive comments. The answer is divided into four parts:
(1) Concerning the comment about the implementation: we have made some modifications in the manuscript to explain the implementation process. The location of the information required for implementation is as follows:
o In lines 415-419: The proposal was implemented in Python 3.8 with the PyCharm Community Edition IDE on a computer running Ubuntu 20.04. The software used to produce the results of this paper can be found at https://github.com/ElizBth/Reactive-Navigation-Under-a-Fuzzy-Rules-Based-Scheme-and-Reinforcement-Learning. The proposal can also be implemented from scratch by using the equations of the artificial potential fields and fuzzy Q-learning section, following Algorithms 1, 2, and 3 of the Background section.
o On page 9: Algorithm 1, used for path planning, was included.
o On page 12: Algorithm 2 was added.
o On page 12: There is the information needed for the decision-making implementation.
o On page 14: There is a table with the rules used for FQL-20.
o On page 15: The algorithm used for navigation was included.
(2) During the development of this work, a validation process was applied during the execution of the simulations. The procedure was as follows: first, after the execution of a new step, we checked whether the coordinates of the simulated robot were at the destination; if so, the task was terminated. Then we checked whether the robot had reached the charging station; if so, the battery charge was restored to 100% and the simulation continued. Finally, using the robot's coordinates and the obstacles, we verified whether the robot had collided; if so, the simulation reported a failure and was restarted; if not, the simulation continued. This information was added in lines 537-543 (a sketch of this step is given at the end of this answer).
(3) Concerning this comment: on average, per scenario, the time spent on validation was 74 ms; we added this information in lines 543-544.
(4) Regarding the comment about computational complexity, we have appended a series of tables that show the execution times and the amount of memory used during the simulations. Table 4 shows the number of states that the proposed method can take, together with the traditional reinforcement learning methods. Table 3 shows the time taken by the simulation to complete each of the proposed scenarios, and Table 5 shows the amount of memory that each method required during the execution of the simulations. These tables can be found on pages 20 to 21.
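As referenced in part (2) above, a minimal Python sketch of the per-step validation (the names are simplified for this letter and are not taken verbatim from the repository):

    def validate_step(robot_pos, goal, station, obstacles, battery_level):
        # One set of checks per simulation step, in the order described in part (2).
        if robot_pos == goal:
            return 'success', battery_level   # destination reached, task ends
        if robot_pos == station:
            battery_level = 100.0             # recharge, simulation continues
        if robot_pos in obstacles:
            return 'failure', battery_level   # collision, simulation restarts
        return 'continue', battery_level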
Comment 5: Authors should justify their improvements against previous results.
Answer:
Concerning this comment, we have added more tables and graphs to show the improvements of our approach over other methods. Also, in the Discussion section, we added:
• In lines 583-602: During the simulations, the proposed system does not improve on the number of steps traveled toward the destination compared with the threshold method and the FIS, but a significant improvement is observed compared with the QL and SARSA methods. This improvement appears both in the number of steps executed and in the number of training epochs required to achieve ten consecutive successes. During the decision-making stage, the actions that guided the robot to the destination were selected faster than with the classic RL methods. Although the classic RL methods obtained a higher accumulated reward, this did not imply that they solved the task in less time, while the FQL proposal, despite a lower reward, managed to complete the task in a shorter time. The execution times needed to complete the route to the destination were longer than with the threshold method and the FIS; however, it should be noted that these methods do not learn, which makes them inflexible compared with the proposal that uses FQL.
It is worth highlighting the result of the FQL-20 proposal, which manages to complete the route to the destination with a higher battery level than the threshold method and the FIS. This result is due to the system's flexibility: it decides to stop at the battery charging station and then continue on its way to the destination. This behavior could be an advantage when there are no time constraints; however, depending on the application, it could be a disadvantage if the system's purpose is to reach the destination as soon as possible.
On the other hand, the proposal made in this article matches the memory usage of the threshold method and the FIS and dramatically improves on the RL methods. This result shows the system's simplicity in memory usage; it could be implemented on microcomputers such as a Raspberry Pi, but not on an Arduino because of the lack of computational resources.
Comment 6: Authors should include technical information in order the reader can replicate the
proposed results.
Answer:
All the technical information needed to replicate the proposed results is provided in the paper. In line 417 you will find a link to the source code used. Similarly, we have added the algorithms used in the development of this work: the algorithm for path planning can be found on page 9, the algorithm for decision-making on page 12, and the general algorithm for navigation on page 15. Also, lines 291-292 give the parameters used for path planning; lines 354-355 give the parameters for the decision-making module; page 12 shows the selected reward function; and page 14 contains a table with the rules of the FQL-20 proposal. We believe this information is sufficient in case someone wants to replicate the proposal.
Comment 7: Authors should include a detailed analysis about energy consumption improvements.
Answer:
Dear reviewer, thanks for this valuable comment. We added a paragraph in lines 642-644 about this issue. It is worth mentioning that, within the scope of this investigation, we do not address the energy consumption improvement problem. Instead, we try to guarantee battery availability so that the robot can complete its task. This would be an interesting research topic to develop in future work.
Comment 8: Authors should include a detailed comparative analysis.
Answer:
Thanks for pointing this out. Regarding this comment, we have improved the Results and Discussion sections, comparing the results of our proposal with methods such as the threshold, traditional fuzzy inference systems, and traditional reinforcement learning methods such as QL and SARSA. We have added detailed information and figures about the epochs taken for task completion (Figure 10, page 17), the number of steps and the actions taken (Figure 12, page 18, and Figure 13, page 19), and a comparison of battery levels (Figure 14, page 19). We also added some tables: Table 2 (page 18) shows the deviations taken in each tested scenario; Table 3 (page 20) gives a time comparison between the different methods to reach ten successes, and Table 4 the total states used by each method; finally, Table 5 (page 21) compares memory usage.
Comment 9: References should be updated and improved.
Answer:
Thanks for your valuable comment. With respect to the references, we have improved them as you suggested, and we have included them in the paper. They can be found in the references section.
• Cheng, Z., Fu, X., Wang, J., and Xu, X. (2021). Research on robot charging strategy based on the scheduling algorithm of minimum encounter time. Journal of the Operational Research Society, 72(1):237–245.
• Rappaport, M. and Bettstetter, C. (2017). Coordinated recharging of mobile robots during exploration. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 6809–6816.
• Wang, D., Hu, Y., and Ma, T. (2020). Mobile robot navigation with the combination of supervised learning in cerebellum and reward-based learning in basal ganglia. Cognitive Systems Research, 59:1–14.
• Yang, Y., Bevan, M. A., and Li, B. (2020). Efficient navigation of colloidal robots in an unknown environment via deep reinforcement learning. Advanced Intelligent Systems, 2(1):1900106.
Comment 10: This paper only includes simulation results.
Answer:
Thanks for pointing this out. Regarding your comment, we limited this work to presenting the simulation results, and we have only presented the hardware that we will use to implement our proposal in future work; we added this information on page 6, lines 240-243.
Comment 11: It is not clear the importance of selected navigation profiles.
Answer:
Dear reviewer, thanks for this valuable comment. We have added an explanation about the selected navigation profiles in lines 567-577.
Their importance lies in the fact that the first action corresponds to the displacement to a destination, which is one of the primary tasks performed by the mobile robot and the one a human would expect the robot to execute most of the time. Likewise, as the battery charge level is paramount to completing the main task, the second action is to go to a battery charging station. With this, the decision module would seem to be complete; however, a situation could arise in which the robot has a very low battery level and reaches neither the destination nor the charging station. This paper therefore considers a third action, because a robot should be suspended or shut down to avoid causing damage to its electrical/electronic system.
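Purely as an illustration (the names and thresholds below are ours for this letter; in the proposal this mapping is learned with fuzzy Q-learning rather than hand-coded), the three-action logic can be sketched as:

    def select_action(battery, cost_to_goal, cost_to_station):
        # battery and costs expressed in the same units, e.g. percentage points of charge
        if battery <= min(cost_to_goal, cost_to_station):
            return 'suspend'          # cannot reach either point: shut down safely
        if battery <= cost_to_goal:
            return 'go_to_station'    # recharge first, then resume the main task
        return 'go_to_goal'           # primary task: move toward the destination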
Comments by reviewer 3: Dear reviewer, thank you very much for your valuable comments. Below
you can find the responses and changes made in accordance with your comments. Please note that
the changes have been highlighted in blue in the document.
Comment 1: It would be interesting if the authors can give an insight about considering another
platform instead of the King Spider robot. What could be the main differences to identify when
considering working with another robot and which criteria must be considered to modify the
algorithm in order to adapt it to the new robot platform?
Answer:
Dear reviewer, thank you for your comments. Regarding this comment, we have attached the corresponding information in the paper on page 7, lines 262-264.
This hardware was selected for use in exploration tasks or as a service robot, taking advantage of the ability of its extremities to travel over irregular terrain. However, other robotic platforms, such as differential robots, may also be used. The selection of the robotic platform is left to the context specifications.
Comment 2: In what way the physical/kinematic parameters will impact for the path generation?
Answer:
Concerning this comment, the information about this issue was added on page 8, lines 266-270. The proposal seeks to be modular; therefore, a path planning module and a decision-making module are handled separately. Within the path planning module, the movements were limited to four: forward, backward, left, and right. The generated route can be saved in a text file. It is suggested to add a control module that contains the robot's kinematics and receives as input the coordinates of the new position to which the robot needs to move, as sketched below.
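A minimal sketch of such a control-module interface (our illustration, not part of the provided software):

    class ControlModule:
        # Encapsulates the platform-specific kinematics.
        def move_to(self, x, y):
            # Translate the planner's target coordinates into joint or
            # wheel commands for the chosen robot platform.
            raise NotImplementedError('implement for your robot')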
Comment 3: How this last will affect the path generation?
Answer:
Regarding this comment, we added on page 8, lines 270-273: in case the control module requires other parameters, such as speed or steering angles, or if the kinematics must be adapted to a particular robot configuration, the software provided in this paper would have to be modified. The simplest way would be to call a function inside the control module that interprets the coordinates to obtain the required parameters.
Comment 4: If this is the case, how it will affect to the other approaches used?
Answer:
With respect to the other approaches: our proposal is modular in order to facilitate the replacement of modules when necessary. In this proposal, decision-making is centralized and, as inputs, it expects values normalized between 0 and 100 with the expressions on page 10, so that it depends as little as possible on the hardware to be used.
Comment 5: I know that the input variables and fuzzy sets are given in terms of battery level and
distances. So, how will impact the use/modification of hardware in your proposal? And with respect
to the others approaches? All of these show the same disadvantage/advantage?
Answer:
Dear reviewer, thanks for your constructive comments. The answer is divided into two parts, which were added in lines 273-278:
(1) One of the variables that could be affected by a hardware change is the battery voltage. Since an 11.1 V battery is considered for the simulations, and other robotic platforms could use batteries with other ratings, the system's behavior could change according to the discharge range.
(2) Our approach uses modules separated from the program's main body. The input values come from functions that update and normalize the battery level and the distances, to avoid significant modifications to the proposal when implementing it with other hardware.
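A minimal sketch of such an update-and-normalize helper (the voltage bounds are assumptions for a 3S 11.1 V pack, not values taken from the paper):

    def battery_percent(voltage, v_min=9.0, v_max=12.6):
        # Map the measured pack voltage to the 0-100 range the decision module expects.
        pct = 100.0 * (voltage - v_min) / (v_max - v_min)
        return max(0.0, min(100.0, pct))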
Comment 6: What about if, instead to go to the charging station, the use of some of the King Spider’s
servomotors could be clamped and just those that permit to follow, in some way, the path is limited to
use? Maybe another premise variable can be included.
Answer:
Dear reviewer, thanks for your constructive comments. The answer is divided into two parts:
(1) In the robot's gait, the use of the servomotors would vary according to the displacement the platform performs (move forward, move backward, turn right, or turn left), and the servomotors would have to be activated according to these basic movements. If some servomotors were to be clamped, the robot would need to adapt its gait autonomously. We do not have an adaptive module for the robot control, but it will be included as one of the future activities of this research work.
(2) More input premises can be included; however, if all the rules generated by the fuzzy inference system are considered, the number of operations evaluated during navigation increases, and with it the memory usage.
Comment 7: This reviewer suggests including at references section some books about basics on fuzzy
logic.
Answer:
Thank you for pointing this out. We have added to the paper a reference on the basics of fuzzy logic:
(1) Ross, T. J. (2010). Logic and Fuzzy Systems, chapter 5, pages 117–173. John Wiley & Sons,
Ltd.
" | Here is a paper. Please give your review comments after reading it. |
113 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Almost all of the computer vision tasks use Convolutional Neural Network (CNN) to perform the task of image classification on ImageNet. This task is pre-training which is followed by fine-tuning wherein the features are adapted to perform the target task. ImageNet is a large database consisting of 15 million images belonging to 22,000 categories. Images after being collected from the web were then labeled using Amazon's Mechanical Turk crowd-sourcing tool by human labelers. It is believed that ImageNet is useful for transfer learning because of the sheer volume of the dataset and the number of object classes available. Transfer learning using pre-trained models is useful as it helps to accurately build computer vision models in a less costly way. Pre-trained models which have already been trained on substantial datasets are used and then we repurpose it for our own problem. Scene recognition is one of the most prominent applications of computer vision where it is used in many spheres and industries like tourism. In this paper, we aim to show multi-label scene classification using five architectures namely VGG16, VGG19, ResNet50, InceptionV3, and Xception using imagenet weights available in the Keras library. We have shown a performance comparison of the different architectures further in the paper. Lastly, we propose a new model, EnsemV3X with reduced number of parameters giving an accuracy of 91% better than the best performing models Inception and Xception.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>A vast array of problems like transferring our findings to a new dataset, object detection <ns0:ref type='bibr' target='#b18'>(Goyal and Benjamin, 2014)</ns0:ref> <ns0:ref type='bibr' target='#b52'>(Ren et al., 2015)</ns0:ref>, scene recognition <ns0:ref type='bibr' target='#b23'>(Herranz et al., 2016)</ns0:ref> are being researched and looked into due to network architectures that are measured against the ImageNet dataset <ns0:ref type='bibr' target='#b10'>(Deng et al., 2009</ns0:ref>) <ns0:ref type='bibr' target='#b37'>(Krizhevsky et al., 2012)</ns0:ref>. It is also assumed that any architecture that performs superior on ImageNet will perform effectively on other computer vision tasks <ns0:ref type='bibr' target='#b66'>(Voulodimos et al., 2018)</ns0:ref> as well. If the second to last layers have better features then such ImageNet networks show better performance with transfer learning <ns0:ref type='bibr' target='#b35'>(Kornblith et al., 2019)</ns0:ref>. ImageNet is an extremely diverse dataset comprising of more than 15 million images belonging to 22,000 categories <ns0:ref type='bibr' target='#b37'>(Krizhevsky et al., 2012)</ns0:ref> which are structured as per the WordNet <ns0:ref type='bibr' target='#b48'>(Miller, 1995)</ns0:ref> hierarchy. In WordNet, multiple words or phrases are called 'synset' or 'synonym set' <ns0:ref type='bibr' target='#b49'>(Miller, 1998)</ns0:ref>. There are currently more than 100,000 synsets in it and ImageNet tries to provide 1000 images for each such synset. ImageNet had been put together with the motivation to provide researchers with sophisticated resources to carry out the work of computer vision. Also, an annual competition named ImageNet Large Scale Visual Recognition Challenge (ILSVRC) <ns0:ref type='bibr' target='#b53'>(Russakovsky et al., 2015)</ns0:ref> is organized by the ImageNet team using a subset of the ImageNet database. The competition uses around 1.2 million images that belong to 1000 classes <ns0:ref type='bibr' target='#b37'>(Krizhevsky et al., 2012)</ns0:ref>.</ns0:p><ns0:p>Deep networks consisting of hidden layers that learn different features at every layer are required. It is clear that if one has more data <ns0:ref type='bibr' target='#b44'>(Lohr, 2012)</ns0:ref>, the deep network will learn better. However, the issue is that it is difficult to get a huge labeled dataset for training <ns0:ref type='bibr' target='#b46'>(Masud et al., 2008)</ns0:ref>. Even if you get the dataset it might take a substantial amount of time and cost to train a deep network. This is where the concept of pre-training <ns0:ref type='bibr' target='#b14'>(Erhan et al., 2010)</ns0:ref> <ns0:ref type='bibr' target='#b75'>(Zoph et al., 2020)</ns0:ref> comes in. Luckily, many models that have been trained on powerful GPUs using millions of images for hundreds of hours are available. Pre-trained models that are available have actually been trained on a sizeable dataset to solve problems that are close to ours <ns0:ref type='bibr' target='#b45'>(Marcelino, 2018)</ns0:ref>. After deciding on the pre-trained model to use, we have proceeded with the concept of transfer learning. Transfer learning transfers the information from one domain to another and improves the learner <ns0:ref type='bibr' target='#b51'>(Pan et al., 2008)</ns0:ref> <ns0:ref type='bibr' target='#b64'>(Torrey and Shavlik, 2010)</ns0:ref> <ns0:ref type='bibr' target='#b68'>(Weiss et al., 2016)</ns0:ref> . 
We have two domains: one is known as the source domain D S and the other as the target domain D T with their tasks namely T S and T T respectively. Transfer learning is defined as the process of making the target predictive function better given the domains and the tasks by using the required information from D S and T S where D S is not equal to D T or T S is not equal to T T <ns0:ref type='bibr' target='#b68'>(Weiss et al., 2016)</ns0:ref>. We have used five architectures that are available with the Keras <ns0:ref type='bibr' target='#b32'>(Ketkar, 2017)</ns0:ref> library namely VGG16, VGG19, ResNet50, InceptionV3, and Xception <ns0:ref type='bibr' target='#b54'>(Sarkar et al., 2018)</ns0:ref>. We also aim to show a comparison of performance for each of these architectures on our chosen dataset. Scene recognition has a lot of applications and is used in the social media and tourism industry to extract information from the images where it is used to find the location in the images. Hence, the objectives of the paper are as follows:</ns0:p><ns0:p>• To study the different architectures which shall be used with ImageNet weights to perform the task of multi-class classification.</ns0:p><ns0:p>• To compare the performance of different architectures used to perform the task of transfer learning on the dataset consisting of multiple classes.</ns0:p><ns0:p>• To solve the scene recognition task using ImageNet and employing it for multi-label scene classification.</ns0:p><ns0:p>• To study the performance of transfer learning in the aforesaid problem using metrics and graphs obtained.</ns0:p><ns0:p>• To design an ensemble using the best performing models enabling better classification of images.</ns0:p><ns0:p>The paper consists of 6 sections. Section 2 describes the literature review carried out to understand the topic. Section 3 puts forth the methodology used in the paper to meet the objectives as well as the dataset used. Section 4 describes the learning curves and the results obtained. Section 5 contains the discussions regarding the results and graphs obtained. Lastly, Section 6 contains the conclusion and the future scope. <ns0:ref type='bibr'>LeNet-5 (LeCun et al., 1989)</ns0:ref> mainly describes convolutional neural networks that are nothing but a stack of convolutional layers followed by fully connected layers which have shown commendable performance on various datasets like MNIST <ns0:ref type='bibr' target='#b12'>(Deng, 2012)</ns0:ref> and also in the ImageNet classification challenge. ImageNet being such a large and diverse dataset has been used by many researchers and academicians to carry out machine vision tasks. ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a challenge that is organized by the ImageNet team that uses a subset of the ImageNet database i.e. 1.2 million images belonging to 1000 object classes. <ns0:ref type='bibr' target='#b37'>Krizhevsky et al. (Krizhevsky et al., 2012)</ns0:ref> performed the task of training a deep convolutional neural network for ILSVRC-2010 <ns0:ref type='bibr' target='#b6'>(Berg et al., 2010)</ns0:ref> to categorize the images belonging to the dataset and were able to achieve top-1 and top-5 error rates of 37.5% and 17.0%.</ns0:p></ns0:div>
<ns0:div><ns0:head>LITERATURE REVIEW</ns0:head><ns0:p>The methodology used by them consisted of a convolutional neural network comprising of 650,000 neurons and around 60 million parameters. The deep network consisted of five convolutional layers followed by three fully connected layers with a final 1000-way softmax layer which represented the AlexNet architecture. They also used two GPUs to train the network as it was found out that 1.2 million images were too large for a single GPU. <ns0:ref type='bibr' target='#b58'>Simonyan et al. (Simonyan and Zisserman, 2014)</ns0:ref> in the year 2014 addressed another very important facet of convolutional neural network which is the depth. They evaluated deep networks consisting of 19 layers known as VGG19 which consisted of a stack of convolutional layers followed by three fully connected layers and softmax layer as the final layer. They achieved state of the art with 23.7% as the top-1 validation error rate, 6.8% as the top-5 validation error rate and 6.8% as the top-5 test error rate.</ns0:p><ns0:p>Yosinski et al. <ns0:ref type='bibr' target='#b74'>(Yosinski et al., 2014)</ns0:ref> in the year 2014 used a neural network that was trained on ImageNet and tried to show the transferability of features. They went onto depict that features become less transferable as the distance between the initial and final task increases.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Le et al. <ns0:ref type='bibr' target='#b38'>(Le and Yang, 2015)</ns0:ref> and <ns0:ref type='bibr' target='#b73'>Yao et al. (Yao and Miller, 2015)</ns0:ref> tried to study the effect of various parameters like the convolutional layer depth, dropout layers, receptive field size on the accuracy in the Tiny ImageNet Visual Recognition Challenge. The TinyImageNet dataset is a subset of the dataset used in ILSVRC-2010 consisting of 200 object classes using 500 training images and 50 validation as well as testing images. The paper also consisted of increasing the network depth and then applying techniques like PRELu <ns0:ref type='bibr' target='#b71'>(Xu et al., 2015)</ns0:ref> and dropout <ns0:ref type='bibr' target='#b60'>(Srivastava et al., 2014)</ns0:ref> to the model. They used images that were not initially annotated and the algorithms had to suggest the labels to which the images belong. The methodology achieved the final error rate of 0.444. <ns0:ref type='bibr' target='#b63'>Szegedy et al. (Szegedy et al., 2015)</ns0:ref> proposed a new architecture called Inception for ImageNet Large-Scale Visual Recognition Challenge 2014. Inception architecture mainly consists of modules that are stacked one over the other occasionally succeeded by max-pooling layers. They further went on to suggest another embodiment of the Inception architecture named GoogLeNet consisting of 22 layers with 6.67% as the top-5 error rate.</ns0:p><ns0:p>In the year 2015, He et al. <ns0:ref type='bibr' target='#b22'>(He et al., 2015)</ns0:ref> proposed a Parametric Rectified Linear Unit (PReLU) which is a generalization of the standard one. It was suggested that PReLU improved the model fitting without any extra cost. Their methodology helped them achieve a 4.94% top-5 test error rate which shows a 26% improvement over GoogLeNet (winner of ILSVRC14).</ns0:p><ns0:p>Han et al. <ns0:ref type='bibr' target='#b20'>(Han et al., 2015)</ns0:ref> followed three steps and tried to only keep connections and weights that are important without degrading the accuracy. They used the ImageNet dataset with AlexNet and VGG-16 Caffe <ns0:ref type='bibr' target='#b31'>(Jia et al., 2014)</ns0:ref> models to show how they were able to reduce the number of parameters 9 times and 13 times respectively without deteriorating the accuracy.</ns0:p><ns0:p>Lei Sun <ns0:ref type='bibr' target='#b62'>(Sun, 2016)</ns0:ref> implemented a different version of ResNet with 34 layers. Later, after applying data augmentation and stochastic depth <ns0:ref type='bibr' target='#b26'>(Huang et al., 2016)</ns0:ref>, an improved model was developed as the baseline and compared with other residual networks. The paper showed that heavy image augmentation <ns0:ref type='bibr' target='#b7'>(Bloice et al., 2017)</ns0:ref> improves accuracy significantly and the suggested methodology in the paper achieves an error rate of 34.68%.</ns0:p><ns0:p>Simon et al. <ns0:ref type='bibr' target='#b57'>(Simon et al., 2016)</ns0:ref> in the year 2016 suggested a set of pre-trained models including ResNet-10, ResNet-50, and batch normalized versions of AlexNet and VGG19. Such models can be trained within minutes using powerful GPUs <ns0:ref type='bibr' target='#b0'>(Akiba et al., 2017)</ns0:ref>. All of their models performed better than previous models with Top-1 error rate and Top-5 error rate as good as 26.9% and 8.8% for VGG19 respectively.</ns0:p><ns0:p>Huh et al. 
<ns0:ref type='bibr' target='#b27'>(Huh et al., 2016)</ns0:ref> investigated in the year 2016 how ImageNet is crucial in learning good features. They studied the behavior of ImageNet by fine-tuning for three tasks namely object detection using PASCAL VOC <ns0:ref type='bibr' target='#b15'>(Everingham et al., 2010)</ns0:ref> 2007 dataset, action classification on PASCAL-VOC 2012 dataset and scene classification on the SUN dataset <ns0:ref type='bibr' target='#b69'>(Xiao et al., 2010)</ns0:ref>.</ns0:p><ns0:p>Alexei Bastidas <ns0:ref type='bibr' target='#b5'>(Bastidas, 2017)</ns0:ref> benchmarked on the Tiny ImageNet challenge by adapting and finetuning two such models namely InceptionV3 and VGGNet <ns0:ref type='bibr' target='#b67'>(Wang et al., 2015)</ns0:ref>.</ns0:p><ns0:p>Transfer learning had been in existence for a very long time where the features were extracted from ImageNet trained networks. The extracted features were further used to train SVMs <ns0:ref type='bibr' target='#b47'>(Mayoraz and Alpaydin, 1999)</ns0:ref> and logistic regression classifiers <ns0:ref type='bibr' target='#b13'>(Donahue et al., 2014)</ns0:ref> <ns0:ref type='bibr' target='#b55'>(Sharif Razavian et al., 2014)</ns0:ref> <ns0:ref type='bibr' target='#b59'>(Smola and Schölkopf, 1998</ns0:ref>) and showed great performance on tasks that were different from ImageNet classification. Kornblith et al. <ns0:ref type='bibr' target='#b35'>(Kornblith et al., 2019)</ns0:ref> showed that ImageNet architectures generalize well across datasets by using 16 networks that had top-1 accuracy from 71.6% to 80.8% in ILSVRC 2012.</ns0:p><ns0:p>However, some research gaps and issues were identified which paved the way for our research work which have been listed below:</ns0:p><ns0:p>• Transfer learning improved scene classification accuracy <ns0:ref type='bibr' target='#b1'>(Akilan et al., 2017)</ns0:ref> but still some problems were identified during analysis.</ns0:p><ns0:p>-Abundant training data is required if a deep CNN model contains a large number of parameters and principally if trained using transfer learning.</ns0:p><ns0:p>-Due to the difficulty of understanding the rich and dense indoor images, it maybe a little difficult to get desired results. </ns0:p></ns0:div>
<ns0:div><ns0:head>ENSEMV3X - ENSEMBLE MODEL OF INCEPTIONV3 AND XCEPTION</ns0:head><ns0:p>Face recognition <ns0:ref type='bibr' target='#b30'>(Jain et al., 2020)</ns0:ref>, object detection <ns0:ref type='bibr' target='#b18'>(Goyal and Benjamin, 2014)</ns0:ref>, and scene classification are areas of application of CNNs <ns0:ref type='bibr' target='#b34'>(Khan et al., 2018)</ns0:ref> <ns0:ref type='bibr'>(Koushik, 2016) (O'Shea and</ns0:ref> <ns0:ref type='bibr' target='#b50'>Nash, 2015)</ns0:ref>. A CNN comprises three kinds of layers, namely the convolutional layer, the pooling layer, and the fully connected layer. CNNs use the concept of learnable kernels in the convolutional layer, which forms the base of the CNN. The convolutional layer produces a 2D activation map <ns0:ref type='bibr' target='#b37'>(Krizhevsky et al., 2012)</ns0:ref>: as per the stride value, we glide through the input to produce a scalar product for each value in the kernel. The forward pass of the convolutional layer <ns0:ref type='bibr' target='#b43'>(Liu et al., 2015)</ns0:ref> can be described by equation 1.</ns0:p><ns0:formula xml:id='formula_0'>x^{l}_{i,j} = \sum_{m} \sum_{n} w^{l}_{m,n} \, o^{l-1}_{i+m,\,j+n} + b^{l}_{i,j}<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where x is the neuron output for row i and column j, w represents the weights, and b represents the biases <ns0:ref type='bibr' target='#b17'>(Gordon and Desjardins, 1995)</ns0:ref> for the l-th convolutional layer. Reduction in the number of parameters is done with the help of pooling layers. An activation function <ns0:ref type='bibr' target='#b56'>(Sibi et al., 2013)</ns0:ref> comes next, which speeds up the learning process by adding non-linearity. Maxout, tanh, ReLU, and variants of ReLU like ELU, leaky ReLU, and PReLU <ns0:ref type='bibr' target='#b71'>(Xu et al., 2015)</ns0:ref> help add non-linearity. Fully connected layers <ns0:ref type='bibr' target='#b4'>(Basha et al., 2020)</ns0:ref> are then used to help determine the best weights using backpropagation. These layers take input from the previous layers and analyze the output of all previous layers <ns0:ref type='bibr' target='#b33'>(Khan et al., 2019)</ns0:ref>. Fully connected layers perform linear and non-linear transformations on the incoming input. Equation 2 denotes the equation for the linear transformation.</ns0:p><ns0:formula xml:id='formula_1'>z = W^{\top} \cdot X + B<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where X represents the input, W is the weight, and B is the bias, which is a constant.</ns0:p><ns0:p>In this paper, we study and implement different ImageNet architectures. One such architecture, named VGG16, was used to win the ILSVRC 2014. It uses a large number of parameters, about 138 million, along with a stride-one 3x3 filter for the convolutional layers and a 2x2 filter with a stride of 2 for the max-pooling layer. This structure is followed throughout the architecture. Here, the number 16 denotes the number of layers that have weights. Figure <ns0:ref type='figure'>1</ns0:ref> depicts the VGG16 architecture.</ns0:p><ns0:p>ResNet <ns0:ref type='bibr' target='#b22'>(He et al., 2015)</ns0:ref> was the winner of the ImageNet challenge in 2015, and since then it has been used for a number of tasks related to computer vision. It helps to overcome the problem of the vanishing gradient <ns0:ref type='bibr' target='#b24'>(Hochreiter, 1998)</ns0:ref> and allows us to train neural networks that are more than 150 layers deep.</ns0:p>
<ns0:p>Skip connection <ns0:ref type='bibr' target='#b16'>(Furusho and Ikeda, 2019)</ns0:ref> is the main innovation in ResNet; it allows skipping some layers that are not relevant for training.</ns0:p><ns0:p>Inception as a model was introduced by Szegedy et al. <ns0:ref type='bibr' target='#b63'>(Szegedy et al., 2015)</ns0:ref> for ILSVRC14. InceptionV3 is a convolutional neural network, 48 layers deep, given by Google, which is trained on almost 1 million images and classifies the images into 1000 classes like mouse, pencil, keyboard, and a number of animals. Figure <ns0:ref type='figure'>3</ns0:ref> depicts the architecture of the Inception model. The 1x1, 3x3, and 5x5 convolutional layers are used, and their output is concatenated, which forms the input to the next stage. Here, the 1x1 layer is used for dimensionality reduction <ns0:ref type='bibr' target='#b65'>(Van Der Maaten et al., 2009)</ns0:ref>. This Inception module is thus used in larger architectures. Such a concept is mainly used to highlight the fact that the learnings from previous layers are important for all the subsequent layers <ns0:ref type='bibr' target='#b61'>(Stone and Veloso, 2000)</ns0:ref>.</ns0:p><ns0:p>Lastly, Xception is another convolutional neural network that is 71 layers deep and requires an input image size of 299x299. Francois Chollet <ns0:ref type='bibr' target='#b8'>(Chollet, 2017)</ns0:ref> proposed an original deep convolutional neural network architecture based on depthwise separable convolution layers. Figure <ns0:ref type='figure'>4</ns0:ref> depicts the Xception architecture.</ns0:p></ns0:div>
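For concreteness, a minimal NumPy sketch of the convolution forward pass in equation 1 (a naive, unoptimized illustration added here, not code from the paper):

    import numpy as np

    def conv_forward(o_prev, w, b):
        # Valid convolution: x[i, j] = sum_m sum_n w[m, n] * o_prev[i+m, j+n] + b[i, j]
        kh, kw = w.shape
        oh = o_prev.shape[0] - kh + 1
        ow = o_prev.shape[1] - kw + 1
        x = np.empty((oh, ow))
        for i in range(oh):
            for j in range(ow):
                x[i, j] = np.sum(w * o_prev[i:i + kh, j:j + kw]) + b[i, j]
        return x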
<ns0:div><ns0:head>Dataset</ns0:head><ns0:p>We have used a dataset that was published by Intel to host an image classification challenge. The dataset has been taken from Kaggle and is named 'Intel Image Classification' <ns0:ref type='bibr' target='#b28'>(Intel, 2018)</ns0:ref>. It consists of 25K images under 6 labels, namely building, glacier, mountain, forest, sea, and street. We have used the entire dataset for training the model and subsequently testing it. The dataset has around 14K images for training, 3K for testing, and 7K for prediction.</ns0:p></ns0:div>
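A minimal sketch of loading this dataset with Keras (the seg_train/seg_test directory names follow the Kaggle layout and the 150x150 target size is our assumption; adjust both to your copy of the data):

    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    gen = ImageDataGenerator(rescale=1.0 / 255)
    train = gen.flow_from_directory('seg_train', target_size=(150, 150),
                                    batch_size=32, class_mode='categorical')
    test = gen.flow_from_directory('seg_test', target_size=(150, 150),
                                   batch_size=32, class_mode='categorical')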
<ns0:div><ns0:head>Methodology</ns0:head><ns0:p>In this paper, we have used the concept of transfer learning. It has been shown time and again that transfer learning <ns0:ref type='bibr' target='#b68'>(Weiss et al., 2016)</ns0:ref> helps to build deep neural networks in a faster and more accurate manner due to the usage of pre-trained networks that can later be used for a specific task.</ns0:p><ns0:p>A CNN consists of two parts, namely a convolutional base and a classifier. The convolutional base helps to extract features from the images using pooling layers and convolutional layers, while the classifier helps to label the image based on the recognized features. Figure <ns0:ref type='figure' target='#fig_3'>5</ns0:ref> shows the two components of a CNN.</ns0:p><ns0:p>Keras comes bundled with a number of models like VGG, ResNet, Inception, Xception <ns0:ref type='bibr' target='#b54'>(Sarkar et al., 2018)</ns0:ref>, and more. Generally, these models have two parts, which are the model architecture and the model weights <ns0:ref type='bibr' target='#b19'>(Gulli and Pal, 2017)</ns0:ref>. Model weights, being large files, are not encapsulated with Keras, but they can be included using the weights parameter in the model definition. Making use of the weights parameter in the model definition, we have imported the ImageNet weights for each one of the models during implementation. Figure <ns0:ref type='figure'>6</ns0:ref> shows the flowchart for the implementation. We have performed the task of multi-class classification <ns0:ref type='bibr' target='#b2'>(Aly, 2005)</ns0:ref> for analyzing deep features of various scenes like forest, sea, or mountain.</ns0:p>
<ns0:p>Further, after obtaining the results, the two best models, Xception and InceptionV3, were chosen for EnsemV3X. The two models are used to extract the image features from the dataset, and their individual weight files are saved after training on the images. These weight files are then loaded into the respective classifiers of the two architectures, and these models are used to create an ensemble. Figure <ns0:ref type='figure'>7</ns0:ref> shows the ensemble model created from Xception and InceptionV3. The weight files obtained by running the InceptionV3 and Xception models were then used to train the ensemble and obtain the results for the test data using the ensemble model.</ns0:p></ns0:div>
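As a minimal sketch of this setup (the input shape, frozen bases, single dense heads, and output averaging are our simplifications; the paper trains the two classifiers from saved weight files rather than jointly):

    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import InceptionV3, Xception

    def build_branch(base_cls):
        # Frozen ImageNet base plus a small trainable classifier head.
        base = base_cls(weights='imagenet', include_top=False,
                        input_shape=(150, 150, 3), pooling='avg')
        base.trainable = False
        inputs = layers.Input(shape=(150, 150, 3))
        outputs = layers.Dense(6, activation='softmax')(base(inputs))
        return models.Model(inputs, outputs)

    branch_a, branch_b = build_branch(InceptionV3), build_branch(Xception)
    inputs = layers.Input(shape=(150, 150, 3))
    averaged = layers.Average()([branch_a(inputs), branch_b(inputs)])
    ensemble = models.Model(inputs, averaged)
    ensemble.compile(optimizer='adam', loss='categorical_crossentropy',
                     metrics=['accuracy'])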
<ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>Learning curves <ns0:ref type='bibr' target='#b3'>(Amari, 1993)</ns0:ref> obtained after running the models are great sources for measuring the performance of a deep learning model. There are two types of learning curves, namely the train learning curve and the validation curve, which are calculated from the training dataset and the validation dataset (Xu and Goodacre, 2018) respectively. Loss (Hastie et al., 2009) helps us understand how badly a model performs with every epoch. On the other hand, accuracy denotes how close the model's predictions are to the actual data. The training and validation curves for each of the architectures are shown.</ns0:p></ns0:div>
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>Accuracy <ns0:ref type='bibr' target='#b25'>(Hossin and Sulaiman, 2015)</ns0:ref> is defined as the ratio of the number of correct predictions made by our model to the total number of predictions. Equation 3 represents the formula for accuracy.</ns0:p><ns0:formula xml:id='formula_2'>A = \frac{1}{n} \sum_{i=1}^{n} \frac{|M_i \cap N_i|}{|M_i \cup N_i|}<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>Here, M_i represents the labels that are given in the dataset and N_i represents the labels which are predicted; A stands for accuracy.</ns0:p><ns0:p>These terms can be understood in the context of a confusion matrix <ns0:ref type='bibr' target='#b9'>(Chow et al., 2013)</ns0:ref>. A confusion matrix is a metric that uses four combinations of predicted and actual values. A true positive is a term that is true and has also been predicted to be true. A true negative is a term that is false and is also predicted false. A false positive is a term whose actual class is negative but which has been predicted positive. Similarly, a false negative is a term whose actual class is positive but which has been predicted negative.</ns0:p><ns0:p>The VGG16 and VGG19 confusion matrices are shown by figures 13 and 14 respectively. Figure <ns0:ref type='figure' target='#fig_3'>15</ns0:ref> depicts the confusion matrix for the Xception model. Also, figure <ns0:ref type='figure'>16</ns0:ref> shows the confusion matrix for the ensemble obtained after combining the Inception and Xception architectures. Figure <ns0:ref type='figure'>17</ns0:ref> and figure <ns0:ref type='figure' target='#fig_8'>18</ns0:ref> depict the confusion matrices for the InceptionV3 and ResNet50 models respectively. These confusion matrices represent the outcome for around 3K images that had been used for testing. Similarly, other metrics can also be calculated from the confusion matrix obtained for each model. The confusion matrix shows how many images were labelled as class 0 and actually belonged to class 0, and how many images have been incorrectly classified. We can define precision <ns0:ref type='bibr' target='#b25'>(Hossin and Sulaiman, 2015)</ns0:ref> and recall <ns0:ref type='bibr' target='#b25'>(Hossin and Sulaiman, 2015)</ns0:ref> using the following equations:</ns0:p><ns0:formula xml:id='formula_3'>P = \frac{1}{n} \sum_{i=1}^{n} \frac{|M_i \cap N_i|}{|N_i|}<ns0:label>(4)</ns0:label></ns0:formula><ns0:formula xml:id='formula_4'>R = \frac{1}{n} \sum_{i=1}^{n} \frac{|M_i \cap N_i|}{|M_i|}<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>where M_i represents the labels given in the dataset and N_i represents the labels predicted for instance i. Equation 4 depicts the formula for precision and equation 5 depicts the formula for recall. Lastly, equation 6 depicts the formula for the F1-score <ns0:ref type='bibr' target='#b25'>(Hossin and Sulaiman, 2015)</ns0:ref>, which can be calculated using the previously calculated precision and recall.</ns0:p><ns0:formula xml:id='formula_5'>F_1 = \frac{1}{n} \sum_{i=1}^{n} \frac{2\,|M_i \cap N_i|}{|M_i| + |N_i|}<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref> represents the performance metrics calculated for each of the models, thereby comparing their performance.</ns0:p></ns0:div>
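For reference, these metrics can be computed directly from the test predictions; a minimal scikit-learn sketch (our example with placeholder labels, not the authors' code):

    from sklearn.metrics import (accuracy_score, precision_score,
                                 recall_score, f1_score, confusion_matrix)

    y_true = [0, 1, 2, 2, 5, 3]   # placeholder ground-truth class labels
    y_pred = [0, 1, 2, 1, 5, 3]   # placeholder predicted class labels

    print(accuracy_score(y_true, y_pred))
    print(precision_score(y_true, y_pred, average='weighted'))
    print(recall_score(y_true, y_pred, average='weighted'))
    print(f1_score(y_true, y_pred, average='weighted'))
    print(confusion_matrix(y_true, y_pred))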
<ns0:div><ns0:head>CONCLUSIONS AND FUTURE SCOPE</ns0:head><ns0:p>ImageNet is a dense and diverse dataset that covers 22,000 object categories as per the words suggested in WordNet. ImageNet has shown tremendous performance in object recognition but has not been able to show commendable performance for scene recognition. We performed the task of multi-label scene recognition using ImageNet architectures through the Keras library and Google Colab, and compared their performance as listed in table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>. This shows that ImageNet is able to achieve satisfactory results, which can further be improved by changing the architectures depending upon the problem. Since ImageNet is a very diverse dataset, it can be used for solving many scene recognition challenges, and we have prepared an ensemble using models that have been trained on ImageNet. The InceptionV3 and Xception models performed better than the rest of the models; hence, after combining the two models to form an ensemble, the accuracy obtained is 91%, the best of all the architectures considered.</ns0:p><ns0:p>In the future, we can study ImageNet architectures for rich and dense images captured using mobile phone cameras. Another course that can be taken is comparing ImageNet performance with its counterpart, namely PlacesCNN, on the same dataset and analyzing their performance for both object detection and scene recognition.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. VGG19 Architecture</ns0:figDesc><ns0:graphic coords='6,230.20,63.78,236.64,108.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Xception Architecture</ns0:figDesc><ns0:graphic coords='6,245.80,598.30,205.44,108.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Components of CNN</ns0:figDesc><ns0:graphic coords='7,281.02,63.78,135.00,165.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 7 .Figure 8 .</ns0:head><ns0:label>78</ns0:label><ns0:figDesc>Figure 7. EnsemV3X</ns0:figDesc><ns0:graphic coords='8,226.30,63.78,244.44,108.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 9 .Figure 10 .Figure 11 .</ns0:head><ns0:label>91011</ns0:label><ns0:figDesc>Figure 9. Training and Validation Metrics for ResNet50</ns0:figDesc><ns0:graphic coords='9,149.77,76.53,397.50,168.75' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12. Training and Validation Metrics for Xception</ns0:figDesc><ns0:graphic coords='10,155.11,63.78,386.81,168.75' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 8</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8 depicts the training vs. validation accuracy and loss curves for the InceptionV3 model. Similarly, figure 9 depicts the same curves for the ResNet50 model. The VGG16 and VGG19 training vs. validation accuracy and loss curves are depicted by figures 10 and 11 respectively. Figure 12 depicts the training vs. validation accuracy and loss curves for the Xception model.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 13 .Figure 14 .Figure 15 .Figure 17 .</ns0:head><ns0:label>13141517</ns0:label><ns0:figDesc>Figure 13. Confusion Matrix for VGG16</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Performance Metrics</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Precision</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>F1-Score</ns0:cell></ns0:row><ns0:row><ns0:cell>InceptionV3</ns0:cell><ns0:cell>89%</ns0:cell><ns0:cell>89%</ns0:cell><ns0:cell>89%</ns0:cell><ns0:cell>89</ns0:cell></ns0:row><ns0:row><ns0:cell>ResNet50</ns0:cell><ns0:cell>37%</ns0:cell><ns0:cell>32%</ns0:cell><ns0:cell>37%</ns0:cell><ns0:cell>29</ns0:cell></ns0:row><ns0:row><ns0:cell>VGG16</ns0:cell><ns0:cell>89%</ns0:cell><ns0:cell>89%</ns0:cell><ns0:cell>89%</ns0:cell><ns0:cell>89</ns0:cell></ns0:row><ns0:row><ns0:cell>VGG19</ns0:cell><ns0:cell>87%</ns0:cell><ns0:cell>87%</ns0:cell><ns0:cell>87%</ns0:cell><ns0:cell>87</ns0:cell></ns0:row><ns0:row><ns0:cell>Xception</ns0:cell><ns0:cell>90%</ns0:cell><ns0:cell>90%</ns0:cell><ns0:cell>90%</ns0:cell><ns0:cell>90</ns0:cell></ns0:row><ns0:row><ns0:cell>EnsemV3X</ns0:cell><ns0:cell>91%</ns0:cell><ns0:cell>91%</ns0:cell><ns0:cell>91%</ns0:cell><ns0:cell>91</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>7/14</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53974:1:3:NEW 21 Feb 2021)</ns0:note></ns0:figure>
</ns0:body>
" | "Delhi, 26th January 2021
Our Response on The Reviewer‘s Comments
Title: Ensembled Model of InceptionV3 and Xception - EnsemV3X: A Novel Deep Learning Architecture for Multi-Label Scene Classification
Respected reviewers,
Thank you for your useful comments and suggestions on the modification of our manuscript. We have modified the manuscript accordingly, and detailed corrections are listed below point by point:
Reviewer 1:
1. The novelty of this paper is not clear. The proposed structure is based on ImageNet and VGG. The authors should explain more about their collaborations.
Ans. We have created an ensemble using the two best-performing models to show their use for scene classification. The ensemble had the best accuracy amongst all the models used. The same has been included in the paper.
2. The authors should explain more on the proposed structure. Based on Figure 7, the only difference between the proposed structure and ImageNet is a fully-connected NN. The authors should explain more about how Figure 8 is related to Figure 7.
Ans. The same has been modified in the paper (lines 221-223).
3. The result and the discussion sections should be more clarified based on Figures 14-19.
Ans. The same has been modified in the Discussion section.
Reviewer 2:
1. The experimental part of the article lacks the experimental design, which is a serious drawback of the experiment. Besides, the results cover a very narrow area. Meanwhile, no statistical analysis of the results.
Ans. The details have been added in the paper, and a comparison table is shown in Table 1 (page 8).
2. The content of the abstract is not complete, and the purpose of scientific work is not highlighted.
Ans. The details have been included in the abstract.
3. In LITERATURE REVIEW, 'Lack of indoor scene datasets makes it difficult to get satisfying results by fine-tuning a pre-trained CNN'. Why is it difficult to get satisfying results? The satisfying results are not explained clearly.
Ans. The part has been rewritten for better understanding (Lines 148-150).
4. The authors are suggested to provide a comparative study between the proposed work and earlier published works. It will help for future researchers.
Ans. The table with the performance of all models has been included (Table 1, page 8).
5. What are the major advantages of the proposed new model over the other available techniques, such as WideResNets, AlexNet, VGG, Inception, ResNets and others?
Ans. The new model achieves better accuracy than the rest of the models, and this comparison has been included in the paper.
6. The authors are suggested to improve the conclusions.
Ans. The conclusion has been changed.
7. The authors are advised to revise the manuscript thoroughly and carefully to avoid any possible technical and grammatical errors.
Ans. The manuscript has been checked thoroughly for technical and grammatical errors.
8. The format of references is not uniform.
Ans. The references have been checked.
Technical changes as required:
#1. Raw Data
Please provide a direct link to the Intel dataset hosted at Kaggle named 'Intel Image Classification' in your Data and Code Availability statement here.
We note you've also stated: 'We are not submitting any Code as the code is Uploaded on Github and we dont want to declare the code unless the paper is Published.' Please add a link to your GitHub repository to your Data and Code Availability statement here and remove this statement from the Confidential Information for PeerJ staff for clarity.
Answer: The raw data and code have been uploaded, and the option has been changed to “upload to PeerJ”.
#2. Figure Permissions
You have stated that you have permission to republish Figures 6, 7 from the ImageNet dataset.
However, we will need to see the full copyright information from the owner of these images granting permission to republish them under our CC BY 4.0 license. https://peerj.com/about/policies-and-procedures/#open-access-copyright-policy.
Answer:
Figures 6 and 7 are self-made, and Figure 5 has been removed from the paper (Figure 5: Images from the dataset).
#3. Figure Accessibility
If possible, please adjust the red/green colors used on Figures 9, 10, 11, 12, and 13 to make them accessible to those with color blindness. Please review our color blindness guidelines for figures.
Answer: The figures have been converted according to the color blindness guidelines.
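For reference, the kind of adjustment applied is sketched below; the plotting stack is an assumption (matplotlib is not named in the paper) and the matrix is a placeholder. Replacing red/green encodings with a perceptually uniform colormap such as 'viridis' keeps the figures readable for color-blind readers.

```python
# Colorblind-friendly confusion-matrix plot (sketch with placeholder data).
import numpy as np
import matplotlib.pyplot as plt

cm = np.random.randint(0, 100, size=(6, 6))  # placeholder confusion matrix
fig, ax = plt.subplots()
im = ax.imshow(cm, cmap='viridis')  # perceptually uniform, colorblind-safe
fig.colorbar(im, ax=ax)
ax.set_xlabel('Predicted label')
ax.set_ylabel('True label')
fig.savefig('confusion_matrix.png', dpi=300)
```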
" | Here is a paper. Please give your review comments after reading it. |
114 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Convolutional neural networks are widely used for image classification, typically by pretraining on ImageNet and then fine-tuning, whereby the learned features are adapted to the target task. ImageNet is a large database consisting of 15 million images belonging to 22,000 categories. Images collected from the Web are labeled by human labelers using the Amazon Mechanical Turk crowd-sourcing tool. ImageNet is useful for transfer learning because of the sheer volume of its dataset and the number of object classes available. Transfer learning with pretrained models helps to build computer vision models in an accurate and inexpensive manner: models pretrained on substantial datasets are repurposed for our requirements. Scene recognition is a widely used application of computer vision in many communities and industries, such as tourism.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Many problems, such as transferring our findings to a new dataset <ns0:ref type='bibr' target='#b17'>(Goyal and Benjamin, 2014)</ns0:ref>, object detection <ns0:ref type='bibr' target='#b52'>(Ren et al., 2015)</ns0:ref>, and scene recognition <ns0:ref type='bibr' target='#b23'>(Herranz et al., 2016)</ns0:ref>, have been extensively investigated with network architectures measured against the ImageNet dataset <ns0:ref type='bibr' target='#b10'>(Deng et al., 2009)</ns0:ref> <ns0:ref type='bibr' target='#b37'>(Krizhevsky et al., 2012)</ns0:ref>. Furthermore, any architecture that performs effectively on ImageNet is assumed to be effective on other computer vision tasks as well <ns0:ref type='bibr' target='#b66'>(Voulodimos et al., 2018)</ns0:ref>. If the second-to-last layers demonstrate satisfactory features, then networks pretrained on ImageNet show acceptable performance with transfer learning <ns0:ref type='bibr' target='#b35'>(Kornblith et al., 2019)</ns0:ref>. ImageNet is an extremely diverse dataset comprising more than 15 million images that belong to 22,000 categories <ns0:ref type='bibr' target='#b37'>(Krizhevsky et al., 2012)</ns0:ref>, which are structured according to the WordNet hierarchy <ns0:ref type='bibr' target='#b47'>(Miller, 1995)</ns0:ref>. WordNet contains more than 100,000 words or phrases called 'synsets' or 'synonym sets,' and ImageNet attempts to provide 1,000 images for each synset <ns0:ref type='bibr' target='#b48'>(Miller, 1998)</ns0:ref>. ImageNet aims to provide researchers with sophisticated resources for computer vision. An annual competition named the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) <ns0:ref type='bibr' target='#b53'>(Russakovsky et al., 2015)</ns0:ref> is organized by the ImageNet team using a subset of the ImageNet database. The competition uses around 1.2 million images that belong to 1,000 classes <ns0:ref type='bibr' target='#b37'>(Krizhevsky et al., 2012)</ns0:ref>.</ns0:p><ns0:p>Deep networks consisting of hidden layers that learn different features at every layer are required. A deep network given the maximum amount of data can optimize learning <ns0:ref type='bibr' target='#b43'>(Lohr, 2012)</ns0:ref>. However, it is difficult to obtain a huge labeled dataset for training <ns0:ref type='bibr' target='#b45'>(Masud et al., 2008)</ns0:ref>, and even when such a dataset is available, training a deep network on it may take a substantial amount of time and cost. This is where the concept of pre-training <ns0:ref type='bibr' target='#b13'>(Erhan et al., 2010)</ns0:ref> <ns0:ref type='bibr' target='#b74'>(Zoph et al., 2020)</ns0:ref> comes in. Notably, many models that have been trained on powerful GPUs using millions of images for hundreds of hours are available. Existing pretrained models have been trained on a sizeable dataset to solve problems similar to ours <ns0:ref type='bibr' target='#b44'>(Marcelino, 2018)</ns0:ref>. We performed transfer learning after selecting the pretrained model to use. Transfer learning transfers information from one domain to another and improves the learner <ns0:ref type='bibr' target='#b50'>(Pan et al., 2008)</ns0:ref> <ns0:ref type='bibr' target='#b64'>(Torrey and Shavlik, 2010)</ns0:ref> <ns0:ref type='bibr' target='#b68'>(Weiss et al., 2016)</ns0:ref>.
We have two domains, namely, the source D_S and the target D_T, with their respective tasks T_S and T_T. Transfer learning is defined as the process of improving the target predictive function by using the relevant information from D_S and T_S, where D_S ≠ D_T or T_S ≠ T_T <ns0:ref type='bibr' target='#b68'>(Weiss et al., 2016)</ns0:ref>. We used five architectures available in the Keras <ns0:ref type='bibr' target='#b32'>(Ketkar, 2017)</ns0:ref> library, namely, VGG16, VGG19, ResNet50, InceptionV3, and Xception <ns0:ref type='bibr' target='#b54'>(Sarkar et al., 2018)</ns0:ref>. We also aim to compare the performance of each architecture on our chosen dataset. Many applications of scene recognition are used in social media and the tourism industry to extract the location information of images. Hence, the objectives of this study are as follows:</ns0:p><ns0:p>• to investigate the different architectures that will be used with ImageNet weights to perform the task of multiclass classification</ns0:p><ns0:p>• to compare the performance of the different architectures used to perform the task of transfer learning on the dataset consisting of multiple classes • to solve the scene recognition task using ImageNet and then use it for multilabel scene classification • to explore the performance of transfer learning in the problem using the metrics and graphs obtained • to design an ensemble using the optimally performing models and improve image classification The remainder of this paper is arranged as follows. The literature is reviewed in Section 2. The methodology and dataset used in this study are introduced in Section 3. Learning curves are described in Section 4. The results of this study are discussed in Section 5. Finally, the conclusion and future scope are presented in Section 6. <ns0:ref type='bibr'>LeNet-5 (LeCun et al., 1989)</ns0:ref> mainly describes convolutional neural networks (CNNs) as a stack of convolutional layers followed by fully connected layers, which have been successfully used on various datasets, such as MNIST <ns0:ref type='bibr' target='#b11'>(Deng, 2012)</ns0:ref>, and in the ImageNet classification challenge. ImageNet is a large and diverse dataset widely used in machine vision tasks. The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a challenge organized by the ImageNet team that uses a subset of the ImageNet database, i.e., 1.2 million images belonging to 1,000 object classes. <ns0:ref type='bibr' target='#b37'>Krizhevsky et al. (2012)</ns0:ref> trained a deep CNN on ILSVRC-2010 <ns0:ref type='bibr' target='#b6'>(Berg et al., 2010)</ns0:ref> to categorize images belonging to the dataset and achieved top-1 and -5 error rates of 37.5% and 17.0%. The methodology used a CNN comprising 650,000 neurons and around 60 million parameters. The deep network used the AlexNet architecture, which consists of five convolutional layers, followed by three fully connected layers with a final 1,000-way softmax layer. The researchers also used two GPUs to train the network, given that 1.2 million images are excessively large for a single GPU. <ns0:ref type='bibr' target='#b58'>Simonyan and Zisserman (2014)</ns0:ref> addressed depth, which is another very important facet of CNNs. The researchers evaluated deep networks consisting of 19 layers known as VGG19, which is composed of a stack of convolutional layers, followed by three fully connected layers and a softmax layer as the final layer.
VGG19 achieved 23.7% and 6.8% as the top-1 and -5 validation error rates, respectively, and 6.8% as the top-5 test error rate. <ns0:ref type='bibr' target='#b73'>Yosinski et al. (2014)</ns0:ref> used a neural network trained on ImageNet to show the transferability of features and demonstrated that feature transferability decreases as the distance between the initial and final tasks increases. <ns0:ref type='bibr' target='#b38'>Le and Yang (2015)</ns0:ref> and <ns0:ref type='bibr' target='#b72'>Yao and Miller (2015)</ns0:ref> investigated the effect of various parameters, such as convolutional layer depth, dropout layers, and receptive field size, on the accuracy in the Tiny ImageNet Visual Recognition Challenge. The TinyImageNet dataset is a subset of the dataset used in ILSVRC-2010 that consists of 200 object classes, each with 500 training images and 50 validation and testing images. The study also increased the network depth and then applied techniques, such as the parametric rectified linear unit (PReLU) <ns0:ref type='bibr' target='#b70'>(Xu et al., 2015)</ns0:ref> and dropout <ns0:ref type='bibr' target='#b60'>(Srivastava et al., 2014)</ns0:ref>, to the model. The researchers used images that were not initially annotated, so the algorithms had to suggest the labels to which the images belong. The methodology achieved a final error rate of 0.444.</ns0:p></ns0:div>
<ns0:div><ns0:head>LITERATURE REVIEW</ns0:head></ns0:div>
<ns0:div><ns0:p><ns0:ref type='bibr' target='#b63'>Szegedy et al. (2015)</ns0:ref> proposed a new architecture called Inception for ILSVRC 2014. The Inception architecture mainly consists of modules that are stacked one over the other and sometimes succeeded by max-pooling layers. The researchers also suggested the use of GoogLeNet, which is another aspect of the Inception architecture that consists of 22 layers and demonstrates a top-5 error rate of 6.67%. <ns0:ref type='bibr' target='#b22'>He et al. (2015)</ns0:ref> proposed PReLU, which is a generalization of the standard rectified linear unit (ReLU). PReLU improved the model fitting without any extra cost and helped achieve a top-5 test error rate of 4.94%, thereby indicating a 26% improvement over GoogLeNet (winner of ILSVRC14). <ns0:ref type='bibr' target='#b20'>Han et al. (2015)</ns0:ref> followed three steps and attempted to only maintain important connections and weights without degrading the accuracy. The researchers used the ImageNet dataset with the AlexNet and VGG-16 Caffe <ns0:ref type='bibr' target='#b31'>(Jia et al., 2014)</ns0:ref> models to show how the number of parameters could be reduced 9 and 13 times without deteriorating the accuracy.</ns0:p><ns0:p>Sun (2016) implemented a different version of ResNet with 34 layers. <ns0:ref type='bibr' target='#b26'>Huang et al. (2016)</ns0:ref> developed an improved model as the baseline after applying data augmentation and stochastic depth and then compared the results with those of other residual networks. <ns0:ref type='bibr' target='#b7'>Bloice et al. (2017)</ns0:ref> showed that heavy image augmentation significantly improves the accuracy, with an error rate of 34.68%. <ns0:ref type='bibr' target='#b57'>Simon et al. (2016)</ns0:ref> put forward a set of pretrained models, including ResNet-10, ResNet-50, and batch-normalized versions of AlexNet and VGG19. Such models can be trained within minutes using powerful GPUs <ns0:ref type='bibr' target='#b0'>(Akiba et al., 2017)</ns0:ref>. All their models performed better than previous models, with top-1 and -5 error rates equivalent to 26.9% and 8.8% for VGG19, respectively. <ns0:ref type='bibr' target='#b27'>Huh et al. (2016)</ns0:ref> investigated the importance of ImageNet in learning acceptable features and explored the behavior of ImageNet by fine-tuning three tasks, namely, object detection using the PASCAL VOC <ns0:ref type='bibr' target='#b14'>(Everingham et al., 2010)</ns0:ref> 2007 dataset, action classification on the PASCAL-VOC 2012 dataset, and scene classification on the SUN dataset <ns0:ref type='bibr' target='#b69'>(Xiao et al., 2010)</ns0:ref>. <ns0:ref type='bibr' target='#b5'>Bastidas (2017)</ns0:ref> benchmarked on the Tiny ImageNet challenge by adapting and fine-tuning two such models, namely, InceptionV3 and VGGNet <ns0:ref type='bibr' target='#b67'>(Wang et al., 2015)</ns0:ref>.</ns0:p><ns0:p>Transfer learning extracts features from trained ImageNet networks.
Extracted features are further used to train SVMs <ns0:ref type='bibr' target='#b46'>(Mayoraz and Alpaydin, 1999)</ns0:ref> and logistic regression classifiers <ns0:ref type='bibr' target='#b12'>(Donahue et al., 2014)</ns0:ref> <ns0:ref type='bibr' target='#b55'>(Sharif Razavian et al., 2014)</ns0:ref> <ns0:ref type='bibr' target='#b59'>(Smola and Schölkopf, 1998</ns0:ref>) and showed great performance on tasks that were different from ImageNet classification. <ns0:ref type='bibr' target='#b35'>Kornblith et al. (2019)</ns0:ref> showed that ImageNet architectures generalize appropriately across datasets by using 16 networks with a top-1 accuracy range of 71.6% to 80.8% in ILSVRC 2012.</ns0:p><ns0:p>Our study aims to address the following research gaps and issues:</ns0:p><ns0:p>• Transfer learning improved the scene classification accuracy <ns0:ref type='bibr' target='#b1'>(Akilan et al., 2017)</ns0:ref> but some problems were still identified during analysis.</ns0:p><ns0:p>-A considerable amount of training data are required when the deep CNN model containing a large number of parameters is trained using transfer learning.</ns0:p><ns0:p>-Satisfactory results are difficult to obtain due to the high complexity of rich and dense indoor images.</ns0:p><ns0:p>• Images with rich semantic content pose a problem <ns0:ref type='bibr' target='#b41'>(Liu and Tian, 2019)</ns0:ref> when attempting to solve the scene classification problem.</ns0:p></ns0:div>
<ns0:div><ns0:head>ENSEMV3X: -ENSEMBLE MODEL OF INCEPTIONV3 AND XCEPTION</ns0:head><ns0:p>Face recognition <ns0:ref type='bibr' target='#b30'>(Jain et al., 2020)</ns0:ref>, object detection <ns0:ref type='bibr' target='#b17'>(Goyal and Benjamin, 2014)</ns0:ref>, and scene classification are application areas of CNN <ns0:ref type='bibr' target='#b34'>(Khan et al., 2018)</ns0:ref> <ns0:ref type='bibr' target='#b36'>(Koushik, 2016)</ns0:ref> <ns0:ref type='bibr' target='#b49'>(O'Shea and Nash, 2015)</ns0:ref>. CNN comprises three layers, namely, convolutional layer, pooling layer and a fully connected layer. CNNs use the concept of learnable kernels in the convolutional layer to form the base for CNN. The convolutional layer produces a 2D activation map <ns0:ref type='bibr' target='#b37'>(Krizhevsky et al., 2012)</ns0:ref> and as per the stride value, we glide through the input to produce a scalar product for each value in the kernel. Forward pass of convolutional layer <ns0:ref type='bibr' target='#b42'>(Liu et al., 2015)</ns0:ref> can be described by equation 1.</ns0:p><ns0:formula xml:id='formula_0'>x l i, j = ∑ m ∑ n w l m,n o l−1 i+m, j+n + b l i, j ,<ns0:label>(1)</ns0:label></ns0:formula></ns0:div>
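As a concrete illustration of Equation 1 (the symbols are defined in the paragraph that follows), a minimal NumPy sketch of this forward pass is given below; the array names and sizes are illustrative assumptions, and, as in most CNN frameworks, the operation is implemented as cross-correlation.

```python
# Single-channel 'valid' convolution implementing Equation (1), sketch only.
import numpy as np

def conv_forward(o_prev, w, b):
    H, W = o_prev.shape
    kH, kW = w.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # x[i, j] = sum_m sum_n w[m, n] * o_prev[i + m, j + n] + b[i, j]
            out[i, j] = np.sum(w * o_prev[i:i + kH, j:j + kW]) + b[i, j]
    return out

o_prev = np.random.rand(5, 5)   # previous layer's output
w = np.random.rand(3, 3)        # learnable kernel
b = np.zeros((3, 3))            # per-position bias
print(conv_forward(o_prev, w, b).shape)  # (3, 3)
```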
<ns0:div><ns0:p>where x is the neuron output for row i and column j, w represents the weights, and b represents the biases <ns0:ref type='bibr' target='#b16'>(Gordon and Desjardins, 1995)</ns0:ref> for the l-th convolutional layer. The number of parameters is reduced with the help of pooling layers. The subsequent application of an activation function <ns0:ref type='bibr' target='#b56'>(Sibi et al., 2013)</ns0:ref> accelerates the learning process by adding non-linearity. Maxout, tanh, ReLU, and variants of ReLU, such as ELU, leaky ReLU, and PReLU <ns0:ref type='bibr' target='#b70'>(Xu et al., 2015)</ns0:ref>, help add nonlinearity. Fully connected layers <ns0:ref type='bibr' target='#b4'>(Basha et al., 2020)</ns0:ref> are then used to help determine optimal weights using backpropagation. These layers take the input from previous layers and analyze the output of all previous layers <ns0:ref type='bibr' target='#b33'>(Khan et al., 2019)</ns0:ref>. Fully connected layers perform linear and nonlinear transformations on the incoming input. Equation 2 denotes the linear transformation.</ns0:p><ns0:formula xml:id='formula_1'>z = W^{\top} X + B,<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where X represents the input, W is the weight, and B is the bias, which is a constant.</ns0:p><ns0:p>We investigate and implement different ImageNet architectures, such as VGG16, which was used to win ILSVRC 2014. It has approximately 138 million parameters and uses a 3×3 filter with a stride of one for the convolutional layers and a 2×2 filter with a stride of two for the max-pooling layers; this structure is followed throughout the architecture. The number 16 denotes the number of layers with weights. Figure <ns0:ref type='figure'>1</ns0:ref> shows the architecture of VGG16, consisting of two convolutional blocks with two layers each, three convolutional blocks with three layers, and two fully connected layers. The number of filters increases across the convolutional blocks: the first block has 64 filters with a size of 3×3, the second block has 128 filters, the third block has 256 filters, and the last two blocks have 512 filters with a size of 3×3. The VGG16 architecture used in this study classifies the output into the six scene classes.</ns0:p><ns0:p>Simonyan and Zisserman (2014) further suggested that VGG19 is a deep network with a large number of layers. The model takes an image input of size 224×224 and was trained on more than a million images belonging to 1,000 object categories, so VGG19 successfully learned many features from a wide range of images. Figure <ns0:ref type='figure'>2</ns0:ref> depicts the VGG19 architecture, consisting of five blocks of convolutional layers followed by two fully connected layers. The number of filters in the first block is 64, followed by 128 and 256 filters, and the last two blocks have 512 filters with a size of 3×3. Figure <ns0:ref type='figure'>2</ns0:ref> is the general representation of the VGG19 architecture.</ns0:p><ns0:p>ResNet50, a miniaturized version of ResNet152, is generally used as the model for transfer learning.</ns0:p><ns0:p>ResNet <ns0:ref type='bibr' target='#b22'>(He et al., 2015)</ns0:ref> was the winner of the ImageNet challenge in 2015 and has since been used for</ns0:p></ns0:div>
<ns0:div><ns0:head>4/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53974:2:1:NEW 6 Apr 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science <ns0:ref type='bibr'>et al., 2009)</ns0:ref>. This inception module is thus used in large architectures.</ns0:p><ns0:p>Such a concept is mainly used to highlight the important learnings from previous layers for all subsequent layers <ns0:ref type='bibr' target='#b61'>(Stone and Veloso, 2000)</ns0:ref>.</ns0:p><ns0:p>Finally, Xception is another CNN that is 71 layers deep and requires an image with an input size of 299x299. Francois Chollet <ns0:ref type='bibr' target='#b8'>(Chollet, 2017)</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:head>Dataset</ns0:head><ns0:p>We used a dataset published by Intel to host an image classification challenge. The dataset, extracted from Kaggle, is called 'Intel Image Classification' <ns0:ref type='bibr' target='#b28'>(Intel, 2018)</ns0:ref> and consists of 25,000 images under six labels, namely, building, glacier, mountain, forest, sea, and street. We utilized the entire dataset for training the model and subsequently tested it. The dataset has around 14,000 images for training, 3,000 for testing, and 7,000 for prediction.</ns0:p></ns0:div>
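For reproducibility, a minimal loading sketch is shown below; the directory names follow the usual Kaggle layout of this dataset and are assumptions, not taken from the text.

```python
# Loading the Intel Image Classification folders with Keras (sketch only).
from tensorflow.keras.preprocessing.image import ImageDataGenerator

gen = ImageDataGenerator(rescale=1.0 / 255)
train = gen.flow_from_directory('seg_train', target_size=(299, 299),
                                batch_size=32, class_mode='categorical')
test = gen.flow_from_directory('seg_test', target_size=(299, 299),
                               batch_size=32, class_mode='categorical')
print(train.class_indices)  # the six scene classes
```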
<ns0:div><ns0:head>Methodology</ns0:head><ns0:p>We used the concept of transfer learning in this study. Transfer learning <ns0:ref type='bibr' target='#b68'>(Weiss et al., 2016)</ns0:ref> helps build deep neural networks in a fast and accurate manner due to the usage of pretrained networks that can later be used for specific tasks.</ns0:p><ns0:p>CNNs consist of a convolutional base and a classifier. The convolutional base helps extract features from images using pooling and convolutional layers while the classifier helps label the image on the</ns0:p></ns0:div>
<ns0:div><ns0:head>5/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53974:2:1:NEW 6 Apr 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science basis of recognized features. Figure <ns0:ref type='figure' target='#fig_3'>5</ns0:ref> shows the two components of CNNs. Keras comes bundled with a number of models , such as VGG, ResNet, Inception, Xception <ns0:ref type='bibr' target='#b54'>(Sarkar et al., 2018)</ns0:ref>, and others.</ns0:p><ns0:p>These models generally have two parts, namely, model architecture and weights <ns0:ref type='bibr' target='#b18'>(Gulli and Pal, 2017)</ns0:ref>.</ns0:p><ns0:p>Model weights are excluded from Keras but can be included using the weight parameter in the model definition. We used the weight parameter in the model definition to import ImageNet weights for each model during implementation. Figure <ns0:ref type='figure'>6</ns0:ref> shows the flowchart for the implementation. We performed the task of multiclass classification <ns0:ref type='bibr' target='#b2'>(Aly, 2005)</ns0:ref> to analyze deep features of various scenes, such as forest, sea, or street, using models bundled with Keras. We started with a pretrained model by importing it from the Keras library and then setting the weight parameter in the model to ' imagenet.' Images were converted to appropriate dimensions according to the model and then features were extracted from images to form a part of the convolutional base. We used fully connected layers as the classifier to classify images to their respective classes depending on the probability of each class.</ns0:p><ns0:p>The top two models, Xception and InceptionV3, were chosen for EnsemV3X after obtaining the results. The two models are used to extract image features from the dataset, and their individual weight files are saved after training images. These weight files are then loaded into the respective classifiers of the two architectures and, the models are used to create an ensemble. Figure <ns0:ref type='figure'>7</ns0:ref> shows the ensemble model created from Xception and InceptionV3. Weight files obtained by running the models for InceptionV3 and Xception were then used to train the ensemble and obtain the results for the test data using the ensemble model.</ns0:p></ns0:div>
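A hedged sketch of the transfer-learning step described above follows: a pretrained base with frozen ImageNet weights plus a small fully connected classifier. The optimizer, head size, and file name are illustrative assumptions, not values from the paper.

```python
# Frozen pretrained base + new classifier head (sketch only).
from tensorflow.keras import layers, models
from tensorflow.keras.applications import Xception

base = Xception(weights='imagenet', include_top=False,
                input_shape=(299, 299, 3))
base.trainable = False  # keep the pretrained convolutional base fixed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation='relu'),
    layers.Dense(6, activation='softmax'),  # six scene classes
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(train, validation_data=test, epochs=70)
# model.save_weights('xception_intel.h5')  # weight file reused by the ensemble
```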
<ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>Learning curves <ns0:ref type='bibr' target='#b3'>(Amari, 1993)</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>Accuracy <ns0:ref type='bibr' target='#b25'>(Hossin and Sulaiman, 2015)</ns0:ref> is defined as the ratio of the number of correct predictions made by our model to the total number of predictions. Equation 3 represents the formula for accuracy.</ns0:p><ns0:formula xml:id='formula_2'>A = \frac{1}{n} \sum_{i=1}^{n} \frac{|M_i \cap N_i|}{|M_i \cup N_i|},<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>where M_i represents the given labels in the dataset, N_i represents the predicted labels in the dataset, and A denotes accuracy.</ns0:p><ns0:p>These terms can be understood in the context of a confusion matrix <ns0:ref type='bibr' target='#b9'>(Chow et al., 2013)</ns0:ref>. The confusion matrix is a metric that uses the four combinations of predicted and actual values. A true positive is a positive term that was predicted to be positive. A true negative is a negative term that was predicted to be negative. A false positive is a term with a negative actual class that was predicted to be positive. A false negative is a term with a positive actual class that was predicted to be negative.</ns0:p><ns0:p>The VGG16 and VGG19 confusion matrices are shown in Figures 13 and 14, respectively. Figures 15, 16, 17, and 18 depict the confusion matrices for the Xception, ResNet50, InceptionV3, and ensemble models, respectively. These confusion matrices represent the outcome of around 3,000 images used for testing. Similarly, other metrics can also be calculated from the confusion matrix obtained for each model. The label 0 denotes one of the six classes, and the same applies to the other labels. The confusion matrix shows how many images labeled class 0 actually belonged to class 0 and how many were incorrectly classified. Precision <ns0:ref type='bibr' target='#b25'>(Hossin and Sulaiman, 2015)</ns0:ref> and recall <ns0:ref type='bibr' target='#b25'>(Hossin and Sulaiman, 2015)</ns0:ref> are expressed as follows:</ns0:p><ns0:formula xml:id='formula_3'>P = \frac{1}{n} \sum_{i=1}^{n} \frac{|M_i \cap N_i|}{|N_i|},<ns0:label>(4)</ns0:label></ns0:formula><ns0:formula xml:id='formula_4'>R = \frac{1}{n} \sum_{i=1}^{n} \frac{|M_i \cap N_i|}{|M_i|},<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>where M_i represents the labels given in the dataset and N_i represents the labels predicted for instance i. Equation 4 depicts the formula for precision and Equation 5 the formula for recall. Lastly, the F1-score <ns0:ref type='bibr' target='#b25'>(Hossin and Sulaiman, 2015)</ns0:ref>, shown in Equation 6, can be calculated from precision and recall as follows:</ns0:p><ns0:formula xml:id='formula_5'>F_1 = \frac{1}{n} \sum_{i=1}^{n} \frac{2\,|M_i \cap N_i|}{|M_i| + |N_i|}.<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>The comparison of the performance of all models is presented in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>.</ns0:p></ns0:div>
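The reported metrics can be computed from the predictions as sketched below; scikit-learn is an assumption (the paper does not name its metrics implementation), the label arrays are placeholders, and 'weighted' averaging is assumed since the averaging mode is not specified.

```python
# Computing accuracy, precision, recall, and F1-score (sketch only).
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

y_true = [0, 1, 2, 2, 5, 3]  # placeholder ground-truth labels
y_pred = [0, 1, 2, 1, 5, 3]  # placeholder predicted labels
print('Accuracy :', accuracy_score(y_true, y_pred))
print('Precision:', precision_score(y_true, y_pred, average='weighted'))
print('Recall   :', recall_score(y_true, y_pred, average='weighted'))
print('F1-score :', f1_score(y_true, y_pred, average='weighted'))
print(confusion_matrix(y_true, y_pred))
```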
<ns0:figure xml:id='fig_1'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Inception Architecture</ns0:figDesc><ns0:graphic coords='6,258.64,63.78,179.76,108.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>proposed an original deep CNN inspired by Inception and named it Xception. Xception performs better than InceptionV3 on the ImageNet dataset and performs significantly better on large image classification datasets with nearly 350 million images and 17,000 classes. Figure 4 depicts the architecture of the Xception model.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 5 .Figure</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Components of CNN</ns0:figDesc><ns0:graphic coords='7,281.02,97.40,135.00,165.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 8 .Figure 9 .</ns0:head><ns0:label>89</ns0:label><ns0:figDesc>Figure 8. Training and Validation Metrics for InceptionV3</ns0:figDesc><ns0:graphic coords='8,148.82,63.77,399.42,162.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 10 .Figure 11 .Figure 12 .</ns0:head><ns0:label>101112</ns0:label><ns0:figDesc>Figure 10. Training and Validation Metrics for VGG16</ns0:figDesc><ns0:graphic coords='9,157.55,79.90,381.96,162.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 13 .Figure 15 .</ns0:head><ns0:label>1315</ns0:label><ns0:figDesc>Figure 13. Confusion Matrix for VGG16 Figure 14. Confusion Matrix for VGG19</ns0:figDesc><ns0:graphic coords='10,177.63,344.64,135.00,135.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>obtained after running the models are important sources for measuring the performance of the deep learning model. Training and validation curves are two types of learning curves calculated from training and validation datasets (Xu and Goodacre, 2018) respectively. Loss (Hastie et al., 2009) helps us understand how bad a model performs with every epoch and the extent of its performance in a particular data sample. Meanwhile, accuracy helps denote how accurate the model prediction is to the actual data. Training and the validation curves for each of the architecture are demonstrated. On the one hand, the plot for training accuracy and training loss shows the outcome for the training data. On the other hand, the plot for validation accuracy and loss show the outcome for the validation data. Validation data are used to validate outcomes from the training data and check what the model has learnt from the data. The distance between curves shows the performance of the model.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 8 Figure 17 .Figure 18 .</ns0:head><ns0:label>81718</ns0:label><ns0:figDesc>Figure 8 illustrates the training versus validation accuracy and loss curve for the InceptionV3 model. Similarly, Figure 9 presents the same curves for the ResNet50 model. VGG16 and VGG19 training versus validation accuracy and loss curves are depicted in Figures 10 and 11, respectively. Figure 12 shows the training versus validation accuracy and loss curve for the Xception model.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Performance Metrics</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Precision</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>F1-score</ns0:cell></ns0:row><ns0:row><ns0:cell>InceptionV3</ns0:cell><ns0:cell>89%</ns0:cell><ns0:cell>89%</ns0:cell><ns0:cell>89%</ns0:cell><ns0:cell>89</ns0:cell></ns0:row><ns0:row><ns0:cell>ResNet50</ns0:cell><ns0:cell>37%</ns0:cell><ns0:cell>32%</ns0:cell><ns0:cell>37%</ns0:cell><ns0:cell>29</ns0:cell></ns0:row><ns0:row><ns0:cell>VGG16</ns0:cell><ns0:cell>89%</ns0:cell><ns0:cell>89%</ns0:cell><ns0:cell>89%</ns0:cell><ns0:cell>89</ns0:cell></ns0:row><ns0:row><ns0:cell>VGG19</ns0:cell><ns0:cell>87%</ns0:cell><ns0:cell>87%</ns0:cell><ns0:cell>87%</ns0:cell><ns0:cell>87</ns0:cell></ns0:row><ns0:row><ns0:cell>Xception</ns0:cell><ns0:cell>90%</ns0:cell><ns0:cell>90%</ns0:cell><ns0:cell>90%</ns0:cell><ns0:cell>90</ns0:cell></ns0:row><ns0:row><ns0:cell>EnsemV3X</ns0:cell><ns0:cell>91%</ns0:cell><ns0:cell>91%</ns0:cell><ns0:cell>91%</ns0:cell><ns0:cell>91</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Delhi, 23th March 2021
Our Response to the Reviewer's Comments
Title: Ensembled Model of InceptionV3 and Xception - EnsemV3X: A Novel Deep Learning Architecture for Multi-Label Scene Classification
Respected reviewer,
Thank you for your useful comments and suggestions on the modification of our manuscript. We have modified the manuscript accordingly, and detailed corrections are listed below point by point:
1. Different convolution kernels will lead to different effects of feature extraction. How to choose the appropriate weights as a general feature because the weights files are best for the default models with special kernels, but there is no guarantee that they will be universal?
Ans. ImageNet weights are known to be helpful for image recognition and are regularly used for scene recognition tasks. They have been used with these kernels before and hence were used by us, albeit on a different dataset.
2. Authors make an ensemble feature recognition with fully-connection ways based on the outputs of five models. It still needs the weights of connection. How to determine these weights in order to get the best results?
Ans. The process of transfer learning requires weights, and hence the ImageNet weights were used when running the five models for classification. The best two models were chosen, and their weights were used to run the ensemble, as shown by a figure in the paper.
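For illustration, a minimal sketch of this weight hand-off (assuming Keras; the builder function and file names are hypothetical, not taken from the paper):

```python
# Train each backbone separately, save its weights, reload for the ensemble.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3, Xception

def build(backbone):
    base = backbone(weights='imagenet', include_top=False,
                    input_shape=(299, 299, 3))
    return models.Sequential([base, layers.GlobalAveragePooling2D(),
                              layers.Dense(6, activation='softmax')])

inception, xception = build(InceptionV3), build(Xception)
# ...after fitting each model separately:
inception.save_weights('inceptionv3_intel.h5')
xception.save_weights('xception_intel.h5')
# When assembling the ensemble, the saved weights are restored:
inception.load_weights('inceptionv3_intel.h5')
xception.load_weights('xception_intel.h5')
```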
3. The definition of loss should be shown in order to be clear.
Ans. The same has been added in the paper.
4. Fig.9 to 11 are not clear. For example it seems the training accuracy has not be stable at 70 epochs. What is the end condition? Do the accuracy and the loss have the same coordinates? What is the relation of them? More explanations are necessary. The similar problems are for figure 13 to 18.
Ans. The quality of the images has been enhanced. The decision to run the model for 70 epochs was made due to hardware constraints; therefore, what happens after 70 epochs is not visible. The accuracy and loss share the same axis, but the coordinates depend on the dataset. There is no end condition, as we wanted to show the behaviour of the model with an outdoor dataset and limited the number of epochs to show the result. Similarly, for Figures 13-18, the numbers in the confusion matrices are the numbers of samples and are generated by the code, which has been highlighted in the paper.
" | Here is a paper. Please give your review comments after reading it. |
115 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. The monophyly of taxa is an important attribute of a phylogenetic tree. A lack of it may hint at shortcomings of either the tree or the current taxonomy, or can indicate cases of incomplete lineage sorting or horizontal gene transfer. Whichever is the reason, a lack of monophyly can misguide subsequent analyses. While monophyly is conceptually simple, it is manually tedious and time consuming to assess on modern phylogenies of hundreds to thousands of species. Results. The R package MonoPhy allows assessment and exploration of monophyly of taxa in a phylogeny. It can assess the monophyly of genera using the phylogeny only, and with an additional input file any other desired higher order taxa or unranked groups can be checked as well. Conclusion.</ns0:p><ns0:p>Summary tables, easily subsettable results and several visualization options allow quick and convenient exploration of monophyly issues, thus making MonoPhy a valuable tool for any researcher working with phylogenies.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Phylogenetic trees are undoubtedly crucial for most research in ecology or evolutionary biology. Whether one is studying trait evolution (e.g. <ns0:ref type='bibr' target='#b2'>Coddington 1988;</ns0:ref><ns0:ref type='bibr' target='#b5'>Donoghue 1989)</ns0:ref>, diversification (e.g. <ns0:ref type='bibr' target='#b8'>Gilinsky & Good 1991;</ns0:ref><ns0:ref type='bibr' target='#b9'>Hey 1992)</ns0:ref>, phylogeography <ns0:ref type='bibr' target='#b0'>(Avise et al. 1987)</ns0:ref>, or simply relatedness within a group (e.g. <ns0:ref type='bibr' target='#b3'>Czelusniak et al. 1982;</ns0:ref><ns0:ref type='bibr' target='#b23'>Shochat & Dessauer 1981;</ns0:ref><ns0:ref type='bibr' target='#b24'>Sibley & Ahlquist 1981)</ns0:ref>, bifurcating trees representing hierarchically nested relationships are central to the analysis. Exactly because phylogenies are so fundamental to the inferences we make, we need tools that enable us to examine how reconstructed relationships compare with existing assumptions, particularly taxonomy. We have computational approaches to estimate confidence for parts of a phylogeny <ns0:ref type='bibr' target='#b6'>(Felsenstein 1985;</ns0:ref><ns0:ref type='bibr' target='#b10'>Larget & Simon 1999)</ns0:ref> or measuring distance between two phylogenies <ns0:ref type='bibr' target='#b19'>(Robinson 1971)</ns0:ref>, but assessing agreement of a new phylogeny with existing taxonomy is often done manually. This does not scale to modern phylogenies of hundreds to thousands of taxa. Modern taxonomy seeks to name clades: an ancestor and all of its descendants (the descendants thus form a monophyletic group). Discrepancies between the new phylogenetic hypothesis and the current taxonomic classification may indicate that the phylogeny is wrong or poorly resolved. Alternatively, a well-supported phylogeny that conflicts with currently recognized groups might suggest that the taxonomy should be reformed. To identify such discrepancies, one can simply assess whether the established taxa are monophyletic. A lack of group monophyly however, can also be an indicator for conflict between gene trees and the species tree, which may be a result of incomplete lineage sorting or horizontal gene transfer. In any case, monophyly issues in a phylogeny suggest a potential error that can affect downstream analysis and inference. For example, it will mislead ancestral trait or area reconstruction or introduce false signals when assigning unsampled diversity for diversification analyses (e.g. in diversitree (FitzJohn 2012) or BAMM <ns0:ref type='bibr' target='#b16'>(Rabosky 2014)</ns0:ref>). In general, a lack of monophyly can blur patterns we might see in the data otherwise.</ns0:p><ns0:p>As this problem is by no means new, approaches to solve it have been developed earlier, particularly for large scale sequencing projects in bacteria and archaea, for which taxonomic issues are notoriously challenging. The program GRUNT <ns0:ref type='bibr' target='#b4'>(Dalevi et al. 2007</ns0:ref>) uses a tip to root walk approach to group, regroup, and name clades according to certain user defined criteria. The subsequently developed 'taxonomy to tree' approach <ns0:ref type='bibr' target='#b12'>(McDonald et al. 2012</ns0:ref>) matches existing taxonomic levels onto newly generated trees, allowing classification of unidentified sequences and proposal of changes to the taxonomic nomenclature based on tree topology. 
Finally, <ns0:ref type='bibr' target='#b11'>Matsen & Gallagher (2012)</ns0:ref> have developed algorithms that find mismatches between taxonomy and phylogeny using a convex subcoloring approach.</ns0:p><ns0:p>The new tool presented here, the R package MonoPhy, is a quick and user-friendly method for assessing monophyly of taxa in a given phylogeny. While the R package ape <ns0:ref type='bibr' target='#b14'>(Paradis et al. 2004</ns0:ref>) already contains the helpful function is.monophyletic, which also enables testing for monophyly, the functionality of MonoPhy is much broader. Apart from assessing monophyly for all groups and focal taxonomic levels in a tree at once, MonoPhy is also not limited to providing a simple 'yes-or-no' output, but rather enables the user to explore underlying causes of nonmonophyly. In the following, we outline the structure and usage of the package and provide examples to demonstrate its functionality. For a more usage-focused and application-oriented treatment, one should refer to the tutorial vignette (vignette('MonoPhyVignette')), which contains stepwise instructions for the different functions and their options. For any other package details consult the documentation (help('MonoPhy')).</ns0:p></ns0:div>
<ns0:div><ns0:head>Description</ns0:head><ns0:p>The package MonoPhy is written in R (R Development Core Team 2014, http://www.Rproject.org/), an increasingly important language for evolutionary biology. It builds on the existing packages ape <ns0:ref type='bibr' target='#b14'>(Paradis et al. 2004)</ns0:ref>, phytools <ns0:ref type='bibr' target='#b17'>(Revell 2012)</ns0:ref>, phangorn <ns0:ref type='bibr' target='#b20'>(Schliep 2011)</ns0:ref>, RColorBrewer <ns0:ref type='bibr' target='#b13'>(Neuwirth 2014</ns0:ref>) and taxize <ns0:ref type='bibr' target='#b1'>(Chamberlain & Szocs 2013)</ns0:ref>. A list of the currently implemented commands is given in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. Note that in the code and this paper, we distinguish between tips, the organisms at the tip of the tree, and higher order taxa. Functions with 'taxa' only return information about higher order taxa, not tips. The main function -AssessMonophyly -evaluates the monophyly of each higher order taxon by identifying the most recent common ancestor (MRCA) of a collection of tips (e.g. all species in a genus), and then returning all descendants of this node. The taxon is monophyletic if the number of its members (tips) equals the number of descendants of its MRCA. If there are more descendants than taxon members, the function will identify and list the tips that do not belong to the focal taxon and we then call these tips 'intruders'. Accordingly, we will further refer to the taxa whose monophyly was disrupted by these 'intruders' as 'intruded'. Note that if two taxa are reciprocally disrupting each other's monophyly, certain tips of intruded taxa will often be intruders </ns0:p></ns0:div>
<ns0:div><ns0:head>Function name Description</ns0:head></ns0:div>
<ns0:div><ns0:head>AssessMonophyly</ns0:head><ns0:p>Runs the main analysis to assess monophyly of groups on a tree</ns0:p></ns0:div>
<ns0:div><ns0:head>GetAncNodes</ns0:head><ns0:p>Returns MRCA nodes for taxa.</ns0:p></ns0:div>
<ns0:div><ns0:head>GetIntruderTaxa</ns0:head><ns0:p>Returns lists of taxa that cause monophyly issues for another taxon.</ns0:p></ns0:div>
<ns0:div><ns0:head>GetIntruderTips</ns0:head><ns0:p>Returns lists of tips that cause monophyly issues for a taxon.</ns0:p></ns0:div>
<ns0:div><ns0:head>GetOutlierTaxa</ns0:head><ns0:p>Returns lists of taxa that have monophyly issues due to outliers.</ns0:p></ns0:div>
<ns0:div><ns0:head>GetOutlierTips</ns0:head><ns0:p>Returns lists of tips that cause monophyly issues for their taxon by being outliers.</ns0:p></ns0:div>
<ns0:div><ns0:head>GetResultMonophyly</ns0:head></ns0:div>
<ns0:div><ns0:head>Returns an extended table of the results</ns0:head></ns0:div>
<ns0:div><ns0:head>GetSummaryMonophyly</ns0:head></ns0:div>
<ns0:div><ns0:head>Returns a summary table of the results</ns0:head></ns0:div>
<ns0:div><ns0:head>PlotMonophyly</ns0:head><ns0:p>Allows several visualizations of the result. Biologically, identifying a few intruders may suggest that the definition of a group should be expanded; observing some group members in very different parts of the tree than the rest of their taxon may instead suggest that these individuals were misidentified, that their placement is the result of contaminated sequences or due to horizontal gene transfer between members of two remote clades. Moreover, the approach as described above would suggest that the clades that are intruded by the outlier tips would in turn be intruders to the taxon the outliers belong to, which intuitively would not make sense. We thus implemented an option to specify a cutoff value, which defines the minimal proportion of tips among the descendants of a taxon's MRCA that are labeled as being actual members of that taxon. If a given group falls below this value, the function will find the 'core clade' (a subclade for which the proportion matches or exceeds the cutoff value) by moving tipward, always following the descendant node with the greater number of tips in the focal taxon (absolute, relative if tied), and at each step evaluating the subtree rooted at that node to see if it exceeds the cutoff value. Once such a subtree is found, it is then called the 'core clade', and taxon members outside this clade are then called 'outliers'. As there is no objective criterion to decide at what point individuals should be considered outliers, a reasonable cutoff value must be chosen by the user.</ns0:p><ns0:p>If the tree's tip labels are in the format 'Genus_speciesepithet', the genus names will be extracted and used as taxon assignments for the tips. If the tip labels are in another format, or other taxonomic levels should be tested, taxon names can be assigned to the tips using an input file. To avoid having to manually compose a taxonomy file for a taxon-rich phylogeny, MonoPhy can automatically download desired taxonomic levels from ITIS or NCBI using taxize <ns0:ref type='bibr' target='#b1'>(Chamberlain & Szocs 2013)</ns0:ref>.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2015:12:8131:1:0:NEW 9 Mar 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>All inference results are stored in a solution object, from which the other functions can extract information (e.g. summary tables, intruder and outlier lists) for one or more higher-level taxa of interest. PlotMonophyly reconstructs and plots the monophyly state of the tips using phytools <ns0:ref type='bibr' target='#b17'>(Revell 2012)</ns0:ref>. Apart from the basic monophyly plot (Fig. <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>), branches can be coloured according to taxonomic groups or to highlight intruders and outliers. Monophyletic groups can be collapsed and plots can be saved directly to PDF to facilitate the visualization of large trees.</ns0:p><ns0:p>It is important to remember that the results produced by the package are merely the product of the used phylogeny and the available taxonomic information. It thus only makes the mismatches between those accessible, but does not reveal any more than that. The decision of whether the result suggests problems in the phylogeny or the taxonomy, whether a tip should be considered a rogue taxon and be removed or whether gene tree -species tree conflicts should be investigated, is entirely up to the user's judgment.</ns0:p><ns0:p>MonoPhy is available through CRAN (https://cran.r-project.org/package=MonoPhy/) and is developed on GitHub (https://github.com/oschwery/MonoPhy). Intended extensions and fixes can be seen in the issues list of the package's GitHub page. Among the planned extensions of the package are: multiple trees, displaying the result for specific subtrees, proposing monophyletic subgroups, enabling formal tests for monophyly (incorporating clade support) and providing increased plot customizability.</ns0:p></ns0:div>
<ns0:div><ns0:head>Examples</ns0:head><ns0:p>Our first example makes use of the example files contained in the package. They come from a phylogeny of the plant family Ericaceae <ns0:ref type='bibr' target='#b22'>(Schwery et al. (2015)</ns0:ref> pruned to 77 species; original data see <ns0:ref type='bibr' target='#b21'>Schwery et al. (2014)</ns0:ref>) and two taxon files assigning tribes and subfamilies to the tips (in both files, errors have been introduced for demonstration purposes; see code and output for both examples in Supplementary Data). Running the main analysis command AssessMonophyly on genus level (i.e. tree only) and tribe level (i.e. tree plus taxonomy file) using standard settings took 0.045 and 0.093 seconds respectively on a MacBook Pro with 2.4 GHz Intel Core i5 and 8GB Ram. We could now use the remaining commands to extract the information of interest from the saved output object (e.g. summary tables, lists of problem taxa, etc.). The basic monophyly plot for the genus level analysis is displayed for a subclade of the tree in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref> (the figure of the full tree is shown in Fig. <ns0:ref type='figure' target='#fig_1'>S1</ns0:ref>).</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2015:12:8131:1:0:NEW 9 Mar 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>For the second example, we demonstrate the package's performance on a tree of 31,749 species of Embriophyta <ns0:ref type='bibr' target='#b25'>(Zanne et al. 2014</ns0:ref>; data see <ns0:ref type='bibr' target='#b26'>Zanne et al. 2013)</ns0:ref>, using an outlier-cutoff of 0.9 this time. Just checking monophyly for genera took 1.78 hours, but revealed that 22% of genera on the tree are not monophyletic, while around half of all genera are only represented by one species each. Furthermore, we can see that the largest monophyletic genus is Iris (139 tips), that Justicia had the most intruders (13 tips) and that Acacia produced the most outliers (99 tips). Finally, with 2337 other tips as descendants of their MRCA, the 3 species of Aldina are most spread throughout the tree. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2015:12:8131:1:0:NEW 9 Mar 2016) Manuscript to be reviewed Computer Science themselves: if the phylogeny is ((A1,B1),(A2,B2)), where A and B are genera, it's not clear if the A tips are intruding in B or the B tips are intruding in A.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Fig. 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Fig. 1. Monophyly plot of the genera of Ericaceae.Close-up on subfamily Vaccinioideae only. Branches of the tree coloured according to monophyly status. We can see that Vaccinium has two outliers and that its intruders are Paphia, Dimorphanthera, Agapetes and Gaylussacia.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Functions of the package MonoPhy.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2015:12:8131:1:0:NEW 9 Mar 2016) Manuscript to be reviewed Computer Science</ns0:note>
</ns0:body>
" | "Editor's comments
As you can see the reviewers were generally positive about your article. All reviewers, however, raised several specific comments that should be addressed in a revised manuscript. These comments include a better discussion of monophyly and reasons why this property may not be satisfied, as well as small changes to the software to make it more useable.
Please find our responses to the reviewers’ comments below, as well as in the revised manuscript. The revised version of the code is available on GitHub and should appear on CRAN shortly.
Reviewer Comments
Reviewer 1 (Anonymous)
Basic reporting
The article is easy to understand and well written.
My major comment is that the authors state that higher order taxa not being monophyletic might be caused by problems in the taxonomy (i.e., mis-annotation) or problems with the data (i.e., contamination). One of the most important (but not discussed) reasons is gene tree/species tree conflict. This could be a result of incomplete lineage sorting (ILS) or hortizontal gene transfer (HGT). I think some discussion about this would help strengthen the paper. For example, an intruder species might be due to error (as suggested by the authors), or it might be a real biological finding due to HGT. The authors could then motivate MonoPhy as not only being able to identify error but also potentially HGT events.
That is a great point that we seemed to have overlooked. Indeed, while the other cases point out important errors, it would be wrong to exclude the possibility of a biologically meaningful finding (although one could argue that finding errors in the taxonomy is a biologically meaningful result as well). In any case, the paper and the package will certainly be stronger after including this important alternative reason for non-monophyly. We added the necessary points to the manuscripts in the abstract, and on lines 45 ff., 100 ff. and 131ff.
Finally, there were a few minor corrections needed:
Page 2, Line 64: higher taxon -> higher order taxon
Corrected, in Abstract as well.
Page 3, Line 89: from by all -> from all
That would actually change the meaning of the sentence, but we have rephrased the sentence to be clearer.
Experimental design
No comments
Validity of the findings
No comments
Comments for the author
I have tested this software and it is easy to use. My only suggestion is that it would be helpful if the taxonomy file allowed header information. Thus, instead of using this command:
GetIntruderTips(solution1, taxa='Ericoideae', taxlevels=2)
One could use this command:
GetIntruderTips(solution1, taxa='Ericoideae', taxlevels='order')
As I have many different metadata files with header information beyond just taxonomy (i.e., geographic location, host organism, etc..) I could then easily write a script that could use any of my metadata files without needing to know exactly which column number is the group I'm interested in, I could just use the column name.
This is a very good suggestion, which will certainly make the package more useful. We have implemented the option to run the commands with or without header (using column names in the former case and column numbers in the latter) and the outputs show the same names now. The tutorial vignette and help files were updated accordingly.
Reviewer 2 (Frederick Matsen IV)
Basic reporting
In this paper, Schwery and O’Meara describe a new R package they have written, MonoPhy, for assessing taxonomic monophyly of phylogenetic trees. The package takes in phylogenetic trees with taxonomic leaf labels, and from that input makes plots and finds “intruder” taxa, which are taxa that disrupt monophyly. This should be a helpful package for researchers using R. The paper is written as a software announcement, and is generally suitable for that format.
However, I was left wanting clearer definitions of the objects under discussion. In particular, I was confused by the definitions of “intruders” and “outliers”. If I parse the definition of intruders correctly, “intruded” taxa will commonly themselves be intruders as well. For example, imagine that we have a large clade that is monophyletic with the exception of two somewhat widely spaced leaves. These two leaves will be marked as “intruders”, as should be the case, but by the definition the leaves below their MRCA will also be marked as intruders, which seems strange. Is this the case? I shouldn’t have to read the source to understand.
In the example case above, those two widely spaced leaves would indeed be marked as ‘intruders’, and the leaves of the other taxa below their MRCA would in turn be labeled ‘intruders’ of that clade as well. This would absolutely be strange, and not biologically sensible (in most cases), which is why we implemented the outlier cutoff value option (which your next question is concerned with), so those far spaced leaves will not only be ‘intruders’ of the neighboring clades, but also ‘outliers’ of their own taxon, instead of regular members of the taxon’s ‘core clade’. This way, the leaves below the outlier’s MRCA (the neighboring taxa) will not be considered intruders, which should make a lot more sense, biologically.
We have adjusted this part of the manuscript accordingly to make the distinction clearer, see line 81 ff.
The definition of outliers starts with the sentence “We thus implemented an option to specify a cutoff value, which gives the minimal proportion of tips among the descendants of a taxon’s MRCA that are actual members of that taxon.” I think this is supposed to mean “are assumed to be actual members of that taxon”? Again, it’s important to make these definitions clear.
We reworded this part to make it clearer. It now says “that are labeled as being actual members of that taxon”, making clear that this assumption relies on the taxonomic input information. See lines 103 ff.
Please describe the algorithm the code uses to find these various characteristics of the trees in the paper.
We have added a sketch of the algorithm (lines 83 ff. for the basic algorithm, 103 ff. for the algorithm dealing with remote leaves of a group (‘outliers’)). We do not explore the performance properties of the algorithm.
The paper does not cover relevant prior literature. In fact, I couldn’t find any reference to computational approaches to assessing concordance between a phylogeny and a taxonomy. Here are some relevant papers:
Dalevi, D., Desantis, T. Z., Fredslund, J., Andersen, G. L., Markowitz, V. M., & Hugenholtz, P. (2007). Automated group assignment in large phylogenetic trees using GRUNT: GRouping, Ungrouping, Naming Tool. BMC Bioinformatics, 8, 402.
This next paper (cited 667 times) builds on that one, developing the `tax2tree` tool:
McDonald, D., Price, M. N., Goodrich, J., Nawrocki, E. P., DeSantis, T. Z., Probst, A., … Hugenholtz, P. (2011). An improved Greengenes taxonomy with explicit ranks for ecological and evolutionary analyses of bacteria and archaea. The ISME Journal, 6(3), 610–618.
We also have written a paper in the area:
Matsen, F., & Gallagher, A. (2012). Reconciling taxonomy and phylogenetic inference: formalism and algorithms for describing discord and inferring taxonomic roots. Algorithms for Molecular Biology: AMB, 7(1), 8.
In it we address the ambiguity of intruder versus intruded (described above) by casting it as a minimization problem, which is NP-complete but fixed-parameter tractable.
We apologize for this error – missing relevant literature is a major problem. Thank you for making us aware of this part of the literature. We have mentioned the respective work and references to the manuscript on line 53 ff.
Experimental design
This paper is a software announcement rather than a research paper, so the “research question” doesn’t quite fit here.
The code is well commented, and the vignette does a nice job of showing functionality of the package. The reference manual seems complete. I could not find any sort of tests, which is unfortunate.
We have added tests now, as they are indeed appropriate, especially in the light of future expansions and changes to the package.
I was surprised that the code does not handle multifurcating trees.
Adding the possibility of polytomies was planned and has now been implemented.
Validity of the findings
No Comments, as this is a software announcement.
Comments for the author
My recommendation is somewhere between minor and major revisions. If it would seem helpful for me to look at the paper again, I'd be happy to do so, but I'm not going to insist.
Reviewer 3 (Anonymous)
Basic reporting
The manuscript appears to meet all the standards for the journal.
Experimental design
The experimental design seems sound, apart from one small point. It would seem better not to report on the monophyly of taxa with one representative. The outcome is trivial, as there is no opportunity to not be monophyletic. Hence, restricting the evaluation to taxa with two or more representatives allows more meaningful results, and perhaps might save some computation. I do agree that reporting the number of taxa with a single representative is useful, but considering them as monophyletic when there is no possible way for them not to be monophyletic seems silly.
Referring to monotypic taxa as monophyletic when that is all they can be is a valid point. However, in the analysis output tables for both the results and the summary, we actually distinguish between ‘monophyletic’, ‘non-monophyletic’ and ‘monotypic’, to avoid said inflation of the number of monophyletic taxa. The only instance where monophyletic and monotypic taxa are treated the same is when plotting the results. We do however feel that this is sensible, as the plot is mainly intended to highlight the issues (non-monophyletic taxa and the tips disrupting their monophyly), and we would deem adding an additional color to distinguish the monotypic taxa as adding unnecessary complexity to the plot.
Validity of the findings
The findings seem valid.
Comments for the author
Although my determination is accept as is, I think you should consider the point raised regarding evaluating and reporting monophyly for taxa with one representative.
" | Here is a paper. Please give your review comments after reading it. |
116 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. While there is no cure for Alzheimer's disease (AD), early diagnosis and accurate prognosis of AD may enable or encourage lifestyle changes, neurocognitive enrichment, and interventions to slow the rate of cognitive decline. The goal of our study was to develop and evaluate a novel deep learning algorithm to predict mild cognitive impairment (MCI) to AD conversion at three years using longitudinal and whole-brain 3D MRI.</ns0:p><ns0:p>Methods. This retrospective study consisted of 320 normal cognition (NC), 554 MCI, and 237 AD patients. Longitudinal data include T1-weighted 3D MRI obtained at initial presentation with diagnosis of MCI and at 12-month follow up. Whole-brain 3D MRI volumes were used without a priori segmentation of regional structural volumes or cortical thicknesses. MRIs of the AD and NC cohort were used to train a deep learning classification model to obtain weights to be applied via transfer learning for prediction of MCI patient conversion to AD at three years. Two (zero-shot and fine tuning) transfer learning methods were evaluated. Three different convolutional neural network (CNN) architectures (sequential, residual bottleneck, and wide residual) were compared. Data were split into 75% and 25% for training and testing, respectively, with 4-fold cross validation. Prediction accuracy was evaluated using balanced accuracy. Heatmaps were generated.</ns0:p><ns0:p>Results. The sequential convolutional approach yielded slightly better performance than the residual-based architecture, the zero-shot transfer learning approach yielded better performance than fine tuning, and CNN using longitudinal data performed better than CNN using a single timepoint MRI in predicting MCI conversion to AD. The best CNN model for predicting MCI conversion to AD at three years yielded a balanced accuracy of 0.793. Heatmaps of the prediction model showed regions most relevant to the network including the lateral ventricles, periventricular white matter and cortical gray matter.</ns0:p><ns0:p>Conclusions. This is the first convolutional neural network model using longitudinal and whole-brain 3D MRIs without extracting regional brain volumes or cortical thicknesses to predict future MCI to AD conversion at 3 years. This approach could lead to early prediction of patients who are likely to progress to AD and thus may lead to better management of the disease.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Background</ns0:head><ns0:p>Alzheimer's disease (AD) is a progressive neurodegenerative disease characterized by loss of memory and other cognitive functions <ns0:ref type='bibr' target='#b32'>(McKhann et al. 2011)</ns0:ref>. Mild Cognitive Impairment (MCI) is considered a transitional state between normal aging and dementia. Many patients progress from MCI to AD, but others remain stable without developing AD. Although diagnoses of MCI and AD are typically made using neuropsychological tests <ns0:ref type='bibr' target='#b38'>(Petersen et al. 1999;</ns0:ref><ns0:ref type='bibr' target='#b16'>Jak et al. 2009)</ns0:ref>, imaging methods are also used to diagnose AD and to monitor disease progression because they provide neural correlates of the underlying brain dysfunction in a longitudinal non-invasive manner <ns0:ref type='bibr' target='#b18'>(Johnson et al. 2012)</ns0:ref>. While there is no cure for AD, early diagnosis and accurate prognosis may enable or encourage lifestyle changes, neurocognitive enrichment, and therapeutic interventions that strive to improve symptoms, or at least slow their rate of decline, thereby improving the quality of life <ns0:ref type='bibr' target='#b6'>(Epperly et al. 2017)</ns0:ref>.</ns0:p><ns0:p>Machine learning (ML) is increasingly being used in medicine from disease classification to prediction of clinical progression (de Bruijne 2016; <ns0:ref type='bibr' target='#b7'>Erickson et al. 2017)</ns0:ref>. ML uses algorithms to learn the relationship amongst different data elements to inform outcomes. Neural networks, a form of ML, are made up of a collection of connected nodes that model the neurons present in a human brain <ns0:ref type='bibr' target='#b8'>(Graupe 2013)</ns0:ref>. Each connection, similar to a synapse, transmits and receives signals to other nodes. Each node and the connections it forms are initialized with weights which are adjusted throughout training and create mathematical relationships between the input data and the outcomes. In contrast to traditional analysis methods such as logistic regression, neural networks do not require relationships between different input variables and the outcomes to be explicitly specified a priori. In radiology, ML can accurately detect lung nodules on chest X-rays <ns0:ref type='bibr' target='#b10'>(Harris et al. 2019)</ns0:ref>. In cardiology, ML can detect abnormal EKG patterns <ns0:ref type='bibr' target='#b19'>(Johnson et al. 2018</ns0:ref>). ML has also been used to estimate risk, such as in the Framingham Risk Score for coronary heart disease <ns0:ref type='bibr' target='#b0'>(Alaa et al. 2019)</ns0:ref>, and to guide antithrombotic therapy in atrial fibrillation <ns0:ref type='bibr' target='#b29'>(Lip et al. 2010)</ns0:ref> and defibrillator implantation in hypertrophic cardiomyopathy <ns0:ref type='bibr' target='#b35'>(O'Mahony et al. 2014)</ns0:ref>. Convolutional neural networks (CNNs), a deep-learning method, are widely used for image analysis and analysis of complex data <ns0:ref type='bibr' target='#b25'>(Lecun et al. 1998;</ns0:ref><ns0:ref type='bibr' target='#b24'>Krizhevsky et al. 2012;</ns0:ref><ns0:ref type='bibr' target='#b44'>Simonyan & Zisserman 2014)</ns0:ref>.</ns0:p><ns0:p>Deep learning classification amongst normal cognition (NC), MCI and AD based on magnetic resonance imaging (MRI) data have been reported <ns0:ref type='bibr' target='#b4'>(Cheng et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b23'>Korolev et al. 
2017;</ns0:ref><ns0:ref type='bibr'>Wen et al. 2020)</ns0:ref>. By contrast, there are comparatively fewer studies that reported prediction of MCI to AD conversion using deep learning of MRI data <ns0:ref type='bibr' target='#b27'>(Lian et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b28'>Lin et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b30'>Liu et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b42'>Shmulev & Belyaev 2018;</ns0:ref><ns0:ref type='bibr' target='#b1'>Basaia et al. 2019;</ns0:ref><ns0:ref type='bibr'>Wen et al. 2020)</ns0:ref>. A few ML studies used extracted brain structures or cortical thicknesses, and some used 3D patches from predetermined locations across the brain, but not whole-brain MRI data, to predict MCI to AD conversion <ns0:ref type='bibr' target='#b27'>(Lian et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b30'>Liu et al. 2018;</ns0:ref><ns0:ref type='bibr'>Wen et al. 2020)</ns0:ref>. Most of the few prediction studies used single timepoint MRI data. To our knowledge there are only two studies that predicted disease progression using longitudinal imaging data. Bhagwat et al. used a neural network (albeit not deep learning) and extracted cortical thicknesses from MRIs at two time points to predict decline in Mini-Mental Status Exam (MMSE) scores <ns0:ref type='bibr' target='#b2'>(Bhagwat et al. 2018)</ns0:ref>. Ostertag et al. used a CNN model on whole-brain MRI at two time points to predict decline in MMSE score but did not test their model on an independent testing dataset <ns0:ref type='bibr' target='#b36'>(Ostertag et al. 2019)</ns0:ref>. These two studies mixed NC, MCI and AD participants and thus accuracies are not applicable to prediction of MCI to AD conversion. To our knowledge, there are no published studies to date on deep learning to predict MCI to AD conversion using longitudinal and whole-brain 3D MRI.</ns0:p><ns0:p>The goal of our study was thus to develop and evaluate a novel deep-learning algorithm to predict MCI to AD conversion at three years using longitudinal and whole-brain 3D MRI. Longitudinal data include MRI obtained at initial presentation with diagnosis of MCI and at 12-month follow up. Whole-brain 3D MRI volumes were used without a priori segmentation of regional structural volumes or cortical thicknesses. Several convolutional model architectures, transfer learning methods, and methods of merging longitudinal whole-brain 3D MRI data were evaluated to derive the final optimal deep-learning predictive model.</ns0:p></ns0:div>
<ns0:div><ns0:head>Methods</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref> shows the overall design of the experiment. 3D MRIs of the AD and NC cohort were used to train a CNN classification model to obtain weights for transfer learning to be used in the CNN prediction of MCI patient conversion to AD in 3 years. Two (zero-shot and fine tuning) transfer learning methods for prediction were evaluated <ns0:ref type='bibr' target='#b37'>(Pan & Yang 2010)</ns0:ref>. The zero-shot transfer method used the intact weights obtained from the NC-AD classification without any additional training. The fine-tuning transfer method kept the weights in the convolutional layers frozen while allowing the remaining fully connected layers to change during additional training against the MCI images.</ns0:p></ns0:div>
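For illustration, the fine-tuning variant can be sketched in Keras along the following lines (the file path, learning rate, and variable names are hypothetical placeholders, not values from the study, and a flat, non-nested layer list is assumed):

import tensorflow as tf

# Hypothetical path to the CNN previously trained on the AD-vs-NC classification task.
model = tf.keras.models.load_model('ad_nc_classifier.h5')

# Zero-shot transfer would apply this model to MCI images with no further training.
# Fine-tuning transfer freezes the convolutional layers and retrains only the
# fully connected (dense) layers against the MCI images.
for layer in model.layers:
    layer.trainable = isinstance(layer, tf.keras.layers.Dense)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss='binary_crossentropy', metrics=['accuracy'])
# model.fit(mci_images, mci_labels, ...)  # labels: sMCI = 0, pMCI = 1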
<ns0:div><ns0:head>Participants</ns0:head><ns0:p>Data used in this study were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). Patients were taken from the ADNI1, ADNIGO, ADNI2, and ADNI3 patient sets. For the prediction task, inclusion criteria were patients diagnosed with MCI at baseline with MRI taken at baseline and ~12 months after baseline, and with a final diagnosis at 3 years post baseline of either MCI (labeled as stable MCI or sMCI) or AD (labeled as progressive MCI or pMCI). Patients who converted from MCI to AD before their 12-month follow up image were excluded since the analysis of their longitudinal image at that point would represent a diagnostic classification, not a prediction.</ns0:p></ns0:div>
<ns0:div><ns0:p>For the AD-NC classification to obtain weights for transfer learning, a separate group of participants with a baseline diagnosis of AD or NC with images taken at baseline and ~12 months from baseline were selected. Data for training/validation and testing were randomly split up as 70% and 30%, respectively. The training/validation set consisted of 387 patients (160 AD and 227 NC), while the independent testing set consisted of 170 patients (77 AD and 93 NC). Using 4-fold cross validation on the training/validation set resulted in 52.5% for training and 17.5% for validation from the complete data in each fold split. The networks were trained to classify the patients between AD and NC against the ground truth diagnosis of each patient using ADNI criteria. For all experiments the study size represents the total available patients in the ADNI database fully meeting all inclusion criteria.</ns0:p></ns0:div><ns0:div><ns0:head>Preprocessing</ns0:head><ns0:p>3D volumes of T1-weighted MRI were used as input to the networks. To remove intensity inhomogeneity from the image inputs, T1-weighted images with nonparametric non-uniform intensity normalization (N3) correction <ns0:ref type='bibr' target='#b45'>(Sled et al. 1998)</ns0:ref> were selected from the ADNI database. All MRIs were skull stripped with DeepBrain <ns0:ref type='bibr' target='#b15'>(Itzcovich 2018)</ns0:ref>, then linearly registered against a 2mm standard brain with nine degrees of freedom (translation, rotation, scaling) using FSL FLIRT <ns0:ref type='bibr' target='#b17'>(Jenkinson et al. 2012)</ns0:ref>, and finally min/max intensity normalized. Resulting images had a resolution of 91x109x91 voxels. During training, data augmentation was performed on the training set by rotating each MRI by up to 5% in any direction and randomly flipping them left to right along the sagittal axis.</ns0:p></ns0:div><ns0:div><ns0:head>Training, validation, and testing</ns0:head><ns0:p>Images were split into training, validation, and testing sets at the patient level in order to avoid data leakage. For both classification and prediction tasks, after assigning labels (either AD vs NC or sMCI vs pMCI), the train_test_split function was used to randomly split the data into training/validation and testing sets, and for cross validation <ns0:ref type='bibr'>(Virtanen et al. 2020)</ns0:ref>. Stratification was done based on the labels in order to maintain a consistent distribution of diagnoses across the training, validation, and testing datasets. The randomization seed was set to a constant to ensure the same train/validation/test split was obtained for each experiment run. Balanced accuracy (BA), defined as the average of sensitivity and specificity, was used as the main binary classifier metric to eliminate the inflated accuracy effect caused by imbalanced data sets. For purposes of computing accuracy, we use a standard value of 0.5 as the threshold between the two labels (either NC & AD or pMCI & sMCI). We also computed the area under the receiver operating characteristic curve (AUC) for each run, since it provides a more general measure of the potential performance of a network across a range of thresholds. To prevent data leakage, once the test partition was randomly selected, the test set images were set aside and not used for any training or validation.
We repeated each training experiment 4 times, using the standard k-fold cross validation approach where the training/validation set is partitioned into four subsets, using each subset once for validation, and reported both the mean and standard deviation of the BAs for each experiment. This ensures that each image is included at least once in both the validation and training sets, minimizing the potential selection bias that a single random data split may introduce. For each cross-validation fold, classification task training was performed for 200 epochs with early stopping based on no improvement in loss function for 80 epochs. Fine tuning after freezing all convolutional layers was performed for 100 epochs with early stopping patience of 40. The weights of the epoch ending with the lowest loss were saved and used to obtain the validation BA, and then the network was run against the test set for the test BA. Additional attempts at fine tuning by also unfreezing the last convolutional block were noted to degrade accuracy, so this approach was not considered any further.</ns0:p></ns0:div>
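As a concrete reference for the main metric, balanced accuracy at a fixed threshold can be computed with a small helper like the following (the function is our illustration, not the study's code):

import numpy as np

def balanced_accuracy(y_true, y_prob, threshold=0.5):
    # Average of sensitivity (true positive rate) and specificity (true negative rate).
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    sensitivity = np.mean(y_pred[y_true == 1] == 1)
    specificity = np.mean(y_pred[y_true == 0] == 0)
    return (sensitivity + specificity) / 2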
<ns0:div><ns0:head>CNN Architecture</ns0:head><ns0:p>The neural network models (Figure <ns0:ref type='figure'>2</ns0:ref>) consisted of a convolutional section followed by a fully connected section (head). Three (sequential, residual with bottleneck, and wide residual) types of convolutional blocks (Figure <ns0:ref type='figure'>3</ns0:ref>) and three head architectures (Figure <ns0:ref type='figure'>4</ns0:ref>) were examined. In addition, we also evaluated these networks using a single (baseline) timepoint MRI and MRIs at two time points (baseline and 12 months). For the dual timepoint networks, three types of networks were explored to incorporate longitudinal data: (1) Siamese network (two identical parallel channels with weights tied together) using a subtraction layer as the merging function, (2) Siamese network with a concatenation merge layer, and (3) a twin network (identical channels with weights independently optimized). Since the flattened set of post-convolution features in the twin architecture is different in each channel, as they are the result of different parameters, there is no rationale for directly subtracting them, so we only considered a concatenation merge option for the twin architecture. For all networks, the final binary classifier layer was fully connected with sigmoid activation. When performing the prediction tasks (using both zero-shot learning and fine-tuning) in the single timepoint experiments, we attempted the same task using both the initial baseline MRI as well as the 1-year follow up image. Using either the single timepoint with the 1-year image or both timepoints together longitudinally represents, for patients who have had MCI for one year and not yet progressed to AD, a prediction of whether they will eventually convert to AD within two more years.</ns0:p><ns0:p>After initial experimentation, an optimal set of blocks was identified for the sequential and wide residual network styles, namely using 6 blocks with widths (number of activation maps) = {64, 128, 256, 512, 1024, 2048}. In the case of the sequential convolution network, each block reduced the resolution via maximum pooling as the width increased, down to 1x1x1 in the final convolutional block. In the case of the wide residual network, convolutions with the use of strides gradually reduced the resolution down to 2x2x2, with a global maximum pooling layer in the head portion of the network. When a convolutional layer processes an input whose size is odd-numbered in any of its dimensions (2n-1 for any integer n) the resulting output of a stride 2 convolution will be of size n for the corresponding dimension. For example, since the input image has resolution 91x109x91, the output after the first stride 2 convolution will be 46x55x46. For the bottleneck residual, the final convolutional resolution was also 2x2x2, achieved via strides, but this required 7 blocks with widths = {64, 64, 128, 256, 512, 1024, 2048}. The bottleneck architecture used 1x1x1 convolutions for resolution reduction, so the behavior was slightly different when the resolution had odd numbers, allowing for 7 instead of 6 blocks until the resolution was down to 2x2x2. The portion of the network with flattened non-convolutional fully connected layers after the last convolutional layer up until the final binary classifier is known as the 'head.' After initial analysis of networks with varying heads, global maximum pooling followed directly by a single final dense prediction layer was selected as the optimal fully connected layer architecture.</ns0:p><ns0:p>Training was initially attempted using both non-adaptive (SGD with Nesterov momentum), as well as adaptive (Adam) optimizers <ns0:ref type='bibr' target='#b22'>(Kingma & Ba 2019)</ns0:ref>. The Adam optimizer was able to achieve reductions in loss function with accompanying increase in accuracy much more rapidly and aggressively. However, without a scheduled reduction of the base learning rate, the network became unstable in the latter epochs with rapid swings in the loss function. The use of an exponentially decaying learning rate schedule consistently stabilized both the loss and accuracy curves in an optimal fashion. The final selected optimization approach was thus the Adam optimizer with an exponentially decreasing learning rate schedule with expected initial rate LR_start and final rate LR_end, where t is the current epoch and T is the final expected number of epochs:</ns0:p><ns0:formula xml:id='formula_0'>LR_{epoch} = LR_{start} \left( \frac{LR_{end}}{LR_{start}} \right)^{t/T}</ns0:formula><ns0:p>L2 regularization was also added in the convolutional layers in all networks. For the sequential model, we used a regularization parameter of 0.005, and for the residual models we used a parameter of 0.0001. All training was performed using the TensorFlow 2/Keras Python library, on Google Compute Platform virtual instances with Tesla V-100 GPU acceleration.</ns0:p></ns0:div>
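In Keras, this decay schedule can be expressed as a callback; the starting and ending rates below are placeholders, since the exact values used are not reported here (200 epochs matches the classification runs):

import tensorflow as tf

LR_START, LR_END, TOTAL_EPOCHS = 1e-3, 1e-5, 200  # placeholder rates

def lr_schedule(epoch, lr):
    # LR_epoch = LR_start * (LR_end / LR_start) ** (epoch / T)
    return LR_START * (LR_END / LR_START) ** (epoch / TOTAL_EPOCHS)

lr_callback = tf.keras.callbacks.LearningRateScheduler(lr_schedule)
# model.fit(..., epochs=TOTAL_EPOCHS, callbacks=[lr_callback])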
<ns0:div><ns0:head>Network visualization by heatmap</ns0:head><ns0:p>To visualize the brain regions that are most relevant to the network, the Grad-CAM <ns0:ref type='bibr' target='#b41'>(Selvaraju et al. 2017)</ns0:ref> technique was modified to work in 3 dimensions for generating heatmaps. Since the models reduced the resolution of the image information within the convolution blocks down to 2x2x2 voxels or less, the 3D Grad-CAM technique was applied to higher convolutional layers (with resolution close to 40 voxels per axis) to obtain more useful visualization heatmaps. This approach enabled visual highlighting of the sections of the images that were most significant to the network. Heatmaps were obtained during the execution of the prediction models.</ns0:p></ns0:div>
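A minimal 3D Grad-CAM sketch under stated assumptions (a functional Keras model with a single sigmoid output, a named Conv3D layer, and an unbatched single-channel input volume; the layer name is whatever the chosen higher convolutional layer is called in the actual model):

import numpy as np
import tensorflow as tf

def grad_cam_3d(model, volume, layer_name):
    # Map the input to the chosen convolutional layer's activations and the prediction.
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(volume[np.newaxis, ..., np.newaxis])
        score = preds[:, 0]
    grads = tape.gradient(score, conv_out)
    # One weight per channel: gradients averaged over the three spatial axes.
    weights = tf.reduce_mean(grads, axis=(1, 2, 3))
    cam = tf.nn.relu(tf.einsum('bxyzc,bc->bxyz', conv_out, weights))[0].numpy()
    return cam / (cam.max() + 1e-8)  # normalized heatmap at the layer's resolution

The resulting map can then be upsampled to the 91x109x91 input grid and overlaid on the MRI.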
<ns0:div><ns0:head>Results</ns0:head></ns0:div><ns0:div><ns0:head>AD versus NC classification to generate weights</ns0:head><ns0:p>Figure <ns0:ref type='figure'>5</ns0:ref> shows training curves for sequential single channel and wide residual dual channel training for the classification experiments. Overall, loss functions converged and leveled off at around 75 epochs. Other models showed similar convergence characteristics.</ns0:p></ns0:div><ns0:div><ns0:head>Prediction of AD conversion at 3 years</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_6'>6</ns0:ref> shows the training curve for one of the dual sequential transfer learning fine-tuning attempts for the prediction experiment. Additional training did not improve the accuracy even though there was some clear reduction in loss function during training. Other models showed similar characteristics. Table <ns0:ref type='table' target='#tab_5'>3</ns0:ref> summarizes the results of the prediction experiments, showing the outcomes of the dual and single timepoint networks using both zero-shot and fine-tuning transfer learning. The bottleneck residual convolutional style, because it performed much worse for classification, was not considered for prediction. For the single timepoint experiments, the 12-month images performed better than the initial baseline images during the classification task. The dual timepoint networks performed better than the single timepoints. For zero-shot, best results were obtained from the sequential model, with the dual sequential producing a 0.795 average BA against the test set, followed by the single timepoint sequential (using the 1-year image as input) with a BA of 0.774. Transfer learning with fine tuning in all cases resulted in lowered accuracy as compared with the zero-shot approach. This occurred whether fine tuning was attempted by unfreezing only the fully connected layers or also the last convolutional layer.</ns0:p><ns0:p>Training time for each fine-tuning experiment was approximately 15-30 minutes. Prediction of conversion using a trained model (either zero-shot or fine-tuned) took two seconds or less.</ns0:p><ns0:p>To visualize the brain regions that are most relevant to ML algorithms, post-training heatmaps for the wide residual dual network were generated (Figure <ns0:ref type='figure'>7</ns0:ref>). The highlighted structures included the lateral ventricles, periventricular white matter, and cortical surface gray matter.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>This study developed and evaluated several deep learning algorithms to predict which MCI patients would convert to AD at three years using longitudinal whole-brain 3D MRI without a priori segmentation of regional structural volumes or cortical thicknesses. MRI data used for prediction were obtained at baseline and one year after baseline. The sequential convolutional approach yielded slightly better performance than the residual-based architecture, the zero-shot transfer learning approach yielded better performance than fine tuning, and the CNN using longitudinal data performed better than the CNN using a single timepoint MRI in predicting MCI conversion to AD. The best CNN model for predicting MCI conversion to AD at 3 years yielded a BA of 0.793.</ns0:p><ns0:p>Our predictive model used whole-brain MRIs without extracting regional brain volumes and cortical thicknesses. We also evaluated multiple longitudinal network configurations (i.e., Siamese and non-Siamese twin networks with subtraction and concatenation as the merge function). Longitudinal images were found to be optimally processed by a twin architecture with concatenation merge. The dual timepoint network performed better regardless of whether the initial or the follow up image was used for the single timepoint. Restricting the network in a Siamese configuration where the weights of both channels are identical or using a subtraction merge function resulted in worse prediction, which suggests that the networks take full advantage of the additional information provided by the second time point data when they were allowed to train each channel with separate weights.</ns0:p><ns0:p>We employed 3D MRI instead of 2D multi-slice MRI. Previous studies that reported MCI to AD prediction using a sequential full volume 3D architecture have obtained BAs of 0.75 <ns0:ref type='bibr' target='#b1'>(Basaia et al. 2019</ns0:ref>) and 0.73 <ns0:ref type='bibr'>(Wen et al. 2020)</ns0:ref>, while a study using residual architecture showed a resulting BA of 0.67 <ns0:ref type='bibr' target='#b42'>(Shmulev & Belyaev 2018)</ns0:ref> but did not involve longitudinal MRI data. Some studies used predetermined 3D patches uniformly sampled across the brain <ns0:ref type='bibr' target='#b27'>(Lian et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b30'>Liu et al. 2018;</ns0:ref><ns0:ref type='bibr'>Wen et al. 2020)</ns0:ref>. A limitation of the 3D-patch approach is that a subsequent fusion of the results via some kind of ensemble or voting method is needed to obtain a subject-level prediction, and brain-wide anatomic relationships are not taken into account. There are two previous related studies that used longitudinal MRI data for prediction of MCI or AD disease progression. Bhagwat et al. employed baseline and 1-year MRIs with a Siamese neural network with concatenation merge to predict a pattern of decline in patients' MMSE score, yielding an accuracy of 0.95 <ns0:ref type='bibr' target='#b2'>(Bhagwat et al. 2018)</ns0:ref>. In contrast to our study, regional cortical thicknesses, a non-convolutional method, and clinical variables were used. The use of clinical variables could have contributed substantially to higher accuracy. Ostertag et al. used a similar Siamese network but employed whole-brain MRI to predict decline in patients' MMSE score, yielding a validation accuracy of 0.90, but no independent evaluation on a separate test dataset was performed <ns0:ref type='bibr' target='#b36'>(Ostertag et al. 2019)</ns0:ref>. Moreover, these two studies differed from ours in that they mixed AD, NC, and MCI patients together, and thus their prediction accuracies are not directly comparable to those from MCI to AD conversion studies because the baseline diagnosis of NC or AD by itself is a strong predictor of neurocognitive decline. The use of a Siamese network architecture to analyze longitudinal changes in disease progression from medical images was explored by Li et al. and specifically studied in AD brain MRIs by Bhagwat et al. and Ostertag et al. (Bhagwat et al. 2018; <ns0:ref type='bibr' target='#b36'>Ostertag et al. 2019;</ns0:ref> <ns0:ref type='bibr' target='#b26'>Li et al. 2020</ns0:ref>). The idea behind Siamese networks is that both images are processed by the convolutional layers with identical parameters, with equivalent flattened sets of features for each image at the end of the convolutions. 
Thus, theoretically, a direct subtraction merge of the corresponding flattened features would represent a measure of the progression of the images-presumably an MCI patient whose structural MRI features have worsened in a year would be more likely to progress to AD than a patient whose features remain stable. However, a simple subtraction merge may result in loss of predictive information if there are particular features that are predictive of progression regardless of whether they have changed between baseline and 1-year. Thus, we also explore the concatenation merge. In addition, a twin (non-Siamese) network with separate parameters may provide, partly due to the additional power of doubling the number of convolutional parameters, better predictive capacity, so we also explored this architecture. Since the flattened set of post-convolution features in the twin architecture is different in each channel, as they are the result of different parameters, there is no rationale for directly subtracting them, so we only considered a concatenation merge option for the twin architecture.</ns0:p><ns0:p>Although for the initial classification task, the twin wide residual network performed best among all architectures, after the transfer learning the twin sequential network was the overall best performer. In the single channel variants, the sequential networks performed best. The bottleneck variant of the residual network performed the worst amongst all architectures. In general, the residual networks provide the benefit of reducing the vanishing gradient problem, as compared with a non-residual sequential style. The bottleneck in particular is meant to strongly prevent vanishing gradients. Since vanishing gradients did not appear significantly during training, the advantages of the residual network appeared not to materialize, and thus, overall, the sequential networks seemed best fit for 3D MRI whole-brain analysis.</ns0:p></ns0:div>
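To make the merge options concrete, a skeletal Keras construction of the twin dual-timepoint network with concatenation merge might look as follows (the two-layer backbone is a stand-in, far shallower than the blocks actually used):

import tensorflow as tf
from tensorflow.keras import layers

def backbone(name):
    # Stand-in convolutional stack; a Siamese variant would build this once
    # and apply the same instance to both inputs (optionally merging with Subtract()).
    return tf.keras.Sequential([
        layers.Conv3D(64, 3, strides=2, activation='relu'),
        layers.Conv3D(128, 3, strides=2, activation='relu'),
        layers.GlobalMaxPooling3D(),
    ], name=name)

baseline_mri = layers.Input(shape=(91, 109, 91, 1))
followup_mri = layers.Input(shape=(91, 109, 91, 1))

# Twin (non-Siamese): two backbones whose weights are optimized independently.
features_a = backbone('baseline_branch')(baseline_mri)
features_b = backbone('followup_branch')(followup_mri)

merged = layers.Concatenate()([features_a, features_b])
prediction = layers.Dense(1, activation='sigmoid')(merged)  # pMCI vs sMCI

model = tf.keras.Model([baseline_mri, followup_mri], prediction)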
<ns0:div><ns0:head>Heatmaps</ns0:head><ns0:p>Heatmaps enabled visualization of the brain regions that were most relevant to the ML algorithms to predict MCI to AD conversion. The most salient structures on the heatmaps were the lateral ventricles, periventricular deep white matter as well as extensive cortical gray matter. Ventricular enlargement and atrophy are known to be associated with AD. Reduction in white-matter volume has been described in AD, including some of the specific regions that our heatmap analysis found to be of interest <ns0:ref type='bibr' target='#b46'>(Smith et al. 2000;</ns0:ref><ns0:ref type='bibr' target='#b9'>Guo et al. 2010;</ns0:ref><ns0:ref type='bibr' target='#b21'>Kao et al. 2019)</ns0:ref>, including the cingulate gyrus <ns0:ref type='bibr' target='#b3'>(Brun & Gustafson 1976;</ns0:ref><ns0:ref type='bibr' target='#b14'>Hirono et al. 1998;</ns0:ref><ns0:ref type='bibr' target='#b20'>Jones et al. 2006)</ns0:ref>, the middle occipital gyrus <ns0:ref type='bibr' target='#b51'>(Zhang & Wang 2015)</ns0:ref>, and the putamen <ns0:ref type='bibr' target='#b39'>(Pini et al. 2016)</ns0:ref>. Other brain regions that have been shown to be associated with development of AD, such as the default mode network and hippocampus, are not uniformly highlighted in the heatmaps. Our analysis approach is different from previous analysis and does not specifically identify networks, although amongst the heatmaps shown, there were components that were part of the default mode networks and hippocampus. In other words, our analysis did not specifically test whether hippocampus or default mode networks are predictive of MCI to AD conversion. It is possible that, given our MRI is based on structural changes, hippocampus and default mode networks might not have developed atrophy to be informative for predicting conversion.</ns0:p></ns0:div>
<ns0:div><ns0:head>Other technical considerations</ns0:head><ns0:p>We examined three different convolutional architectures to identify the best-performing prediction model. Two residual variants were compared, with the wide residual network performing better than the bottleneck variant, and the non-residual sequential network performing better than both residual types. The two residual approaches compared here were 3D modifications of ResNet <ns0:ref type='bibr' target='#b11'>(He et al. 2016)</ns0:ref>. The bottleneck variation used pre-activation, a technique where the batch normalization and activation layers precede the convolutions. The term 'bottleneck' refers to a design where each residual block includes two initial layers with narrower widths. The second residual variant examined for comparison was the wide residual network <ns0:ref type='bibr' target='#b50'>(Zagoruyko & Komodakis 2016)</ns0:ref>. In this approach the widths were progressively increased, with an additional dropout layer between two convolutions in each residual block. The sequential model we tested was a 3D extension of the 2D VGG model <ns0:ref type='bibr' target='#b44'>(Simonyan & Zisserman 2014)</ns0:ref>, with sequential blocks formed by a combination of convolutional layers followed by pooling layers.</ns0:p><ns0:p>We also examined the relative performance of two transfer learning approaches. The zero-shot technique performed better than fine tuning. Further fine-tuning with the sMCI vs pMCI data reduced the accuracy of the prediction network from that obtained via the exclusive use of AD vs NC data for classification task training. The lack of training power of the MCI data suggests that brain images with either AD or NC, with their more discriminant anatomic features, are more suited for training a network eventually used for detecting the more subtle distinctions between pMCI and sMCI.</ns0:p><ns0:p>We also carefully prevented data leakage by splitting the training and testing datasets at patient level, ensuring that no data from the same patient would end up in both groups <ns0:ref type='bibr'>(Wen et al. 2020)</ns0:ref>. Another type of leakage we avoided occurs when data used for training the classification task are also used for the prediction task. Finally, in this study the testing set results were collected only after all training was completed to prevent a third possible kind of leakage, namely where results from the test set influence the selection of hyperparameters or architecture. We also excluded patients who converted from MCI to AD before their 12-month follow up.</ns0:p></ns0:div>
<ns0:div><ns0:head>Limitations and future directions</ns0:head><ns0:p>The increase in BA obtained by using the longitudinal MRI (0.795 vs 0.774) was modest, although both techniques represented an increase as compared to other published predictions of MCI conversion to AD. If the longitudinal MRI is otherwise available, it seems evident that the incremental improvement in predictive accuracy would justify its use. It is unclear, however, that without other reasons for performing a 1-year follow up MRI, this increase in predictive accuracy would represent a new indication from a cost-effectiveness perspective. Thus, a comprehensive cost-benefit model analysis would be useful in this area.</ns0:p><ns0:p>The study used only anatomical MRI data. Multiparametric MRI (such as diffusion-tensor imaging, task functional MRI and resting-state MRI) will be incorporated into these models in the future. Similarly, other modalities such at Positron Emission Tomography (PET) and nonimaging clinical data can also be included in the model. Further studies will need to apply this approach to other datasets to improve generalizability. Future studies should investigate MCI to AD conversion at 1, 2 and 5 years.</ns0:p><ns0:p>Our model is a predictive model approach that employs machine learning based on whole-brain anatomical MRI to predict MCI to AD conversion. Future studies will need to compare different predictive models including those that predict MCI to AD conversion based on extracted volume and cortical thickness as obtained using tools such as FastSurfer <ns0:ref type='bibr' target='#b12'>(Henschel et al. 2020</ns0:ref>). To do so, we will first systematically explore various methods to extract volume and cortical thickness, explore various approaches (such neural networks and support vector machines) to predict MCI to ADC conversion, and use these methods to do head-to-head comparisons on the same datasets.</ns0:p><ns0:p>Deep survival analysis <ns0:ref type='bibr' target='#b40'>(Ranganath et al. 2016</ns0:ref>) has been applied to the prediction of conversion to AD. <ns0:ref type='bibr'>Nakagawa et al. used</ns0:ref> deep survival analysis to model the prediction of conversion from either MCI or NC subjects to AD using volumetric data from MRI <ns0:ref type='bibr' target='#b34'>(Nakagawa et al. 2020)</ns0:ref>. A future extension of this analysis should investigate the use of data from the CNN models, both single-channel and longitudinal, using features extracted at the end of the convolutional layers.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>This is the first convolutional neural network model using longitudinal and whole-brain 3D MRIs without extracting regional brain volumes or cortical thicknesses to predict future MCI to AD conversion. This framework set the stage for further studies of additional data time points, Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 2</ns0:note><ns0:p>Single and dual time point CNN architecture. Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 3</ns0:note><ns0:p>Sequential, residual with bottleneck, and wide residual CNN blocks.</ns0:p><ns0:p>The convolutional layers portion of the network was organized as a series of blocks, each one with an increasing number K of activation maps (width), and with a corresponding decrease Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 4</ns0:note><ns0:p>Three head architectures. Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 5</ns0:note><ns0:p>Training curves during classification. Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 7</ns0:note><ns0:p>Heatmap visualization for 10 patients. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Prediction of AD at 3 years.</ns0:p><ns0:p>Results (BA and AUC mean and standard deviation) of prediction using zero-shot and finetuning. For single-timepoint networks, resluts are shown using both the baseline and the 1year MRIs. For zero-shot learning, each of the 4-fold classification trained weights were used as-is against each of the 4 validation fold sets for prediction (16 attempts) and against the prediction test set (4 attempts, with best result also shown). For fine-tuning, the weights from the best test zero-shot result were used as starting weights for training against each of the 4fold validation sets.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54832:1:2:NEW 4 Apr 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>PeerJ</ns0:head><ns0:label /><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2020:10:54832:1:2:NEW 4 Apr 2021) Manuscript to be reviewed Computer Science different image types, and non-image data to further improve prediction accuracy of MCI to AD conversion. Accurate prognosis could lead to better management of the diseases, thereby improving the quality of life.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>( A )</ns0:head><ns0:label>A</ns0:label><ns0:figDesc>Single timepoint CNN. For classification, input consisted of a single timepoint full-subject 3D MRI of patients diagnosed at baseline as either AD or NC, and output was binary classification of AD vs NC. For prediction, input was a single timepoint full-subject 3D MRI of patients diagnosed as MCI and output was a binary prediction of whether the patient progressed (pMCI) or remained stable (sMCI) 3 years later. (B) Dual timepoint CNN. Input included 3D MRI images obtained at both baseline and 12 months, with the patient population and output categories identical to those used for single timepoint for classification and prediction. Both kinds of networks began with a series of convolutional blocks, followed by flattening into one or more fully connected layers ending in a final binary choice of classification or prediction.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>The convolutional layers portion of the network was organized as a series of blocks, each one with an increasing number K of activation maps (width), and with a corresponding decrease in resolution obtained by either pooling or stride during convolution. The figures detail the individual layers that compose a single block. (A) Sequential convolutional block. Each block was composed of a single 3x3x3 convolution, followed by batch normalization, ReLU activation, and max pooling to reduce the resolution. (B) Residual bottleneck with preactivation convolutional block. Convolutions were preceded by batch normalization and ReLU activation. Two bottleneck 3x3x3 convolutions have a width of K/4 followed by a final 1x1x1 convolution with K width. In parallel the skip residual used a 1x1x1 convolution to match the width and resolution. In this architecture the first residual block was preceded by an initial batch normalization followed by a single 5x5x5 convolution, plus one final batch normalization and ReLU activation after the last block (not shown). (C) Wide Residual Network convolutional block. In this architecture the batch normalization and activations occurred after the convolutional layers. Each block had two 3x3x3 convolutional layers with 3D spatial dropout in between, plus a 1x1x1 skip residual convolution to match width and resolution.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>( A )</ns0:head><ns0:label>A</ns0:label><ns0:figDesc>3D global maximum pooling fully connected block. The global pooling inherently flattened the nodes into a fully connected layer with N nodes directly followed by the final binary classifier layer. (B) Long fully connected block. After flattening into a layer of N nodes, there are two sets of fully connected (size 2048 and 1024), batch normalization, and leaky ReLU activation layers separated by a single dropout layer, before the final binary classifier. (C) Medium fully connected block. Initial 3D max pooling is followed by flattening into a fully connected layer of size N followed by an additional fully connected layer of size 128 and ReLU activation.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Loss and Accuracy curves during training for both training and validation sets. For sequential network and single timepoint, (A) loss, (B) accuracy. For wide residual network and dual timepoints, (C) loss, (D) accuracy. Solid lines are smoothed with 0.8 factor and faint lines show the unsmoothed values for each epoch.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,70.87,525.00,294.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>summarizes the participant demographics. Data were split into 75% and 25% for training/validation and testing, respectively, with the training/validation set composed of 415 patients (249 sMCI and 166 pMCI), and the testing set composed of 139 patients (84 sMCI and 55 pMCI). Then, we optimized the networks using a 4-fold cross-validation on the training/validation set, resulting in 56.25% for training and 18.75% for validation from the complete data in each fold split. In the end, we had four trained models for each experiment configuration, and reported the mean and standard deviation (SD) BA.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>details the results of the classification experiments to generate weights. For the single timepoint networks, sequential architecture performed best (BA = 0.860) followed by wide residual (BA = 0.840) and bottleneck residual (BA = 0.727) on the testing data. For the dual timepoint networks, the Siamese network with subtraction performed poorly overall with all architectures (BA < 0.65), and the twin non-Siamese approach with concatenation merge performed best for dual channels. The wide residual (BA = 0.887) performed best followed by sequential (BA = 0.876) and bottleneck residual (BA = 0.800). Training time for each run was approximately 60-90 minutes. After the model was trained, classification of a patient took two seconds or less (most of this time is loading the images from storage into memory).</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 (on next page)</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>AD vs NC classification to generate weights.Balanced accuracy and area under the receiver operating characteristic curve of the validation and test datasets obtained using single and dual time point networks with sequential, bottleneck residual and wide residual CNN blocks. Dual timepoint networks were twin (equal structure), non-Siamese (separate weights) and merged using concatenation. AUC = area under the receiver operating characteristic curve Best test average BAs are highlighted in bold for single channel (sequential) and dual channel (wide residual)</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Model</ns0:cell><ns0:cell cols='2'>Validation mean ± SD</ns0:cell><ns0:cell cols='2'>Test mean ± SD</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Convolution</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Style</ns0:cell><ns0:cell>BA</ns0:cell><ns0:cell>AUC</ns0:cell><ns0:cell>BA</ns0:cell><ns0:cell>AUC</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Sequential</ns0:cell><ns0:cell>0.854 ± 0.027</ns0:cell><ns0:cell>0.918 ± 0.018</ns0:cell><ns0:cell>0.860 ± 0.016</ns0:cell><ns0:cell>0.922 ± 0.005</ns0:cell></ns0:row><ns0:row><ns0:cell>Single</ns0:cell><ns0:cell>Bottleneck Residual</ns0:cell><ns0:cell>0.689 ± 0.020</ns0:cell><ns0:cell>0.774 ± 0.017</ns0:cell><ns0:cell>0.727 ± 0.051</ns0:cell><ns0:cell>0.782 ± 0.052</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Wide Residual</ns0:cell><ns0:cell>0.835 ± 0.025</ns0:cell><ns0:cell>0.903 ± 0.027</ns0:cell><ns0:cell>0.840 ± 0.017</ns0:cell><ns0:cell>0.917 ± 0.006</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Sequential</ns0:cell><ns0:cell>0.855 ± 0.014</ns0:cell><ns0:cell>0.938 ± 0.007</ns0:cell><ns0:cell>0.876 ± 0.010</ns0:cell><ns0:cell>0.937 ± 0.012</ns0:cell></ns0:row><ns0:row><ns0:cell>Dual</ns0:cell><ns0:cell>Bottleneck Residual</ns0:cell><ns0:cell>0.772 ± 0.046</ns0:cell><ns0:cell>0.865 ± 0.037</ns0:cell><ns0:cell>0.800 ± 0.045</ns0:cell><ns0:cell>0.869 ± 0.043</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Wide Residual</ns0:cell><ns0:cell>0.856 ± 0.025</ns0:cell><ns0:cell>0.942 ± 0.012</ns0:cell><ns0:cell>0.887 ± 0.009</ns0:cell><ns0:cell>0.933 ± 0.003</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>BA = balanced accuracy.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54832:1:2:NEW 4 Apr 2021) Manuscript to be reviewed Computer Science PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54832:1:2:NEW 4 Apr 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Prediction of AD at 3 years. Results (BA and AUC mean and standard deviation) of prediction using zero-shot and fine-tuning. For single-timepoint networks, results are shown using both the baseline and the 1-year MRIs. For zero-shot learning, each of the 4-fold classification trained weights were used as-is against each of the 4 validation fold sets for prediction (16 attempts) and against the prediction test set (4 attempts, with best result also shown). For fine-tuning, the weights from the best test zero-shot result were used as starting weights for training against each of the 4-fold validation sets.</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "March 14, 2021
Dear Editors,
We appreciate the thoughtful comments from all the reviewers, and we have revised the manuscript to address their concerns. In addition to a number of clarifications throughout the text, we added a new Table 1 and substantially expanded Tables 2 and 3 (numbered 1 and 2 in the initial manuscript) to reflect additional experiments and calculations we performed in response to reviewer comments. Detailed responses to all the reviewers’ comments are presented below.
We believe the manuscript is now suitable for publication in PeerJ Computer Science.
Sincerely,
Dr. Tim Duong
On behalf of all authors
Reviewer 1
Basic reporting
Authors say 'Future studies should investigate MCI to AD conversion at 1, 2 and 5 years'. Recently, regarding this topic, deep survival analysis has been developed and applied to MRI data. Authors can add discussion about this.
Thank you for the suggestion. We have added the following to the discussion section:
“Deep survival analysis (Ranganath et al. 2016) has been applied to the prediction of conversion to AD. Nakagawa et al. used deep survival analysis to model the prediction of conversion from either MCI or NC subjects to AD using volumetric data from MRI (Nakagawa et al. 2020). A future extension of this analysis should investigate the use of data from the CNN models, both single-channel and longitudinal, using features extracted at the end of the convolutional layers.”
Experimental design
no comment
Validity of the findings
Authors used 4-Fold cross validation. Does this method need to be repeated more than once to avoid subject selection bias?
Thank you for the opportunity to clarify. We decided on a 4-fold approach instead of a much larger number of repetitions (e.g., 100x random repeated cross validation) as a way to balance the benefit of selection bias reduction against the cost of very long times required to train deep models. Published studies requiring long training of deep CNNs usually follow this approach, in contrast with shallower multi-layer perceptron models that have much shorter training times and thus can be more easily repeated many more times. Specifically, related studies of AD prediction using CNN referenced in our manuscript use a similar k-fold cross-validation approach (examples: Wen et al. 2020, Basaia et al. 2019, Shmulev & Belyaev 2018). We have thus added the following explanation to the methods section:
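For concreteness, the fold construction follows the standard stratified pattern below (a sketch with stand-in arrays using the sMCI/pMCI counts from the paper, not our actual pipeline code):

import numpy as np
from sklearn.model_selection import StratifiedKFold

patient_ids = np.arange(415)              # training/validation patients
labels = np.array([0] * 249 + [1] * 166)  # 249 sMCI, 166 pMCI

skf = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(skf.split(patient_ids, labels)):
    print(f'fold {fold}: {len(train_idx)} train, {len(val_idx)} validation')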
“To prevent data leakage, once the test partition was randomly selected, the test set images were set aside and not used for any training or validation. We repeated each training experiment 4 times, using the standard k-fold cross validation approach where the training/validation set is partitioned into four subsets, using each subset once for validation, and reported both the mean and standard deviation of the BAs for each experiment. This ensures that each image is included at least once in both the validation and training sets, minimizing the potential selection bias that a single random data split may introduce.”
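For concreteness, a minimal sketch of this splitting scheme (illustrative only: the seed, the arrays patient_ids/patient_labels, and the helper fit_and_validate are placeholders, not our actual code):

    import numpy as np
    from sklearn.model_selection import StratifiedKFold, train_test_split

    # One-time held-out test split, stratified by diagnosis label.
    ids, y = np.array(patient_ids), np.array(patient_labels)
    trainval_ids, test_ids, trainval_y, _ = train_test_split(
        ids, y, test_size=0.25, stratify=y, random_state=0)

    # Standard 4-fold cross validation on the training/validation partition only;
    # the held-out test images are never seen during training or validation.
    skf = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
    for train_idx, val_idx in skf.split(trainval_ids, trainval_y):
        fit_and_validate(trainval_ids[train_idx], trainval_ids[val_idx])  # hypothetical helper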
Many studies revealed default mode network and hippocampus contributes development of AD and MCI. Why doesn't heatmap in this study show their contributions? Please add discussion on this if possible.
Thank you for the suggestion. We have added this additional discussion:
“Other brain regions that have been shown to be associated with the development of AD, such as the default mode network and hippocampus, are not uniformly highlighted in the heatmaps. Our analysis approach differs from previous analyses and does not specifically identify networks, although amongst the heatmaps shown there were components that were part of the default mode network and hippocampus. In other words, our analysis did not specifically test whether the hippocampus or default mode network is predictive of MCI to AD conversion. It is possible that, given our MRI captures structural changes, the hippocampus and default mode network might not yet have developed enough atrophy to be informative for predicting conversion.”
Comments for the author
In abstract, what does '#D' means?
Thank you for pointing out this typographical error. The text has been corrected to state “3D”
I strongly recommend authors to upload the script on web.
We appreciate the recommendation. The scripts provided will be uploaded to a GitHub repository.
Reviewer 2
Basic reporting
In the present study, the authors investigate the performance of several deep learning models using structural MRI for the prediction of MCI conversion to AD dementia within 3 years. Furthermore, the authors explored the added value of including a second structural MRI performed 12 months after baseline; the authors thus had to exclude MCI participants who converted to AD dementia within 1 year. This implies that their prediction model was in fact a prediction model over 2 years in MCI subjects that remained stable over 1 year. The paper is well written and easy to follow.
Experimental design
Methods are clearly explained. The authors should have followed the TRIPOD guidelines for predictive models and provide the corresponding checklist (https://www.equator-network.org/reporting-guidelines/tripod-statement/).
We agree with the suggestion. We reviewed TRIPOD guidelines and revised our paper accordingly.
Below is a list of each TRIPOD checklist item and how we addressed it in the paper and/or whether we added additional text to address it (new items are bolded):

Item | How addressed
1    | Title addresses the items
2    | Abstract addresses all checklist items
3a   | Introduction summarizes the medical context, explains the rationale for using a longitudinal approach with a 3D CNN model, and references existing models
3b   | Introduction clarifies the objective, including development and validation of the model
4a   | Methods specifies the data source (ADNI database) for training, validation and testing
4b   | Indirectly specified via reference when mentioning that participants come from ADNI1, ADNIGO, ADNI2, and ADNI3
5a   | Indirectly specified by reference to ADNI
5b   | Criteria are specified in Methods section
5c   | Not applicable
6a   | Outcome (sMCI vs pMCI) is defined in Methods section
6b   | Not applicable (assessment is not blind)
7a   | Predictors (i.e., MRIs) identified in Methods section
7b   | Not applicable (no blind assessment of images)
8    | Added text: "For all experiments the study size represents the total available patients in ADNI database fully meeting all inclusion criteria."
9    | Not applicable – no missing data/imputation
10a  | Discussed in detail in Methods section
10b  | Discussed in detail in Methods section
10d  | Discussed in Methods section
11   | Not applicable – patients not divided into risk groups
13a  | Detailed in Results section
13b  | Table 1 added with demographic breakdown of the participants
14a  | Already discussed in Methods section; repeating number of participants and outcomes in Results section would be redundant
14b  | Not applicable
15a  | Model is described, but inclusion of coefficients/parameters (numbering in the millions in a deep network) is not applicable for this kind of model
15b  | Described in the Methods section
16   | Performance measures (i.e., BA and AUC) included in tables
18   | Limitations included in Discussion section
19b  | Interpretation and discussion of other results included in Discussion section
20   | Implications included in Discussion section
21   | Supplementary resources (i.e., link to code in GitHub) will be added
22   | Funding statement provided (separate from manuscript)
The authors should explore how changing random partitions for testing influences their results.
We appreciate the opportunity to clarify our approach to data splitting and its rationale. We set aside the testing partition to avoid any data leakage. We decided on a k-fold (with k=4) cross validation approach vs a repeating cross validation (where the training/validation partition is randomly redone many more times) as a way to balance the benefit of selection bias reduction against the cost of very long times required to train deep models. We added this additional language to the methods section:
“To prevent data leakage, once the test partition was randomly selected, the test set images were set aside and not used for any training or validation. We repeated each training experiment 4 times, using the standard k-fold cross validation approach where the training/validation set is partitioned into four subsets, using each subset once for validation, and reported both the mean and standard deviation of the BAs for each experiment. This ensures that each image is included at least once in both the validation and training sets, minimizing the potential selection bias that a single random data split may introduce.”
Validity of the findings
Their main novel finding was that including longitudinal MRI improved the prediction of conversion from MCI to AD dementia. However, this improvement was very small (0.02 in BA) and it is unlikely to be clinically relevant as the strategy of performing two structural MRI scans over a year does not seem to be sufficiently cost-effective.
We agree with the reviewer’s observation that the increase in BA of our best longitudinal model (0.795) as compared with our best non-longitudinal model (0.774) is only a modest 2.1 percentage points. We believe that a new demonstrable technique with an incremental increase in accuracy still represents a valuable addition to the existing literature where a variety of reports show non-longitudinal deep learning approaches with BAs of 0.75 or lower. Adding the additional longitudinal MRI to the analysis reduces the percentage of inaccurate predictions from 22.6% down to 20.5%, thus reducing the number of errors by a relative 9.3%.
We also recognize that this study does not attempt to demonstrate clinical cost-effectiveness of performing additional (or even the initial) MRIs, especially since there is no known intervention that will prevent AD progression altogether. In some cases (e.g., research cohorts) patients may already be followed with longitudinal MRIs--we are thus showing the relative increase in predictive accuracy that is obtained by fully incorporating longitudinal images into the model. These are relevant limitations, so we have added this additional language in the discussion, under limitations and future direction:
“The increase in BA obtained by using the longitudinal MRI (0.795 vs 0.774) was modest, although both techniques represented an increase as compared to other published predictions of MCI conversion to AD. If the longitudinal MRI is otherwise available, it seems evident that the incremental improvement in predictive accuracy would justify its use. It is unclear, however, that without other reasons for performing a 1-year follow up MRI, this increase in predictive accuracy would represent a new indication from a cost-effectiveness perspective. Thus, a comprehensive cost-benefit model analysis would be useful in this area.”
Moreover, the authors have not even demonstrated that their novel model actually outperforms standard regional volumetric measures such as hippocampus volume or medial temporal thickness that can be rapidly measured with other deep learning approaches such as e.g. FastSurfer. Therefore, with the data presented here, I cannot see how this model actually represents an advance towards better prediction of AD dementia in MCI patients.
Thank you for your comments.
We have added this additional language in the discussion section under limitations and future direction:
“Our model is a predictive approach that employs machine learning based on whole-brain anatomical MRI to predict MCI to AD conversion. Future studies will need to compare different predictive models, including those that predict MCI to AD conversion based on extracted volume and cortical thickness as obtained using tools such as FastSurfer (Henschel et al. 2020). To do so, we will first systematically explore various methods to extract volume and cortical thickness, explore various approaches (such as neural networks and support vector machines) to predict MCI to AD conversion, and use these methods to do head-to-head comparisons on the same datasets.”
Reviewer: Guilherme Folego
Basic reporting
The writing is fluid, and the text is reasonably easy to follow.
In general, there is sufficient background, and the reporting is sound.
For paper improvement, there are some remarks that need to be addressed, described below. Some details are missing, and some clarification is needed.
- Minor writing problems, for instance, 'T1-weighted #D MRI' (line 27), 'some used predetermination 3D patches' (line 90). Please review.
Thanks for pointing these out. We have corrected the two lines as follows:
“…T1-weighted 3D MRI…”
and
“and some used 3D patches from predetermined locations across the brain…”
- 'improve or slow the rate of decline of symptoms' (line 64). Even though the meaning could be inferred, this exact sentence is confusing. Please rewrite.
We have rewritten the sentence as follows:
“While there is no cure for AD, early diagnosis and accurate prognosis may enable or encourage lifestyle changes, neurocognitive enrichment, and therapeutic interventions that strive to improve symptoms, or at least slow their rate of decline, thereby improving the quality of life.”
- 'In contrast to traditional analysis methods such as logistic regression, ML does not require relationships between different input variables and the outcomes to be explicitly specified a priori' (lines 69-71). Actually, logistic regression is also a form of ML. Please use another example.
We agree. The example better applies more specifically to neural networks rather than machine learning as a whole, so we moved the sentence forward after neural networks are introduced and changed the reference to state that: “…neural networks do not require relationships…”
- 'To our knowledge, there are no published studies to date on deep learning to predict MCI to AD conversion using longitudinal and whole-brain 3D MRI.' (lines 100-101). In the cited paper 'Convolutional Neural Networks for Classification of Alzheimer’s Disease: Overview and Reproducible Evaluation', there are some references to the combination of 3D subject-level CNN (meaning whole-brain), Longitudinal, and sMCI vs pMCI, for instance, in Tables 5 and 6. Please clarify how the proposed method is different from this paper.
Thank you for the opportunity to clarify this point. Our reference to “longitudinal” means that, for a single prediction, both baseline and 1-year follow up MRIs are analyzed together through a dual-channel network. The experiments done in the paper by Wen et al. only use single-channel networks and thus one image at a time for each prediction. In their section 5.11 under “influence of the training dataset size” they explain their use of “longitudinal”: “We then assessed the influence of the amount of training data, comparing training using only baseline data to those with longitudinal data.” Thus, what they are doing is adding baseline and follow-up MRIs as separate instances within the data set and comparing what happens when their training data includes only the initial baseline MRIs to a training run with all the MRIs including baseline and follow up included. For each instance, the classification or prediction is still performed from a single MRI. Thus, the references to “longitudinal” in tables 5 and 6 are under the column “training data.” Separately, they allude to the kind of longitudinal analysis we have done (i.e., dual channel, use of more than one MRI together for classification or prediction) in section 2.4, where they list “other deep learning approaches for AD classification.” They state: “Several studies found during our literature search are out of our scope: either CNNs were not used in an end-to-end manner or not applied to images, other network architectures were implemented, or the approach required longitudinal or multimodal data.” [emphasis ours]. Further down in that section, they define the kind of longitudinal study that is out of their scope: “Longitudinal studies exploit information extracted from several time points of the same subject.”
- 'main binary classifier metric' (line 161). Given that BA only works at a determined threshold, it would be also interesting to report AUC, as it provides a general performance of the model across a range of thresholds.
We agree with the recommendation to also report AUC. We added the following language to the methods section:
“For purposes of computing accuracy, we use a standard value of 0.5 as the threshold between the two labels (either NC & AD or pMCI & sMCI). We also computed the area under the receiver operating characteristic curve (AUC) for each run, since it provides a more general measure of the potential performance of a network across a range of thresholds.”
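As an illustration, both metrics can be computed from the sigmoid outputs roughly as follows (the model and the test arrays are placeholders; the scikit-learn calls are standard):

    from sklearn.metrics import balanced_accuracy_score, roc_auc_score

    probs = model.predict(test_images).ravel()         # sigmoid outputs in [0, 1]
    preds = (probs >= 0.5).astype(int)                 # fixed 0.5 decision threshold
    ba = balanced_accuracy_score(test_labels, preds)   # mean of sensitivity and specificity
    auc = roc_auc_score(test_labels, probs)            # threshold-independent ranking metric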
We re-ran all the calculations using saved weights and Tables 1 and 2 (now numbered Tables 2 and 3 because we added a new Table 1) were updated to add the AUC values:
Table 2:
Model  | Convolution Style   | Validation BA | Validation AUC | Test BA       | Test AUC
Single | Sequential          | 0.854 ± 0.027 | 0.918 ± 0.018  | 0.860 ± 0.016 | 0.922 ± 0.005
Single | Bottleneck Residual | 0.689 ± 0.020 | 0.774 ± 0.017  | 0.727 ± 0.051 | 0.782 ± 0.052
Single | Wide Residual       | 0.835 ± 0.025 | 0.903 ± 0.027  | 0.840 ± 0.017 | 0.917 ± 0.006
Dual   | Sequential          | 0.855 ± 0.014 | 0.938 ± 0.007  | 0.876 ± 0.010 | 0.937 ± 0.012
Dual   | Bottleneck Residual | 0.772 ± 0.046 | 0.865 ± 0.037  | 0.800 ± 0.045 | 0.869 ± 0.043
Dual   | Wide Residual       | 0.856 ± 0.025 | 0.942 ± 0.012  | 0.887 ± 0.009 | 0.933 ± 0.003
(values are mean ± SD)
Table 3:
Validation (mean ± SD):
Model             | Convolution style | Zero-shot BA  | Zero-shot AUC | Fine-tuning BA | Fine-tuning AUC
Single (baseline) | Sequential        | 0.728 ± 0.043 | 0.790 ± 0.043 | 0.746 ± 0.0033 | 0.805 ± 0.028
Single (baseline) | Wide Residual     | 0.699 ± 0.034 | 0.775 ± 0.034 | 0.700 ± 0.017  | 0.774 ± 0.022
Single (1 year)   | Sequential        | 0.750 ± 0.038 | 0.807 ± 0.039 | 0.733 ± 0.050  | 0.814 ± 0.042
Single (1 year)   | Wide Residual     | 0.729 ± 0.038 | 0.799 ± 0.028 | 0.704 ± 0.037  | 0.782 ± 0.033
Dual              | Sequential        | 0.751 ± 0.027 | 0.808 ± 0.029 | 0.712 ± 0.027  | 0.772 ± 0.039
Dual              | Wide Residual     | 0.727 ± 0.038 | 0.806 ± 0.033 | 0.719 ± 0.018  | 0.801 ± 0.032

Test (mean ± SD; best single-run zero-shot BA in parentheses):
Model             | Convolution style | Zero-shot BA         | Zero-shot AUC | Fine-tuning BA | Fine-tuning AUC
Single (baseline) | Sequential        | 0.765 ± 0.021 (0.79) | 0.831 ± 0.015 | 0.754 ± 0.026  | 0.834 ± 0.015
Single (baseline) | Wide Residual     | 0.706 ± 0.031 (0.79) | 0.816 ± 0.024 | 0.717 ± 0.030  | 0.816 ± 0.024
Single (1 year)   | Sequential        | 0.774 ± 0.013 (0.79) | 0.857 ± 0.012 | 0.728 ± 0.008  | 0.836 ± 0.001
Single (1 year)   | Wide Residual     | 0.743 ± 0.029 (0.77) | 0.834 ± 0.020 | 0.719 ± 0.007  | 0.803 ± 0.001
Dual              | Sequential        | 0.795 ± 0.010 (0.80) | 0.874 ± 0.009 | 0.739 ± 0.012  | 0.828 ± 0.007
Dual              | Wide Residual     | 0.753 ± 0.034 (0.79) | 0.842 ± 0.010 | 0.775 ± 0.003  | 0.834 ± 0.001
- 'The neural network models [...]' (lines 173-). The network models are simply stated. I feel there is a need to add a discussion on the reasoning for choosing these models.
We concur with the recommendation. We have added the following additional explanation in the discussion section:
“The use of a Siamese network architecture to analyze longitudinal changes in disease progression from medical images was explored by Li et al. and specifically studied in AD brain MRIs by Bhagwat et al. and Ostertag et al. (Bhagwat et al. 2018; Ostertag et al. 2019; Li et al. 2020). The idea behind Siamese networks is that both images are processed by the convolutional layers with identical parameters, with equivalent flattened sets of features for each image at the end of the convolutions. Thus, theoretically, a direct subtraction merge of the corresponding flattened features would represent a measure of the progression of the images—presumably an MCI patient whose structural MRI features have worsened in a year would be more likely to progress to AD than a patient whose features remain stable. However, a simple subtraction merge may result in loss of predictive information if there are particular features that are predictive of progression regardless of whether they have changed between baseline and 1-year. Thus, we also explore the concatenation merge. In addition, a twin (non-Siamese) network with separate parameters may provide, partly due to the additional power of doubling the number of convolutional parameters, better predictive capacity, so we also explored this architecture. Since the flattened set of post-convolution features in the twin architecture is different in each channel, as they are the result of different parameters, there is no rationale for directly subtracting them, so we only considered a concatenation merge option for the twin architecture.”
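To make the merge options concrete, here is a minimal Keras sketch (a single convolutional block stands in for the full six-block trunk; shapes follow the preprocessed 91x109x91 volumes, and all names are illustrative):

    from tensorflow.keras import layers, Model

    def trunk(name):                                   # stand-in for the full conv stack
        inp = layers.Input((91, 109, 91, 1))
        x = layers.Conv3D(64, 3, strides=2, padding='same', activation='relu')(inp)
        x = layers.GlobalMaxPooling3D()(x)             # flattened feature vector
        return Model(inp, x, name=name)

    base = layers.Input((91, 109, 91, 1))              # baseline MRI
    year1 = layers.Input((91, 109, 91, 1))             # 12-month MRI

    shared = trunk('shared')                           # Siamese: tied weights
    siamese_sub = layers.Subtract()([shared(base), shared(year1)])
    siamese_cat = layers.Concatenate()([shared(base), shared(year1)])

    # Twin: independent weights per channel; only concatenation is meaningful.
    twin_cat = layers.Concatenate()([trunk('t1')(base), trunk('t2')(year1)])

    out = layers.Dense(1, activation='sigmoid')(twin_cat)
    model = Model([base, year1], out)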
- 'After initial analysis of networks with varying heads, global maximum pooling followed directly by a single final dense prediction layer was selected as the optimal fully connected layer architecture.' (lines 194-196). Given this description, 'heads' in Figures 2 and 4 might actually be misleading. Please adjust accordingly.
Thank you for the opportunity to clarify. The term “head” is used in the deep learning technical literature to refer to the final portion of a CNN, usually after the end of the convolutional layers, starting with the first flattened layer with one or more fully connected layers leading to the final classification layer. [This is because of the unfortunately confusing convention of considering the input of the network as the “bottom” and the output as the “top” even though this is the opposite of how the networks are usually depicted visually – see for example https://forums.fast.ai/t/terminology-question-head-of-neural-network/14819 and https://stackoverflow.com/questions/56004483/what-is-a-multi-headed-model-and-what-exactly-is-a-head-in-a-model]
Our use of “heads” follows this convention in figures 2 and 4 and in the text. To help minimize confusion, we added the following sentence:
“The portion of the network with flattened non-convolutional fully connected layers after the last convolutional layer up until the final binary classifier is known as the ‘head’”.
- 'Resulting images had a resolution of 91x109x91 voxels.' (lines 148-149). Considering down-sampling was performed in halves (stride=2 in Figure 3), how did authors deal with odd dimensions throughout the network? Additionally, given down-sampling was symmetrical, how did 91x109x91 voxels end up as 2x2x2 voxels or less? Please clarify.
Thank you for the opportunity to clarify. When a layer with an odd dimension 2n-1 undergoes a convolution with stride=2, the resulting layer will be of dimension n. Thus, for example in the case of a 91x109x91 image, the dimensions after successive stride 2 convolutions will be as follows:
91x109x91 → 46x55x46 → 23x28x23 → 12x14x12 → 6x7x6 → 3x4x3 → 2x2x2
We added the following explanation:
“When a convolutional layer processes an input whose size is odd-numbered in any of its dimensions (2n-1 for any integer n) the resulting output of a stride 2 convolution will be of size n for the corresponding dimension. For example, since the input image has resolution 91x109x91, the output after the first stride 2 convolution will be 46x55x46.”
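This behavior can be verified with a toy one-layer check (illustrative only):

    import tensorflow as tf

    x = tf.zeros((1, 91, 109, 91, 1))                               # one preprocessed volume
    conv = tf.keras.layers.Conv3D(64, 3, strides=2, padding='same')
    print(conv(x).shape)                                            # (1, 46, 55, 46, 64)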
- 'L2 regularization as also added in the convolutional layers in all networks.' (line 211). Please include the regularization parameter.
We have added the following sentence:
“For the sequential model, we used a regularization parameter of 0.005, and for the residual models we used a parameter of 0.0001.”
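In Keras terms, this corresponds to the following illustrative layer (a sketch, not our actual code):

    from tensorflow.keras import layers, regularizers

    # 0.005 for the sequential model; 0.0001 for the residual models.
    conv = layers.Conv3D(64, 3, padding='same', activation='relu',
                         kernel_regularizer=regularizers.l2(0.005))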
- 'All training was performed using Tensorflow2/Keras python library, on Google Compute Platform virtual instances with Tesla V-100 GPU acceleration.' (lines 211-213). It would be interesting to include some information on training and inference times.
We added the following sentence in the results section, under “AD vs NC classification to generate weights”:
“Training time for each run was approximately 60-90 minutes. After the model was trained, classification of a patient took a few seconds (most of this time is spent loading the images from storage into memory).”
We also added the following sentence in the results section under “Prediction of conversion at 3 years”:
“Training time for each fine-tuning experiment was approximately 15-30 minutes. Prediction of conversion using a trained model (either zero-shot or fine-tuned) took only a few seconds.”
- 'Discussion' (lines 259-); 'We examined three different convolutional architectures [...]' (lines 314-). I feel that, at some point in the discussion, an analysis about the performances from the different networks is missing, for instance, why an architecture would be better than the others. Please complement.
The following additional discussion language was added:
“Although for the initial classification task, the twin wide residual network performed best among all architectures, after the transfer learning the twin sequential network was the overall best performer. In the single channel variants, the sequential networks performed best. The bottleneck variant of the residual network performed the worst amongst all architectures. In general, the residual networks provide the benefit of reducing the vanishing gradient problem, as compared with a non-residual sequential style. The bottleneck in particular is meant to strongly prevent vanishing gradients. Since vanishing gradients did not appear significantly during training, the advantages of the residual network appeared not to materialize, and thus, overall, the sequential networks seemed best fit for 3D MRI whole-brain analysis.”
Experimental design
This research is relevant and meaningful.
For paper improvement, there are some points in the text that need better descriptions and clarification. For a rigorous investigation, a few additional experiments are necessary.
- '(3) a twin network (identical channels with weights independently optimized).' (lines 180-181). The siamese networks were experimented with two merging options. It seems that the twin network was experimented with only one merging option. Which one? Why? Please clarify. For completeness, it would be interesting to experiment with both options using the twin network as well.
We have added the following explanation of why it did not make sense to apply subtraction to the twin network:
“Since the flattened set of post-convolution features in the twin architecture is different in each channel, as they are the result of different parameters, there is no rationale for directly subtracting them, so we only considered a concatenation merge option for the twin architecture.”
- 'Data were split into 75% and 25% for training and testing, respectively, with 4-fold cross validation.' (lines 34-35)
- 'Data for training and testing were randomly split up as 75% and 25%, with the training set composed of 415 patients (249 sMCI and 166 pMCI), and the testing set composed of 139 patients (84 sMCI and 55 pMCI). Training employed 4-fold cross validation.' (lines 129-132)
- 'Data for training and testing were randomly split up as 70% and 30%. The training set consisted of 387 patients (160 AD and 227 NC). Training employed 4-fold cross-validation.' (lines 136-138)
- 'Images were split into testing, validation, and testing sets at the patient level in order to avoid data leakage. [...]' (lines 154-159)
- 'We also carefully prevented data leakage [...]' (lines 336-)
I appreciate that the authors were concerned with data leakage, as this is very important. However, the text is not completely clear, and, more importantly, I'm afraid that the provided code does not represent what is described in the text.
In each data split, it seems that cross-validation was performed on the 'training' set, which makes it a combined training and validation set. If this is correct, a possible suggestion would be in the lines of: 'Data were split into 75% and 25% for training/validation and testing, respectively. Then, we optimized the networks using a 4-fold cross-validation on the training/validation set, resulting in 56.25% for training and 18.75% for validation from the complete data in each fold split. In the end, we had four trained models for each experiment configuration, and reported the mean and SD BA.'
We concur with the suggestion. We have updated the language based on your recommendation as follows:
“Data were split into 75% and 25% for training/validation and testing, respectively, with the training/validation set composed of 415 patients (249 sMCI and 166 pMCI), and the testing set composed of 139 patients (84 sMCI and 55 pMCI). Then, we optimized the networks using a 4-fold cross-validation on the training/validation set, resulting in 56.25% for training and 18.75% for validation from the complete data in each fold split. In the end, we had four trained models for each experiment configuration, and reported the mean and standard deviation (SD) BA.”
And for the classification task we updated the language to state:
“Data for training/validation and testing were randomly split up as 70% and 30% respectively. The training/validation set consisted of 387 patients (160 AD and 227 NC), while the independent testing set consisted of 170 patients (77 AD and 93 NC). Using 4-fold cross validation on the training/validation set resulted in 52.5% for training and 17.5% for validation from the complete data in each fold split.”
Additionally, in file data_split.py, there is a single call to train_test_split when splitting training and validations sets. It seems that there is no cross-validation. Please double check.
Thank you for the opportunity to clarify. The code in data_split.py performs the initial split between a training and a validation set and places the images in two separate directories. In order not to copy the image data 3 more times in storage for purposes of cross validation, we create additional directories with soft links. A sample script that automates this cross linking has been added to the uploaded code (cross_link.py). Once these directories are in place (with links to the images representing the four separate training/validation folds), when any of the models are instantiated in train.py, the property xval is set to a value between 1 and 4 and the code adds the proper suffix to point to the correct directory. The train.py code expects the cross-validation folds to be in directories named base_name_nn where nn is between 01 and 04 in our case.
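A minimal sketch of the linking step (directory names follow the convention described above; the function and paths are illustrative, not a verbatim excerpt of cross_link.py):

    import os

    def link_fold(image_paths, fold_dir):
        # e.g. fold_dir = 'trainval_01' .. 'trainval_04'; images are linked, not copied.
        os.makedirs(fold_dir, exist_ok=True)
        for p in image_paths:
            os.symlink(os.path.abspath(p), os.path.join(fold_dir, os.path.basename(p)))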
- 'During training, data augmentation was performed on the training set by rotating each MRI by up to 5% in any direction' (lines 149-150). Given that registration includes rotation, I wonder how much this rotation augmentation might actually improve (or even deteriorate) results. For completeness, I suggest the authors perform an additional experimentation without this rotation augmentation.
We appreciate the suggestion and the opportunity to further explain our rationale. Data augmentation using minor random distortion such as small axial rotation, in the context of images which underwent affine registration as part of the preprocessing is a common technique that has been described before (e.g. Basaia et al 2019, Lian et al 2018). Since MRIs are obtained with patients having some degree of rotation and shift based on their positioning, registration normalizes this and aligns all images to the standard. Controlled data augmentation is then added during training in order to reduce potential overfitting. Since the training profiles for these networks show that overfitting is a significant limiting factor, there is no reason to suspect that augmentation will be detrimental, and it is likely that it will have some modest positive effect. We do not believe a test of the effect of particular part of the augmentation adds significant value to the question we are addressing in this study.
- 'For the single timepoint experiments, we used the 12-month images because they performed better than the initial baseline images during the classification task.' (lines 245-247)
- 'The dual timepoint network performed better regardless of whether the initial or the follow up image was used for the single timepoint.' (lines 274-275)
These are very important statements. If the 12-month images were used in the classification task, the results shouldn't change, as both baseline and 12-month images represent either AD or CN. However, if the 12-month images were used in the prediction task, then it changes the interpretation of the experiments and results, as it would mean a prediction at 2 years, instead of 3 years.
Nevertheless, using the follow up image for the single timepoint is an interesting experiment and should be reported throughout the paper, in Methods, Results, and Discussion.
Note that, in Figure 2, there are some differences between the architectures. First, as previously stated, there is the difference between baseline and follow up images, which should be compared in the single timepoint experiment. Secondly, the siamese architecture has twice the amount of input data. Finally, the twin architecture has twice the amount of input data and nearly twice the amount of parameters. I see this as an ablation study going from the 'baseline' model to the 'final/best' model.
For completeness, please report both single timepoint experiments.
We concur with the comments and the suggestion. For the prediction task, we performed both the zero-shot learning and fine-tuning experiments again using the baseline image (the values reported in our initial submission were using the 1-year image).
We added the following language to the methods section:
“When performing the prediction tasks (using both zero-shot learning and fine-tuning) in the single timepoint experiments, we attempted the same task using both the initial baseline MRI as well as the 1-year follow up image. Using either the single timepoint with the 1-year image or both timepoints together longitudinally represents, for patients who have had MCI for one year and not yet progressed to AD, a prediction of whether they will eventually convert to AD within two more years.”
In the results section, we updated the language to state:
“For the single timepoint experiments, the 12-month images performed better than the initial baseline images during the classification task.”
Table 3 (shown above, originally numbered Table 2) was updated to include the single timepoint experiments using the baseline and 1-year images separately.
Validity of the findings
It is not possible to completely assess the validity of the findings at this moment due to questions raised in '2. Experimental design'.
We have addressed the reviewer’s concerns above, and we believe the validity of the findings can now be fully assessed.
" | Here is a paper. Please give your review comments after reading it. |
117 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. While there is no cure for Alzheimer's disease (AD), early diagnosis and accurate prognosis of AD may enable or encourage lifestyle changes, neurocognitive enrichment, and interventions to slow the rate of cognitive decline. The goal of our study was to develop and evaluate a novel deep learning algorithm to predict mild cognitive impairment (MCI) to AD conversion at three years after diagnosis using longitudinal and whole-brain 3D MRI.</ns0:p><ns0:p>Methods. This retrospective study consisted of 320 normal cognition (NC), 554 MCI, and 237 AD patients. Longitudinal data include T1-weighted 3D MRI obtained at initial presentation with diagnosis of MCI and at 12-month follow up. Whole-brain 3D MRI volumes were used without a priori segmentation of regional structural volumes or cortical thicknesses. MRIs of the AD and NC cohort were used to train a deep learning classification model to obtain weights to be applied via transfer learning for prediction of MCI patient conversion to AD at three years post-diagnosis. Two (zero-shot and fine tuning) transfer learning methods were evaluated. Three different convolutional neural network (CNN) architectures (sequential, residual bottleneck, and wide residual) were compared. Data were split into 75% and 25% for training/validation and testing, respectively, with 4-fold cross validation. Prediction accuracy was evaluated using balanced accuracy. Heatmaps were generated.</ns0:p><ns0:p>Results. The sequential convolutional approach yielded slightly better performance than the residual-based architecture, the zero-shot transfer learning approach yielded better performance than fine tuning, and the CNN using longitudinal data performed better than the CNN using a single timepoint MRI in predicting MCI conversion to AD. The best CNN model for predicting MCI conversion to AD at three years after diagnosis yielded a balanced accuracy of 0.795. Heatmaps of the prediction model showed regions most relevant to the network, including the lateral ventricles, periventricular white matter, and cortical gray matter.</ns0:p><ns0:p>Conclusions. This is the first convolutional neural network model using longitudinal and whole-brain 3D MRIs without extracting regional brain volumes or cortical thicknesses to predict future MCI to AD conversion at 3 years after diagnosis. This approach could lead to early prediction of patients who are likely to progress to AD and thus may lead to better management of the disease.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Background</ns0:head><ns0:p>Alzheimer's disease (AD) is a progressive neurodegenerative disease characterized by loss of memory and other cognitive functions <ns0:ref type='bibr' target='#b32'>(McKhann et al. 2011)</ns0:ref>. Mild Cognitive Impairment (MCI) is considered a transitional state between normal aging and dementia. Many patients progress from MCI to AD, but others remain stable without developing AD. Although diagnoses of MCI and AD are typically made using neuropsychological tests <ns0:ref type='bibr' target='#b38'>(Petersen et al. 1999;</ns0:ref><ns0:ref type='bibr' target='#b16'>Jak et al. 2009)</ns0:ref>, imaging methods are also used to diagnose AD and to monitor disease progression because they provide neural correlates of the underlying brain dysfunction in a longitudinal non-invasive manner <ns0:ref type='bibr' target='#b18'>(Johnson et al. 2012)</ns0:ref>. While there is no cure for AD, early diagnosis and accurate prognosis may enable or encourage lifestyle changes, neurocognitive enrichment, and therapeutic interventions that strive to improve symptoms, or at least slow down mental deterioration, thereby improving the quality of life <ns0:ref type='bibr' target='#b6'>(Epperly et al. 2017)</ns0:ref>.</ns0:p><ns0:p>Machine learning (ML) is increasingly being used in medicine from disease classification to prediction of clinical progression (de Bruijne 2016; <ns0:ref type='bibr' target='#b8'>Erickson et al. 2017)</ns0:ref>. ML uses algorithms to learn the relationship amongst different data elements to inform outcomes. Neural networks, a form of ML, are made up of a collection of connected nodes that model the neurons present in a human brain <ns0:ref type='bibr' target='#b9'>(Graupe 2013)</ns0:ref>. Each connection, similar to a synapse, transmits and receives signals to other nodes. Each node and the connections it forms are initialized with weights which are adjusted throughout training and create mathematical relationships between the input data and the outcomes. In contrast to traditional analysis methods such as logistic regression, neural networks do not require relationships between different input variables and the outcomes to be explicitly specified a priori. In radiology, ML can accurately detect lung nodules on chest X-rays <ns0:ref type='bibr' target='#b11'>(Harris et al. 2019)</ns0:ref>. In cardiology, ML can detect abnormal EKG patterns <ns0:ref type='bibr' target='#b19'>(Johnson et al. 2018</ns0:ref>). ML has also been used to estimate risk, such as in the Framingham Risk Score for coronary heart disease <ns0:ref type='bibr' target='#b0'>(Alaa et al. 2019)</ns0:ref>, and to guide antithrombotic therapy in atrial fibrillation <ns0:ref type='bibr' target='#b30'>(Lip et al. 2010)</ns0:ref> and defibrillator implantation in hypertrophic cardiomyopathy <ns0:ref type='bibr' target='#b35'>(O'Mahony et al. 2014)</ns0:ref>. Convolutional neural networks (CNNs), a deep-learning method, are widely used for image analysis and analysis of complex data <ns0:ref type='bibr' target='#b25'>(Lecun et al. 1998;</ns0:ref><ns0:ref type='bibr' target='#b24'>Krizhevsky et al. 2012;</ns0:ref><ns0:ref type='bibr' target='#b45'>Simonyan & Zisserman 2014)</ns0:ref>.</ns0:p><ns0:p>Deep learning classification amongst normal cognition (NC), MCI and AD based on magnetic resonance imaging (MRI) data have been reported <ns0:ref type='bibr' target='#b4'>(Cheng et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b23'>Korolev et al. 
2017;</ns0:ref><ns0:ref type='bibr'>Wen et al. 2020)</ns0:ref>. By contrast, there are comparatively fewer studies that reported prediction of MCI to AD conversion using deep learning of MRI data <ns0:ref type='bibr' target='#b28'>(Lian et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b29'>Lin et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b31'>Liu et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b43'>Shmulev & Belyaev 2018;</ns0:ref><ns0:ref type='bibr' target='#b1'>Basaia et al. 2019;</ns0:ref><ns0:ref type='bibr'>Wen et al. 2020)</ns0:ref>. A few ML studies used extracted brain structures or cortical thicknesses, and some used 3D patches from predetermined locations across the brain, but not whole-brain MRI data, to predict MCI to AD conversion <ns0:ref type='bibr' target='#b28'>(Lian et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b31'>Liu et al. 2018;</ns0:ref><ns0:ref type='bibr'>Wen et al. 2020)</ns0:ref>. Most of the few prediction studies used single timepoint MRI data. To our knowledge there are only two studies that predicted disease progression using longitudinal imaging data. Bhagwat used a neural network (albeit not deep learning) and extracted cortical thicknesses from MRIs at two time points to predict decline in Mini-Mental Status Exam (MMSE) scores <ns0:ref type='bibr' target='#b2'>(Bhagwat et al. 2018)</ns0:ref>. Ostertag et al. used a CNN model on whole-brain MRI at two time points to predict decline in MMSE score but did not test their model on an independent testing dataset <ns0:ref type='bibr' target='#b36'>(Ostertag et al. 2019)</ns0:ref>. These two studies mixed NC, MCI and AD participants and thus accuracies are not applicable to prediction of MCI to AD conversion. To our knowledge, there are no published studies to date on deep learning to predict MCI to AD conversion using longitudinal and whole-brain 3D MRI.</ns0:p><ns0:p>The goal of our study was thus to develop and evaluate a novel deep-learning algorithm to predict MCI to AD conversion at three years after diagnosis using longitudinal and whole-brain 3D MRI. Longitudinal data include MRI obtained at initial presentation with diagnosis of MCI and at 12-month follow up. Whole-brain 3D MRI volumes were used without a priori segmentation of regional structural volumes or cortical thicknesses. Several convolutional model architectures, transfer learning methods, and methods of merging longitudinal whole-brain 3D MRI data were evaluated to derive the final optimal deep-learning predictive model.</ns0:p></ns0:div>
<ns0:div><ns0:head>Methods</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> shows the overall design of the experiment. 3D MRIs of the AD and NC cohort were trained in a CNN classification model to obtain weights for transfer learning to be used in the CNN prediction of MCI patient conversion to AD in 3 years after diagnosis. Two (zero-shot and fine tuning) transfer learning methods for prediction were evaluated <ns0:ref type='bibr' target='#b37'>(Pan & Yang 2010)</ns0:ref>. The zero-shot transfer method used the intact weights obtained from the NC-AD classification without any additional training. The fine-tuning transfer method kept the weights in the convolutional layers frozen while allowing the remaining fully connected layers to change during additional training against the MCI images.</ns0:p></ns0:div>
<ns0:div><ns0:head>Participants</ns0:head><ns0:p>Data used in this study were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). Patients were taken from the ADNI1, ADNIGO, ADNI2, and ADNI3 patient sets. For the prediction task, inclusion criteria were patients diagnosed with MCI at baseline with MRI taken at baseline and ~12 months after baseline, and with a final diagnosis at 3 years post baseline of either MCI (labeled as stable MCI or sMCI) or AD (labeled as progressive MCI or pMCI). Patients who converted from MCI to AD before their 12-month follow up image were excluded, since the analysis of their longitudinal image at that point would represent a diagnostic classification, not a prediction.</ns0:p></ns0:div>
<ns0:div><ns0:p>Data were split into 75% and 25% for training/validation and testing, respectively, with the training/validation set composed of 415 patients (249 sMCI and 166 pMCI), and the testing set composed of 139 patients (84 sMCI and 55 pMCI). Then, we optimized the networks using a 4-fold cross-validation on the training/validation set, resulting in 56.25% for training and 18.75% for validation from the complete data in each fold split. In the end, we had four trained models for each experiment configuration, and reported the mean and standard deviation (SD) BA.</ns0:p><ns0:p>For the AD-NC classification to obtain weights for transfer learning, a separate group of participants with a baseline diagnosis of AD or NC with images taken at baseline and ~12 months from baseline were selected. Data for training/validation and testing were randomly split up as 70% and 30% respectively. The training/validation set consisted of 387 patients (160 AD and 227 NC), while the independent testing set consisted of 170 patients (77 AD and 93 NC). Using 4-fold cross validation on the training/validation set resulted in 52.5% for training and 17.5% for validation from the complete data in each fold split. The networks were trained to classify the patients between AD and NC against the ground truth diagnosis of each patient using ADNI criteria. For all experiments the study size represents the total available patients in the ADNI database fully meeting all inclusion criteria.</ns0:p></ns0:div><ns0:div><ns0:head>Preprocessing</ns0:head><ns0:p>3D volumes of T1-weighted MRI were used as input to the networks. To remove intensity inhomogeneity from the image inputs, T1-weighted images with nonparametric non-uniform intensity normalization (N3) correction <ns0:ref type='bibr' target='#b46'>(Sled et al. 1998)</ns0:ref> were selected from the ADNI database. All MRIs were skull stripped with DeepBrain <ns0:ref type='bibr' target='#b15'>(Itzcovich 2018)</ns0:ref>, then linearly registered against a 2mm standard brain with nine degrees of freedom (translation, rotation, scaling) using FSL FLIRT <ns0:ref type='bibr' target='#b17'>(Jenkinson et al. 2012)</ns0:ref>, and finally min/max intensity normalized. Resulting images had a resolution of 91x109x91 voxels. During training, data augmentation was performed on the training set by rotating each MRI by up to 5% in any direction and randomly flipping them left to right along the sagittal axis.</ns0:p></ns0:div><ns0:div><ns0:head>Training, validation, and testing</ns0:head><ns0:p>Images were split into training, validation, and testing sets at the patient level in order to avoid data leakage. For both classification and prediction tasks, after assigning labels (either AD vs NC or sMCI vs pMCI), the train_test_split function was used to randomly split the data into training/validation and testing sets and to form the cross-validation folds <ns0:ref type='bibr' target='#b48'>(Virtanen et al. 2020)</ns0:ref>. Stratification was done based on the labels in order to maintain a consistent distribution of diagnoses across the training, validation, and testing datasets. The randomization seed was set to a constant to ensure the same train/validation/test split was obtained for each experiment run. Balanced accuracy (BA), defined as the average of sensitivity and specificity, was used as the main binary classifier metric to eliminate the inflated accuracy effect caused by imbalanced data sets. For purposes of computing accuracy, we use a standard value of 0.5 as the threshold between the two labels (either NC & AD or pMCI & sMCI). We also computed the area under the receiver operating characteristic curve (AUC) for each run, since it provides a more general measure of the potential performance of a network across a range of thresholds. To prevent data leakage, once the test partition was randomly selected, the test set images were set aside and not used for any training or validation. 
We repeated each training experiment 4 times, using the standard k-fold cross validation approach where the training/validation set is partitioned into four subsets, using each subset once for validation, and reported both the mean and standard deviation of the BAs for each experiment. This ensures that each image is included at least once in both the validation and training sets, minimizing the potential selection bias that a single random data split may introduce. For each cross-validation fold, classification task training was performed for 200 epochs with early stopping based on no improvement in loss function for 80 epochs. Fine tuning after freezing all convolutional layers was performed for 100 epochs with early stopping patience of 40. The weights of the epoch ending with the lowest loss were saved and used to obtain the validation BA, and then the network was run against the test set for the test BA. Additional attempts at fine tuning by also unfreezing the last convolutional block were noted to degrade accuracy, so this approach was not considered any further.</ns0:p></ns0:div>
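A sketch of the early-stopping setup described above (Keras callbacks; the checkpoint file name, datasets, and the assumption that validation loss is the monitored quantity are illustrative):

    import tensorflow as tf

    callbacks = [
        # stop after 80 epochs without improvement in the monitored loss
        tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=80),
        # keep the weights of the epoch with the lowest loss
        tf.keras.callbacks.ModelCheckpoint('best_weights.h5', monitor='val_loss',
                                           save_best_only=True, save_weights_only=True),
    ]
    model.fit(train_ds, validation_data=val_ds, epochs=200, callbacks=callbacks)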
<ns0:div><ns0:head>CNN Architecture</ns0:head><ns0:p>The neural network models (Figure <ns0:ref type='figure'>2</ns0:ref>) consisted of a convolutional section followed by a fully connected section (head). Three (sequential, residual with bottleneck, and wide residual) types of convolutional blocks (Figure <ns0:ref type='figure'>3</ns0:ref>) and three head architectures (Figure <ns0:ref type='figure'>4</ns0:ref>) were examined. In addition, we also evaluated these networks using a single (baseline) timepoint MRI and MRIs at two time points (baseline and 12 months). For the dual timepoint networks, three types of networks were explored to incorporate longitudinal data: (1) Siamese network (two identical parallel channels with weights tied together) using a subtraction layer as the merging function, (2) Siamese network with a concatenation merge layer, and (3) a twin network (identical channels with weights independently optimized). Since the flattened set of post-convolution features in the twin architecture is different in each channel, as they are the result of different parameters, there is no rationale for directly subtracting them, so we only considered a concatenation merge option for the twin architecture. For all networks, the final binary classifier layer was fully connected with sigmoid activation. When performing the prediction tasks (using both zero-shot learning and fine-tuning) in the single timepoint experiments, we attempted the same task using both the initial baseline MRI as well as the 1-year follow up image. Using either the single timepoint with the 1-year image or both timepoints together longitudinally represents, for patients who have had MCI for one year and not yet progressed to AD, a prediction of whether they will eventually convert to AD within two more years.</ns0:p><ns0:p>After initial experimentation, an optimal set of blocks was identified for the sequential and wide residual network styles, namely using 6 blocks with widths (number of activation maps) = {64, 128, 256, 512, 1024, 2048}. In the case of the sequential convolution network, each block reduced the resolution via maximum pooling as the width increased, down to 1x1x1 in the final convolutional block. In the case of the wide residual network, convolutions with the use of strides gradually reduced the resolution down to 2x2x2, with a global maximum pooling layer in the head portion of the network. When a convolutional layer processes an input whose size is odd-numbered in any of its dimensions (2n-1 for any integer n) the resulting output of a stride 2 convolution with zero-padding will be of size n for the corresponding dimension. For example, since the input image has resolution 91x109x91, the output after the first stride 2 convolution will be 46x55x46. For the bottleneck residual, the final convolutional resolution was also 2x2x2, achieved via strides, but this required 7 blocks with widths = {64, 64, 128, 256, 512, 1024, 2048}. The bottleneck architecture used 1x1x1 convolutions for resolution reduction, so the behavior was slightly different when the resolution had odd numbers, allowing for 7 instead of 6 blocks until the resolution was down to 2x2x2. The portion of the network with flattened nonconvolutional fully connected layers after the last convolutional layer up until the final binary classifier is known as the 'head.' 
After initial analysis of networks with varying heads, a global maximum pooling operation resulting in a fully connected layer with a number of nodes equal to the number of activation maps (width) of the last convolutional step, followed directly by a single final dense prediction layer, was selected as the optimal fully connected layer architecture (Figure <ns0:ref type='figure' target='#fig_3'>4A</ns0:ref>).</ns0:p><ns0:p>Training was initially attempted using both non-adaptive (SGD with Nesterov momentum) and adaptive (Adam) optimizers <ns0:ref type='bibr' target='#b22'>(Kingma & Ba 2019)</ns0:ref>. The Adam optimizer was able to achieve reductions in the loss function with an accompanying increase in accuracy much more rapidly and aggressively. However, without a scheduled reduction of the base learning rate, the network became unstable in the latter epochs with rapid swings in the loss function. The use of an exponentially decaying learning rate schedule consistently stabilized both the loss and accuracy curves in an optimal fashion. The final selected optimization approach was thus the Adam optimizer with an exponentially decreasing learning rate schedule with expected initial rate LR_start and final rate LR_end, where t is the current epoch and T is the final expected number of epochs:</ns0:p><ns0:formula xml:id='formula_0'>LR_{\text{epoch}} = LR_{\text{start}} \left( \frac{LR_{\text{end}}}{LR_{\text{start}}} \right)^{t/T}</ns0:formula><ns0:p>L2 regularization was also added in the convolutional layers in all networks. For the sequential model, we used a regularization parameter of 0.005, and for the residual models we used a parameter of 0.0001. All training was performed using the TensorFlow 2/Keras Python library, on Google Compute Platform virtual instances with Tesla V-100 GPU acceleration.</ns0:p></ns0:div>
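For illustration, the schedule above maps directly onto a Keras LearningRateScheduler callback (the start/end rates and epoch count below are placeholder values, not the rates used in training):

    import tensorflow as tf

    LR_START, LR_END, T = 1e-3, 1e-5, 200          # illustrative values

    def exp_decay(epoch, lr):
        # LR_epoch = LR_start * (LR_end / LR_start) ** (epoch / T)
        return LR_START * (LR_END / LR_START) ** (epoch / T)

    optimizer = tf.keras.optimizers.Adam(learning_rate=LR_START)
    lr_callback = tf.keras.callbacks.LearningRateScheduler(exp_decay)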
<ns0:div><ns0:head>Network visualization by heatmap</ns0:head><ns0:p>To visualize the brain regions that are most relevant to the network, the Grad-CAM <ns0:ref type='bibr' target='#b42'>(Selvaraju et al. 2017)</ns0:ref> technique was modified to work in 3 dimensions for generating heatmaps. Since the models reduced the resolution of the image information within the convolution blocks down to 2x2x2 voxels or less, the 3D Grad-CAM technique was applied to higher convolutional layers (with resolution close to 40 voxels per axis) to obtain more useful visualization heatmaps. This approach enabled visual highlighting of the sections of the images that were most significant to the network. Heatmaps were obtained during the execution of the prediction models.</ns0:p></ns0:div>
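A minimal 3D Grad-CAM sketch consistent with this description (single-input case; the target layer name is a placeholder, and gradients of the sigmoid score are pooled over the three spatial axes):

    import numpy as np
    import tensorflow as tf

    def grad_cam_3d(model, volume, layer_name='conv_block_3'):    # layer name illustrative
        grad_model = tf.keras.Model(model.inputs,
                                    [model.get_layer(layer_name).output, model.output])
        with tf.GradientTape() as tape:
            conv_out, preds = grad_model(volume[np.newaxis, ...])
            score = preds[:, 0]                                   # sigmoid output
        grads = tape.gradient(score, conv_out)
        weights = tf.reduce_mean(grads, axis=(1, 2, 3))           # pool over x, y, z
        cam = tf.nn.relu(tf.reduce_sum(weights[:, None, None, None, :] * conv_out, -1))[0]
        return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()        # normalized heatmap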
<ns0:div><ns0:head>Results</ns0:head></ns0:div><ns0:div><ns0:head>AD versus NC classification to generate weights</ns0:head><ns0:p>Figure <ns0:ref type='figure'>5</ns0:ref> shows training curves for sequential single channel and wide residual dual channel training for the classification experiments. Overall, loss functions converged and leveled off at around 75 epochs. Other models showed similar convergence characteristics. Training time for each run was approximately 60-90 minutes. After the model was trained, classification of a patient took a few seconds (most of this time is spent loading the images from storage into memory).</ns0:p></ns0:div>
<ns0:div><ns0:head>Prediction of AD conversion at 3 years after diagnosis</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_5'>6</ns0:ref> shows the training curve for one of the dual sequential transfer learning fine-tuning attempts for the prediction experiment. Additional training did not improve the accuracy even though there was some clear reduction in loss function during training. Other models showed similar characteristics. Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref> summarizes the results of the prediction experiments, showing the outcomes of the dual and single timepoint networks using both zero-shot and fine-tuning transfer learning. The bottleneck residual convolutional style, because it performed much worse for classification, was not considered for prediction. For the single timepoint experiments, the 12-month images performed better than the initial baseline images during the classification task. The dual timepoint networks performed better than the single timepoints. For zero-shot transfer, the best results were obtained from the sequential model, with the dual sequential producing a 0.795 average BA against the test set, followed by the single timepoint sequential (using the 1-year image as input) with a BA of 0.774. Transfer learning with fine tuning in all cases resulted in lowered accuracy as compared with the zero-shot approach. This occurred whether fine tuning was attempted by unfreezing only the fully connected layers or also the last convolutional layer.</ns0:p><ns0:p>Training time for each fine-tuning experiment was approximately 15-30 minutes. Prediction of conversion using a trained model (either zero-shot or fine-tuned) took two seconds or less.</ns0:p><ns0:p>To visualize the brain regions that are most relevant to ML algorithms, post-training heatmaps for the wide residual dual network were generated (Figure <ns0:ref type='figure'>7</ns0:ref>). The highlighted structures included the lateral ventricles, periventricular white matter, and cortical surface gray matter.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>This study developed and evaluated several ML algorithms to predict which MCI patients would convert to AD at three years post-diagnosis using longitudinal whole-brain 3D MRI without a priori segmentation of regional structural volumes or cortical thicknesses. MRI data used for prediction were obtained at baseline and one year after baseline. The sequential convolutional approach yielded slightly better performance than the residual-based architecture, the zero-shot transfer learning approach yielded better performance than fine tuning, and the CNN using longitudinal data performed better than the CNN using a single timepoint MRI in predicting MCI conversion to AD. The best CNN model for predicting MCI conversion to AD at 3 years after diagnosis yielded a BA of 0.795.</ns0:p><ns0:p>Our predictive model used whole-brain MRIs without extracting regional brain volumes or cortical thicknesses. We also evaluated multiple longitudinal network configurations (i.e., Siamese and non-Siamese twin networks with subtraction and concatenation as the merge function). Longitudinal images were found to be optimally processed by a twin architecture with concatenation merge. The dual timepoint network performed better regardless of whether the initial or the follow up image was used for the single timepoint. Restricting the network in a Siamese configuration where the weights of both channels are identical or using a subtraction merge function resulted in worse prediction, which suggests that the networks take full advantage of the additional information provided by the second time point data when they were allowed to train each channel with separate weights.</ns0:p><ns0:p>We employed 3D MRI instead of 2D multi-slice MRI. Previous studies of MCI to AD prediction using a sequential full-volume 3D architecture have obtained BAs of 0.75 <ns0:ref type='bibr' target='#b1'>(Basaia et al. 2019</ns0:ref>) and 0.73 <ns0:ref type='bibr'>(Wen et al. 2020)</ns0:ref>, while a study using a residual architecture showed a resulting BA of 0.67 <ns0:ref type='bibr' target='#b43'>(Shmulev & Belyaev 2018)</ns0:ref> but did not involve longitudinal MRI data. Some studies used predetermined 3D patches uniformly sampled across the brain <ns0:ref type='bibr' target='#b28'>(Lian et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b31'>Liu et al. 2018;</ns0:ref><ns0:ref type='bibr'>Wen et al. 2020)</ns0:ref>. A limitation of the 3D-patch approach is that a subsequent fusion of the results via some kind of ensemble or voting method is needed to obtain a subject-level prediction, and brain-wide anatomic relationships are not taken into account. There are two previous related studies that used longitudinal MRI data for prediction of MCI or AD disease progression. Bhagwat et al. employed baseline and 1-year MRIs with a Siamese neural network with concatenation merge to predict a pattern of decline in patients' MMSE score, yielding an accuracy of 0.95 <ns0:ref type='bibr' target='#b2'>(Bhagwat et al. 2018)</ns0:ref>. In contrast to our study, regional cortical thicknesses, a non-convolutional method, and clinical variables were used. The use of clinical variables could have contributed substantially to higher accuracy. Ostertag et al. 
used a similar Siamese network but employed whole-brain MRI to predict decline in patients' MMSE score, yielding a validation accuracy of 0.90, although no independent evaluation on a separate test dataset was performed <ns0:ref type='bibr' target='#b36'>(Ostertag et al. 2019</ns0:ref>). Moreover, these two studies differed from ours in that they mixed AD, NC, and MCI patients together, and thus their prediction accuracies are not directly comparable to those from MCI to AD conversion studies, because the baseline diagnosis of NC or AD is by itself a strong predictor of neurocognitive decline.</ns0:p><ns0:p>The use of a Siamese network architecture to analyze longitudinal changes in disease progression from medical images was explored by Li et al. and specifically studied in AD brain MRIs by Bhagwat et al. and Ostertag et al. <ns0:ref type='bibr'>(Bhagwat et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b36'>Ostertag et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b27'>Li et al. 2020</ns0:ref>). The idea behind Siamese networks is that both images are processed by convolutional layers with identical parameters, yielding equivalent flattened sets of features for each image at the end of the convolutions. Thus, theoretically, a direct subtraction merge of the corresponding flattened features would represent a measure of the progression between the images; presumably an MCI patient whose structural MRI features have worsened in a year would be more likely to progress to AD than a patient whose features remain stable. However, a simple subtraction merge may result in loss of predictive information if there are particular features that are predictive of progression regardless of whether they have changed between baseline and 1 year. Thus, we also explored the concatenation merge. In addition, a twin (non-Siamese) network with separate parameters may provide better predictive capacity, partly due to the additional power of doubling the number of convolutional parameters, so we also explored this architecture. Since the flattened set of post-convolution features in the twin architecture is different in each channel, as they are the result of different parameters, there is no rationale for directly subtracting them, so we only considered a concatenation merge option for the twin architecture.</ns0:p><ns0:p>Although for the initial classification task the twin wide residual network performed best among all architectures, after transfer learning the twin sequential network was the overall best performer. In the single channel variants, the sequential networks performed best. The bottleneck variant of the residual network performed the worst amongst all architectures. In general, residual networks provide the benefit of reducing the vanishing gradient problem, as compared with a non-residual sequential style, and the bottleneck in particular is meant to strongly prevent vanishing gradients. Since vanishing gradients did not appear significantly during training, the advantages of the residual network did not materialize, and thus, overall, the sequential networks seemed best fit for 3D MRI whole-brain analysis.</ns0:p></ns0:div>
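For readers who want a concrete picture of the dual-channel design discussed above, the following is a minimal sketch, not the authors' released code, of a twin (non-Siamese) dual-timepoint 3D CNN with a concatenation merge; the tensorflow.keras API is assumed, and the input shape and layer widths are illustrative stand-ins.

```python
# Sketch of a twin (non-Siamese) dual-timepoint 3D CNN with concatenation
# merge. Shapes and widths are illustrative assumptions, not the paper's
# exact configuration.
from tensorflow.keras import layers, models

def conv_stack(x, name):
    # Separate (unshared) convolutional weights per timepoint channel.
    for i, width in enumerate([8, 16, 32]):
        x = layers.Conv3D(width, 3, padding="same", name=f"{name}_conv{i}")(x)
        x = layers.BatchNormalization(name=f"{name}_bn{i}")(x)
        x = layers.Activation("relu", name=f"{name}_relu{i}")(x)
        x = layers.MaxPooling3D(2, name=f"{name}_pool{i}")(x)
    return x

baseline = layers.Input(shape=(96, 96, 96, 1), name="mri_baseline")
followup = layers.Input(shape=(96, 96, 96, 1), name="mri_12_months")

feat_a = conv_stack(baseline, "t0")   # twin channels: same structure,
feat_b = conv_stack(followup, "t1")   # independent weights (non-Siamese)

# Concatenation keeps the features of both timepoints; a subtraction merge
# would retain only their difference.
merged = layers.Concatenate()([feat_a, feat_b])
pooled = layers.GlobalMaxPooling3D()(merged)
output = layers.Dense(1, activation="sigmoid", name="pMCI_vs_sMCI")(pooled)

model = models.Model([baseline, followup], output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

A Siamese variant would instead build one conv_stack and apply it to both inputs, which is exactly the weight-sharing restriction that performed worse in the experiments above.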
<ns0:div><ns0:head>Heatmaps</ns0:head><ns0:p>Heatmaps enabled visualization of the brain regions that were most relevant to the ML algorithms predicting MCI to AD conversion. The most salient structures on the heatmaps were the lateral ventricles, periventricular deep white matter, and extensive cortical gray matter. Ventricular enlargement and atrophy are known to be associated with AD. Reduction in white-matter volume has been described in AD, including some of the specific regions that our heatmap analysis found to be of interest <ns0:ref type='bibr' target='#b47'>(Smith et al. 2000;</ns0:ref><ns0:ref type='bibr' target='#b10'>Guo et al. 2010;</ns0:ref><ns0:ref type='bibr' target='#b21'>Kao et al. 2019)</ns0:ref>, such as the cingulate gyrus <ns0:ref type='bibr' target='#b3'>(Brun &amp; Gustafson 1976;</ns0:ref><ns0:ref type='bibr' target='#b14'>Hirono et al. 1998;</ns0:ref><ns0:ref type='bibr' target='#b20'>Jones et al. 2006)</ns0:ref>, the middle occipital gyrus <ns0:ref type='bibr' target='#b51'>(Zhang &amp; Wang 2015)</ns0:ref>, and the putamen <ns0:ref type='bibr' target='#b39'>(Pini et al. 2016)</ns0:ref>. Other brain regions that have been shown to be associated with development of AD, such as the default mode network and hippocampus, were not uniformly highlighted in the heatmaps. Our analysis approach differs from previous analyses and does not specifically identify networks, although amongst the heatmaps shown there were components that are part of the default mode network and hippocampus. In other words, our analysis did not specifically test whether the hippocampus or default mode network is predictive of MCI to AD conversion. It is possible that, given that our MRI analysis is based on structural changes, the hippocampus and default mode network might not have developed enough atrophy to be informative for predicting conversion.</ns0:p></ns0:div>
<ns0:div><ns0:head>Other technical considerations</ns0:head><ns0:p>We examined three different convolutional architectures to identify the best performing prediction model. Two residual variants were compared, with the wide residual network performing better than the bottleneck variant, and the non-residual sequential network performing better than both residual types. The two residual approaches compared here were 3D modifications of ResNet <ns0:ref type='bibr' target='#b12'>(He et al. 2016)</ns0:ref>. The bottleneck variant used pre-activation, a technique where the batch normalization and activation layers precede the convolutions. The term 'bottleneck' refers to a design where each residual block includes two initial layers with narrower widths. The second residual variant examined for comparison was the wide residual network <ns0:ref type='bibr' target='#b50'>(Zagoruyko &amp; Komodakis 2016)</ns0:ref>. In this approach the widths were progressively increased, with an additional dropout layer between the two convolutions in each residual block. The sequential model we tested was a 3D extension of the 2D VGG model <ns0:ref type='bibr' target='#b45'>(Simonyan &amp; Zisserman 2014)</ns0:ref>, with sequential blocks formed by a combination of convolutional layers followed by pooling layers.</ns0:p><ns0:p>We also examined the relative performance of two transfer learning approaches. The zero-shot technique performed better than fine tuning. Further fine tuning with the sMCI vs pMCI data reduced the accuracy of the prediction network from that obtained via the exclusive use of AD vs NC data for classification task training. The lack of training power of the MCI data suggests that brain images with either AD or NC, with their more discriminant anatomic features, are better suited for training a network eventually used for detecting the more subtle distinctions between pMCI and sMCI.</ns0:p><ns0:p>We also carefully prevented data leakage by splitting the training and testing datasets at the patient level, ensuring that no data from the same patient would end up in both groups <ns0:ref type='bibr'>(Wen et al. 2020)</ns0:ref>. Another type of leakage we avoided occurs when data used for training the classification task are also used for the prediction task. Finally, in this study the testing set results were collected only after all training was completed, to prevent a third possible kind of leakage, namely where results from the test set influence the selection of hyperparameters or architecture. We also excluded patients who converted from MCI to AD before their 12-month follow-up.</ns0:p><ns0:p>In several cases for both classification and prediction we observed that the BA for the testing dataset had a slightly higher mean and lower standard deviation than the corresponding results for validation. This higher variation in the validation experiments could potentially be explained by the fact that each cross-validation fold has a different validation set of images, while all the testing results are obtained from the same single test set by applying the different trained models. This higher variation also means that a single lower BA result in one of the validation folds could pull down the mean validation BA.</ns0:p></ns0:div>
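The patient-level split described above can be made mechanical; the following is a small sketch, with synthetic stand-in data and hypothetical variable names, using scikit-learn's group-aware splitter.

```python
# Sketch of a patient-level split that prevents the first kind of leakage
# described above: no subject contributes scans to both train and test.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
n_scans = 554
patient_ids = rng.integers(0, 200, size=n_scans)   # stand-in subject IDs
X = rng.normal(size=(n_scans, 10))                 # stand-in features
y = rng.integers(0, 2, size=n_scans)               # pMCI / sMCI labels

splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=patient_ids))

# Sanity check: the two index sets share no patient.
assert set(patient_ids[train_idx]).isdisjoint(patient_ids[test_idx])
```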
<ns0:div><ns0:head>Limitations and future directions</ns0:head><ns0:p>The increase in BA obtained by using the longitudinal MRI (0.795 vs 0.774) was modest, although both techniques represented an increase compared to other published predictions of MCI conversion to AD. If the longitudinal MRI is otherwise available, it seems evident that the incremental improvement in predictive accuracy would justify its use. It is unclear, however, whether, without other reasons for performing a 1-year follow-up MRI, this increase in predictive accuracy would represent a new indication from a cost-effectiveness perspective. Thus, a comprehensive cost-benefit model analysis would be useful in this area.</ns0:p><ns0:p>The study used only anatomical MRI data. Multiparametric MRI (such as diffusion-tensor imaging, task functional MRI and resting-state MRI) will be incorporated into these models in the future. Similarly, other modalities such as Positron Emission Tomography (PET) and non-imaging clinical data can also be included in the model. Further studies will need to apply this approach to other datasets to improve generalizability. Future studies should investigate MCI to AD conversion at 1, 2 and 5 years post-diagnosis.</ns0:p><ns0:p>Our model is a predictive approach that employs machine learning based on whole-brain anatomical MRI to predict MCI to AD conversion. Future studies will need to compare different predictive models, including those that predict MCI to AD conversion based on extracted volume and cortical thickness as obtained using tools such as FastSurfer <ns0:ref type='bibr' target='#b13'>(Henschel et al. 2020</ns0:ref>). To do so, we will first systematically explore various methods to extract volume and cortical thickness, explore various approaches (such as neural networks and support vector machines) to predict MCI to AD conversion, and use these methods to do head-to-head comparisons on the same datasets.</ns0:p><ns0:p>Deep survival analysis <ns0:ref type='bibr' target='#b40'>(Ranganath et al. 2016</ns0:ref>) has been applied to the prediction of conversion to AD. Nakagawa et al. used deep survival analysis to model the prediction of conversion from either MCI or NC subjects to AD using volumetric data from MRI <ns0:ref type='bibr' target='#b34'>(Nakagawa et al. 2020)</ns0:ref>. A future extension of this analysis should investigate the use of data from the CNN models, both single-channel and longitudinal, using features extracted at the end of the convolutional layers.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>This is the first convolutional neural network model using longitudinal and whole-brain 3D MRIs, without extracting regional brain volumes or cortical thicknesses, to predict future MCI to AD conversion. This framework sets the stage for further studies of additional data time points, different image types, and non-image data to further improve the prediction accuracy of MCI to AD conversion. Accurate prognosis could lead to better management of the disease, thereby improving the quality of life.</ns0:p></ns0:div><ns0:note type='other'>Figure 2: Single and dual time point CNN architecture.</ns0:note><ns0:note type='other'>Figure 3: Sequential, residual with bottleneck, and wide residual CNN blocks. The convolutional layers portion of the network was organized as a series of blocks, each one with an increasing number K of activation maps (width), and with a corresponding decrease in resolution obtained by either pooling or stride during convolution.</ns0:note><ns0:note type='other'>Figure 4: Three head architectures.</ns0:note><ns0:note type='other'>Figure 5: Training curves during classification.</ns0:note><ns0:note type='other'>Figure 7: Heatmap visualization for 10 patients.</ns0:note><ns0:figure xml:id='fig_0'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>( A )</ns0:head><ns0:label>A</ns0:label><ns0:figDesc>Single timepoint CNN. For classification, input consisted of a single timepoint full-subject 3D MRI of patients diagnosed at baseline as either AD or CN, and output was a binary classification of AD vs CN. For prediction, input was a single timepoint full-subject 3D MRI of patients diagnosed as MCI, and output was a binary prediction of whether the patient progressed (pMCI) or remained stable (sMCI) 3 years later. (B) Dual timepoint CNN. Input included 3D MRI images obtained at both baseline and 12 months, with the patient population and output categories identical to those used for the single timepoint for classification and prediction. Both kinds of networks began with a series of convolutional blocks, followed by flattening into one or more fully connected layers ending in a final binary choice of classification or prediction.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>The figures detail the individual layers that compose a single block. (A) Sequential convolutional block. Each block was composed of a single 3x3x3 convolution, followed by batch normalization, ReLU activation, and max pooling to reduce the resolution. (B) Residual bottleneck with pre-activation convolutional block. Convolutions were preceded by batch normalization and ReLU activation. Two bottleneck 3x3x3 convolutions have a width of K/4, followed by a final 1x1x1 convolution with width K. In parallel, the skip residual used a 1x1x1 convolution to match the width and resolution. In this architecture the first residual block was preceded by an initial batch normalization followed by a single 5x5x5 convolution, plus one final batch normalization and ReLU activation after the last block (not shown). (C) Wide residual network convolutional block. In this architecture the batch normalization and activations occurred after the convolutional layers. Each block had two 3x3x3 convolutional layers with 3D spatial dropout in between, plus a 1x1x1 skip residual convolution to match width and resolution.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>( A )</ns0:head><ns0:label>A</ns0:label><ns0:figDesc>3D global maximum pooling fully connected block. The global pooling inherently flattens the nodes into a fully connected layer with N nodes, directly followed by the final binary classifier layer. (B) Long fully connected block. After flattening into a layer of N nodes, there are two sets of fully connected (size 2048 and 1024), batch normalization, and leaky ReLU activation layers, separated by a single dropout layer, before the final binary classifier. (C) Medium fully connected block. Initial 3D max pooling is followed by flattening into a fully connected layer of size N, followed by an additional fully connected layer of size 128 and ReLU activation.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Loss and Accuracy curves during training for both training and validation sets. For sequential network and single timepoint, (A) loss, (B) accuracy. For wide residual network and dual timepoints, (C) loss, (D) accuracy. Solid lines are smoothed with 0.8 factor and faint lines show the unsmoothed values for each epoch.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,70.87,525.00,294.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>summarizes the participant demographics. Data were split into 75% and 25% for training/validation and testing, respectively, with the training/validation set composed of 415 patients (249 sMCI and 166 pMCI), and the testing set composed of 139 patients (84 sMCI and 55 pMCI). Then, we optimized the networks using a 4-fold cross-validation on the training/validation set, resulting in</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>details the results of the classification experiments to generate weights. For the single timepoint networks, the sequential architecture performed best (BA = 0.860), followed by wide residual (BA = 0.840) and bottleneck residual (BA = 0.727) on the testing data. For the dual timepoint networks, the Siamese network with subtraction performed poorly overall with all architectures (BA &lt; 0.65), and the twin non-Siamese approach with merge concatenation performed best for dual channels. The wide residual (BA = 0.887) performed best, followed by sequential (BA = 0.876) and bottleneck residual (BA = 0.800). Training time for each run was approximately 60-90 minutes. After the model was trained, classification of a patient takes two seconds or less (most of this time is spent loading the images from storage into memory).</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 (on next page)</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>AD vs NC classification to generate weights. Balanced accuracy (BA) and area under the receiver operating characteristic curve (AUC) of the validation and test datasets obtained using single and dual timepoint networks with sequential, bottleneck residual and wide residual CNN blocks. Dual timepoint networks were twin (equal structure), non-Siamese (separate weights) and merged using concatenation. The best test average BAs are highlighted in bold for the single channel (sequential) and dual channel (wide residual).</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Model</ns0:cell><ns0:cell cols='2'>Validation mean ± SD</ns0:cell><ns0:cell cols='2'>Test mean ± SD</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Convolution Style</ns0:cell><ns0:cell>BA</ns0:cell><ns0:cell>AUC</ns0:cell><ns0:cell>BA</ns0:cell><ns0:cell>AUC</ns0:cell></ns0:row><ns0:row><ns0:cell>Single</ns0:cell><ns0:cell>Sequential</ns0:cell><ns0:cell>0.854 ± 0.027</ns0:cell><ns0:cell>0.918 ± 0.018</ns0:cell><ns0:cell>0.860 ± 0.016</ns0:cell><ns0:cell>0.922 ± 0.005</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Bottleneck Residual</ns0:cell><ns0:cell>0.689 ± 0.020</ns0:cell><ns0:cell>0.774 ± 0.017</ns0:cell><ns0:cell>0.727 ± 0.051</ns0:cell><ns0:cell>0.782 ± 0.052</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Wide Residual</ns0:cell><ns0:cell>0.835 ± 0.025</ns0:cell><ns0:cell>0.903 ± 0.027</ns0:cell><ns0:cell>0.840 ± 0.017</ns0:cell><ns0:cell>0.917 ± 0.006</ns0:cell></ns0:row><ns0:row><ns0:cell>Dual</ns0:cell><ns0:cell>Sequential</ns0:cell><ns0:cell>0.855 ± 0.014</ns0:cell><ns0:cell>0.938 ± 0.007</ns0:cell><ns0:cell>0.876 ± 0.010</ns0:cell><ns0:cell>0.937 ± 0.012</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Bottleneck Residual</ns0:cell><ns0:cell>0.772 ± 0.046</ns0:cell><ns0:cell>0.865 ± 0.037</ns0:cell><ns0:cell>0.800 ± 0.045</ns0:cell><ns0:cell>0.869 ± 0.043</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Wide Residual</ns0:cell><ns0:cell>0.856 ± 0.025</ns0:cell><ns0:cell>0.942 ± 0.012</ns0:cell><ns0:cell>0.887 ± 0.009</ns0:cell><ns0:cell>0.933 ± 0.003</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 (on next page)</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Prediction of AD at 3 years. Results (BA and AUC mean and standard deviation) of prediction using zero-shot and fine-tuning. For single-timepoint networks, results are shown using both the baseline and the 1-year MRIs. For zero-shot learning, each of the 4-fold classification trained weights was used as-is against each of the 4 validation fold sets for prediction (16 attempts) and against the prediction test set (4 attempts, with the best result also shown). For fine-tuning, the weights from the best zero-shot test result were used as starting weights for training against each of the 4-fold validation sets.</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "
Dear Editor,
Thanks for the opportunity to address these additional comments. Our responses to the reviewer’s questions are below. We believe the manuscript is now suitable for publication.
Sincerely,
Dr. Tim Duong
On behalf of all authors
Reviewer: Guilherme Folego
Basic reporting
I appreciate the authors incorporating the observations made in the reviews.
There are only a few questions/remarks left.
'slow the rate of cognitive decline' (line 22)
'to improve symptoms, or at least slow their rate of decline' (line 64)
Please note that English is not my native language, so take this with a grain of salt.
The first sentence makes sense to me, while the second one does not.
In the first one, you are trying to slow the rate of cognitive decline. In the second one, you are trying to slow the rate of symptoms decline.
In my understanding, a symptom is something bad, such as bad memory. When a symptom declines, I infer that the patient is getting better. If you slow this rate, then the patient will no longer get better. That's why I suggested rewriting this sentence.
Thanks for pointing out the potential confusion. In order to remove any ambiguity, we have changed line 64 to state
“…to improve symptoms, or at least slow down mental deterioration, ...”
Table 2 and Table 3
It is not common to see test performance better than validation performance. Do the authors have an opinion on why this happened? It might be interesting to include some discussion about this.
Thanks for pointing out this observation. We have seen a variety of results in the literature in this regard, including testing accuracies that are higher than, lower than, and similar to the validation accuracies. For example, Wen et al. 2020 performed a number of different machine learning approaches to AD/MCI/normal cognition classification, and several of them showed a higher accuracy in the ADNI test group as compared to the validation group (e.g., see Table 6, last line, SVM sMCI vs pMCI, trained on AD vs CN – validation accuracy of 0.70 and test accuracy of 0.76). In order to address this observation and a possible causal factor we added the following to the discussion section:
“In several cases for both classification and prediction we observed that the BA for the testing dataset had a slightly higher mean and lower standard deviation than the corresponding results for validation. This higher variation in the validation experiments could potentially be explained by the fact that each cross-validation fold has a different validation set of images while all the testing results are obtained from the same single test set applying the different trained models. This higher variation also means that a single lower BA result in one of the validation folds could pull down the mean validation BA.”
'The neural network models [...]' (lines 189-)
I appreciate the authors adding a discussion about siamese networks and merging options.
I feel that the same discussion could be included regarding the types of convolutional blocks ('sequential, residual with bottleneck, and wide residual').
The discussion of the rationale and differences between the three specific types of convolutional blocks is already included in the discussion subsection “other technical considerations”.
'After initial analysis of networks with varying heads, global maximum pooling followed directly by a single final dense prediction layer was selected as the optimal fully connected layer architecture.' (lines 224-226)
I apologize for not being clear.
My understanding from this sentence is that, after the conv or merge blocks, there is a global maximum pooling layer, and a single final dense prediction layer (with 1 hidden unit, aka, neuron). There is a mismatch between this understanding and what is represented by 'head' in Figures 2 and 4.
Figure 2 shows three FC layers. Figure 4A shows two FC layers. I believe that they should match.
Thanks for the opportunity to further clarify. Figure 4A shows the head that we ended up using and that is meant to be described in the quoted section above. The global maximum pooling function essentially does a pooling and a flattening all at once, and the resulting layer is a fully connected layer flattened out to N nodes, where N is the number of activation maps in the final convolutional layer. That is then followed by a single node layer. To minimize the confusion, we adjusted the quoted text to say:
'After initial analysis of networks with varying heads, a global maximum pooling operation resulting in a fully connected layer with a number of nodes equal to the number of activation maps (width) of the last convolutional step, followed directly by a single final dense prediction layer, was selected as the optimal fully connected layer architecture (figure 4A).'
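A minimal sketch of such a head, assuming the tensorflow.keras API and a stand-in final feature map shape, may make the wording concrete:

```python
# Global max pooling flattens the final convolutional output into a fully
# connected layer of N nodes (N = number of activation maps), followed by
# a single dense binary output. The input shape is an illustrative stand-in.
from tensorflow.keras import layers, models

feature_map = layers.Input(shape=(6, 6, 6, 128))    # width N = 128
pooled = layers.GlobalMaxPooling3D()(feature_map)   # -> (None, 128)
decision = layers.Dense(1, activation="sigmoid")(pooled)
head = models.Model(feature_map, decision)
head.summary()
```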
Figure 2 is meant to be a more conceptual high-level architecture view, showing how the different portions of the model come together. The detailed number of layers within the convolutional and fully connected sections vary and are specified explicitly in the text and the figures 3 and 4, and as such figure 2 does not attempt to illustrate the actual specific layer count. The caption in figure 2 states that we use “one or more fully connected layers.”
'When a convolutional layer processes an input whose size is odd-numbered in any of its dimensions (2n-1 for any integer n) the resulting output of a stride 2 convolution will be of size n for the corresponding dimension.' (lines 214-216)
I appreciate the clarification. I believe that this behavior really depends on the implementation. So I understand that the implementation used here adds a zero-padding before the convolution, so no information is lost.
Yes, we are using zero-padding. We added that to the explanation so now it states:
'When a convolutional layer processes an input whose size is odd-numbered in any of its dimensions (2n-1 for any integer n) the resulting output of a stride 2 convolution with zero-padding will be of size n for the corresponding dimension.'
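The arithmetic in the revised sentence can be checked directly; the sketch below assumes TensorFlow-style 'same' zero padding, where the output size is the ceiling of the input size over the stride:

```python
import math

def same_pad_out(size: int, stride: int = 2) -> int:
    # 'same' (zero) padding: output = ceil(input / stride)
    return math.ceil(size / stride)

# For an odd input of size 2n - 1, a stride-2 'same' convolution yields n.
for n in (3, 10, 48):
    assert same_pad_out(2 * n - 1) == n
```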
Experimental design
'Using either the single timepoint with the 1-year image or both timepoints together longitudinally represents, for patients who have had MCI for one year and not yet progressed to AD, a prediction of whether they will eventually convert to AD within two more years.' (lines 203-206)
Given this, maybe the authors should adjust the title (and respective text in the paper) to 2 years.
We agree with the concerns. Changing to 2 years wouldn't apply to the case of models with a single timepoint (which would be 3 years after diagnosis). We thus revise to “3 years after diagnosis” throughout, which we believe is the most accurate description in all cases.
Validity of the findings
Findings are sound and grounded.
" | Here is a paper. Please give your review comments after reading it. |
118 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>In this paper, a novel feature selection method called Robust Proportional Overlapping Score (RPOS) for microarray gene expression datasets is proposed, utilizing a robust measure of dispersion, i.e. the Median Absolute Deviation (MAD). This method robustly identifies the most discriminative genes by considering the overlapping scores of the gene expression values for binary class problems. Genes with a high degree of overlap between classes are discarded and the ones that discriminate between the classes are selected.</ns0:p><ns0:p>The results of the proposed method are compared with five state-of-the-art gene selection methods based on classification error, Brier score, and sensitivity, by considering eleven gene expression datasets. Classification of observations for different sets of selected genes by the proposed method is carried out by three different classifiers, i.e. random forest, k-nearest neighbors (k-NN), and support vector machine (SVM). Box-plots and stability scores of the results are also shown in this paper. The results reveal that in most of the cases the proposed method outperforms the other methods.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Feature or variable selection is the process of selecting a subset of features from a large feature space, especially in high dimensional datasets such as microarray gene expression data, for model construction. Selecting a subset of genes/features is a necessary task in classification and regression problems. In regression, feature or gene selection is carried out to better estimate the average value of the target or response variable, whereas in classification it is used to improve classification accuracy. The motivation behind feature selection is that there are redundant and/or irrelevant features that do not contribute to regulating the response variable and adversely affect the underlying algorithms. It is therefore necessary to select those features which are discriminative and can help simplify model construction. Moreover, a small number of features helps in reducing the training time, increasing the generalizability of the models by minimizing their variances, and reducing the curse of dimensionality in n &lt; p problems. Feature selection methods can be categorized into three categories, i.e. Wrapper, Embedded, and Filter. The details of these methods are given below.</ns0:p></ns0:div>
<ns0:div><ns0:head>Wrapper methods</ns0:head><ns0:p>In wrapper methods, all possible subsets of features in the training set are evaluated by using a predictive model. Each subset is assigned a score based on model accuracy on the hold-out (testing) set. These methods are computationally expensive, since a new predictive model has to be trained for each feature subset. An example of the wrapper approach can be found in <ns0:ref type='bibr' target='#b54'>(Saeys et al., 2007)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Embedded methods</ns0:head><ns0:p>These methods are somewhat similar to the wrapper procedures. Embedded feature selection methods differ from wrapper procedures in the sense that the former do not need to train a new model for each feature subset. In these procedures, gene/feature selection is considered a constituent of model construction. Some of the most common embedded methods include the decision tree algorithm and regression with LASSO and Ridge penalties. The last two methods shrink the coefficients of non-informative features to zero and almost zero, respectively. The classification tree based classifier <ns0:ref type='bibr' target='#b10'>(Breiman et al., 1984)</ns0:ref> is another example of this approach.</ns0:p></ns0:div>
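As a concrete illustration of the embedded approach, the following is a minimal sketch of L1 (LASSO-style) selection with scikit-learn; the synthetic n &lt; p data are an assumption made only to keep the example self-contained.

```python
# Embedded selection: the L1 penalty drives the coefficients of
# non-informative genes to exactly zero during model fitting.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 500))                 # n < p, as in microarray data
y = (X[:, 0] - X[:, 1] + rng.normal(size=60) > 0).astype(int)

l1_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
selector = SelectFromModel(l1_model).fit(X, y)
print("genes kept:", np.flatnonzero(selector.get_support()))
```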
<ns0:div><ns0:head>Filter methods</ns0:head><ns0:p>In filter methods, feature selection is carried out by applying a statistical measure, such as the mutual information criterion <ns0:ref type='bibr' target='#b29'>Guyon and Elisseeff (2003)</ns0:ref>, the pointwise mutual information criterion <ns0:ref type='bibr' target='#b65'>Yang and Pedersen (1997)</ns0:ref>, the Pearson product-moment correlation, or Relief-based algorithms <ns0:ref type='bibr' target='#b64'>Urbanowicz et al. (2018)</ns0:ref>, to each feature independently, or by finding the association of the feature with the target or response variable. Features are then ranked according to their relevance score, and those with the highest relevance scores are selected for model construction; a sketch of this ranking step is given after this section. Other examples of such methods can be seen in <ns0:ref type='bibr' target='#b24'>(Ghosh et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b22'>El-Hasnony et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b56'>Seo and Cho, 2020;</ns0:ref><ns0:ref type='bibr' target='#b0'>Algamal and Lee, 2019)</ns0:ref>.</ns0:p><ns0:p>The proposed method is based on a filtering approach, where the discriminative features or genes that affect the target variable are identified by using a robust measure of dispersion, i.e. the median absolute deviation (MAD), for binary class problems. Eleven benchmark gene expression datasets are used to assess the discriminative ability of the genes selected by the proposed method. The performance of the genes selected through the proposed method is evaluated by using different classifiers, i.e. Random Forest (RF) <ns0:ref type='bibr' target='#b8'>(Breiman, 2001)</ns0:ref>, k-Nearest Neighbors (k-NN) <ns0:ref type='bibr' target='#b14'>(Cover and Hart, 1967)</ns0:ref> and Support Vector Machine (SVM) <ns0:ref type='bibr' target='#b40'>(Liao et al., 2006)</ns0:ref>.</ns0:p></ns0:div>
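A filter-style ranking of the kind described above can be sketched in a few lines; the mutual-information criterion, the synthetic data and the top-20 cut-off below are illustrative assumptions.

```python
# Filter selection: score each gene independently against the class labels,
# then keep the highest-ranked genes.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 500))
y = (X[:, 3] + rng.normal(scale=0.5, size=60) > 0).astype(int)

scores = mutual_info_classif(X, y, random_state=0)
top_k = np.argsort(scores)[::-1][:20]          # 20 highest-relevance genes
print("top-ranked gene indices:", top_k)
```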
<ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>Feature selection methods and their utility in classification analyses can be found in several studies. <ns0:ref type='bibr' target='#b20'>Dramiński et al. (2008)</ns0:ref> introduced a method called 'relative importance'. In this method, the discriminative genes are identified by constructing a large number of decision trees, where the genes that contribute most to assigning the samples/observations to their true classes are selected. <ns0:ref type='bibr' target='#b63'>Ultsch et al. (2009)</ns0:ref> proposed a method called 'PUL', in which the informative genes are selected with the help of a measure (PUL-score) based on retrieval information. A method called minimal redundancy maximal relevance (mRMR) was introduced by <ns0:ref type='bibr' target='#b19'>Ding and Peng (2005)</ns0:ref>, in which genes having maximum relevance with the target class and minimum redundancy are selected. An ensemble version of <ns0:ref type='bibr' target='#b19'>Ding and Peng (2005)</ns0:ref>, named 'mRMRe', was introduced by <ns0:ref type='bibr' target='#b18'>De Jay et al. (2013)</ns0:ref>. The principal component analysis technique was used by <ns0:ref type='bibr' target='#b44'>Lu et al. (2011)</ns0:ref>, where those genes are considered informative that correspond to the component with less variation. A similar study can be found in <ns0:ref type='bibr' target='#b62'>Talloen et al. (2007)</ns0:ref>, where the factor analysis technique is used rather than principal component analysis. <ns0:ref type='bibr' target='#b63'>Ultsch et al. (2009)</ns0:ref> and <ns0:ref type='bibr' target='#b42'>Liu et al. (2013)</ns0:ref> compared different feature selection methods in their studies. Identification of informative genes by calculating the p-value of statistical tests such as the Wilcoxon rank-sum test and the t-test can be found in <ns0:ref type='bibr' target='#b38'>Lausen et al. (2004)</ns0:ref>. Selection of discriminative genes by exploiting impurity measures, i.e. the Gini index, max minority, and information gain, can be found in <ns0:ref type='bibr' target='#b61'>Su et al. (2003)</ns0:ref>. Features or genes can also be selected by analyzing the overlapping degree between the different classes for each gene. A large overlapping degree between the different classes for a particular gene indicates that the gene is non-informative in classifying observations to their correct class. A study based on the overlapping score of the genes for a binary class problem can be found in <ns0:ref type='bibr' target='#b3'>Apiletti et al. (2007)</ns0:ref>. This method, named the 'painter's feature selection method', calculates the overlapping degree between the two classes for each gene by considering a single factor, i.e. the size of the overlapping area. Genes that have maximum overlapped regions are assigned higher scores, and genes are then sorted in increasing order based on their scores. This idea was further extended by <ns0:ref type='bibr' target='#b4'>Apiletti et al. (2012)</ns0:ref>. In that method, a minimum subset of genes that unambiguously assigns the maximum number of training samples to their correct classes is identified by considering the gene masks and overlapping scores through a set covering approach. The final subset of discriminative genes is obtained by considering all the genes in the minimum subset together with the genes having the smallest overlapping scores. A robust version of <ns0:ref type='bibr' target='#b4'>Apiletti et al. (2012)</ns0:ref> can be found in <ns0:ref type='bibr' target='#b46'>Mahmoud et al. (2014)</ns0:ref>, where the expression interval for each gene is calculated by using the interquartile range. <ns0:ref type='bibr' target='#b46'>Mahmoud et al. (2014)</ns0:ref> also considered the proportion of overlapping samples (POS) in each class for each gene; genes with lower proportional overlapping scores were considered informative. After obtaining the POS, the relative dominant class (RDC) for each gene was also calculated, which associates each gene with the class for which it has a stronger distinguishing capability. The final set of genes/features is obtained by combining the minimum gene set, obtained via gene masks, with the top ranked genes based on proportional overlapping scores (POS). <ns0:ref type='bibr' target='#b39'>Li and Gu (2015)</ns0:ref> proposed a method called the more relevance less redundancy algorithm. Another study by <ns0:ref type='bibr' target='#b50'>Nardone et al. (2019)</ns0:ref> introduced a two-step procedure for feature selection, where extensive experiments were performed to evaluate the performance of their proposed method on publicly available datasets related to the computational biology field. A novel supervised learning technique, designed particularly for multi-class problems, is introduced in <ns0:ref type='bibr' target='#b7'>Bidgoli et al. (2020)</ns0:ref>; this method is an extended version of a decomposition-based multi-objective optimization approach. A feature selection method for binary classification problems was introduced by <ns0:ref type='bibr' target='#b17'>Dashtban et al. (2018)</ns0:ref>, in which the traditional bat algorithm is extended with more refined formulations, improved and multi-objective operators, and a novel local search strategy. Other examples of feature selection methods can be found in <ns0:ref type='bibr' target='#b49'>(MotieGhader et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b16'>Dashtban and Balafar, 2017;</ns0:ref><ns0:ref type='bibr' target='#b51'>Nematzadeh et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b45'>Maghsoudloo et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b53'>Rostami et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b57'>Shamsara and Shamsara, 2020;</ns0:ref><ns0:ref type='bibr' target='#b2'>Ao et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b60'>Statnikov et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b52'>Rana et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b12'>Chamikara et al., 2016)</ns0:ref> and the references cited therein.</ns0:p></ns0:div>
<ns0:div><ns0:head>METHOD</ns0:head><ns0:p>Microarray gene expression data are usually in the form of a matrix $Z = [z_{ji}]$, where $Z \in \Re^{p \times n}$ and $z_{ji}$ is the observed expression value of the $j$th gene for the $i$th tissue sample, for $j = 1, 2, 3, \ldots, p$ and $i = 1, 2, 3, \ldots, n$. Each tissue sample is categorized into one of two classes, i.e. 0 or 1. Let $W \in \Re^{n}$ be the class labels vector such that its $i$th component $w_i$ takes a unique value $c$, which is either 0 or 1.</ns0:p><ns0:p>The number of samples/observations in microarray gene expression datasets is usually smaller than the number of features, which is also called the $n &lt; p$ problem (see Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Core expression intervals:</ns0:head><ns0:p>For each class $c$ and gene $j$, two expression intervals are defined as
$$R_{j,c} = [d_{j,c},\, e_{j,c}], \quad j = 1, 2, 3, \ldots, p, \quad c = 0, 1, \qquad (1)$$
such that $d_{j,c} = Q_{1(j,c)} - 0.9\,\mathrm{MAD}_{(j,c)}$ and $e_{j,c} = Q_{3(j,c)} + 0.9\,\mathrm{MAD}_{(j,c)}$, where $Q_{1(j,c)}$, $Q_{3(j,c)}$ and $\mathrm{MAD}_{(j,c)}$ are the first (lower) quartile, third (upper) quartile and median absolute deviation (MAD) of gene $j$ for class $c$, respectively.</ns0:p></ns0:div>
<ns0:div><ns0:head>Overlapped region:</ns0:head><ns0:p>The overlapping region between the two classes is represented by $R^v_j$, the intersection of the expression intervals of the target classes for gene $j$:
$$R^v_j = R_{j,0} \cap R_{j,1}. \qquad (2)$$</ns0:p><ns0:p>Non-outlier sample set: The non-outlier sample set, denoted by $N_j$, is the set of observations whose expression values lie within the core interval of their own response class:
$$N_j = \{\, i : z_{ji} \in R_{j,c_i} \,\}, \quad i = 1, 2, 3, \ldots, n. \qquad (3)$$</ns0:p><ns0:p>Total core interval: The total core interval for gene $j$ is denoted by $R_j$; it is the interval between the global minimum and global maximum boundaries of both classes' core intervals:
$$R_j = [d_j,\, e_j], \qquad (4)$$
such that $d_j = \min(d_{j,0}, d_{j,1})$ and $e_j = \max(e_{j,0}, e_{j,1})$ represent the lowest and highest boundaries of the core intervals $R_{j,c}$ of the gene/feature with response $c = 0, 1$, respectively.</ns0:p><ns0:p>Non-overlapped sample set: For gene $j$, the non-overlapping set is represented by $O'_j$; it contains the non-outlier samples in $N_j$ whose expression values do not fall inside the overlap interval:
$$O'_j = \{\, i : i \in N_j \wedge z_{ji} \in R_{j,0} \ominus R_{j,1} \,\}. \qquad (5)$$</ns0:p><ns0:p>Overlapped sample set: The overlapping sample set for gene $j$ is denoted by $O_j$; it consists of the observations whose expression values fall inside the overlap interval $R^v_j$:
$$O_j = N_j - O'_j, \qquad (6)$$
where $O'_j$ contains all the non-overlapping samples.</ns0:p></ns0:div>
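The quantities in Equations (1)-(6) are straightforward to compute; the following is a minimal numpy sketch for a single gene, with toy two-class data as an illustrative assumption.

```python
# Equations (1)-(6) for one gene j: robust core intervals per class,
# overlap interval, non-outlier set and overlapped sample set.
import numpy as np

def core_interval(values):
    q1, q3 = np.percentile(values, [25, 75])
    mad = np.median(np.abs(values - np.median(values)))
    return q1 - 0.9 * mad, q3 + 0.9 * mad           # [d_jc, e_jc], Eq (1)

rng = np.random.default_rng(3)
z_j = np.concatenate([rng.normal(0, 1, 40), rng.normal(2, 1, 40)])
w = np.array([0] * 40 + [1] * 40)                   # class labels

(d0, e0), (d1, e1) = core_interval(z_j[w == 0]), core_interval(z_j[w == 1])
overlap = (max(d0, d1), min(e0, e1))                # R^v_j, Eq (2)
total = (min(d0, d1), max(e0, e1))                  # R_j, Eq (4)

own_core = np.where(w == 0, (z_j >= d0) & (z_j <= e0),
                            (z_j >= d1) & (z_j <= e1))      # N_j, Eq (3)
O_j = own_core & (z_j >= overlap[0]) & (z_j <= overlap[1])  # O_j, Eq (6)
print("overlapped / non-outlier samples:", O_j.sum(), "/", own_core.sum())
```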
<ns0:div><ns0:head>Gene masks matrix:</ns0:head><ns0:p>The matrix of gene masks $M = [m_{ji}]_{p \times n}$ is constructed as follows:
$$m_{ji} = \begin{cases} 1, &amp; \text{if } z_{ji} \in R_{j,0} \ominus R_{j,1}, \\ 0, &amp; \text{otherwise,} \end{cases} \quad j = 1, 2, 3, \ldots, p, \qquad (7)$$
such that $R_{j,0} = [d_{j,0}, e_{j,0}]$ and $R_{j,1} = [d_{j,1}, e_{j,1}]$, with $d_{j,c} = Q_{1(j,c)} - 0.9\,\mathrm{MAD}_{(j,c)}$ and $e_{j,c} = Q_{3(j,c)} + 0.9\,\mathrm{MAD}_{(j,c)}$ for each class $c$, where $c$ is either 0 or 1. In these expressions $Q_{1(j,c)}$, $Q_{3(j,c)}$ and $\mathrm{MAD}_{(j,c)}$ represent the lower (first) quartile, upper (third) quartile and median absolute deviation, respectively. A mask bit of 1 thus marks a sample that gene $j$ classifies unambiguously, i.e. one lying outside the overlap region.</ns0:p></ns0:div>
<ns0:div><ns0:head>Relative dominant class (RDC):</ns0:head><ns0:p>For each gene, the relative dominant class (RDC) is calculated, which associates each feature/gene with the class it is more capable of differentiating. It is defined as
$$\mathrm{RDC}_j = \arg\max_{c} \frac{\sum_{i \in U_c} I(m_{ji} = 1)}{|U_c|}, \qquad (8)$$
where $U_c$ represents the set of class-$c$ samples, i.e. $U_c = \{\, i : c_i = c \,\}$.</ns0:p><ns0:p>Proposed (RPOS) score: The proposed RPOS score is defined as
$$\mathrm{RPOS}_j = 4\, \frac{|R^v_j|}{|R_j|} \cdot \frac{|O_j|}{|N_j|} \prod_{c=0}^{1} \phi_c, \qquad (9)$$
where $|R^v_j|$ is the length of the overlap interval, $|R_j|$ is the length of the total core interval, $|O_j|$ is the total number of overlapped samples and $|N_j|$ is the total number of non-outlier samples for gene $j$, and $\phi_c = |O_{j,c}| / |O_j|$, with $|O_{j,c}|$ the number of overlapped samples lying in class $c$. The factor 4 keeps the RPOS scores between 0 and 1. A smaller value of RPOS indicates that a particular gene is more informative in classifying a tissue sample to its correct class.</ns0:p><ns0:p>The proposed method thus takes the following steps in selecting the most discriminative genes (a code sketch of the scoring is given after this section).</ns0:p><ns0:p>1. The proposed method initially identifies the minimum subset of genes via the greedy approach given in <ns0:ref type='bibr' target='#b4'>Apiletti et al. (2012)</ns0:ref>. The greedy approach utilizes the gene mask matrix given in Equation (7) and the RPOS scores in Equation (9) to form this subset. The gene that has the highest number of mask bits equal to 1 is included in the subset. If more than one gene has the same number of 1 bits, the one with the smaller RPOS is selected. Using the AND operator, the gene masks of the remaining genes are updated for the selection of the second gene, and so on. This process is repeated until the desired number of genes is selected or the genes have no 1's left in their gene masks. For further details on greedy gene selection, see <ns0:ref type='bibr' target='#b4'>Apiletti et al. (2012)</ns0:ref>.</ns0:p><ns0:p>2. The genes that are not selected in the minimum subset are arranged in ascending order according to their RPOS scores and relative dominant class (RDC), in a round-robin fashion. A smaller score represents a higher discriminative ability of the gene/feature.</ns0:p><ns0:p>3. After arranging the genes in step (2), the required top-ranked genes are selected.</ns0:p><ns0:p>4. The final set of genes for model construction is obtained by combining the genes in steps (1) and (3).</ns0:p><ns0:p>The general workflow of the proposed (RPOS) method, along with its pseudo-code, is given in Figure <ns0:ref type='figure' target='#fig_4'>2</ns0:ref> and Algorithm 1, respectively.</ns0:p></ns0:div>
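Building on the interval sketch above, the RPOS score of Equation (9) and the RDC of Equation (8) for one gene can be computed as follows; the toy data are again an illustrative assumption, not one of the benchmark datasets.

```python
import numpy as np

def gene_rpos(z_j, w):
    def core(v):                                   # Eq (1), as above
        q1, q3 = np.percentile(v, [25, 75])
        mad = np.median(np.abs(v - np.median(v)))
        return q1 - 0.9 * mad, q3 + 0.9 * mad
    (d0, e0), (d1, e1) = core(z_j[w == 0]), core(z_j[w == 1])
    lo, hi = max(d0, d1), min(e0, e1)              # overlap bounds
    own = np.where(w == 0, (z_j >= d0) & (z_j <= e0),
                           (z_j >= d1) & (z_j <= e1))      # N_j
    ov = own & (z_j >= lo) & (z_j <= hi)                   # O_j
    mask = own & ~ov                               # gene mask, Eq (7)
    ov_len = max(0.0, hi - lo)                     # |R^v_j|
    tot_len = max(e0, e1) - min(d0, d1)            # |R_j|
    if ov.sum() == 0:
        rpos = 0.0                                 # no overlap: ideal gene
    else:
        phi0 = ov[w == 0].sum() / ov.sum()         # phi_c terms, Eq (9)
        rpos = 4 * (ov_len / tot_len) * (ov.sum() / own.sum()) \
                 * phi0 * (1 - phi0)
    rdc = int(mask[w == 1].mean() > mask[w == 0].mean())   # Eq (8)
    return rpos, rdc, mask

rng = np.random.default_rng(4)
z_j = np.concatenate([rng.normal(0, 1, 40), rng.normal(2, 1, 40)])
w = np.array([0] * 40 + [1] * 40)
rpos, rdc, mask = gene_rpos(z_j, w)
print(f"RPOS = {rpos:.3f}, RDC = {rdc}, unambiguous samples = {mask.sum()}")
```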
<ns0:div><ns0:head>Algorithm 1: RPOS method for gene selection</ns0:head><ns0:p>
1: Inputs: X, Y and the number of genes (r) to be selected.
2: Output: Sequence of selected genes T.
3: for all j ∈ H do
4:   for c = 0, 1 do
5:     Compute the core interval R_(j,c) for each gene, as defined in Equation (1).
6:   end for
7:   for i = 1 to N do
8:     Compute the gene mask m_ji for each gene, as defined in Equation (7).
9:     Compute the RPOS_j score for each gene, as defined in Equation (9).
10:    Assign RDC_j to each gene, as defined in Equation (8).
11:  end for
12:  Let M ∈ ℜ^(P×N) be the gene mask matrix M = [m_ji], where its ith value for the jth gene is either 0 or 1.
13:  Compute the total (aggregate) mask of the genes and denote it by M..(H).
14:  Use the greedy search approach to select the minimum subset of genes from M, M..(H) and RPOS_j, and denote it by H*.
15:  Perform H = H − H*; this excludes the genes selected in the minimum subset from the whole set of genes.
16:  Arrange the genes in each RDC_j class in increasing order of RPOS_j.
17: end for
18: Obtain the final list of ranked genes:
19: T ← H*
20: if |T| &lt; r then
21:   while |T| &lt; r do
22:     Increase T by one gene in a round-robin fashion.
23:   end while
24: end if
25: return T</ns0:p><ns0:p>The proposed (RPOS) method is novel in the sense that it utilizes the median absolute deviation (MAD) for the construction of the core expression intervals of the genes' expression values. A drawback of POS in <ns0:ref type='bibr' target='#b46'>Mahmoud et al. (2014)</ns0:ref> is that the gene masks are calculated on the basis of expression intervals obtained using the interquartile range approach. The construction of the gene masks can therefore be affected by outliers because of the smaller breakdown point, i.e. 25%, of the interquartile range. The breakdown point of the MAD is 50%, which is less vulnerable to outliers, thereby reducing the effect of outliers while constructing the gene masks.</ns0:p></ns0:div>
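The greedy minimum-subset step of Algorithm 1 can be sketched compactly; the function below assumes a boolean gene-mask matrix and RPOS scores as produced per gene above, and is an illustration rather than the authors' implementation.

```python
# Greedy set covering over gene masks: repeatedly pick the gene covering
# the most still-uncovered samples, breaking ties by smaller RPOS score.
import numpy as np

def greedy_min_subset(masks, rpos, max_genes=None):
    remaining = masks.copy()                   # (p, n) boolean matrix
    selected = []
    while remaining.any() and (max_genes is None or len(selected) < max_genes):
        coverage = remaining.sum(axis=1)
        best = np.lexsort((rpos, -coverage))[0]    # max coverage, min RPOS
        if coverage[best] == 0:
            break
        selected.append(int(best))
        remaining &= ~remaining[best]          # AND out samples now covered
    return selected

masks = np.array([[1, 1, 0, 0, 1],
                  [0, 1, 1, 0, 0],
                  [0, 0, 0, 1, 0]], dtype=bool)
rpos = np.array([0.2, 0.1, 0.3])
print(greedy_min_subset(masks, rpos))          # -> [0, 1, 2]
```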
<ns0:div><ns0:head>EXPERIMENTS AND RESULTS</ns0:head><ns0:p>This section provides a detailed description of the experiments executed for assessing the proposed method against the other methods on benchmark gene expression datasets. A common practice for investigating the efficacy of gene selection methods is to check the discriminative ability of the selected genes by using different classifiers. This is usually done by recording the classification accuracy of the classifiers applied on datasets with the selected genes only, while discarding the rest of the genes. <ns0:ref type='bibr' target='#b25'>Golub et al. (1999)</ns0:ref> used different feature/gene selection techniques given in <ns0:ref type='bibr' target='#b60'>Statnikov et al. (2005)</ns0:ref>, and it has been observed that gene selection methods have a significant effect on a classifier's accuracy. This approach has been widely used in several other studies <ns0:ref type='bibr' target='#b4'>(Apiletti et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b46'>Mahmoud et al., 2014)</ns0:ref>. Before listing the results of the analyses done in this paper following the above-mentioned approach, a brief description of the datasets is given below.</ns0:p></ns0:div>
<ns0:div><ns0:head>Microarray gene expression datasets</ns0:head><ns0:p>In this research work, a total of 10 microarray gene expression datasets are taken as standard benchmark binary classification problems. These datasets are taken from various open sources and have varying numbers of genes and observations. A brief description of the benchmark datasets used in the current paper is given in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>, which provides the number of samples, the number of genes and the class-wise distribution of the samples. The gene selection methods considered in this paper for comparison are the Wilcoxon Rank Sum Test <ns0:ref type='bibr' target='#b40'>(Liao et al., 2006)</ns0:ref>, the Proportional Overlapping Score (POS) based method <ns0:ref type='bibr' target='#b46'>(Mahmoud et al., 2014)</ns0:ref>, Gene Selection by Clustering (GClust) <ns0:ref type='bibr' target='#b33'>(Khan et al., 2019)</ns0:ref>, Maximum Relevance Minimum Redundancy (mRmR) <ns0:ref type='bibr' target='#b19'>(Ding and Peng, 2005)</ns0:ref> and Significant Features by SVM and t-test (sigF) <ns0:ref type='bibr' target='#b15'>(Das et al., 2020)</ns0:ref>. The performance of the selected genes is investigated via the average values of the performance metrics, i.e. classification error rate, Brier score and sensitivity, on the testing parts of each dataset.</ns0:p></ns0:div>
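The evaluation protocol above can be expressed in a few lines; the sketch below uses synthetic stand-in data for a set of already-selected genes, with default classifier settings as an assumption.

```python
# 70/30 split, three classifiers, and the three reported metrics:
# classification error, Brier score and sensitivity on the test part.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, brier_score_loss, recall_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 30))          # 30 genes already selected
y = (X[:, 0] + rng.normal(size=100) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

classifiers = {"RF": RandomForestClassifier(random_state=0),
               "k-NN": KNeighborsClassifier(),
               "SVM": SVC(probability=True, random_state=0)}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    prob = clf.predict_proba(X_te)[:, 1]
    pred = clf.predict(X_te)
    print(name,
          "error =", round(1 - accuracy_score(y_te, pred), 3),
          "Brier =", round(brier_score_loss(y_te, prob), 3),
          "sensitivity =", round(recall_score(y_te, pred), 3))
```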
<ns0:div><ns0:head>Results and discussion</ns0:head><ns0:p>The results of the proposed method and other methods included in this study are obtained for all the datasets. The results of three datasets i.e. 'TumorC', 'Breast' and 'Srbct' are given in Tables 2, 3 and 4.</ns0:p><ns0:p>These results are based on 70% training and 30% testing parts portioning of the datasets. The results of the remaining eight datasets are given in Supplemental File (Tables <ns0:ref type='table' target='#tab_0'>1-15</ns0:ref>). From Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref> given below, it is clear that for 'TumorC' dataset the proposed method (RPOS) performed better than all the other methods in terms of all the performance metrics considered, except the Wilcoxon rank-sum test, which performed better for the number of genes, i.e. 5, 10, 15 and 20 on Support vector machine classifier in terms of classification error rate. Similarly, from Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>, it is evident that for 'Breast' dataset the proposed method (RPOS) outperformed the other methods on all the classifiers. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>performs better in terms of the classification error rate. In contrast, in terms of Brier score and sensitivity, the POS method performs better than all the other methods. For a set of 15 discriminative genes, POS outperforms all the other methods on Random forest classifier. For the rest of the gene numbers, i.e. 20, 25 and 30, the proposed method outperforms all the other methods on Random forest classifier. In the case of k-Nearest neighbours classifier, the proposed method RPOS gives similar results in terms of sensitivity for the number of genes 5 and 10. Similarly, for the number of genes 15, the results of the proposed method RPOS and POS are the same. For a set of 20 discriminative genes, the performance of the proposed method RPOS and POS equally performs in terms of classification error rate, Brier score and sensitivity. The proposed method RPOS outperforms all the other methods for the set of genes, i.e. 25 and 30. On support vector machine (SVM) classifier, the proposed method RPOS outperforms all the other methods except for the set of genes 25 and 30 where the method POS performs better than all the other methods in terms of sensitivity and classification error rate.</ns0:p><ns0:p>Boxplots of the results of the proposed method and the other methods for twenty number of genes are also constructed given in Figure <ns0:ref type='figure'>3</ns0:ref>. From the boxplots in Figure <ns0:ref type='figure'>3</ns0:ref>, it is clear that the proposed method (RPOS) outperforms all the other methods except for the datasets 'Srbct' and 'Prostate' where the proposed method RPOS and the method POS almost provide similar results. In the case of the dataset 'GSE4045' the sigF method outperforms all the other methods. Similarly, in the case of dataset 'Colon', the performance of the proposed method RPOS and the method POS is similar, while the method sigF outperforms all the other methods. The proposed method RPOS outperforms the rest of the methods on the dataset 'Leukaemia'. Overall the proposed method RPOS outperforms all the other methods on 5 out of 10 datasets and provides similar results to that of the method POS on 3 datasets.</ns0:p><ns0:p>To further investigate the efficiency of the proposed method RPOS, and the other methods, plots of classification error rates, Brier Scores and sensitivity for a various number of genes are given in Figures 4, 5 and 6 respectively. From Figure <ns0:ref type='figure'>4</ns0:ref> it is clear that for the datasets 'Breast', 'DLBCL' and 'Lung', the classification error rate of the proposed method RPOS is less than all the other methods for various number genes. For 'TumorC' dataset the classification error of the method, i.e. Wilcoxon rank-sum test is less than all the other methods for the number of genes 5, 10, and 15 while it increases as the number of genes increases. For the remaining set of genes, the proposed method RPOS performs better than all the other methods. A similar pattern of classification error rates can be seen for the dataset 'Srbct'. In the case of 'Leukeamia' dataset, the performance of the proposed method RPOS and the method POS for the number of genes 10, 20 and 25 are almost similar. 
In contrast, for the remaining sets of genes, the proposed method RPOS performs better than the others.</ns0:p><ns0:p>To assess the performance of the proposed method RPOS and the remaining methods in terms of Brier score, the results are shown in the plots given in Figure <ns0:ref type='figure'>5</ns0:ref>, where it is clear that the proposed method RPOS outperforms all the other methods for varying numbers of genes. Figure <ns0:ref type='figure'>6</ns0:ref> shows plots of the sensitivity of the proposed method RPOS and the rest of the methods. It is evident from the figure that for the datasets 'TumorC', 'DLBCL' and 'Lung', the sensitivity of the proposed method RPOS is higher than that of the rest of the methods for varying numbers of genes. For the 'Breast' dataset, the sensitivity of the mRmR method is higher in almost all cases except for 20 genes, where the Wilcoxon method performs better than all the other methods. In the case of the 'Srbct' dataset, the POS and sigF methods give higher sensitivity than the other methods. The Wilcoxon rank-sum test outperforms the remaining methods in the case of the 'Leukaemia' dataset. Overall, the proposed method RPOS outperforms all the other methods on 4 out of 7 datasets in terms of classification error rate and provides comparable results on the remaining three datasets. In terms of Brier score, the proposed method RPOS outperforms all the other methods on all seven datasets considered. In terms of sensitivity, the proposed method RPOS outperforms the rest of the methods on 4 out of 7 datasets, while giving comparable results on the remaining three datasets.</ns0:p></ns0:div>
<ns0:div><ns0:p>The primary aim of this research article was to devise a gene selection method that improves the classification performance of machine learning algorithms on high-dimensional microarray gene expression datasets. We, however, provide the indices of the top 10 genes selected by our proposed method for two of the datasets, i.e. Leukaemia and Breast. This is done for readers who might want to further assess the biological significance of the genes selected by our proposed method. The indices of the genes selected for the Leukaemia dataset are (15, 29, 38, 48, 312, 338, 459, 573, 760, 4847), while those for the Breast dataset are (346, 1481, 1726, 1873, 2942, 3259, 3857, 4067, 4174, 4435). Based on the top 10 genes selected by RPOS, we achieved 95.4% classification accuracy via the SVM classifier for the Leukaemia dataset, and for the Breast dataset the accuracy achieved is 98.6%. Studies on the biological significance of the genes for the two datasets are given in <ns0:ref type='bibr' target='#b36'>(Kuang et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b13'>Chen and Lin, 2011;</ns0:ref><ns0:ref type='bibr' target='#b6'>Bhojwani et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b11'>Castillo et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b55'>Savitsky et al., 1995;</ns0:ref><ns0:ref type='bibr' target='#b5'>Beckman et al., 1999)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>This paper has presented an approach to gene selection for microarray datasets via proportional overlapping analysis, using a more robust measure of dispersion, the median absolute deviation (MAD).</ns0:p><ns0:p>The core intervals of the classes in binary-class problems are constructed in a robust manner so as to minimize the effect of outliers present in gene expression datasets, in conjunction with a minimal subset of genes selected via a greedy search approach. Genes having the smallest RPOS scores are considered the most discriminative, because they have no, or a minimal, overlapping region between the binary classes. The relative dominant class (RDC) for each gene is also calculated. Genes in the relative dominant class are arranged in increasing order of RPOS scores. This forms two mutually exclusive groups of genes based on RDC and RPOS scores. The genes are arranged according to RDC and RPOS scores in a round-robin fashion to develop a gene ranking list. These ranked genes do not contain the genes selected via the greedy search approach. The final set of genes is selected by combining</ns0:p></ns0:div>
<ns0:div><ns0:p>the genes chosen via the greedy search approach and the topmost-ranked genes in the gene ranking list.</ns0:p><ns0:p>The dimension of the datasets is then reduced by retaining only the selected genes and discarding the rest.</ns0:p><ns0:p>Three classification methods, random forest, support vector machine and k-nearest neighbours, have been used to assess the performance of the proposed method in comparison with other widely used gene selection methods.</ns0:p><ns0:p>The results indicate that the proposed method performs better in terms of almost all the performance metrics considered, i.e. classification error rate, Brier score and sensitivity. The efficiency of the proposed method is also supported by boxplots constructed for the error rate. Furthermore, the stability of the proposed method is assessed for varying numbers of genes. The results show that the proposed method is more stable across varying numbers of genes than the rest of the methods.</ns0:p><ns0:p>The reason the proposed method selects the most discriminative genes for binary classification is that the core intervals of the classes are constructed with a more robust measure of dispersion, the median absolute deviation (MAD), rather than the interquartile range (IQR) used in <ns0:ref type='bibr' target='#b46'>Mahmoud et al. (2014)</ns0:ref>. Moreover, the breakdown point of the MAD is 50% while that of the IQR is 25%, which makes the former less vulnerable to the outliers present in gene expression datasets.</ns0:p><ns0:p>For future work in the direction of the current study, one could use robust measures of dispersion such as the Qn and Sn statistics rather than the median absolute deviation (MAD). This study can be extended to multiclass problems as well. Moreover, one could use this technique in situations where the response variable is continuous.</ns0:p><ns0:p>Although this method is efficient and selects the most discriminative genes, there is still the possibility that two (or more) genes selected in the final set are similar. This could cause the problem of redundancy in the selected set. One possible way to eliminate this problem is to use the Least Absolute Shrinkage and Selection Operator (LASSO) method in conjunction with the proposed method. Another way to deal with this issue is to divide the entire set of features into a set of clusters and then apply the proposed method to each cluster <ns0:ref type='bibr' target='#b33'>(Khan et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b57'>Shamsara and Shamsara, 2020;</ns0:ref><ns0:ref type='bibr' target='#b59'>Sharbaf et al., 2016)</ns0:ref>. The final set of genes, in that case, will be the combination of genes selected from all the clusters. Extending the performance assessment of the selected genes to other recent classification methods <ns0:ref type='bibr' target='#b31'>(Khan et al., 2020a;</ns0:ref><ns0:ref type='bibr' target='#b27'>Gul et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b35'>Khanal et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b32'>Khan et al., 2020b)</ns0:ref> could further validate the proposed gene selection method.</ns0:p></ns0:div>
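To make the MAD-based construction described in the conclusion concrete, a minimal R sketch is given below. The function names (core_interval, rpos_score) and the exact form of the overlap score are illustrative assumptions, not the paper's implementation; the full RPOS procedure additionally involves the RDC computation, round-robin ranking and the greedy search described above.

```r
# Minimal sketch (assumptions labelled): robust core intervals built from the
# median and MAD, and a proportional-overlap score for one gene in a
# binary-class problem. The exact scoring rule of RPOS may differ.

core_interval <- function(x) {
  m <- median(x)
  s <- mad(x)  # base R MAD, scaled by the 1.4826 consistency constant
  c(lower = m - s, upper = m + s)
}

rpos_score <- function(x, y) {
  # x: expression values of one gene; y: binary class labels
  cls <- unique(y)
  i1 <- core_interval(x[y == cls[1]])
  i2 <- core_interval(x[y == cls[2]])
  overlap <- max(0, min(i1["upper"], i2["upper"]) - max(i1["lower"], i2["lower"]))
  total   <- max(i1["upper"], i2["upper"]) - min(i1["lower"], i2["lower"])
  unname(overlap / total)  # smaller score -> more discriminative gene
}

# Toy usage: an informative gene scores near 0, a noisy gene near 1
set.seed(1)
y <- rep(c("A", "B"), each = 30)
rpos_score(c(rnorm(30, 0), rnorm(30, 3)), y)
rpos_score(rnorm(60), y)
```

Because both the median and the MAD tolerate large fractions of outlying expression values, core intervals built this way shift far less under contamination than intervals built from the mean and standard deviation, which is the motivation stated above.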
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Gene expression data</ns0:figDesc><ns0:graphic coords='4,220.10,63.78,256.83,135.62' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>represents the common layout of a gene expression dataset. Observations/samples are listed in the rows, while the genes are given in the columns. The cell corresponding to each sample and gene holds that gene's expression value. Further definitions used in this paper are given below: Class interval:</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 2.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Workflow of RPOS</ns0:figDesc><ns0:graphic coords='7,141.73,63.78,380.24,358.48' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>[Fragment of the RPOS algorithm listing:]
19: if r ≤ |H*| then
20:     T includes the first r genes of H*.
21: while |T| < r do
22:     ...</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>are divided into two mutually exclusive parts in the following manner. In the first setting, seventy percent (70%) of the observations from each dataset, randomly selected without replacement, are used as the training part, while the remaining thirty percent (30%) of the observations are used as the testing part. In the second setting, thirty percent (30%) of the observations in each dataset, randomly selected without replacement, are used as the training part, while the remaining seventy percent (70%) of the observations are used as the testing part. A split-sample analysis of 500 runs is carried out for each combination of gene selection method and classifier, using the 70% training / 30% testing and 30% training / 70% testing partitions. The classifiers considered in this study are random forest (RF), support vector machine (SVM) and k-nearest neighbours (k-NN). For random forest, the R package randomForest (Liaw and Wiener, 2002) is used with default parameters ntree = 500, mtry = √p and nodesize = 1. For the support vector machine, the R package kernlab (Karatzoglou et al., 2004) is used with default parameters. Similarly, for the k-nearest neighbours classifier, the R package caret (from Jed Wing et al., 2019) is used with the default parameter value k = 5. Using the training parts of each dataset, sets of discriminative genes, i.e. 5, 10, 15, 20, 25 and 30, are selected by the different gene selection methods to train the classifiers; a sketch of one such run is given below. Gene selection methods</ns0:figDesc></ns0:figure>
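As an illustration of the split-sample protocol just described, the sketch below runs one 70% training / 30% testing split with the three classifiers and the stated default settings. X (an n × p matrix of expression values for the selected genes) and y (a factor of binary class labels) are placeholders, and the 500-run averaging is only indicated in the final comment.

```r
library(randomForest)  # Liaw and Wiener (2002)
library(kernlab)       # Karatzoglou et al. (2004)
library(caret)         # from Jed Wing et al. (2019)

one_run <- function(X, y) {
  idx   <- sample(nrow(X), size = round(0.7 * nrow(X)))  # 70% training part
  train <- data.frame(X[idx, , drop = FALSE],  class = y[idx])
  test  <- data.frame(X[-idx, , drop = FALSE], class = y[-idx])

  # Random forest with the stated defaults: ntree = 500, mtry = sqrt(p), nodesize = 1
  rf  <- randomForest(class ~ ., data = train, ntree = 500,
                      mtry = floor(sqrt(ncol(X))), nodesize = 1)
  # Support vector machine from kernlab with its default parameters
  svm <- ksvm(class ~ ., data = train)
  # k-nearest neighbours via caret's knn3 with the default k = 5
  knn <- knn3(class ~ ., data = train, k = 5)

  # Test-set error rate of each classifier for this split
  c(RF  = mean(predict(rf,  test) != test$class),
    SVM = mean(predict(svm, test) != test$class),
    kNN = mean(predict(knn, test, type = "class") != test$class))
}

# The paper repeats such runs 500 times and averages the error rates, e.g.:
# errs <- replicate(500, one_run(X, y)); rowMeans(errs)
```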
<ns0:figure xml:id='fig_8'><ns0:head>Figures 3-6.</ns0:head><ns0:label>3-6</ns0:label><ns0:figDesc>Figure 3. Boxplots of classification error rates for 20 genes for the datasets; (A): TumorC, (B): Breast, (C): Srbct, (D): DLBCL, (E): Prostate, (F): nki, (G): Lung, (H): GSE4045, (I): Colon, (J): Leukaemia.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Description of the datasets, showing the number of samples, the number of genes, the class-wise distribution of samples and the source of each dataset. The experimental setup for the analyses done in the paper is as follows. The datasets considered in this study</ns0:figDesc><ns0:table>
<ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Samples</ns0:cell><ns0:cell>Genes</ns0:cell><ns0:cell>Class-wise distribution</ns0:cell><ns0:cell>Source</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Leukaemia</ns0:cell><ns0:cell>68</ns0:cell><ns0:cell>7029</ns0:cell><ns0:cell>49/23</ns0:cell><ns0:cell>Alon et al. (1999)</ns0:cell></ns0:row>
<ns0:row><ns0:cell>nki</ns0:cell><ns0:cell>144</ns0:cell><ns0:cell>76</ns0:cell><ns0:cell>96/48</ns0:cell><ns0:cell>Karatzoglou et al. (2004)</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Colon</ns0:cell><ns0:cell>62</ns0:cell><ns0:cell>2000</ns0:cell><ns0:cell>40/22</ns0:cell><ns0:cell>Golub et al. (1999)</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Breast</ns0:cell><ns0:cell>78</ns0:cell><ns0:cell>4948</ns0:cell><ns0:cell>34/44</ns0:cell><ns0:cell>Michiels et al. (2005)</ns0:cell></ns0:row>
<ns0:row><ns0:cell>GSE4045</ns0:cell><ns0:cell>37</ns0:cell><ns0:cell>22215</ns0:cell><ns0:cell>29/8</ns0:cell><ns0:cell>Laiho et al. (2007)</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Prostate</ns0:cell><ns0:cell>412</ns0:cell><ns0:cell>10936</ns0:cell><ns0:cell>343/69</ns0:cell><ns0:cell>Statnikov et al. (2005)</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Srbct</ns0:cell><ns0:cell>54</ns0:cell><ns0:cell>2308</ns0:cell><ns0:cell>28/25</ns0:cell><ns0:cell>Statnikov et al. (2005)</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Lung</ns0:cell><ns0:cell>148</ns0:cell><ns0:cell>12600</ns0:cell><ns0:cell>134/14</ns0:cell><ns0:cell>Gordon et al. (2002)</ns0:cell></ns0:row>
<ns0:row><ns0:cell>DLBCL</ns0:cell><ns0:cell>76</ns0:cell><ns0:cell>7070</ns0:cell><ns0:cell>58/19</ns0:cell><ns0:cell>https://file.biolab.si/biolab/supp/bi-cancer/projections/info/DLBCL.html</ns0:cell></ns0:row>
<ns0:row><ns0:cell>TumorC</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>7129</ns0:cell><ns0:cell>39/21</ns0:cell><ns0:cell>https://www.openml.org</ns0:cell></ns0:row>
</ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc /><ns0:table /><ns0:note>Table 4 gives the results for the 'Srbct' dataset, where the proposed method (RPOS) shows better results than the other methods for five genes on the random forest (RF) classifier. For 10 genes, the Wilcoxon rank-sum test</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Classification error rate, sensitivity and Brier score produced by Random Forest, k-Nearest Neighbours and Support Vector Machine classifiers on the TumorC dataset, based on genes selected by the given methods.</ns0:figDesc><ns0:table>
<ns0:row><ns0:cell /><ns0:cell>RF</ns0:cell><ns0:cell>kNN</ns0:cell><ns0:cell>SVM</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Genes</ns0:cell><ns0:cell cols='3'>POS RPOS GClust sigF Wilc mRmR POS RPOS GClust sigF Wilc mRmR POS RPOS GClust sigF Wilc mRmR</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>Err 0.362 0.221 0.334 0.482 0.450 0.451 0.423 0.269 0.383 0.407 0.398 0.396 0.362 0.264 0.333 0.277 0.442 0.373</ns0:cell></ns0:row>
<ns0:row><ns0:cell>5</ns0:cell><ns0:cell cols='3'>BS 0.013 0.011 0.023 0.274 0.268 0.276 0.017 0.017 0.026 0.259 0.260 0.257 0.035 0.012 0.037 0.188 0.254 0.244</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>sen 0.311 0.643 0.363 0.217 0.236 0.261 0.344 0.630 0.454 0.346 0.348 0.389 0.557 0.700 0.579 0.773 0.070 0.189</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>Err 0.336 0.257 0.313 0.482 0.401 0.348 0.332 0.220 0.355 0.471 0.391 0.395 0.341 0.242 0.336 0.302 0.396 0.349</ns0:cell></ns0:row>
<ns0:row><ns0:cell>10</ns0:cell><ns0:cell cols='3'>BS 0.015 0.015 0.022 0.274 0.252 0.231 0.019 0.015 0.029 0.282 0.254 0.260 0.086 0.015 0.072 0.204 0.241 0.230</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>sen 0.358 0.569 0.375 0.217 0.278 0.408 0.427 0.722 0.505 0.332 0.365 0.387 0.532 0.699 0.589 0.790 0.150 0.268</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>Err 0.351 0.288 0.312 0.482 0.391 0.338 0.293 0.242 0.344 0.415 0.382 0.386 0.312 0.249 0.311 0.228 0.400 0.351</ns0:cell></ns0:row>
<ns0:row><ns0:cell>15</ns0:cell><ns0:cell cols='3'>BS 0.016 0.014 0.026 0.274 0.250 0.228 0.018 0.014 0.039 0.272 0.249 0.249 0.052 0.013 0.066 0.166 0.245 0.232</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>sen 0.286 0.523 0.399 0.217 0.177 0.439 0.462 0.652 0.511 0.246 0.363 0.398 0.472 0.714 0.588 0.824 0.095 0.279</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>Err 0.297 0.274 0.303 0.482 0.464 0.371 0.305 0.272 0.345 0.426 0.393 0.383 0.270 0.208 0.313 0.233 0.387 0.387</ns0:cell></ns0:row>
<ns0:row><ns0:cell>20</ns0:cell><ns0:cell cols='3'>BS 0.015 0.015 0.021 0.274 0.266 0.234 0.016 0.017 0.033 0.280 0.260 0.255 0.074 0.014 0.054 0.170 0.249 0.247</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>sen 0.440 0.553 0.404 0.217 0.091 0.312 0.478 0.620 0.555 0.245 0.347 0.365 0.561 0.721 0.666 0.754 0.033 0.115</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>Err 0.306 0.286 0.300 0.482 0.423 0.373 0.336 0.281 0.333 0.459 0.377 0.392 0.281 0.217 0.296 0.211 0.379 0.378</ns0:cell></ns0:row>
<ns0:row><ns0:cell>25</ns0:cell><ns0:cell cols='3'>BS 0.015 0.013 0.026 0.274 0.263 0.237 0.018 0.015 0.028 0.284 0.245 0.253 0.031 0.012 0.049 0.157 0.245 0.250</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>sen 0.399 0.497 0.411 0.217 0.120 0.257 0.364 0.623 0.539 0.262 0.331 0.348 0.518 0.716 0.678 0.821 0.044 0.062</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>Err 0.335 0.275 0.309 0.482 0.441 0.379 0.373 0.302 0.331 0.467 0.388 0.400 0.283 0.226 0.286 0.213 0.380 0.384</ns0:cell></ns0:row>
<ns0:row><ns0:cell>30</ns0:cell><ns0:cell cols='3'>BS 0.014 0.012 0.020 0.274 0.263 0.240 0.020 0.016 0.025 0.282 0.253 0.259 0.024 0.014 0.039 0.151 0.252 0.248</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>sen 0.317 0.505 0.423 0.217 0.064 0.304 0.304 0.573 0.560 0.174 0.331 0.382 0.480 0.665 0.661 0.854 0.019 0.066</ns0:cell></ns0:row>
</ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Classification error rate, sensitivity and Brier score produced by Random Forest, k-Nearest Neighbours and Support Vector Machine classifiers on the Breast dataset, based on genes selected by the given methods.</ns0:figDesc><ns0:table>
<ns0:row><ns0:cell /><ns0:cell>RF</ns0:cell><ns0:cell>kNN</ns0:cell><ns0:cell>SVM</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Genes</ns0:cell><ns0:cell cols='3'>POS RPOS GClust sigF Wilc mRmR POS RPOS GClust sigF Wilc mRmR POS RPOS GClust sigF Wilc mRmR</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>Err 0.296 0.239 0.261 0.490 0.390 0.455 0.314 0.206 0.313 0.448 0.405 0.402 0.310 0.260 0.512 0.522 0.384 0.367</ns0:cell></ns0:row>
<ns0:row><ns0:cell>5</ns0:cell><ns0:cell cols='3'>BS 0.013 0.010 0.165 0.287 0.254 0.277 0.014 0.011 0.290 0.275 0.261 0.254 0.021 0.011 0.262 0.251 0.262 0.244</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>sen 0.784 0.837 0.810 0.621 0.722 0.661 0.706 0.862 0.798 0.703 0.761 0.760 0.704 0.776 0.558 0.506 0.714 0.791</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>Err 0.308 0.224 0.261 0.514 0.360 0.462 0.276 0.240 0.297 0.501 0.390 0.396 0.272 0.214 0.522 0.484 0.351 0.456</ns0:cell></ns0:row>
<ns0:row><ns0:cell>10</ns0:cell><ns0:cell cols='3'>BS 0.013 0.011 0.168 0.278 0.225 0.266 0.013 0.013 0.202 0.304 0.251 0.254 0.022 0.013 0.261 0.260 0.237 0.260</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>sen 0.757 0.858 0.818 0.613 0.709 0.654 0.786 0.842 0.819 0.677 0.754 0.764 0.750 0.796 0.575 0.412 0.704 0.743</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>Err 0.323 0.179 0.202 0.519 0.337 0.414 0.297 0.204 0.241 0.514 0.391 0.401 0.262 0.215 0.515 0.462 0.350 0.427</ns0:cell></ns0:row>
<ns0:row><ns0:cell>15</ns0:cell><ns0:cell cols='3'>BS 0.013 0.009 0.145 0.275 0.222 0.246 0.012 0.008 0.182 0.324 0.255 0.256 0.046 0.010 0.262 0.260 0.235 0.255</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>sen 0.709 0.864 0.848 0.643 0.767 0.719 0.781 0.835 0.810 0.685 0.768 0.763 0.741 0.781 0.564 0.354 0.742 0.798</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>Err 0.290 0.195 0.199 0.481 0.377 0.468 0.279 0.207 0.257 0.473 0.408 0.395 0.225 0.215 0.542 0.409 0.386 0.474</ns0:cell></ns0:row>
<ns0:row><ns0:cell>20</ns0:cell><ns0:cell cols='3'>BS 0.014 0.011 0.155 0.265 0.234 0.258 0.014 0.010 0.188 0.284 0.260 0.256 0.041 0.013 0.259 0.254 0.251 0.265</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>sen 0.767 0.853 0.851 0.694 0.717 0.686 0.794 0.815 0.840 0.734 0.745 0.763 0.793 0.798 0.526 0.390 0.673 0.781</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>Err 0.300 0.186 0.223 0.495 0.366 0.473 0.256 0.186 0.271 0.462 0.404 0.393 0.250 0.223 0.523 0.406 0.377 0.427</ns0:cell></ns0:row>
<ns0:row><ns0:cell>25</ns0:cell><ns0:cell cols='3'>BS 0.012 0.010 0.156 0.270 0.229 0.265 0.012 0.010 0.178 0.279 0.260 0.251 0.033 0.010 0.264 0.254 0.246 0.259</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>sen 0.777 0.883 0.832 0.694 0.726 0.659 0.801 0.829 0.838 0.693 0.753 0.759 0.790 0.798 0.567 0.397 0.691 0.790</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>Err 0.268 0.197 0.242 0.411 0.350 0.454 0.261 0.192 0.258 0.436 0.387 0.394 0.249 0.198 0.455 0.418 0.351 0.457</ns0:cell></ns0:row>
<ns0:row><ns0:cell>30</ns0:cell><ns0:cell cols='3'>BS 0.009 0.008 0.158 0.248 0.222 0.261 0.011 0.009 0.182 0.281 0.250 0.253 0.031 0.009 0.260 0.253 0.236 0.262</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>sen 0.823 0.870 0.836 0.733 0.736 0.694 0.813 0.834 0.818 0.708 0.776 0.767 0.787 0.794 0.661 0.365 0.698 0.767</ns0:cell></ns0:row>
</ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Classification error rate, sensitivity and Brier score produced by Random Forest, k-Nearest Neighbours and Support Vector Machine classifiers on the Srbct dataset, based on genes selected by the given methods.</ns0:figDesc><ns0:table>
<ns0:row><ns0:cell /><ns0:cell>RF</ns0:cell><ns0:cell>kNN</ns0:cell><ns0:cell>SVM</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Genes</ns0:cell><ns0:cell cols='3'>POS RPOS GClust sigF Wilc mRmR POS RPOS GClust sigF Wilc mRmR POS RPOS GClust sigF Wilc mRmR</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>Err 0.048 0.019 0.096 0.040 0.021 0.390 0.078 0.034 0.100 0.000 0.074 0.078 0.086 0.021 0.035 0.007 0.328 0.412</ns0:cell></ns0:row>
<ns0:row><ns0:cell>5</ns0:cell><ns0:cell cols='3'>BS 0.005 0.002 0.096 0.029 0.023 0.236 0.007 0.001 0.057 0.002 0.071 0.074 0.037 0.003 0.028 0.011 0.217 0.255</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>sen 0.919 0.988 0.961 0.980 0.978 0.549 1.000 1.000 0.718 1.000 0.915 0.914 0.878 0.998 0.942 0.984 0.608 0.574</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>Err 0.018 0.021 0.027 0.035 0.013 0.086 0.039 0.038 0.055 0.000 0.071 0.069 0.016 0.011 0.029 0.006 0.204 0.143</ns0:cell></ns0:row>
<ns0:row><ns0:cell>10</ns0:cell><ns0:cell cols='3'>BS 0.002 0.003 0.029 0.027 0.022 0.089 0.004 0.002 0.041 0.000 0.076 0.071 0.016 0.002 0.031 0.013 0.138 0.093</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>sen 0.999 0.991 0.957 0.981 0.977 0.879 1.000 1.000 0.852 1.000 0.925 0.918 0.992 0.995 0.943 0.998 0.766 0.785</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>Err 0.004 0.014 0.016 0.001 0.013 0.165 0.039 0.035 0.075 0.000 0.074 0.071 0.004 0.004 0.015 0.002 0.188 0.182</ns0:cell></ns0:row>
<ns0:row><ns0:cell>15</ns0:cell><ns0:cell cols='3'>BS 0.002 0.002 0.028 0.021 0.024 0.142 0.002 0.002 0.047 0.000 0.071 0.073 0.005 0.001 0.015 0.010 0.118 0.129</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>sen 0.995 0.991 0.956 0.998 0.977 0.805 1.000 1.000 0.807 1.000 0.927 0.910 0.995 1.000 0.962 1.000 0.756 0.764</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>Err 0.009 0.007 0.016 0.010 0.009 0.081 0.036 0.036 0.053 0.000 0.066 0.071 0.011 0.003 0.020 0.002 0.144 0.130</ns0:cell></ns0:row>
<ns0:row><ns0:cell>20</ns0:cell><ns0:cell cols='3'>BS 0.002 0.002 0.029 0.021 0.023 0.088 0.002 0.002 0.041 0.000 0.069 0.072 0.007 0.001 0.019 0.010 0.098 0.082</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>sen 0.987 0.990 0.956 1.000 0.986 0.875 1.000 1.000 0.895 1.000 0.919 0.911 0.997 0.999 0.986 1.000 0.797 0.816</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>Err 0.009 0.004 0.017 0.011 0.009 0.067 0.038 0.020 0.060 0.000 0.066 0.074 0.011 0.008 0.030 0.000 0.134 0.098</ns0:cell></ns0:row>
<ns0:row><ns0:cell>25</ns0:cell><ns0:cell cols='3'>BS 0.002 0.002 0.031 0.021 0.024 0.084 0.002 0.001 0.039 0.001 0.071 0.072 0.006 0.002 0.023 0.008 0.087 0.067</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>sen 0.992 0.997 0.956 1.000 0.987 0.881 1.000 1.000 0.870 1.000 0.923 0.915 0.999 0.997 0.977 1.000 0.826 0.885</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>Err 0.006 0.006 0.023 0.007 0.005 0.075 0.034 0.002 0.047 0.000 0.064 0.065 0.009 0.014 0.018 0.000 0.131 0.129</ns0:cell></ns0:row>
<ns0:row><ns0:cell>30</ns0:cell><ns0:cell cols='3'>BS 0.002 0.002 0.029 0.022 0.024 0.094 0.002 0.001 0.040 0.001 0.069 0.070 0.006 0.002 0.017 0.006 0.087 0.090</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell cols='3'>sen 0.992 0.997 0.957 1.000 0.994 0.883 1.000 1.000 0.866 1.000 0.914 0.924 0.998 0.999 0.951 1.000 0.828 0.855</ns0:cell></ns0:row>
</ns0:table></ns0:figure>
</ns0:body>
" | "Dear Editor,
Thank you for allowing us to improve our article “Robust proportional overlapping analysis for
feature selection in binary classification within functional genomic experiments” and resubmit.
We highly appreciate the very useful comments/suggestions of the anonymous reviewers.
All the comments/concerns of the reviewers have been addressed. This has significantly
improved the article.
Best regards,
Zardad Khan for all authors.
POINT-BY-POINT REPLY TO REVIEWERS' COMMENTS
Reviewer 1 (Anonymous)
1. The manuscript needs English proofreading.
Answer:
Thank you for the suggestion. The paper has been proofread and is in much better shape.
2. Several datasets are unbalanced. So, it is better to use g-mean instead of classification
accuracy.
Answer:
Thank you for the comment.
Classification accuracy is considered a good performance metric for datasets having a
balanced class distribution, and even for moderately or slightly skewed class distributions it
can still be useful. We used classification accuracy because the majority of the datasets,
e.g. Breast, Prostate, Srbct, TumorC and Colon, are only moderately/slightly skewed with
respect to class distribution. Moreover, to cope with the class imbalance problem, an
additional performance metric, sensitivity, has also been computed, which is a widely used
metric for imbalanced classification.
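To make the metrics in this answer concrete, a small illustrative R sketch (not taken from the paper) is given below; truth, pred and prob are hypothetical true labels, predicted labels and predicted positive-class probabilities, and the last line computes the g-mean suggested by the reviewer alongside the metrics reported in the paper.

```r
truth <- factor(c("pos", "pos", "neg", "neg", "pos", "neg"), levels = c("neg", "pos"))
pred  <- factor(c("pos", "neg", "neg", "neg", "pos", "pos"), levels = c("neg", "pos"))
prob  <- c(0.9, 0.4, 0.2, 0.1, 0.8, 0.6)  # predicted P(class = "pos")

err_rate    <- mean(pred != truth)                          # classification error rate
sensitivity <- mean(pred[truth == "pos"] == "pos")          # true positive rate
specificity <- mean(pred[truth == "neg"] == "neg")          # true negative rate
brier       <- mean((prob - as.numeric(truth == "pos"))^2)  # Brier score
g_mean      <- sqrt(sensitivity * specificity)              # reviewer's suggested metric
```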
3. The authors need to specify how they tune the SVM parameters.
Answer:
Thank you for the comment.
The support vector machine (SVM) classifier has been used with the default parameters as
implemented in the kernlab R package. This is fair in the sense that all the gene selection
methods were evaluated using the same settings.
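For concreteness, "default parameters" here simply means calling kernlab's ksvm without any tuning arguments, as in the sketch below (train, a data frame with a factor column class, is a placeholder). kernlab then falls back to its documented defaults, essentially an RBF kernel with automatically estimated sigma and cost C = 1.

```r
library(kernlab)

fit <- ksvm(class ~ ., data = train)  # no kernel/C arguments passed

# Roughly equivalent to spelling out kernlab's defaults:
# fit <- ksvm(class ~ ., data = train, kernel = "rbfdot", kpar = "automatic", C = 1)
```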
4. Need to add several recent papers, such as
https://doi.org/10.1007/s11634-018-0334-1
Answer:
Thank you for the comment.
Recent papers have been added, including the one suggested above. Please see the updated
references.
Reviewer 2 (Anonymous)
1. The whole manuscript needs an English expert for editing.
Answer:
Thank you for the suggestion. The paper has been proofread and is in much better shape.
2. For the SVM hyperparameters, the authors need to specify how they chose them.
Answer:
Thank you for the comment.
The support vector machine (SVM) classifier has been used with the default parameters as
implemented in the kernlab R package (see the sketch given under Reviewer 1, comment 3).
This is fair in the sense that all the gene selection methods were evaluated using the same settings.
Reviewer 3 (Anonymous)
1. line 194: Why is cross-validation not considered here?
Answer:
Thank you for the comment.
The split-sample estimation method met our purpose of assessing the methods under
different training set sizes, i.e. 30% and 70%. Moreover, this process is repeated a
sufficiently large number of times (Monte Carlo estimation), and assessment over a large
number of random splits into training and test sets gives validation similar to that of
cross-validation. For example, 50 runs of 10-fold cross-validation are expected to give
validation similar to 500 runs of 90% training / 10% testing split-sample estimation.
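As an illustration (not code from the paper), the same Monte Carlo split-sample scheme can be written directly with the caret package, whose LGOCV method performs leave-group-out, i.e. Monte Carlo, cross-validation; X and y below are hypothetical predictors and labels.

```r
library(caret)

# 500 random 90% training / 10% testing splits, comparable to 50 runs of
# 10-fold cross-validation as argued above
ctrl <- trainControl(method = "LGOCV", p = 0.9, number = 500)
fit  <- train(x = X, y = y, method = "knn",
              trControl = ctrl, tuneGrid = data.frame(k = 5))
```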
2. line 196: Why are 500 runs considered; is there any specific reason for this? What is the
reason behind choosing random forest (RF), support vector machine (SVM), and k-nearest
neighbours?
Answer:
Thank you for the comment.
We used 500 runs because there is no significant change in the results if the number of
runs is increased beyond 500. The given classifiers were used because they are widely
used methods. Further classifiers that could be used are named in the conclusion section.
3. Table 1: the gene count for the nki dataset is very low, not acceptable for classification. The
class sizes (96/48) are not clear to me; can you please explain them?
Answer:
Thank you for the comment.
The proposed method, like other gene selection methods, can be used for datasets with any
number of genes; the aim is to select the most informative genes. The column name 'Class
size' has now been changed to 'Class-wise distribution'. A class-wise distribution of (96/48)
means that 96 observations belong to one class and 48 belong to the other class in the given dataset.
4. line 303: If there is a possibility of improving the method, as you have mentioned in the last
paragraph before the Reference section, then why have you not used that in this paper?
Answer:
Thank you for the comment.
There is always room for improvement, and that is the beauty of research. We could not cover
all the possibilities for improving the method further, as it would make the paper too lengthy.
We believe that the contents given in the current version of the paper are sufficient for
publication in your esteemed journal.
5. line 222: Do provide the selected gene names and give their biological significance. Why
do they provide better classification results? Please justify that.
Answer:
Thank you for the comment.
A discussion on point 5 is now given in the paper; please read p. 9, l. 271 onwards.
6. Table 4: Why is the maximum gene count 30?
Answer:
Thank you for the comment.
The proposed method can be used for selecting any number of genes; the gene count
depends on the researcher's choice. Our aim was to provide the same grounds for the gene
selection methods considered in comparison with the proposed method.
7. line 294: ''median absolute deviation (MAD) than the measure of the interquartile range (IQR)
used in ?''. Reason for ?
Answer:
Thank you for the comment.
This has been explained in the manuscript. Please read p. 7, l. 161.
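For reference, the two dispersion measures compared at that point of the manuscript are, in standard notation (the 1.4826 consistency constant is the default scaling used by base R's mad(); the plain MAD omits it):

\[
\operatorname{MAD}(x) = 1.4826 \cdot \operatorname{median}_i \left| x_i - \operatorname{median}(x) \right|,
\qquad
\operatorname{IQR}(x) = Q_3(x) - Q_1(x).
\]

The MAD tolerates up to 50% of arbitrarily contaminated observations before it can be made arbitrarily large (breakdown point 50%), whereas the IQR breaks down at 25%, which is the robustness argument made in the manuscript.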
8. The references are very old; please update them with the latest references.
Answer:
Thank you for the suggestion. The reference list is now updated.
9. The authors should read the entire manuscript carefully and improve the quality of the
language. Lots of typos exist.
Answer:
Thank you for the suggestion.
The paper has been read carefully and all the corrections have been made.
10. In my view, the paper lacks novelty. A discussion of the latest related research is
required to judge the performance of the proposed feature selection method.
Answer:
We respect your concern.
The novelty of the proposed method has been explained on page 7. Furthermore, two recent
gene selection methods have been considered in the analysis, and additional related
methods have been included, as suggested by Reviewer 1.
" | Here is a paper. Please give your review comments after reading it. |
119 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background Until now, there are still a limited number of resources available to predict and diagnose COVID-19 disease. The design of novel drug-drug interaction for COVID-19 patients is an open area of research. Also, the development of the COVID-19 rapid testing kits is still a challenging task. Methodology. This review focuses on two prime challenges caused by urgent needs to effectively address the challenges of the COVID-19 pandemic, i.e., the development of COVID-19 classification tools and drug discovery models for COVID-19 infected patients with the help of artificial intelligence (AI) based techniques such as machine learning and deep learning models. Results. In this paper, various AIbased techniques are studied and evaluated by the means of applying these techniques for the prediction and diagnosis of COVID-19 disease. This study provides recommendations for future research and facilitates knowledge collection and formation on the application of the AI techniques for dealing with the COVID-19 epidemic and its consequences. Conclusions. The AI techniques can be an effective tool to tackle the epidemic caused by COVID-19. These may be utilized in four main fields such as prediction, diagnosis, drug design, and analyzing social implications for COVID-19 infected patients.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1.'>Introduction</ns0:head><ns0:p>The novel coronavirus has been reported in Wuhan (China) in December 2019. Wuhan became the epicenter of coronavirus <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. Coronavirus infected 138,987,378 persons and 2,988,860 deaths in 210 countries as on <ns0:ref type='bibr'>15 April, 2021 [2, 3]</ns0:ref>. World Health Organization (WHO) entitled the disease caused by coronavirus as COVID-19 and declared it an epidemic in February 2020 <ns0:ref type='bibr' target='#b4'>[4]</ns0:ref>. The virus, also known as SARS-CoV-2, is a novel and evolving virus. The treatment of SARS-CoV-2 is based on the symptoms present in the patient. The most common and specific symptoms are fever and cough with some other non-specific symptoms such as fatigue, headache, and dyspnea <ns0:ref type='bibr' target='#b5'>[5]</ns0:ref>. Supplement Table <ns0:ref type='table'>1</ns0:ref> shows the contribution of these symptoms in infected persons <ns0:ref type='bibr' target='#b6'>[6]</ns0:ref>. The common transmission methods are human contact and respiratory droplets. The COVID infection depends upon age, preexisting health conditions, hygiene and social habits, location, and frequency of person interactions <ns0:ref type='bibr' target='#b7'>[7]</ns0:ref>.</ns0:p><ns0:p>Supplement Table <ns0:ref type='table'>2</ns0:ref> shows the estimation of the severity of COVID-19 disease infected peoples <ns0:ref type='bibr' target='#b8'>[8,</ns0:ref><ns0:ref type='bibr' target='#b9'>9]</ns0:ref>. The risk associated with infection is broadly classified into three categories namely infection risk, severity risk, and outcome risk <ns0:ref type='bibr' target='#b10'>[10]</ns0:ref>. The infection risk is associated with a specific group/person having COVID-19. The person/group having severe symptoms of COVID-19 and require intensive care and hospitalization is known as severe risk. If the treatment is not effective towards the infected person/group then there is less possibility to recover or die. Self-isolation and social distancing are the most effective strategy to alleviate this epidemic. Isolation and house quarantine are core strategies to alleviate the infectious disease and reduce the transmission via diminishing the contact of those that are infected <ns0:ref type='bibr' target='#b11'>[11]</ns0:ref>. Most governments have imposed lockdown to save lives. However, the economy of every country was greatly affected by the lockdown process. The Organization of Economic Cooperation and Development declared that the growth rate may be slow down by 2.4% <ns0:ref type='bibr' target='#b12'>[12]</ns0:ref>.</ns0:p><ns0:p>The main problems associated with coronavirus's pandemic can be resolved through Artificial Intelligence (AI) <ns0:ref type='bibr' target='#b13'>[13]</ns0:ref>. AI has the potential to screen the population and predicting the risk of infection. The prediction process utilizes information such as how much time a person spends in a highly infected area and how many persons are infected in that region. AI developed a spatial prediction model based on this information and envisage the infection transmission <ns0:ref type='bibr' target='#b14'>[14]</ns0:ref>. AI-based prediction model uses information about the infected person through the symptoms. Coronavirus infected person does not show the symptoms in most of the cases. Due to this, it is very difficult to detect an infected person. The same thing has been reported in Wuhan city. 
In Wuhan, 50 % of the infected persons are asymptomatic carriers <ns0:ref type='bibr' target='#b15'>[15]</ns0:ref>. The exhaustive testing of coronavirus is required to develop a better predictive model. The same scenario has been implemented in South Korea to prevent the spread of virus infection. The exhaustive testing produces a large amount of data about the infected and non-infected person. Based on these data, AI can be used to suppress the spread of infection, development of vaccines, diagnosis, and social-economic impact <ns0:ref type='bibr' target='#b16'>[16,</ns0:ref><ns0:ref type='bibr' target='#b17'>17]</ns0:ref>. Recently, most of the AI researchers are working on the above-mentioned areas. The number of preprints available on the Internet is a witness of this work <ns0:ref type='bibr'>[18]</ns0:ref>. Recently, many medical image processing techniques based upon chest CT and chest X-ray images have been considered. Also, meta-heuristics techniques can be useful to diagnose COVID-19 patients <ns0:ref type='bibr' target='#b20'>[19]</ns0:ref>.</ns0:p><ns0:p>By the end of November 2020, more than 75,000 scholarly articles were published and indexed on Pubmed on COVID-19 <ns0:ref type='bibr' target='#b21'>[20]</ns0:ref>. However, these articles did not address in-depth the key issues in applying computational intelligence to combating the COVID-19 pandemic. Thus, it is time to discuss and summarize studies related to artificial intelligence from such a large number of articles. Considering the above observations, now it is the time to systematically categorize and review the current progress of research on artificial intelligence. Accordingly, this survey aims to assemble a. Related surveys Since the start of the COVID-19 pandemic, many research paper was published (just Scopus has already indexed over 50000 papers alone). Consequently, several surveys and systematic review studies have tried to systemize and summarize the state of research and knowledge in this emerging research sub-field, including on the use of artificial intelligence methods. Related survey papers are summarized in Supplement Table <ns0:ref type='table'>3</ns0:ref> and discussed in more detail below.</ns0:p><ns0:p>Albahri et al. <ns0:ref type='bibr' target='#b22'>[21]</ns0:ref> reviewed the state-of-the-art techniques for coronavirus prediction algorithms based on data mining and ML assessment. They used Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) as a methodological guideline. The main focus of the survey study was on the development of different AI and ML applications, systems, algorithms, methods and techniques. However, only eight articles were fully evaluated and included in this review, which outlined the insufficiency of research in this important area. <ns0:ref type='bibr'>Lalmuanawma et al. [22]</ns0:ref> aimed to review the role of AI and ML as one significant method in the arena of screening, predicting, forecasting, contact tracing, and drug development for SARS-CoV-2. The study concluded that the use of modern technology with AI and ML dramatically improves the screening, prediction, contact tracing, forecasting, and drug/vaccine development although they also noted the lack of deployment of AI models to show their real-world operation. Ozsahin et al. <ns0:ref type='bibr' target='#b23'>[23]</ns0:ref> analyzed the use artificial intelligence (AI) techniques to diagnose COVID-19 with chest computed tomography (CT). 
Their study included 30 articles from ArXiv, MedRxiv, and Google Scholar identified using the selective assessment method. Pham et al. <ns0:ref type='bibr' target='#b24'>[24]</ns0:ref> present an overview of AI and big data, then identify the applications aimed at fighting against COVID-19, next highlight challenges and issues associated with state-of-the-art solutions, and finally come up with recommendations for the communications to effectively control the COVID-19 situation. They based their study on the selected assessment of peerreviewed papers and preprints from IEEE Xplore, Nature, ScienceDirect, Wiley, arXiv, medRxiv, and bioRxiv. Rasheed et al. <ns0:ref type='bibr' target='#b25'>[25]</ns0:ref> presented the collation of the current state-of-the-art technological approaches applied to the context of COVID-19, while covering multiple disciplines and research perspectives. <ns0:ref type='bibr'>Tseng et al. [26]</ns0:ref> focused on categorizing and reviewing the current progress of computational intelligence for fighting COVID-19, which additionally to machine learning and neural networks also discuss fuzzy logic, probabilistic and evolutionary computation based methods. Tayarani <ns0:ref type='bibr' target='#b27'>[27]</ns0:ref> presented the applications of artificial intelligence techniques in COVID-19. They discussed the machine learning techniques for prediction and treatment of infected persons.</ns0:p><ns0:p>However, since the body of knowledge on COVID-19 related research problems is rapidly updated and supplemented, there is a need to provide a new survey of papers to summarize the most recent state-of-the-art in the application of AI techniques for in the COVID-19 related research fields.</ns0:p></ns0:div>
<ns0:div><ns0:head>b. Survey Methodology</ns0:head><ns0:p>This survey capitalizes on previous literature to describe approaches for handling COVID-19 that can empower the research community to develop new AI-based methods for the prediction of cases, diagnosis of patients, drug discovery, and design of vaccines.</ns0:p><ns0:p>Similarly to the survey of <ns0:ref type='bibr'>Lalmuanawma et al. [22]</ns0:ref> and Pham et al. <ns0:ref type='bibr' target='#b24'>[24]</ns0:ref> we used the selective assessment method, while similarly to the survey of Ozsahin et al. <ns0:ref type='bibr' target='#b23'>[23]</ns0:ref> we focused on less formal databases such as BiorXiv and ArXiv, which allows to analyzes the most recent trends in research without waiting for formal publication in indexation by major databases, which can take a significant amount of time. Due to the nature of this study being a relatively new research subject, we mostly focused on the pre-print papers.</ns0:p><ns0:p>The rationale for this survey is the extreme growth of research papers published on COVID-19 papers, which requires them to be analyzes, categorized, and reflected upon. Fig. <ns0:ref type='figure' target='#fig_8'>1</ns0:ref> shows the</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.'>Coronavirus Overview</ns0:head><ns0:p>A coronavirus is a group of viruses that can be transferred between human beings and animals. The novel coronavirus is known as SARS-CoV-2. Coronavirus (CoV) belongs to the family Coronavirinae of the order Nidovirales <ns0:ref type='bibr' target='#b29'>[29]</ns0:ref>. CoV is broadly classified into four main classes namely α, β, γ, and δ. The first two (i.e., α and β) contaminate mammals only. The later ones (i.e., γ and δ) contaminate birds. They may contaminate mammals in a rare case. The genome of CoV consists of a single-stranded positive-sense RNA whose length is 30 kb <ns0:ref type='bibr' target='#b30'>[30]</ns0:ref>. CoV is the largest among the existing RNA viruses. It has also a 5' cap and 3' poly-A tail <ns0:ref type='bibr' target='#b30'>[30]</ns0:ref>.</ns0:p><ns0:p>Based on the literature available, CoVs infecting human beings include two α-CoVs (229E and NL63), and five β-CoVs (OC43, HKU1, MERS-CoV, SARS-CoV, and SARS-CoV-2. SARS-CoV-2 is a novel class of β-coronavirus genera that consists of bat-SARS-like (SL)-CoV ZC45, bat-SL-CoV ZXC21, SARS-CoV, and MERS-CoV. From recent studies, it found that SARS-CoV-2 came from wild animals. However, the exact source of this virus is unknown. There is a substantial genetic difference between now and SARS-CoV and numerous similarities among them in terms of chemical and physical characteristics <ns0:ref type='bibr' target='#b31'>[31]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.'>Data Gathering Systems for COVID-19</ns0:head><ns0:p>As we knew that the COVID-19 is an infectious disease. The spread of this virus can be stopped through no human interaction. AI-based tools are used to collect the COVID-19 data, develop a vigilant system, and visualize the COVID-19 data <ns0:ref type='bibr' target='#b32'>[32]</ns0:ref>. Recently, smartphone applications are also developed to diagnose the user's health and trace the spread of infection. The main intention behind these applications is to identify vulnerable communities, provide real-time information to both patients and medical staff, detect infected hotspot areas, and generate advice for patient's health <ns0:ref type='bibr' target='#b33'>[33]</ns0:ref>. Fig. <ns0:ref type='figure'>2</ns0:ref> shows the categorization of data gathering systems for COVID-19.</ns0:p></ns0:div>
<ns0:div><ns0:head>a. An early cautionary and vigilant system</ns0:head><ns0:p>BlueDot is an effective analysis tool, which is developed in Canada <ns0:ref type='bibr'>[34]</ns0:ref>. It utilizes natural language processing (NLP) and machine learning technique. It was able to identify the outbreak of COVID-19 and generated vigilant alerts to users. It was used to generate the cautionary warnings to cities where the people reached from Wuhan city after January 2020 <ns0:ref type='bibr'>[35]</ns0:ref>. In Belgium, the telecom operators are integrated with healthcare for analysing the infection spread in particular areas and identifying the infected hotspot areas. They have segregated the population into different regions according to the spread of infection. The same concept has also been used in some other countries. It provides real-time monitoring of patients and the consultant can use this information to prepare the prevention plans for virus infection in time <ns0:ref type='bibr' target='#b36'>[36]</ns0:ref>.</ns0:p><ns0:p>Similarly, Austria Telecom has agreed with its authorities to deliver the customer data. The customer data is used to trace their movements in the hotspot Lombardy region. MIT's consortium is working on a smartphone application to detect the spread of virus infection. Global Positioning System (GPS) in a smartphone is used for checking the intersection of user's trails with trails of infected persons. They used cryptographic techniques to protect the data. The application generates early cautionary signs to analyse the risk of infection spread after contacting with the infected persons. Instagram developed a COVID-19 tracker named as 'RT.live'. It shows the state-by-state information on coronavirus infection in the US. The data analysis algorithm is used to estimate the reproduction of virus infection.</ns0:p><ns0:p>'HealthyTogether' application is developed by the Utah government for reducing the spreading of COVID-19. It is used to analyze the symptoms, determine the nearby testing center, and assessment of test results. Singapore Government agency developed a 'TraceTogether' application to protect the community from COVID-19. It helps contact tracers notify quickly through Bluetooth connection.</ns0:p><ns0:p>India Government developed the 'Aarogya Setu' application to monitor the infection caused by COVID-19 patients. This application has used a questionnaire to determine whether the person is infected or not. It is helpful to find if the infected patient was somewhere near the user. Another vigilant system was HealthMap to impose quarantines and restrict the movement of peoples. Supplement Table <ns0:ref type='table'>5</ns0:ref> shows a brief description of the vigilant system for COVID-19. Supplement Table <ns0:ref type='table'>6</ns0:ref> depicts the functionality analysis of different real-time surveillance mobile apps for COVID-19 <ns0:ref type='bibr' target='#b37'>[37]</ns0:ref>. <ns0:ref type='table'>7</ns0:ref> depicts the comparative analysis of different dashboards concerning the statistical information reported.</ns0:p></ns0:div>
<ns0:div><ns0:head>c. COVID-19 datasets and resources</ns0:head><ns0:p>Artificial Intelligence techniques require big data for further assessing the drug discovery, risk assessment, spread of infection, treatment, and cure of COVID infected patients. The datasets are broadly classified into text, social media, biomedical, speech, and case studies. The classification of COVID-19 datasets is shown in Fig. <ns0:ref type='figure'>3</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>i. Text Datasets</ns0:head><ns0:p>The text datasets include risk factors, non-pharmaceutical interventions, incubation period, the spread of virus infection, and stability of the environment <ns0:ref type='bibr' target='#b40'>[40,</ns0:ref><ns0:ref type='bibr' target='#b41'>41]</ns0:ref>. <ns0:ref type='bibr'>WHO</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:head>ii. Social Media Datasets</ns0:head><ns0:p>COVID-19 Tweets and Covid-19 Twitter dataset comprise of coronavirus related tweets, which are misclassified information and rumors on Twitter <ns0:ref type='bibr' target='#b42'>[42]</ns0:ref>. These datasets contain the reactions from different persons on the tweets <ns0:ref type='bibr' target='#b43'>[43]</ns0:ref>. COVID-19 Real World Worry Dataset <ns0:ref type='bibr' target='#b44'>[44]</ns0:ref> consists of labeled texts of persons' emotional responses towards COVID-19. It also consists of public infection caused by coronavirus and the requirement of hospital beds. Supplement Table <ns0:ref type='table'>8</ns0:ref> depicts the downloadable links and description of datasets <ns0:ref type='bibr' target='#b48'>[48]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.'>Prediction Models</ns0:head><ns0:p>AI-based prediction models can be used to predict the mortality rate, infection in patients through cough and biomarkers, genome structure associated with fatality rate, and severity rate. The epidemic of the tracker was developed by Metabiota. It utilized the forecasting machine learning model used for the prediction of infection spread. Robert Koch Institute developed a SIR prediction model, which uses quarantines, social distancing, and lockdowns. This model was implemented in the R language. It was helpful to reduce the spreading of infections.</ns0:p><ns0:p>Yan et al. <ns0:ref type='bibr' target='#b49'>[49]</ns0:ref> proposed a machine learning technique to predict the severity of COVID-19. The survival rate of patients at Tongji Hospital in Wuhan is assessed through the prognostic biomarker. The prediction accuracy obtained from the proposed machine learning technique was approximately 90%. <ns0:ref type='bibr'>Jiang et al. [50]</ns0:ref> developed an AI framework that has the predictive capability to analyze the severity of patients. They developed an algorithm to identify the clinical characteristics of infected persons. The predictive model used clinical characteristics to predict the severe illness of fifty-three patients with 80% accuracy. The predictive model is tested on two hospitals in Wenzhou, Zhejiang, China.</ns0:p><ns0:p>Alotaibi et al. <ns0:ref type='bibr' target='#b52'>[51]</ns0:ref> used machine learning techniques namely Support Vector Machine (SVM), Artificial Neural Network (ANN) and Random forest to predict the severity of infected patients. The prediction accuracies obtained from SVM, ANN, and Random forest are 86.67%, 83.33% and 90.83%, respectively. Chatterjee et al. <ns0:ref type='bibr' target='#b53'>[52]</ns0:ref> developed a susceptible, exposure, infectious and recovered (SEIR) model to study the impact of healthcare during the pandemic situation. By using this model, hospitalizations and ICU requirements can be reduced to 90%. Ghosal et al. <ns0:ref type='bibr' target='#b54'>[53]</ns0:ref> used a linear regression to predict the number of deaths in India. The predicted death rate is 211 and 467 by the end of 5 th and 6 th week, respectively. Imarn et al. <ns0:ref type='bibr' target='#b55'>[54]</ns0:ref> developed the AI4COVID-19 tool for the preliminary diagnosis of COVID-19. AI4COVID used a two-second cough recording of infected persons. After the analysis of cough samples, the AI tool generates preliminary diagnosis for patients. AI4COVID distinguished COVID and non-COVID patients with 90% accuracy. Feng et al. <ns0:ref type='bibr' target='#b56'>[55]</ns0:ref> developed a diagnosis model for the preliminary identification of infected patients. The prediction was based on travel history, clinical symptoms, and test results. Lasso regression applied to features obtained from clinical symptoms for prediction. The features are extracted from Rao et al. <ns0:ref type='bibr' target='#b57'>[56]</ns0:ref> implemented machine learning algorithms to determine the possible causes of coronavirus in the quarantine areas through a mobile-based survey. The algorithms can easily predict the no-risk, moderate risk, and high risk of infection through travel history and contact with infected persons.</ns0:p><ns0:p>Tang et al. <ns0:ref type='bibr' target='#b58'>[57]</ns0:ref> implemented a machine learning technique for automatic severity assessment of infected patients using chest CT images. 
They extracted features from lungs and applied them to a random forest model for predicting the severity of COVID-19. The accuracy obtained from the random forest model was 87.50%. Qi et al. <ns0:ref type='bibr' target='#b59'>[58]</ns0:ref> used logistic regression (LR) and random forest (RF) to extract the features from 72 pneumonia lesions of 31patients. The sensitivity and specificity obtained from LR are 1.0 and 0.89, respectively. However, the RF model provided a sensitivity of 0.75 and a specificity of 1.0. Patrikar et al. <ns0:ref type='bibr' target='#b60'>[59]</ns0:ref> modified the SEIR framework to study the effect of social distancing on the spread of coronavirus. They found that the infection can be reduced to 78% through social distancing. Supplement Table <ns0:ref type='table'>9</ns0:ref> shows the comparative analysis of prediction models for COVID-19.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.'>Screening Techniques for COVID-19</ns0:head><ns0:p>The diagnosis of a COVID-19 patient is a challenging task. The testing of every patient is timeconsuming. Due to the pandemic situation, faster and cheaper tests are required to generate the medical report. AI-based techniques are used to screen COVID-19 patients. There are three different methods to screen patients using AI. These are face recognition, wearable devices, and virtual healthcare assistant (see Fig. <ns0:ref type='figure'>4</ns0:ref>) <ns0:ref type='bibr' target='#b61'>[60]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>a. Screening through patients' face</ns0:head><ns0:p>Face recognition technology is used to eliminate the spread of infection caused by coronavirus through human contact. It can be used with a temperature detection tool for the efficient identification of COVID-19 patients. Drones/Robots equipped with the thermal scanner are used for detecting fever in patients from an appropriate distance. Chinese firm Baidu developed infrared cameras to scan the crowds for temperature scanning <ns0:ref type='bibr' target='#b62'>[61]</ns0:ref>. They can screen hundreds of persons within one minute. However, the fever can be wrongly detected if a person wears some items on the face. AI-based face recognition tools can be installed at schools, colleges, railway, airport, and community places. These tools can automatically detect the persons having fever, tracing their positions, and detect whether the person has a mask or not.</ns0:p></ns0:div>
<ns0:div><ns0:head>b. Screening through a wearable device</ns0:head><ns0:p>Nowadays, wearable health devices such as Fitbits and Garmins are more popular to monitor physiological parameters such as heart rate, blood pressure, oxygen levels, body temperature, movement, and sleep for a better lifestyle. Apple developed an AI-based watch to determine the temperature and heart rate to identify the symptoms of patients <ns0:ref type='bibr' target='#b63'>[62]</ns0:ref>. OURA developed an activity tracking ring that uses body temperature, breathing rate, and heart rate to determine the onset patterns, progress, and recovery of the patient <ns0:ref type='bibr' target='#b64'>[63]</ns0:ref>. Stanford Medicine and Google company came together to utilize the data collected from the wearable device to detect the symptoms of an infected person. They used body temperature and heart rate for fighting against the COVID-19 infection. Central Queensland University collaborated with Cleveland Clinic to analyze the data collected from Whoop's wearable devices <ns0:ref type='bibr'>[64]</ns0:ref>. Shanghai Public Health Center has developed in-built temperature sensors in wearable devices to detect the body temperature for COVID-19 patients regularly. The detected temperature is continuously sent to the nursing station for patient monitoring. Canada based Proxxi technologies developed a wrist device named 'Halo' that communicate to user through vibration <ns0:ref type='bibr' target='#b67'>[65]</ns0:ref>. It gives an alert to the user about his come within range of 6 feet of another wearable user. It uses Bluetooth technology to communicate with others and keep the data record about users whom the wearable user met.</ns0:p></ns0:div>
<ns0:div><ns0:head>c. Screening through Chatbots</ns0:head><ns0:p>In this pandemic situation, chatbot developers and healthcare systems have joined forces for the identification of infected patients. Chatbots are used as an accelerator tool for the healthcare of COVID-infected persons. AI-based Stallion used the capabilities of natural language processing (NLP) to develop a chatbot as a virtual healthcare agent <ns0:ref type='bibr' target='#b68'>[66]</ns0:ref>. It endorses protection measures, monitors symptoms, and generates suggestions for individuals regarding home quarantine or hospital admission. Some countries developed a 'Self-Triage' system that uses a questionnaire about the symptoms of patients. Microsoft developed a healthcare virtual assistant that helps to determine the appropriate action using the symptoms of patients. They included risk assessment, clinical triage, and COVID-19 question-answering in their chatbot <ns0:ref type='bibr' target='#b69'>[67]</ns0:ref>.</ns0:p><ns0:p>Google Cloud developed a virtual assistant that provides information about COVID-19. The virtual assistant quickly responds to questions raised by users and provides optimal information <ns0:ref type='bibr' target='#b70'>[68]</ns0:ref>. BITS students developed a chatbot to raise awareness among users. An AI-enabled doctor video bot named AskDoc was developed to answer COVID-19 queries using voice and text. Facebook implemented a WhatsApp bot to keep users updated on the outbreak of COVID-19 <ns0:ref type='bibr' target='#b71'>[69]</ns0:ref>. The Zoe chatbot gives users answers and appropriate information <ns0:ref type='bibr' target='#b72'>[70]</ns0:ref>. Supplement Table <ns0:ref type='table'>10</ns0:ref> shows the AI-based tools/techniques for the screening of COVID-19 symptoms.</ns0:p></ns0:div>
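The questionnaire-driven 'Self-Triage' idea can be pictured with a minimal rule-based sketch such as the one below; the symptom weights, thresholds, and suggested actions are illustrative assumptions and not clinical guidance.

# Toy rule-based self-triage over a symptom questionnaire (illustrative only).
SYMPTOM_WEIGHTS = {'fever': 2, 'dry_cough': 2, 'breathing_difficulty': 3,
                   'loss_of_smell': 2, 'sore_throat': 1, 'fatigue': 1}

def triage(answers):
    """answers maps a symptom name to True/False; returns a suggested action."""
    if answers.get('breathing_difficulty'):
        return 'seek medical care immediately'
    score = sum(w for s, w in SYMPTOM_WEIGHTS.items() if answers.get(s))
    return 'contact a clinician / get tested' if score >= 4 else 'self-monitor at home'

print(triage({'fever': True, 'dry_cough': True}))  # -> contact a clinician / get tested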
<ns0:div><ns0:head n='7.'>Diagnosis of COVID-19</ns0:head><ns0:p>In this pandemic situation, a quicker diagnosis is required. The most widely used technique for diagnosis is the real-time reverse transcription-polymerase chain reaction (RT-PCR) <ns0:ref type='bibr' target='#b73'>[71]</ns0:ref>. The well-known radiological imaging techniques are X-ray and computed tomography (CT) <ns0:ref type='bibr' target='#b74'>[72]</ns0:ref>. Due to the lower sensitivity (60%-70%) of RT-PCR, symptoms can instead be detected through radiological images <ns0:ref type='bibr' target='#b75'>[73]</ns0:ref>. Radiological images are sensitive enough to detect the infection caused by COVID-19 and can be used to monitor the patients <ns0:ref type='bibr' target='#b76'>[74]</ns0:ref>. Fewer clinical experts are available compared to the number of COVID-19 cases that arose in this pandemic situation <ns0:ref type='bibr' target='#b77'>[75]</ns0:ref>. Therefore, AI-based tools and techniques can be used for faster diagnosis.</ns0:p></ns0:div>
<ns0:div><ns0:head>a. Radiological Imaging</ns0:head><ns0:p>Researchers found that 33% of chest CTs have rounded lung opacities. It is observed from CT scans that symptoms may not be detected in the initial two days <ns0:ref type='bibr' target='#b78'>[76]</ns0:ref>. Abnormal findings are detected in the CT scans of patients ten days after symptoms are observed <ns0:ref type='bibr' target='#b79'>[77]</ns0:ref>. Initially, due to the low sensitivity of RT-PCR kits, clinical experts suggested using chest CT for diagnosis <ns0:ref type='bibr' target='#b80'>[78]</ns0:ref>. The traditional methods take 15 minutes for the analysis of a chest CT scan. Nowadays, machine learning and deep learning techniques are used for the automated analysis of CT scans and chest X-rays. These techniques help to speed up the analysis process <ns0:ref type='bibr' target='#b81'>[79,</ns0:ref><ns0:ref type='bibr'>80,</ns0:ref><ns0:ref type='bibr' target='#b84'>81]</ns0:ref>. Fig. <ns0:ref type='figure'>5</ns0:ref> depicts the distribution of radiological images for the diagnosis of COVID-19.</ns0:p></ns0:div>
<ns0:div><ns0:head>i. AI Tools for Radiological Images</ns0:head><ns0:p>Baidu's team developed the LinearFold software that diagnoses the infection of COVID-19 in 27 seconds <ns0:ref type='bibr' target='#b85'>[82]</ns0:ref>. The prediction time is reduced from 55 minutes to 27 seconds, which helps in developing a drug for coronavirus. It can identify lesions in terms of volume, proportion, and numbers. The accuracy obtained from the system is 92% on available datasets. Chinese scientists developed a healthcare application named InferVISION to investigate COVID-19 patients. InferVISION utilizes NVIDIA's Clara SDK <ns0:ref type='bibr' target='#b86'>[83]</ns0:ref>. It can identify positive cases within a very small amount of time. A Shenzhen-based company has developed a MicroMultiCopter that can carry medical samples from infected and dense areas. It can also be used for food and medical item delivery. LinkingMed developed a technique to analyze a CT scan in less than sixty seconds. It provides 92% accuracy on test datasets.</ns0:p><ns0:p>Canada-based DarwinAI developed a neural network to analyze X-rays for COVID-19 infection. Some hospitals do not have testing kits and radiologists to analyze the infection. In that case, an X-ray is an alternative to testing kits. DarwinAI developed COVID-Net with the University of Waterloo and trained it on 17,000 X-ray images. They are working on COVID-Net for identifying the risk of infection associated with workers <ns0:ref type='bibr' target='#b87'>[84]</ns0:ref>.</ns0:p><ns0:p>Mumbai-based company Qure.ai developed an AI-based chest X-ray system named qXR <ns0:ref type='bibr' target='#b88'>[85]</ns0:ref>. The qXR system is used to detect COVID patients from chest X-rays. qXR utilizes deep learning models to detect lung abnormalities. It helps trainee doctors with a second opinion about the patient.
The accuracy obtained in detecting COVID-infected patients is approximately 95% over 11,000 patients. Lunit's software named 'INSIGHT CXR' is used to scan the abnormalities of the lungs <ns0:ref type='bibr'>[86]</ns0:ref>. These tools help to handle the coronavirus pandemic.</ns0:p><ns0:p>Ron Li implemented an Epic model that can assess whether patients have to shift to the ICU or not <ns0:ref type='bibr' target='#b89'>[87]</ns0:ref>. He explored the 'Deterioration Index' to identify whether a patient's condition is deteriorating or not. Epic used data from 130,000 patients to assess the validity of the Deterioration Index. He modified the model for the evaluation of COVID-19 patients in March. Six different organizations have evaluated the performance of the Epic model on 3000 COVID patients and confirmed its performance.</ns0:p><ns0:p>Johns Hopkins University (JHU) developed a diagnostic tool for coronavirus infection <ns0:ref type='bibr' target='#b89'>[87]</ns0:ref>. Researchers at JHU developed a vigilant system for respiratory failure that can be caused by COVID. The respiratory diagnosis model helps doctors to assess infected patients. It also envisages the need for ventilators and critical hospital instruments.</ns0:p><ns0:p>Maghdid et al. <ns0:ref type='bibr' target='#b90'>[88]</ns0:ref> proposed a mobile application to scan CT images. CAD4COVID is an AI-based software to distinguish infected patients from chest X-rays <ns0:ref type='bibr'>[89]</ns0:ref>. Huazhong University of Science and Technology developed an algorithm for the identification of COVID-19 infected persons with 80% accuracy. However, they tested it on only 53 patients from two different Chinese hospitals. Supplement Table <ns0:ref type='table'>11</ns0:ref> depicts the description of AI tools used for the treatment of COVID-19.</ns0:p></ns0:div>
<ns0:div><ns0:head>ii. Deep Learning Architectures for Radiological Images</ns0:head><ns0:p>Hemdan et al. <ns0:ref type='bibr'>[90]</ns0:ref> developed an automated diagnosis framework called COVIDX-Net for the analysis of X-ray images. COVIDX-Net utilizes seven different deep learning models and was tested over 50 X-ray images. The accuracy obtained from COVIDX-Net is 90%. Wang et al. <ns0:ref type='bibr' target='#b91'>[91]</ns0:ref> developed a deep convolutional neural network (CNN) model (COVID-Net) for the identification of infection in chest X-ray images. The COVID-Net model was tested over 13,800 chest X-ray images and obtained 93.3% accuracy in recognizing normal, typical pneumonia, and COVID-19 cases. Ioannis et al. <ns0:ref type='bibr' target='#b92'>[92]</ns0:ref> evaluated CNNs for the classification of COVID-19 cases. They used the transfer learning technique on 1427 X-ray images, achieving 98.75% and 93.48% accuracy for two- and three-class classification, respectively. Khan et al. <ns0:ref type='bibr' target='#b93'>[93]</ns0:ref> proposed the CoroNet architecture for the detection of COVID-19 in chest X-rays of infected patients. CoroNet utilizes the concept of the Xception model. The classification performance of CoroNet was 99% and 89.6% for binary and three-class classification, respectively.</ns0:p><ns0:p>Ozturk et al. <ns0:ref type='bibr' target='#b94'>[94]</ns0:ref> developed a DarkCovidNet model for the detection of infected patients using chest X-ray images. The DarkCovidNet model has seventeen convolutional layers and different filtering for each layer. The classification accuracies obtained from DarkCovidNet were 98.08% and 87.02% for binary and multi-class classification, respectively. Sethy et al. <ns0:ref type='bibr' target='#b95'>[95]</ns0:ref> suggested a deep learning-based methodology for the detection of infected patients using chest X-ray images. The proposed methodology used CNN models with a support vector machine (SVM). The classification accuracy of the ResNet50 model with the SVM classifier is 95.38%. To overcome the shortcoming of hyperparameter tuning associated with transfer models, Kaur et al. <ns0:ref type='bibr' target='#b96'>[96]</ns0:ref> utilized the strength Pareto evolutionary algorithm-II (SPEA-II) for chest X-ray images. They modified AlexNet to extract the features from X-ray images and applied them in the classification process. The proposed approach outperforms the competitive models in terms of performance measures. Jain et al. <ns0:ref type='bibr' target='#b97'>[97]</ns0:ref> developed a deep learning model for the diagnosis of chest X-ray images. InceptionV3, Xception, and ResNeXt are used in the development of the proposed model. The classification accuracies obtained from this model are 99% and 96% for training and testing, respectively.</ns0:p><ns0:p>Chowdhury et al. <ns0:ref type='bibr' target='#b98'>[98]</ns0:ref> designed an automatic coronavirus detection technique for X-ray images. This technique was evaluated on a chest X-ray dataset and attained a classification accuracy of 99.7%. Islam et al. <ns0:ref type='bibr' target='#b100'>[99]</ns0:ref> combined a convolutional neural network (CNN) and long short-term memory (LSTM) to diagnose coronavirus infection in patients. The performance of this model was validated on 4575 X-ray images. The sensitivity and specificity of this model were 99.3% and 99.2%, respectively. Nour et al. <ns0:ref type='bibr' target='#b101'>[100]</ns0:ref> developed a CNN model to extract discriminative features from chest X-rays. These features were applied to three well-known machine learning algorithms, namely k-nearest neighbor (KNN), SVM, and decision tree, for classification. The sensitivity and specificity obtained from the SVM-based classifier are 89.39% and 99.75%, respectively.</ns0:p>
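As a concrete illustration of the 'pretrained CNN features plus SVM' pipeline attributed to Sethy et al. above, the following is a minimal sketch that uses a pretrained ResNet50 as a frozen feature extractor and a linear SVM as the classifier; the random arrays stand in for a real chest X-ray dataset, so the whole setup is an assumption-laden sketch rather than the authors' exact implementation.

# Hedged sketch: ResNet50 deep features + SVM, in the spirit of Sethy et al.
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from sklearn.svm import SVC

extractor = ResNet50(weights='imagenet', include_top=False, pooling='avg')

def deep_features(images):
    # images: (n, 224, 224, 3) RGB arrays in the 0-255 range
    return extractor.predict(preprocess_input(images.copy()), verbose=0)

# Random stand-ins for chest X-rays; replace with real images and labels.
X_train = np.random.rand(8, 224, 224, 3) * 255
y_train = np.array([0, 1] * 4)             # 0 = normal, 1 = COVID-19 (toy labels)
X_test = np.random.rand(4, 224, 224, 3) * 255
y_test = np.array([0, 1] * 2)

clf = SVC(kernel='linear').fit(deep_features(X_train), y_train)
print('toy accuracy:', clf.score(deep_features(X_test), y_test))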
<ns0:p>A number of research articles were published in the field of diagnosis of COVID-19 using chest X-ray images <ns0:ref type='bibr' target='#b102'>[101]</ns0:ref><ns0:ref type='bibr' target='#b103'>[102]</ns0:ref><ns0:ref type='bibr' target='#b104'>[103]</ns0:ref><ns0:ref type='bibr' target='#b105'>[104]</ns0:ref><ns0:ref type='bibr' target='#b106'>[105]</ns0:ref><ns0:ref type='bibr' target='#b107'>[106]</ns0:ref><ns0:ref type='bibr' target='#b108'>[107]</ns0:ref><ns0:ref type='bibr' target='#b109'>[108]</ns0:ref><ns0:ref type='bibr' target='#b110'>[109]</ns0:ref><ns0:ref type='bibr' target='#b111'>[110]</ns0:ref><ns0:ref type='bibr' target='#b112'>[111]</ns0:ref><ns0:ref type='bibr' target='#b113'>[112]</ns0:ref>.</ns0:p><ns0:p>Wang et al. <ns0:ref type='bibr' target='#b114'>[113]</ns0:ref> implemented deep learning approaches to extract specific features from CT scan images. These features are used to detect coronavirus infection using a transfer learning model. The developed model was tested on 1065 CT images of COVID-19. The accuracy obtained from their model is 89.5%. Tan et al. <ns0:ref type='bibr' target='#b115'>[114]</ns0:ref> hybridized a super-resolution generative adversarial network (SRGAN) model and VGG16 to detect infected patients by chest CT. SRGAN was used to enhance the resolution of CT images. VGG16 was used to differentiate the infected and healthy regions of CT. The developed model was validated over 275 COVID-19 and 195 normal CT images. The classification accuracy obtained from the model was 97.87%. Xu et al. [115] used different CNN models to detect infected patients. CT images are processed to extract the interesting regions. A 3D CNN model is used to segment the CT images. Thereafter, the segmented images are further classified into three different classes. The overall accuracy of the deep learning models was 86.70%. Singh et al. [116] developed an automatic chest CT analysis system to classify whether infected persons are positive or not. The CNN hyper-parameters are tuned through multi-objective differential evolution (MODE). The CNN with optimized parameters is used for COVID-19 patient classification. The proposed model outperforms the other competitive models by 1.927%. Li et al. <ns0:ref type='bibr' target='#b119'>[117]</ns0:ref> developed a COVNet model for extracting 2D and 3D global features from chest CT for the classification of COVID-19 patients. The extracted features are utilized to differentiate COVID-19 infected, non-pneumonia, and community-acquired pneumonia (CAP) cases. The classification accuracy obtained from COVNet was 96.00%. Mei et al. <ns0:ref type='bibr' target='#b120'>[118]</ns0:ref> utilized machine learning techniques for the analysis of chest CT scans of COVID-19 patients. The clinical symptoms and laboratory testing were integrated with CT scans for analysis.</ns0:p><ns0:p>Hasan et al. <ns0:ref type='bibr' target='#b121'>[119]</ns0:ref> combined a deep learning technique and Q-deformed entropy for the classification of COVID-19 using CT scan images. The features were extracted from CT scan images using Q-deformed entropy and a CNN. The extracted features were applied to an LSTM for distinguishing COVID-19 from non-COVID-19 cases. The proposed approach attained a classification accuracy of 99.68%. Wu et al. <ns0:ref type='bibr' target='#b122'>[120]</ns0:ref> developed a multi-view fusion model for the identification of coronavirus infection in CT scan images. This model was evaluated on the CT images of 495 patients. This model takes approximately ten minutes for the analysis of CT images of infected patients. They used the Youden index for the fusion process. For the testing phase, the sensitivity and specificity obtained from the multi-view model are 81.1% and 61.5%, respectively. Ko et al. <ns0:ref type='bibr' target='#b123'>[121]</ns0:ref> developed a FCONet model for COVID-19 classification using chest CT. FCONet utilized VGG16, ResNet50, InceptionV3, and Xception. The performance of FCONet was validated on 3993 chest CT images. The accuracy obtained from ResNet50 was 96.97%. Researchers also use other deep learning architectures for the analysis of CT scan images <ns0:ref type='bibr' target='#b124'>[122,</ns0:ref><ns0:ref type='bibr' target='#b125'>123]</ns0:ref>.</ns0:p><ns0:p>The comparative analysis of deep learning techniques on radiological images such as chest X-ray and CT scan images is illustrated in Supplement Tables <ns0:ref type='table'>12 and 13</ns0:ref>, respectively, while the usage of deep learning models is summarized in Fig. <ns0:ref type='figure'>6</ns0:ref>.</ns0:p></ns0:div>
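To ground the transfer-learning pattern that recurs in the CT studies above, the following is a minimal sketch that freezes a pretrained backbone and trains a small binary classification head on top; the backbone choice, layer sizes, and two-class setup are illustrative assumptions rather than any single cited paper's configuration.

# Hedged sketch: frozen pretrained backbone + small trainable classification head.
import tensorflow as tf

base = tf.keras.applications.ResNet50(weights='imagenet', include_top=False,
                                      pooling='avg', input_shape=(224, 224, 3))
base.trainable = False                         # keep pretrained weights fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation='sigmoid'),  # COVID-19 vs non-COVID
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model.fit(ct_images, labels, epochs=5)  # with a real, labeled CT dataset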
<ns0:div><ns0:head>b. Non-invasive techniques</ns0:head><ns0:p>Several techniques do not need any specialized radiological equipment for the diagnosis and treatment of COVID-19. Cho et al. <ns0:ref type='bibr' target='#b126'>[124]</ns0:ref> used a GRU neural network to determine the respiratory patterns of patients. The developed model is trained on footage obtained from Kinect depth cameras <ns0:ref type='bibr' target='#b127'>[125]</ns0:ref> and recognizes COVID-19 patients.</ns0:p><ns0:p>Cascella et al. <ns0:ref type='bibr' target='#b128'>[126]</ns0:ref> suggested that COVID-19 patients have respiratory patterns that are different from those of the common cold and flu. However, abnormal respiratory patterns have no direct correlation with the diagnosis and treatment of COVID-19. Wearable devices and mobile applications can be utilized in the diagnosis and treatment of COVID-19. These devices and apps may utilize body temperature, heart rate, cough samples, and breath rate.</ns0:p></ns0:div>
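The GRU classifier of Cho et al. can be pictured with a minimal sketch like the one below, which classifies fixed-length respiratory waveforms (for instance, chest-depth signals from a depth camera); the sequence length, layer sizes, and two-class labels are illustrative assumptions.

# Hedged sketch: GRU over 1-D respiratory waveforms (e.g., chest-depth signals).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(300, 1)),           # 300 time steps, 1 channel
    tf.keras.layers.GRU(64),
    tf.keras.layers.Dense(2, activation='softmax'),  # normal vs abnormal breathing
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(waveforms, labels, epochs=10)  # with real labeled breathing data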
<ns0:div><ns0:head n='8.'>Drug Design</ns0:head><ns0:p>AI has the potential to discover, design, and repurpose existing drugs to combat COVID-19, as shown in Fig. 7. During this pandemic situation, several research laboratories are trying to develop vaccines/drugs against COVID-19. They are using AI techniques to discover new vaccines or repurpose existing drugs [127].</ns0:p></ns0:div><ns0:div><ns0:head>a. Envisaging virus-host interactome</ns0:head><ns0:p>The prediction of protein structure is necessary to develop new drugs. Senior et al. <ns0:ref type='bibr' target='#b131'>[128]</ns0:ref> developed the AlphaFold model to envisage the structures of proteins associated with SARS-CoV-2. ResNet is used to extract the features from amino acid sequences <ns0:ref type='bibr' target='#b132'>[129]</ns0:ref>. Heo and Feig <ns0:ref type='bibr' target='#b133'>[130]</ns0:ref> implemented a dilated ResNet to envisage the protein structure of SARS-CoV-2. They refined AlphaFold's predicted structures using molecular dynamics. Ge et al. <ns0:ref type='bibr' target='#b134'>[131]</ns0:ref> suggested an approach to construct a knowledge graph involving human proteins, viral proteins, and drugs. The knowledge graph is used to envisage possible candidate drugs. Nguyen et al. <ns0:ref type='bibr' target='#b135'>[132]</ns0:ref> developed a SARS-based model using mathematical deep learning to determine possible inhibitors. 84 SARS coronavirus inhibitors were envisaged from the ChEMBL and PDBbind databases. Zhou et al. <ns0:ref type='bibr' target='#b136'>[133]</ns0:ref> developed a network-based model to repurpose drugs for SARS-CoV-2. Hu et al. <ns0:ref type='bibr' target='#b137'>[134]</ns0:ref> implemented a neural network to predict the binding affinities of SARS-CoV-2 proteins. They found ten possible drugs with their binding affinity scores among 4895 drugs.</ns0:p><ns0:p>Exscientia designed an AI-based drug molecule for coronavirus, as reported in the news <ns0:ref type='bibr' target='#b138'>[135]</ns0:ref>. It is the first company to have designed a drug molecule for coronavirus. Traditional drug discovery research took 4-5 years for developing new drugs; however, it will now take one year to develop the molecular structure. The Insilico Medicine company used generative adversarial networks to determine the molecule structure. This structure is used to discover drugs. Insilico screened 100 molecules for synthesis and testing. AI can be used to develop antibodies and vaccines for COVID-19 <ns0:ref type='bibr' target='#b139'>[136]</ns0:ref>. It can be done in two ways, namely from scratch and by drug repurposing. Google's DeepMind developed the AlphaFold algorithm to envisage the protein structure of the virus, which can help develop new vaccines against coronavirus <ns0:ref type='bibr' target='#b140'>[137]</ns0:ref>. However, experimentation has to be performed for validation of the designed protein.</ns0:p></ns0:div>
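The binding-affinity prediction idea above can be illustrated with a small regression sketch: a fully connected network mapping fixed-length molecular fingerprints to an affinity score. The fingerprint length, network sizes, and random stand-in data are illustrative assumptions, not the configuration of any cited model.

# Hedged sketch: feed-forward regression from molecular fingerprints to affinity.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 1024)).astype(float)  # stand-in 1024-bit fingerprints
y = rng.normal(6.0, 1.0, size=200)                      # stand-in affinity scores (e.g., pKd)

model = MLPRegressor(hidden_layer_sizes=(256, 64), max_iter=300, random_state=0)
model.fit(X, y)
print('predicted affinity for first compound:', round(model.predict(X[:1])[0], 2))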
<ns0:div><ns0:head>b. Envisaging interaction among coronavirus and drugs</ns0:head><ns0:p>AI can be used to screen existing drug molecules and find their suitability against the coronavirus. South Korea and the USA used an AI-based algorithm to identify the 'Atazanavir' drug for repurposing to the treatment of COVID-19 <ns0:ref type='bibr' target='#b141'>[138]</ns0:ref>. Researchers at BenevolentAI identified the 'Baricitinib' and 'Myelofibrosis' drugs for the treatment of COVID-19 <ns0:ref type='bibr' target='#b142'>[139]</ns0:ref>. The Singaporean firm Gero used a deep learning technique to recognize 'Afatinib' for the treatment of COVID-19. Zhang et al. <ns0:ref type='bibr' target='#b144'>[140]</ns0:ref> used a fully connected ANN to envisage binding affinities from the PDBbind database. They explored existing molecules for the treatment of SARS-CoV-2 [141]. Beck et al. [139] developed the Molecule Transformer-Drug Target Interaction (MT-DTI) model to determine antiviral drugs that may be effective against coronavirus. The BERT algorithm was used to compute the binding affinities of existing drugs. Hofmarcher et al. <ns0:ref type='bibr' target='#b147'>[142]</ns0:ref> used a Long Short-Term Memory (LSTM) model on SMILES data to screen 900 million compounds from the ZINC dataset. 30,000 possible compounds were selected for the treatment of SARS-CoV-2. These treatments may be available in the near future. The main reason is that medical trials, checks, and controls are needed before the approval of these drugs. After the identification and screening of drugs, a vaccine formulation may take a minimum of 18 months <ns0:ref type='bibr' target='#b148'>[143]</ns0:ref>.</ns0:p></ns0:div>
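The SMILES-based screening of Hofmarcher et al. can be pictured with a minimal character-level LSTM sketch like the one below, which scores compounds encoded as SMILES strings; the character vocabulary, sequence length, and network sizes are illustrative assumptions.

# Hedged sketch: character-level LSTM scoring compounds from SMILES strings.
import numpy as np
import tensorflow as tf

vocab = sorted(set('CcNnOoSsFl()=#123456[]@+-H'))        # toy SMILES alphabet
char_to_id = {ch: i + 1 for i, ch in enumerate(vocab)}   # 0 is reserved for padding

def encode(smiles, max_len=60):
    ids = [char_to_id.get(ch, 0) for ch in smiles[:max_len]]
    return ids + [0] * (max_len - len(ids))

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(vocab) + 1, 32, mask_zero=True),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation='sigmoid'),      # predicted activity score
])
model.compile(optimizer='adam', loss='binary_crossentropy')
print(model.predict(np.array([encode('CC(=O)Oc1ccccc1C(=O)O')])).shape)  # one toy compound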
<ns0:div><ns0:head n='9.'>Social Implications</ns0:head><ns0:p>Nowadays, different myths regarding incorrect infection data, unsuitable drugs, and misclassified infected zones have been propagated on social media. These rumors and hate speech can greatly affect the social life of human beings. The WHO classifies these things as an infodemic, i.e., a huge amount of data is available with a mixture of accurate and inaccurate content. Due to this, persons are unable to find verified sources to check whether the given information is correct or not in this pandemic situation.</ns0:p></ns0:div>
<ns0:div><ns0:head>a. Analysis of Social Media</ns0:head><ns0:p>In this epidemic situation, social media should assess the quality and accuracy of the information posted. Nowadays, Facebook and Google are working against the misinformation, phishing funding websites, and viruses floating on their platforms. When a user searches for COVID on YouTube, the site links the user either to a government organization or to the WHO for retrieving correct information. Videos posted on YouTube are screened and dropped immediately from the site after false information is confirmed. <ns0:ref type='bibr'>Eichstaedt et al. [144]</ns0:ref> analyzed the tweets posted on Twitter by users during this epidemic situation of COVID-19. An AI-based text analyzer utilizes the number of cases and deaths in a particular region to investigate the mental health of users. MIT developed a neural network-based model to determine the effectiveness of quarantine measures and the spread of virus infection <ns0:ref type='bibr' target='#b150'>[145]</ns0:ref>. Khataee et al. <ns0:ref type='bibr' target='#b151'>[146]</ns0:ref> reported the effects of social distancing during the COVID epidemic. They can assess the local elements of the pandemic. They reported that the US had not taken precautionary measures to stop infection caused by the coronavirus. Rosenberg et al. <ns0:ref type='bibr' target='#b7'>[7]</ns0:ref> analyzed the tweets related to coronavirus-related information. They explored the various myths about the virus. Gallotti et al. <ns0:ref type='bibr' target='#b152'>[147]</ns0:ref> explored the social media tweets posted on Twitter. They developed an Infodemic Risk Index (IRI) to distinguish verified human beings, unverified human beings, verified bots, and unverified bots. The IRI utilizes the number of users, the messages posted by users, and their reliability. They highlighted the impact of infodemics, social outcomes, and controlled pandemics. Cinelli et al. <ns0:ref type='bibr' target='#b153'>[148]</ns0:ref> studied the contents of social media. They assessed the development of the discourse on Twitter, YouTube, Instagram, and other social media. They analyzed the comments, likes, and actions upon comments for 45 days. Mejova et al. <ns0:ref type='bibr' target='#b154'>[149]</ns0:ref> studied the advertisements related to coronavirus posted on Facebook. The Facebook Ad Library was used to examine all the advertisements that have the phrases 'coronavirus' and 'covid' across the world.</ns0:p><ns0:p>They established that 5% of advertisements contain misclassified and misleading information. Zarocostas <ns0:ref type='bibr' target='#b155'>[150]</ns0:ref> studied the information shared and posted on different social media regarding COVID-19. AI tools can be used to track the spreading of rumors regarding the coronavirus. Pandey et al. <ns0:ref type='bibr' target='#b156'>[151]</ns0:ref> reported the breach in delivering genuine information to users in India. They retrieved the information and found its verified sources using artificial intelligence and NLP. They tested their approach on sanitation and hygiene information for this epidemic situation. AI-based chatbots can be used to propagate COVID-19 information and filter out misinformation. The WHO developed a multilingual chatbot to explore the information posted on social media and news channels <ns0:ref type='bibr' target='#b157'>[152]</ns0:ref>. This virtual assistant can be used to verify information before further processing.</ns0:p></ns0:div>
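As a rough illustration of an infodemic risk score in the spirit of the IRI above, the sketch below computes the fraction of total message exposure that comes from unreliable sources; the post records, follower counts, and reliability labels are illustrative assumptions, not Gallotti et al.'s exact formulation.

# Hedged sketch: share of message exposure attributable to unreliable sources.
def infodemic_risk(posts):
    """posts: list of (follower_count, source_is_reliable) tuples, one per message."""
    total_exposure = sum(followers for followers, _ in posts)
    risky_exposure = sum(followers for followers, reliable in posts if not reliable)
    return risky_exposure / total_exposure if total_exposure else 0.0

posts = [(1200, True), (300, False), (5000, True), (900, False)]
print('IRI =', round(infodemic_risk(posts), 3))   # fraction of exposure at risk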
<ns0:div><ns0:head>b. Analysis of Hate Speech</ns0:head><ns0:p>Hate speech has been a major concern in the last few months. Verbal and non-verbal abusive statements may give rise to physical violence against the corona warriors. Such instances were reported in the lockdown situation of various countries. Velasquez et al. <ns0:ref type='bibr' target='#b158'>[153]</ns0:ref> analyzed the hate speeches regarding COVID-19 posted on different social media and their movement from one medium to another. They also analyzed the methods of transmission and found that hate speeches spread rapidly in the epidemic situation of COVID. Schild et al. <ns0:ref type='bibr' target='#b159'>[154]</ns0:ref> reported the Sinophobic behavior of tweets posted on Twitter and other media. They trained machine learning models on the information obtained from COVID-19 content. The web can be used for spreading misinformation and hate speech regarding COVID-19 information. AI-based tools play a vital role in fighting against hate speech <ns0:ref type='bibr' target='#b160'>[155]</ns0:ref>. Supplement Table <ns0:ref type='table'>14</ns0:ref> depicts the impact of rumors and hate speech on social life.</ns0:p></ns0:div>
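A minimal sketch of the text-classification setup such studies rely on is shown below: TF-IDF features with a linear classifier trained on labeled posts; the tiny inline corpus and its labels are illustrative stand-ins for real moderation datasets.

# Hedged sketch: TF-IDF features + logistic regression for hateful-post detection.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ['stay safe and wash your hands',
         'hateful slur directed at group x',
         'vaccine trials are progressing',
         'violent threat toward health workers']
labels = [0, 1, 0, 1]                      # 0 = benign, 1 = hateful (toy labels)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(['another threat toward workers']))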
<ns0:div><ns0:head n='10.'>Future Research Directions</ns0:head><ns0:p>The possible research directions for the application of AI to COVID-19 are described below:</ns0:p></ns0:div>
<ns0:div><ns0:head>a. Interpretable prediction</ns0:head><ns0:p>The results obtained from an AI-based prediction model should be interpretable and easy to use <ns0:ref type='bibr' target='#b161'>[156,</ns0:ref><ns0:ref type='bibr' target='#b162'>157]</ns0:ref>. Therefore, one may soon utilize information extraction or image captioning kinds of techniques to provide more interpretable prediction results.</ns0:p></ns0:div>
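One simple way to make an image classifier's prediction more interpretable, used here purely as an illustrative example rather than a cited system, is occlusion sensitivity: slide a gray patch over the image and record how the predicted probability drops. The sketch below assumes a generic Keras-style binary classifier with a `predict` method; all names and sizes are assumptions.

# Hedged sketch: occlusion-sensitivity map for a generic image classifier.
import numpy as np

def occlusion_map(model, image, patch=32, stride=32):
    """image: (H, W, C) array; returns a coarse map of probability drops."""
    base = float(model.predict(image[None])[0, 0])       # probability of class 1
    h, w = image.shape[:2]
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            masked = image.copy()
            masked[y:y + patch, x:x + patch] = 0.5       # gray occluding patch
            heat[i, j] = base - float(model.predict(masked[None])[0, 0])
    return heat                                          # large values = important regions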
<ns0:div><ns0:head>b. Mobile-based AI Tools</ns0:head><ns0:p>The AI-based models can be deployed on lightweight devices such as mobiles. The objective should not be limited to diagnostic tools for hospitals and clinics. One can work on AI-based COVID-19 prediction and diagnostic tools on lightweight devices [158].</ns0:p></ns0:div>
<ns0:div><ns0:head>c. Drug Discovery</ns0:head><ns0:p>Until now, many researchers have worked on developing drugs for COVID-19 infected patients. However, no effective drug is available for COVID-19 treatment. Therefore, one may utilize existing medicines to build an efficient drug for COVID-19 infected patients <ns0:ref type='bibr' target='#b165'>[159]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>d. AI-based Drones to combat COVID-19</ns0:head><ns0:p>As we know, the main source of infection transmission is human contact. Due to this, AI-based robots are used to disinfect patients' rooms and interact with patients. For disinfection, robots transmit ultraviolet light over the infected space to remove the virus. Robots display the face image and voice of a doctor on their screen during interaction with patients. Hence, the medical staff is safer due to less contact with patients. Drones are widely used for transferring patient samples and medical apparatus. They are also used to disinfect unreachable infected areas. During lockdowns, drones were used to track persons who came out from home. They provide faster delivery and less risk of infection. Drones supported by AI and computer vision techniques can play an essential role in the aerial monitoring of disease spread and for logistics and medical supply delivery <ns0:ref type='bibr' target='#b166'>[160]</ns0:ref>, as well as for social distance checking <ns0:ref type='bibr' target='#b167'>[161]</ns0:ref>. They were also used to perform aerial spraying and disinfection of residential areas <ns0:ref type='bibr' target='#b168'>[162]</ns0:ref>. However, there are data security and privacy concerns that need to be resolved for the successful application of AI-supported drone technology in public spaces on a large scale.</ns0:p></ns0:div>
<ns0:div><ns0:head>e. 3D Printing techniques for developing COVID-19 prevention and fighting tools</ns0:head><ns0:p>The 3D printing techniques may be used to design face masks and face shields for protection against COVID-19 <ns0:ref type='bibr' target='#b169'>[163]</ns0:ref>. Combined with face-scanning technology and computer-aided design (CAD) tools, these can be made to fit individual face and head measurements. The 3D techniques can also be used to build other COVID-19 fighting tools such as main components of respiratory support equipment <ns0:ref type='bibr' target='#b170'>[164]</ns0:ref>. These components can be designed using low-cost consumer filament extrusion printers. The 3D printing technology can quickly address the deficiencies of medical materials and spare parts of medical equipment; however, the processing time, high cost, and lack of manpower can be potential barriers to applying 3D printing on a larger scale <ns0:ref type='bibr' target='#b171'>[165]</ns0:ref>. AI techniques can play a role in optimizing the 3D design process and reducing the cost of printing <ns0:ref type='bibr' target='#b172'>[166,</ns0:ref><ns0:ref type='bibr' target='#b173'>167,</ns0:ref><ns0:ref type='bibr' target='#b174'>168]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='11.'>Findings, Lessons and Recommendations</ns0:head><ns0:p>However, numerous challenges and research limitations have been indicated in the academic literature and need to be addressed in the future. Some of these challenges are related to nature and behaviour of COVID-19 because understanding how the virus spreads and how people can be infected caused by the complexity of this epidemic disease is extremely difficult. The lack of largescale datasets in the academic literature for COVID-19 is considered a challenging task for AI researchers because it hinders the understanding of viral patterns and features. In order to make AI and big data platforms and applications a trustful solution to fight the COVID-19 virus, a critical challenge is the collection of large-scale datasets and making them open for research.</ns0:p><ns0:p>AI and big data-based algorithms should be optimized further to enhance the accuracy and reliability of the data analytics for better COVID-19 diagnosis and treatment. AI is able to provide viable solutions for fighting the COVID-19 pandemic in several ways. For example, AI has proved very useful for supporting outbreak prediction, coronavirus detection as well as infodemiology and infoveillance by leveraging learning-based techniques such as ML and DL from COVID-19centric modeling, classification, and estimation. Moreover, AI has emerged as an attractive tool for facilitating vaccine and drug manufacturing. By using the datasets provided by healthcare organizations, governments, clinical labs and patients, AI leverages intelligent analytic tools for developing effective and safe vaccine/drug against COVID-19, which would be beneficial from both the economic and scientific perspectives. Moving forward, it is imperative for AI-designers and researchers to work together with medical professionals to create and develop these systems that are applicable to real-world datasets</ns0:p></ns0:div>
<ns0:div><ns0:head n='12.'>Conclusions</ns0:head><ns0:p>In this paper, the impact of artificial intelligence (AI) techniques is assessed on early cautionary and vigilant systems that focus on COVID-19 warning and diagnostics. The COVID-19 datasets and visualization techniques are discussed with their applications. The AI-based diagnosis and treatment of COVID patients are assessed. The impact of AI on drug discovery and design is evaluated. The effect of COVID-19 is also evaluated on social and economic aspects. From the literature review, we can conclude that AI techniques were widely used for novel drug discovery even before the outbreak of COVID-19. AI techniques can help to search for the optimal drug against COVID-19. AI can be used to build a biomedical knowledge structure that connects drugs and viruses to repurpose existing drugs that are used to treat other diseases. The interaction between existing drugs and the coronavirus can be modelled by AI techniques. Other promising applications of the AI methods are contact tracing, drone-based surveillance, and smart face mask design.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Global Research Database provides scientific information on COVID-19. COVID-19 Open Research Dataset is an open dataset. The Kaggle challenges provide the COVID-19 data for analysis. LitCOVID and AI COVID-19 dataset consist of clinical trial data.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:52767:1:2:NEW 27 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>developed a DarkCovidNet model for the detection of infected patients using chest X-ray images. DarkCovidNet model has seventeen convolutional layers and different filtering for each layer. The classification accuracies obtained from DarkCovidNet were 98.08% PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:52767:1:2:NEW 27 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>hybridized super resolution generative adversarial network (SRGAN) model and VGG16 to detect infected patients by chest CT. SRGAN was used to enhance the resolution of CT images. VGG16 was used to differentiate the infected and healthy region of CT. The developed model is validated over 275 COVID-19 and 195 normal CT images. The classification accuracy obtained from the model was 97.87%. Xu et al. [115] used different CNN models to detect infected patients. CT images are processed to extract the interesting regions. 3D CNN model is used to segment the CT images. Thereafter, the segmented images are further classified into three different classes. The overall accuracy of deep learning models was 86.70%. Singh et al. [116] developed an automatic chest CT analysis system to classify the infected persons whether these are positive or not. The CNN hyper-parameters are tuned through multi-objective differential evolution (MODE). CNN with optimized parameters is used for COVID-19 patient classification. The proposed model outperforms the other competitive models by 1.927%. Li et al. PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:52767:1:2:NEW 27 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Drug Design PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:52767:1:2:NEW 27 Apr 2021) Manuscript to be reviewed Computer Science AI has the potential to discover, design, and repurpose the existing drugs to combat the COVID-19 as shown in Fig. 7. During this pandemic situation, several research laboratories are trying to develop vaccines/drugs against COVID-19. They are using AI techniques to discover new vaccines or repurposing existing drugs [127].</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>used fully connected ANN to envisage binding affinities from the PDBbind database. They explored the existing molecules for the treatment of SARS-CoV-2 [141]. Beck et al. [139] developed Molecule Transformer-Drug Target Interaction (MT-DTI) model to determine PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:52767:1:2:NEW 27 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>can work on AI-based COVID-19 prediction and diagnostic tools on lightweight devices [158]. PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:52767:1:2:NEW 27 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 1 Number</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='34,42.52,178.87,525.00,303.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='35,42.52,178.87,525.00,270.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='36,42.52,178.87,525.00,125.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='37,42.52,178.87,525.00,245.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='38,42.52,178.87,525.00,245.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='39,42.52,178.87,525.00,249.75' type='bitmap' /></ns0:figure>
<ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:52767:1:2:NEW 27 Apr 2021) Manuscript to be reviewed Computer Science</ns0:note>
</ns0:body>
" | "Dear Editor,
First, we would like to express our gratitude to you, the editorial team, and the reviewers whose valuable comments on this paper have significantly improved its quality. We have addressed all the comments in the new version of the paper and will list all the changes item-by-item in response to the mentioned comments below. A number of minor alterations have been made to improve expression.
Editor:
We would like to first thank you for your valuable comments and appreciate the time that you spent for reviewing our work. We also admire your vigilance in finding the oversights. We have addressed all your comments as follows and hope you find them satisfactory:
Suggested revisions:
1. More detail is needed on the description of the methods and which ones lead to better performance.
Response- As suggested, a detailed description of the methods has been added in the revised manuscript. Tables 12 and 13 have been incorporated to show the performance of each method. Figs. 6 and 7 have also been added in the revised manuscript.
2. A more critical analysis of findings and results is missing.
Response- To address your comment, a new section entitled “Findings, Lessons and Recommendations” has been added to present a critical analysis of our findings.
3. The paper needs to be proofread to correct typographical errors and improve grammar.
Response- We have carefully checked and proofread the article, and corrected any spelling and grammar errors found.
4. The paper needs to be distinguished from other similar reviews.
Response- As suggested, a new section entitled “Related surveys” has been added in the revised manuscript to discuss similar reviews on the application of artificial intelligence for solving COVID-19 related challenges.
5. The paper should more carefully address the paper selection criteria and whether non-peer-reviewed papers have been considered.
Response- To address your comment, a section entitled “Research methodology” has been incorporated to explain the paper selection criteria. Table 4 has been added in the revised manuscript.
Reviewer #1:
We would like to first thank you for your valuable comments and appreciate the time that you spent for reviewing our work. We also admire your vigilance in finding the oversights. We have addressed all your comments as follows and hope you find them satisfactory:
Suggested revisions:
1. The description of the methodology of survey should be improved to allow for replication. Specifically, indicate the query string. Were any papers found excluded from further analysis and why? Why you analyze only Preprints rather than complete publications registered on scientific bibliography databases such as Web of Science or Scopus?
Response- As suggested, a section entitled “Research methodology” has been incorporated to discuss the paper selection and elimination criteria. The published papers have been considered in the revised manuscript.
2. Present a more extensive bibliographic analysis of literature sources on COVID-19, including the most popular venues of publications and leading research teams around the world. You can use, for example, the guidelines presented in Conducting systematic literature reviews and bibliometric analyses, Australian Journal of Management 45(2):175-194, DOI: 10.1177/0312896219877678 on how to visualize the results of literature reviews.
Response- As suggested, the necessary corrections have been made in the revised manuscript.
3. I suggest to present a summary for each type of system or application discussed (such as including information on the number of instances and attributes for biomedical datasets; or what are the most popular deep learning architectures used). This would provide a deeper insight on the analyzed research domain.
Response- To address your comment, Tables 12 and 13 have been added to show the performance comparison in terms of techniques and datasets. Besides this, Table 9 has also been added in the revised manuscript to show the performance comparison of existing techniques.
4. Add a critical discussion section and discuss the current limitations of such systems as well as the successes.
Response- To address your comment, a new section entitled “Findings, Lessons and Recommendations” has been added to present a critical analysis of our findings.
5. Present a more extensive and in-depth conclusions of where the COVID-19 related research is heading.
Response- As suggested, a detailed description of the techniques involved in COVID-19 research has been presented.
Reviewer #2:
We would like to first thank you for your valuable comments and appreciate the time that you spent for reviewing our work. We also admire your vigilance in finding the oversights. We have addressed all your comments as follows and hope you find them satisfactory:
Suggested revisions:
1. This paper surveys contributions of AI to COVID-19. It groups the surveyed articles into five categories: AI systems that predict the COVID-19 disease, diagnose the disease, recommend treatment, design new drugs, and predict social and economic impact. Therefore, the title of the paper is somewhat misleading as it spans more than prediction and diagnosis. It is therefore recommended to change the title.
Response- As suggested, title of manuscript has been changed. The new title is “Overview of current state of research on the application of artificial intelligence techniques for COVID-19”.
2. The survey is definitely an interesting idea and could be of interest to many readers. In the data gathering part, the grouping of different data gathering and prediction systems is interesting. It is recommended to refer to “surveillance” systems and to epidemiology for those forecasting the spread of the disease.
Response- To address your comment, a new Section entitled “An early cautionary and vigilant system” has been added in the revised manuscript. Table 6 has been added to show the performance comparison of different surveillance mobile apps.
3. Regarding the form, there are many spelling and grammatical errors in the paper. It is recommended to have the paper proofread by a native speaker.
Response- As suggested, the paper has been proofread several times before this round of submission. We hope that you find the English satisfactory this time.
4. The methodology of querying only pre-print papers may not be sufficient to cope with the extent of the literature in this domain, so that it is recommended to broaden the base of papers studied. However, the information collected from the body of literature is already interesting so that the extension to a broader literature could wait until a next paper. This should be discussed in future directions.
Response- A section entitled “Research methodology” has been incorporated to discuss the paper selection and elimination criteria. The published papers have been considered in the revised manuscript. A new section, “Future research directions”, has also been added in the revised manuscript.
5. One general impression reading about the different AI systems presented is that we do not have much data regarding their effectiveness. They definitely aim at addressing diverse challenges related to COVID-19, but do not have measurements showing that they do so. This is expected due to the novelty of the disease, but it should be better explained in the paper. You are proposing ideas for addressing the challenge, but these ideas need to be tested to be shown effective.
Response- To address your comment, Tables 12 and 13 have been added to show the performance comparison in terms of techniques and datasets.
6. It is recommended to better highlight the systems that being used or are going to be used in medicine or in the field, by contrast to the systems only used by computer systems on existing datasets. The reader would like to know, among these systems, which ones actually advance the diagnosis, treatment, prediction etc. of COVID-19 on real persons, not only on existing datasets.
Response- Section “Diagnosis techniques for COVID-19” has shown the tools and techniques used in real-life scenarios. Tables 6 and 10 have been added to show the tools and techniques used for real-time prediction and diagnosis of COVID-19.
7. The paper is interesting to read and covers a broad range of AI applications in the fight against COVID-19. The language should be improved before publication
Response- The necessary corrections have been made in the revised manuscript.
Reviewer #3:
We would like to first thank you for your valuable comments and appreciate the time that you spent for reviewing our work. We also admire your vigilance in finding the oversights. We have addressed all your comments as follows and hope you find them satisfactory:
Suggested revisions:
1. The review is riddled with typographical and grammatical errors that are too numerous to detail here. Furthermore, the writing is not professional enough to meet standards for an international audience. While I am sympathetic towards the barriers that non-native English speakers have to overcome, these language issues actually hampered my ability to follow the paper. I strongly advise having a native English speaker proofread the review..
Response- As suggested, the paper has been proofread several times before this round of submission. We hope that you find the English satisfactory this time.
2. Introduction and background are adequate but suffer from logical gaps that weaken the section. For instance, in line 58, outcome risk is not clearly defined. Instead a confusing sentence is presented, “If the treatment is not effective towards infected person/group then there is less possibility to recover or die.” I am assuming that there is a lower possibility of recovering rather than dying.
Response- As suggested, Introduction section has been rewritten. The necessary corrections have been made.
3. The field has been extensively reviewed (a quick PubMed search revealed several such papers: https://pubmed.ncbi.nlm.nih.gov/?term=artificial%20intelligence%20covid-19%20review). It is unclear what distinguishes this review from existing ones. This would be a good point to emphasize in the abstract and the introduction.
Response- As suggested, a new section entitled “Related surveys” has been added in the revised manuscript to discuss similar reviews on the application of artificial intelligence for solving COVID-19 related challenges.
4. The Introduction does not explicitly make it clear which audience the review is addressing. However, the implication is that this is a review meant to initiate AI researchers towards COVID-19 work.
Response- The Introduction section has been rewritten. The main focus of this manuscript is to motivate AI researchers to address COVID-19 challenges in a broader way.
5. There is a great lack of detail in the description of the methods and I highlight two major points here. First, it is unclear what search terms and their combinations were used to query the preprint servers. Were any synonyms such as “machine learning” used? Second, what were the inclusion/exclusion criteria for the manual selection of papers?
By using preprint servers as a primary resource to search for papers, I am concerned that the Survey Methodology is biased towards work that is not peer-reviewed or is undergoing peer review. This suggests a wide range of maturity of these articles and it is not clear whether this is accounted for when choosing and highlighting papers. The survey largely ignores the body of work published in peer-reviewed journals. At the very least, the survey would be more comprehensive if open access journal articles were included in this review. Alternatively, providing clear statistics on the fates of these articles and the process to eliminate works-in-progress can be considered.
Response- A section entitled “Research methodology” has been incorporated to discuss the paper selection and elimination criteria. The published papers have been considered in the revised manuscript.
6. While the review is coherently organized into sections and sub-sections, the content in each of these is inconsistent with the title. For example, Section 7 is titled “Treatment of COVID-19” but its sub-sections largely focus on radiology images, which has more to do with diagnosis. On a related note, there is a much larger issue pervasive throughout the paper that I discuss below in my general comments.
Response- The manuscript has been restructured. The necessary corrections have been made.
7. The review does a reasonable job in outlining future directions. However, each sub-section is sparse on details and does not come across as a well thought out roadmap for the field. I am also not convinced by the use of drones to combat COVID-19 as issues such as trust and safety will need to be addressed. A better approach would be to adopt a more speculative tone and condense this section to 2-3 broad sections that talk about ongoing work. This will help demonstrate feasibility.
Response- We have discussed the security and privacy concerns associated with the use of drone technology.
8. The most important concern that I have is that there is a recurring abuse of terminology when it comes to the COVID-19 aspects of the paper. For instance, prediction is a broader term that spans can be made at the patient level (the progression of the disease) or the population level (the spread of the disease). On the other hand, diagnosis specifically focuses on the patient and involves either assigning a COVID-positive or COVID-negative status or more deeper classification (e.g., severity). In this paper, these terms are used interchangeably and are confusing when trying to follow the organization of the review. Similarly, in line 175, it is debatable whether apps that collect responses to questionnaires are actually “predicting” COVID-19 status.
Response- We have revised the use of the term “prediction” and have replaced it in different statements with “analysis”, “assessment”, “identification”, “diagnosis” or “monitoring”, depending on the context of use.
9. The lack of mastery of terminology, particularly in molecular biology, comes through and adversely affects the quality of the review. For instance, in line 243, the RCSB Protein Data Bank contains protein structures (and not “gene structures” as mentioned here). Another example, is the following phrase in line 610: “cell-lines of existing medicines”. What does this mean?
Response- As suggested, the necessary corrections have been made in the revised manuscript.
10. Are Tables 1 and 2 directly extracted from existing papers or were they generated for this review? If it is the latter, what data were used to populate these tables and how were these constructed?
Response- Tables 1 and 2 are extracted from the existing papers. The source of information has been mentioned in the revised manuscript.
11. The paper is very inconsistent in when it chooses to highlight the performance of tools and when it merely reports the existence of a particular tool. What was the rationale for diving deeper for some papers?
Response- We have added additional columns in Table 8 reporting on deep learning/machine learning techniques on radiology datasets and COVID-19 datasets. Tables 12 and 13 have been added in the revised manuscript.
" | Here is a paper. Please give your review comments after reading it. |
120 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>In this era of advancements in deep learning, an autonomous system that recognizes handwritten characters and texts can be eventually integrated with the software to provide better user experiences. Like other languages, Bangla handwritten text extraction also has various applications such as post-office automation, signboard recognition, and many more. A large-scale and efficient isolated Bangla handwritten character classifier can be the first building block to create such a system. This study aims to classify the handwritten Bangla characters. The proposed methods of this study are divided into three phases. In the first phase, seven convolutional neural networks i.e., CNN-based architectures are created. After that, the best performing CNN model is identified, and it is used as a feature extractor. Classifiers are then obtained by using shallow machine learning algorithms. In the last phase, five ensemble methods have been used to achieve better performance in the classification task. To systematically assess the outcomes of this study, a comparative analysis of the performances has also been carried out. Among all the methods, the stacked generalization ensemble method has achieved better performance than the other implemented methods. It has obtained accuracy, precision, and recall of 98.68%, 98.69%, and 98.68% respectively on the Ekush dataset. Moreover, the use of CNN architectures and ensemble methods in large-scale Bangla handwritten character recognition has also been justified by obtaining consistent results on the BanglaLekha-Isolated dataset. Such efficient systems can move the handwritten recognition to the next level so that the handwritings can easily be automated.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1.'>Introduction</ns0:head><ns0:p>Bangla is one of the prestigious languages all over the world. About 230 million people from around the world speak Bangla as their native language <ns0:ref type='bibr' target='#b26'>(Khan, Al Helal & Ahmed, 2014)</ns0:ref>, and approximately 37 million people use it as a second language for both speaking and writing purposes. Thus, by the total number of speakers worldwide, Bangla is the fifth most spoken native language <ns0:ref type='bibr' target='#b3'>(Alom et al., 2018)</ns0:ref> and the seventh most spoken language as well. As Bangla is such a renowned language, works related to Bangla language such as Bangla handwritten character recognition is getting more attention among machine learning practitioners. In this age of artificial intelligence and automation, the prominent applications of Bangla handwritten recognition cannot be overlooked. It can play a significant role in many aspects such as post-office automation, national ID number recognition, parking lot management system, and online banking <ns0:ref type='bibr' target='#b3'>(Alom et al., 2018)</ns0:ref>. This recognition system can also play an essential part in signboard translation, digital character conversation, keyword spotting, scene image analysis, text-to-speech conversion <ns0:ref type='bibr' target='#b31'>(Manoharan, 2019)</ns0:ref>, meaning translation, and most notably in Bangla optical character recognition (OCR) <ns0:ref type='bibr' target='#b30'>(Manisha, Sreenivasa & K., 2016)</ns0:ref>. But it has been a great challenge to provide such a system for Bangla than most other languages. Bangla has a very complex and rich handwriting pattern as opposed to the simple handwriting pattern of other renowned languages. In Fig. <ns0:ref type='figure' target='#fig_3'>1</ns0:ref>, a complexity comparison of a Bangla handwritten character has been illustrated i.e., Fig. <ns0:ref type='figure' target='#fig_3'>1</ns0:ref>(b), a Bangla handwritten character, has been compared with an English handwritten character i.e., Fig. <ns0:ref type='figure' target='#fig_3'>1(a)</ns0:ref>, and also with an Arabic handwritten character i.e., Fig. <ns0:ref type='figure' target='#fig_3'>1(c</ns0:ref>). From the figure, it is apparent that Bangla characters have a more complex structure than Arabic and English characters. Bangla script consists of 11 vowels and 39 consonants. These 50 characters are the basic alphabets of the Bangla language. In addition to that, there are more than 170 characters <ns0:ref type='bibr' target='#b16'>(Das et al., 2014)</ns0:ref> conjunct-consonant characters that are formed by combining two or more than two basic characters, and these compound characters very close resemblance with each other. For having very complex shaped cursive characters, morphological complexity, the variety of writing styles, and scarcity of the complete Bangla handwritten dataset, recognizing Bangla handwritten characters has become more challenging for a system. To deal with these emerging challenges, researchers have utilized many different methods such as deep convolutional neural networks (DCNNs) <ns0:ref type='bibr' target='#b3'>(Alom et al., 2018)</ns0:ref>, convolutional neural network (CNN) with transfer learning <ns0:ref type='bibr' target='#b45'>(Reza, Amin & Hashem, 2020)</ns0:ref>, and ensemble learning (Rahaman Mamun, Al Nazi & Salah Uddin Yusuf, 2018). For the complex patterns that lie into the handwritten Bangla characters, their recognition is a very difficult task. 
Researchers have followed various techniques in Bangla handwriting recognition challenges. However, convolutional neural network (CNN)-based architectures <ns0:ref type='bibr' target='#b42'>(Rahman et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b4'>Azad Rabby et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b9'>Chatterjee et al., 2020)</ns0:ref>, among many others, have proved to be the de-facto standard in this domain of handwritten character recognition. In this study, different convolutional neural networks have been utilized for Bangla handwritten character recognition. On the other hand, ensemble learning is a special method where a model is built from different base classifiers to improve the quality of predictions and overall performance. Ensemble learning methods have been utilized widely in image recognition and classification to achieve better performance, and many studies have explored the possibilities of ensemble methods in image analysis <ns0:ref type='bibr' target='#b24'>(Ju, Bibaut & van der Laan, 2018;</ns0:ref><ns0:ref type='bibr' target='#b17'>Das et al., 2018)</ns0:ref>. In light of these works, various ensemble methods have also been used in this study for classifying Bangla handwritten characters. This study has developed well-performing machine learning models to classify Bangla handwritten characters efficiently. The key research contributions of this study are: 1. This study has developed seven convolutional neural network (CNN)-based models to classify isolated Bangla handwritten characters efficiently. 2. This is one of the first reported works that utilize various ensemble methods to recognize Bangla handwritten characters. 3. This study has classified more Bangla handwritten characters than other related state-of-the-art existing works, with better performance. In the following section of this article, an extensive literature review of the related works is presented. After that, the methods, materials, and experimental framework of this study are subsequently presented. Then, the results are described with appropriate discussion, and finally, concluding remarks are given.</ns0:p></ns0:div> <ns0:div><ns0:head n='2.'>Related Works</ns0:head><ns0:p><ns0:ref type='bibr' target='#b3'>(Alom et al., 2018)</ns0:ref> have introduced the popular deep convolutional neural networks to recognize Bangla handwritten characters. They have performed their experiments on the CMATERdb dataset <ns0:ref type='bibr' target='#b46'>(Sarkar et al., 2012)</ns0:ref>. The architectures used have included DenseNet <ns0:ref type='bibr' target='#b23'>(Huang et al., 2017)</ns0:ref>, All-Conv Net <ns0:ref type='bibr' target='#b54'>(Springenberg et al., 2015)</ns0:ref>, VGG Net <ns0:ref type='bibr' target='#b52'>(Simonyan & Zisserman, 2015)</ns0:ref>, FractalNet <ns0:ref type='bibr' target='#b28'>(Larsson, Maire & Shakhnarovich, 2019)</ns0:ref>, ResNet <ns0:ref type='bibr' target='#b21'>(He et al., 2016)</ns0:ref>, and NiN <ns0:ref type='bibr' target='#b29'>(Lin, Chen & Yan, 2014)</ns0:ref>. In another work <ns0:ref type='bibr' target='#b9'>(Chatterjee et al., 2020)</ns0:ref>, to resolve the issue of a high number of training iterations, the authors have used transfer learning with the ResNet50 network to ensure proper training of the model. To make the training faster, a one-cycle policy and varying image sizes have been applied as well. An accuracy of 94.3% at the character level has been achieved by <ns0:ref type='bibr' target='#b6'>(Bhattacharya et al., 2016)</ns0:ref>.
In <ns0:ref type='bibr' target='#b4'>(Azad Rabby et al., 2018)</ns0:ref>, two datasets have been used for experiments using a CNN-based architecture. An approach of Xception-based ensemble learning has been adopted by <ns0:ref type='bibr' target='#b41'>(Rahaman Mamun, Al Nazi & Salah Uddin Yusuf, 2018)</ns0:ref> to recognize handwritten Bangla digits. <ns0:ref type='bibr' target='#b42'>(Rahman et al., 2015)</ns0:ref> have developed a CNN-based model to classify the basic handwritten characters. <ns0:ref type='bibr' target='#b2'>(Alif, Ahmed & Hasan, 2018)</ns0:ref> have modified the ResNet18 architecture by adding a dropout layer for handwritten character classification. Further, in <ns0:ref type='bibr' target='#b12'>(Chowdhury et al., 2019)</ns0:ref>, CNN-based work has been proposed by the authors. <ns0:ref type='bibr'>(Shopon, Mohammed & Abedin, 2017)</ns0:ref> have proposed a CNN-based model to classify only Bangla handwritten digits. Also, a study of handwritten digit classification has been conducted by <ns0:ref type='bibr'>(Sharif et al., 2017)</ns0:ref>. Apart from CNNs, Bangla handwritten characters have been classified using a harmony search algorithm <ns0:ref type='bibr' target='#b47'>(Sarkhel, Saha & Das, 2015)</ns0:ref>. The modified quadratic discriminant function (MQDF) method has also been used for the recognition task <ns0:ref type='bibr' target='#b35'>(Pal, Wakabayashi & Kimura, 2007)</ns0:ref>. A deep network has been used in this domain-specific classification as well <ns0:ref type='bibr' target='#b48'>(Sazal et al., 2013)</ns0:ref>; this technique differs from conventional methods that preprocess characters to create handcrafted features such as loops and strokes. Various studies on the recognition of handwritten characters and texts from other languages have also been conducted, such as recognizing Baybayin scripts using SVM <ns0:ref type='bibr' target='#b53'>(Sitaula, Basnet & Aryal, 2021)</ns0:ref>, analyzing handwritten Hebrew documents <ns0:ref type='bibr' target='#b7'>(Biller et al., 2016)</ns0:ref>, recognizing English handwriting <ns0:ref type='bibr' target='#b38'>(Pham et al., 2020)</ns0:ref>, and many more. For English handwritten digit and character recognition tasks, CNN-based architectures have yielded better performance than other techniques; for example, <ns0:ref type='bibr' target='#b5'>(Baldominos, Saez & Isasi, 2019;</ns0:ref><ns0:ref type='bibr' target='#b44'>Ranzato et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b15'>Cireşan et al., 2011)</ns0:ref> have applied different variants of CNN on the MNIST dataset and obtained good results. Similarly, good results have been obtained on the EMNIST dataset using convolutional neural networks <ns0:ref type='bibr' target='#b36'>(Peng & Yin, 2017;</ns0:ref><ns0:ref type='bibr' target='#b32'>Mor et al., 2019)</ns0:ref>. The related works described so far for handwritten character recognition are mostly CNN-based. However, one of the objectives of this study is to explore the usability of ensemble methods for the commenced recognition task. Although ensemble techniques have not been used as widely as CNNs for handwritten classification, they have been used extensively in other image classification tasks. <ns0:ref type='bibr' target='#b49'>(Shibly, Tisha & Ripon, 2021</ns0:ref>) have used the stacked generalization method for handwritten character recognition.
Before this, SVM-based stacked generalization had also been used for image classification <ns0:ref type='bibr' target='#b57'>(Tsai, 2005)</ns0:ref>. The stacked generalization approach has already demonstrated promising performance for disease detection in the medical field <ns0:ref type='bibr' target='#b43'>(Rajaraman et al., 2018)</ns0:ref>. Stacked generalization has also been used for multiclass motor imagery-based brain-computer interfaces <ns0:ref type='bibr' target='#b33'>(Nicolas-Alonso et al., 2015)</ns0:ref>. Other ensemble methods like bagging <ns0:ref type='bibr' target='#b22'>(Hothorn & Lausen, 2003)</ns0:ref>, boosting <ns0:ref type='bibr' target='#b18'>(Fidalgo et al., 2018)</ns0:ref>, and random forest <ns0:ref type='bibr' target='#b19'>(Gislason, Benediktsson & Sveinsson, 2006)</ns0:ref> have also been implemented in image classification. These ensemble methods have been utilized to improve the performance of classifiers. The extensive literature review suggests that convolutional neural network-based architectures are widely used in the recognition of Bangla handwritten characters. However, the number of characters recognized by most of these works is limited in comparison with the large number of characters in the Bangla language. This study is an attempt to recognize as many handwritten characters as possible with widely used CNN architectures. Moreover, the literature review also suggests that there is a knowledge gap about the applicability of ensemble methods in handwritten Bangla character recognition tasks. This study therefore also explores a few ensemble methods to improve recognition performance.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.'>Methods</ns0:head><ns0:p>In this section of the article, the methodologies of this study are presented with appropriate explanations, organized into the three phases of the applied methodology. A high-level overview of the workflow of this study is illustrated in Fig. <ns0:ref type='figure'>2</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1.'>Convnet Architectures</ns0:head><ns0:p>Six popular convnet architectures have been used to classify Bangla handwritten characters. These architectures are AlexNet <ns0:ref type='bibr' target='#b27'>(Krizhevsky, Sutskever & Hinton, 2017)</ns0:ref>, VGG16 <ns0:ref type='bibr' target='#b52'>(Simonyan & Zisserman, 2015)</ns0:ref>, VGG19 <ns0:ref type='bibr' target='#b52'>(Simonyan & Zisserman, 2015)</ns0:ref>, ResNets <ns0:ref type='bibr' target='#b21'>(He et al., 2016)</ns0:ref>, Xception <ns0:ref type='bibr' target='#b21'>(He et al., 2016)</ns0:ref>, and DenseNet <ns0:ref type='bibr' target='#b23'>(Huang et al., 2017)</ns0:ref>. Additionally, a VGG-like small convnet has been developed by us. Each architecture has its unique style, and each tries to solve the image classification problem in its own way. Convnets are very useful for extracting features from images; handcrafted features for image classification are not efficient enough to build robust classifiers. With the help of convolution, a convnet model identifies the various shapes and patterns that lie in an image. As the final step of convolution, a feature vector of fixed length is generated for every image, and those features are then used to classify the images. A convnet can also have classification layers on top of its feature extraction layers; having both feature extraction and classification integrated into a single architecture makes a convnet convenient to use. Convolutional neural network-based models have also proved to be better performers in image classification and recognition. For these reasons, different convnets have been applied for the recognition task. Most importantly, a wide range of convnets has been used to find the optimal model for the Bangla handwritten character recognition task.</ns0:p></ns0:div> <ns0:div><ns0:head n='3.1.1.'>AlexNet, VGG16, VGG19, Small CNN</ns0:head><ns0:p>AlexNet, VGG16, VGG19, and our developed small CNN are four convolutional neural network models that have similar sequential structures. These four convolutional neural networks have been used for the recognition task for their simple yet very powerful architectures. Among them, AlexNet is one of the very first convnet architectures to utilize graphics processing units (GPUs) for processing images. A comparative description of the four architectures is presented in Table <ns0:ref type='table'>1</ns0:ref>. AlexNet consists of eight layers. The architecture starts with a convolutional layer followed by a max-pooling layer, and this convolutional/max-pooling combination repeats two times. After that, there are three convolutional layers and a single max-pooling layer. The last three layers of AlexNet are dense layers, with the final one being the output layer. VGG16, VGG19, and the small CNN have an AlexNet-like structure. However, the VGG networks are deeper than AlexNet: VGG16 and VGG19 are sixteen- and nineteen-layer architectures, respectively. On the other hand, the small CNN architecture has eight layers. A minimal sketch of such a sequential convnet is given below.</ns0:p></ns0:div>
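<ns0:p>The following is a minimal Keras sketch of an AlexNet-like small sequential convnet; the layer widths are illustrative assumptions rather than the exact configuration of this study, while the 28×28 greyscale input and 122 output classes follow the Ekush setup described later.</ns0:p>

# A sketch of a small sequential convnet in Keras. Layer widths are
# illustrative assumptions; the loss and optimizer follow the paper's
# stated choices (categorical cross-entropy, RMSprop).
from tensorflow.keras import layers, models

def build_small_cnn(input_shape=(28, 28, 1), num_classes=122):
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation='relu', padding='same',
                      input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation='relu', padding='same'),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation='relu', padding='same'),
        layers.Flatten(),
        layers.Dense(256, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation='softmax'),  # output layer
    ])
    model.compile(optimizer='rmsprop', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model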
<ns0:div><ns0:head n='3.1.2.'>ResNets</ns0:head><ns0:p>Residual Networks, i.e., ResNets <ns0:ref type='bibr' target='#b21'>(He et al., 2016)</ns0:ref>, have also been used for the classification challenge. The convolutional and identity blocks are the two building blocks of a ResNet, and they are presented in Fig. <ns0:ref type='figure'>3</ns0:ref>. The ResNets are unique for their residual or skip connections. When the information propagates from one specific layer to another, it follows two paths. In a convolutional block, the information goes through a series of convolutional and batch normalization layers on one path and through a pointwise convolutional layer on the other; the weights learned from the two paths are added at the end of the block. A similar thing happens in the identity block. To solve a sophisticated classification task, a deeper network is needed, but deeper networks tend to fail in learning due to the degradation problem; sometimes the performance of a network even falls when new layers are added. The skip connections in ResNets are employed to prevent this degradation within the network. A sketch of an identity block follows this subsection.</ns0:p><ns0:p>A ResNet can be of 18, 34, 50, 101, or 152 layers. All the ResNets start with a 7×7 convolutional layer with 64 filters and strides of 2, followed by a batch normalization layer and a 3×3 max-pooling layer with strides of 2. After that, for the ResNet50 architecture, there is a convolutional block followed by two identity blocks. Then there is another convolutional block followed by three identity blocks. This pattern continues two more times, with five identity blocks and two identity blocks, respectively. After the last identity block, the outputs are reduced with average pooling, and finally, a SoftMax classification layer is added.</ns0:p></ns0:div>
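<ns0:p>A minimal sketch of a ResNet identity block using the Keras functional API is given below; the filter count and kernel sizes are illustrative assumptions, not the exact ResNet50 configuration.</ns0:p>

# A sketch of a ResNet identity block: the block input is carried
# unchanged along the shortcut and added to the convolutional path.
from tensorflow.keras import layers

def identity_block(x, filters=64):
    shortcut = x                                    # identity shortcut
    y = layers.Conv2D(filters, (3, 3), padding='same')(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation('relu')(y)
    y = layers.Conv2D(filters, (3, 3), padding='same')(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([shortcut, y])                 # merge the two paths
    return layers.Activation('relu')(y)

# In a convolutional block, the shortcut would instead pass through a
# pointwise (1x1) convolution so its shape matches the main path.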
<ns0:div><ns0:head n='3.1.3.'>Xception</ns0:head><ns0:p>There are skip connections in the Xception architecture as well. The Inception network <ns0:ref type='bibr' target='#b55'>(Szegedy et al., 2015)</ns0:ref> inspired the creation of this architecture. The modification of this architecture over Inception is that each inception module has been replaced with a depthwise separable convolutional layer. A depthwise separable convolutional layer produces the same kind of output as a typical convolutional layer, but it requires fewer computations to do so. Such an improvement over the regular convolutional layer allows a model to be trained faster. In Fig. <ns0:ref type='figure'>4</ns0:ref>, the Xception architecture is presented. Xception begins with two typical convolutional layers having 32 and 64 filters, respectively, each with a fixed kernel size of 3×3. This is followed by five Xception blocks. The input of such a block can either be passed through two separable convolutional layers and a max-pooling layer or be passed through a pointwise convolution via a shortcut connection, except in the fourth Xception block. The fourth Xception block consists of only one separable convolutional layer, and this block repeats eight times. After the last Xception block, there are two separable convolutional layers followed by a global average pooling layer. After that, the fully connected layers are added, with the last being the output layer. A sketch of one such block is given below.</ns0:p></ns0:div>
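<ns0:p>The following is a minimal Keras sketch of one Xception-style block with a pointwise shortcut; the filter count is an illustrative assumption.</ns0:p>

# A sketch of an Xception-style block: two depthwise separable
# convolutions plus max-pooling on the main path, and a strided 1x1
# convolution as the shortcut connection.
from tensorflow.keras import layers

def xception_block(x, filters=128):
    shortcut = layers.Conv2D(filters, (1, 1), strides=(2, 2),
                             padding='same')(x)     # pointwise shortcut
    y = layers.SeparableConv2D(filters, (3, 3), padding='same',
                               activation='relu')(x)
    y = layers.SeparableConv2D(filters, (3, 3), padding='same')(y)
    y = layers.MaxPooling2D((3, 3), strides=(2, 2), padding='same')(y)
    return layers.Add()([shortcut, y])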
<ns0:div><ns0:head n='3.1.4.'>DenseNet</ns0:head><ns0:p>Another convnet used in this work is DenseNet. The idea behind this model is that every convolutional layer in a certain block is connected to every other convolutional layer of that block. Every layer not only processes the input forwarded by the immediately preceding layer but also processes the combined inputs forwarded by all the previous layers. Basically, a DenseNet model consists of a few dense blocks. Before every dense block except the first, there is a convolutional layer followed by an average-pooling layer. The block diagram of DenseNet is displayed in Fig. <ns0:ref type='figure'>5</ns0:ref>. Before the first dense block, there is a convolutional layer with a filter size of 7×7 followed by a max-pooling layer with a pool size of 3×3. The other transitional convolutional layers have a filter size of 1×1. After the last dense block, there are a global average-pooling layer and a few customizable classification layers. A sketch of a dense block is given below.</ns0:p></ns0:div>
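<ns0:p>A minimal Keras sketch of a DenseNet-style dense block is shown below; the number of layers and the growth rate are illustrative assumptions.</ns0:p>

# A sketch of a dense block: each layer's output is concatenated with
# all earlier feature maps, so every layer sees every previous one.
from tensorflow.keras import layers

def dense_block(x, num_layers=4, growth_rate=32):
    for _ in range(num_layers):
        y = layers.BatchNormalization()(x)
        y = layers.Activation('relu')(y)
        y = layers.Conv2D(growth_rate, (3, 3), padding='same')(y)
        x = layers.Concatenate()([x, y])    # reuse all previous features
    return x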
<ns0:div><ns0:head n='3.2.'>Image Data Augmentation</ns0:head><ns0:p>It has been proved that image data augmentation helps a classifier to recognize images more accurately <ns0:ref type='bibr' target='#b37'>(Perez & Wang, 2017)</ns0:ref>. Here, while creating the individual CNN models, different types of data augmentation have been applied to the training images. The images have been rotated by 9 to 15 degrees. The height and width shift ranges have been adjusted from 0.09 to 0.15, and the zoom range has been varied from 0.09 to 0.15. These types of augmentation provide more generalization to the classifiers. The different image augmentations used in this study are presented in Table <ns0:ref type='table'>2</ns0:ref>, considering A1 as the rotation range, A2 as the height shift range, A3 as the width shift range, and A4 as the zoom range. A sketch of these settings is given below.</ns0:p></ns0:div>
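<ns0:p>Below is a minimal sketch of these augmentation settings with Keras, using one point from the stated ranges; the exact combination applied to each model follows Table 2, so the specific values here are illustrative.</ns0:p>

# A sketch of the augmentation settings: A1 rotation, A2 height shift,
# A3 width shift, A4 zoom, each drawn from the 9-15 / 0.09-0.15 ranges.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=12,        # A1: degrees
    height_shift_range=0.10,  # A2: fraction of image height
    width_shift_range=0.10,   # A3: fraction of image width
    zoom_range=0.10,          # A4: zoom factor range
)
# Augmented batches are generated on the fly during training, e.g.:
# model.fit(datagen.flow(x_train, y_train, batch_size=1024), epochs=100)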
<ns0:div><ns0:head n='3.3.'>Convnet as Feature Extractor</ns0:head><ns0:p>The described convnets can be used as feature extractors as well. A feature extractor model can be developed by removing the fully connected layers from a convnet; the flatten layer after the last pooling layer produces feature vectors, which can then be used to train a separate classifier. Among the described architectures, ResNet50 has shown the highest accuracy in the classification task. This convnet has been used as a feature extractor in this work to train separate classifiers with traditional machine learning algorithms, such as Logistic Regression, Decision Tree, Naïve Bayes, and Support Vector Machine. Using a pre-trained ResNet50 feature extractor, features from the train and test images have been extracted. The flatten layer of the model produces a feature vector of size 2048. Fitting a classifier on this huge feature space is not feasible due to a lack of computational resources. To overcome this issue, three dense layers with 1024, 512, and 80 neurons have been added before the final classification layer. This has been done to narrow down the feature space to a manageable size. A sketch of this pipeline is given below.</ns0:p></ns0:div>
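<ns0:p>A minimal sketch of this feature-extraction pipeline follows; the layer name 'feat80' for the assumed 80-neuron dense layer is hypothetical, and trained_resnet, x_train, y_train, x_test, and y_test are assumed to exist from earlier steps.</ns0:p>

# A sketch of using the trained ResNet50-based model as a feature
# extractor: the network is cut at the narrow 80-neuron dense layer
# (hypothetical name 'feat80') and an SVM is fitted on the features.
import numpy as np
from tensorflow.keras.models import Model
from sklearn.svm import SVC

feature_extractor = Model(inputs=trained_resnet.input,
                          outputs=trained_resnet.get_layer('feat80').output)

train_feats = feature_extractor.predict(x_train)   # shape: (n_train, 80)
test_feats = feature_extractor.predict(x_test)     # shape: (n_test, 80)

# Labels are assumed one-hot encoded, hence the argmax.
svm = SVC(kernel='rbf')
svm.fit(train_feats, np.argmax(y_train, axis=1))
print('SVM accuracy:', svm.score(test_feats, np.argmax(y_test, axis=1)))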
<ns0:div><ns0:head n='3.4.'>Ensemble Methods</ns0:head><ns0:p>The classification task has also been carried out using some ensemble techniques. In ensemble learning, a prediction is made based on multiple learning algorithms rather than a single learning algorithm <ns0:ref type='bibr' target='#b34'>(Opitz & Maclin, 1999)</ns0:ref>. It is a technique where more than one model is trained for the same task, as opposed to the typical machine learning practice of training a single model for a particular task. In ensemble learning, hypotheses from different models are combined to create a more generalized, accurate, and robust model for a specific problem. Several ensemble techniques can develop a generalized model from multiple models; the most commonly used are majority voting, weighted majority voting, Borda count, bagging, boosting, and stacked generalization. One of the objectives of this study is to explore the potential of ensemble methods in Bangla handwritten character recognition. Accordingly, in this study, stacked generalization, bootstrap aggregating, adaptive boosting, extreme gradient boosting, and random forest ensemble methods have been used for the handwritten recognition task.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.4.1.'>Stacked Generalization</ns0:head><ns0:p>Stacked generalization follows a very general process, and its mechanism is illustrated in Fig. <ns0:ref type='figure'>6</ns0:ref>. In this method, k different first-level classifiers are fitted on the training dataset. There are many ways to obtain these first-level classifiers <ns0:ref type='bibr' target='#b0'>(Aggarwal, 2015)</ns0:ref>. In this work, a 10-fold cross-validation method has been employed on the training set, and ten first-level classifiers have been developed. The data augmentation technique for each first-level classifier is presented in the figure. After obtaining the first-level classifiers, they are stacked, and their outputs are concatenated. After the concatenation, three dense layers have been added, the last layer of the second-level stacked model being the output layer. After creating the stacked model, it has been trained with the validation set, where the output of each first-level classifier works as the new input data for training the second-level classifier. This is a lesser-used method for image classification with great potential; to explore its usability and to improve recognition performance, this method has been employed. A sketch of the second-level model is given below.</ns0:p></ns0:div>
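<ns0:p>The following is a minimal Keras sketch of the second-level stacked model; base_models is assumed to hold the ten trained first-level convnets, and the dense-layer widths are illustrative assumptions.</ns0:p>

# A sketch of stacked generalization: the softmax outputs of the frozen
# first-level convnets are concatenated and fed to dense layers.
from tensorflow.keras import layers, Model, Input

def build_stacked_model(base_models, num_classes=122):
    for m in base_models:
        m.trainable = False            # freeze first-level classifiers
    inp = Input(shape=(28, 28, 1))
    merged = layers.Concatenate()([m(inp) for m in base_models])
    x = layers.Dense(512, activation='relu')(merged)
    x = layers.Dense(256, activation='relu')(x)
    out = layers.Dense(num_classes, activation='softmax')(x)
    model = Model(inp, out)
    model.compile(optimizer='rmsprop', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# The second-level model is then trained on the held-out validation set:
# stacked = build_stacked_model(base_models)
# stacked.fit(x_val, y_val, batch_size=1024, epochs=...)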
<ns0:div><ns0:head n='3.4.2.'>Bootstrap Aggregating</ns0:head><ns0:p>The second ensemble method that has been employed is bootstrap aggregating, also known as bagging. In bagging, images from the training data are selected randomly with replacement and put in a bag. Usually, about 1 - 1/e = 63.2% of the distinct training instances appear in a bag, with the remaining 36.8% of the bag being duplicate copies of randomly selected instances. By doing this, a bag with the size of the original training set is constructed; this is the principal idea behind bootstrap aggregating. In Fig. <ns0:ref type='figure'>7</ns0:ref>, the overall working procedure of this method is demonstrated along with the data augmentation technique applied to each classifier. After creating ten bags from the training dataset, an independent classifier has been fitted on each bag, and for testing, majority voting has been applied to the predictions made by the individual classifiers. This method is an attempt to reduce the variance of the classifier <ns0:ref type='bibr' target='#b0'>(Aggarwal, 2015)</ns0:ref>. Like stacked generalization, this method has been used to achieve better performance. A sketch of the procedure is given below.</ns0:p></ns0:div>
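<ns0:p>Below is a minimal sketch of this procedure; build_model is a hypothetical factory function returning a freshly initialized convnet, and the training arrays are assumed to exist.</ns0:p>

# A sketch of bootstrap aggregating: each convnet is fitted on a
# bootstrap sample of the training set, and test predictions are
# combined by majority voting.
import numpy as np

def bagging_predict(x_train, y_train, x_test, build_model, n_bags=10):
    votes = []
    n = len(x_train)
    for _ in range(n_bags):
        idx = np.random.choice(n, size=n, replace=True)  # ~63.2% unique
        model = build_model()
        model.fit(x_train[idx], y_train[idx], batch_size=1024, epochs=100)
        votes.append(np.argmax(model.predict(x_test), axis=1))
    votes = np.stack(votes)                    # shape: (n_bags, n_test)
    # majority vote for each test instance
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(),
                               axis=0, arr=votes)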
<ns0:div><ns0:head n='3.4.3.'>Boosting and Random Forest</ns0:head><ns0:p>Two boosting methods have been used in this work: adaptive boosting and extreme gradient boosting. In AdaBoost, the classification starts with an equal weight for each of the training instances. The weight associated with an instance indicates the probability of it being chosen in the bootstrap sample for training in a certain iteration. If an instance is misclassified, the weight associated with that instance increases for the next iteration; if it is correctly classified, the weight decreases. The training iterations terminate when the classifier's accuracy becomes 100% or the classifier performs worse than the base classifier <ns0:ref type='bibr' target='#b0'>(Aggarwal, 2015)</ns0:ref>. The other boosting method is XGBoost <ns0:ref type='bibr' target='#b11'>(Chen & Guestrin, 2016)</ns0:ref>, a tree-based ensemble technique where the errors are minimized by taking extreme measures using a gradient descent algorithm. The last ensemble technique that has been used is the random forest. A random forest is a combination of many decision trees: k decision trees are created using the training data to form a forest, and for the test data, the majority voting technique is followed, so the majority class predicted by the individual decision trees is the predicted class. The trees are not developed with all features. Another way to create a random forest is to use the bootstrap aggregating technique <ns0:ref type='bibr' target='#b20'>(Han, Kamber & Pei, 2012)</ns0:ref>: a portion of the training data points is selected with replacement, and a newly sampled training dataset is used for each decision tree. Random forest resolves the problem of overfitting in decision tree classifiers <ns0:ref type='bibr' target='#b58'>(Vinet & Zhedanov, 2010)</ns0:ref>. A sketch of fitting these ensembles follows.</ns0:p></ns0:div>
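<ns0:p>The following sketch fits the three tree-based ensembles on convnet-extracted features; the hyperparameter values are illustrative assumptions, and train_feats, train_labels, test_feats, and test_labels are assumed to come from the feature-extraction step, with integer class labels.</ns0:p>

# A sketch of AdaBoost, XGBoost, and random forest classifiers fitted
# on features extracted by the ResNet50 model.
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from xgboost import XGBClassifier

rf = RandomForestClassifier(n_estimators=100)
ada = AdaBoostClassifier(n_estimators=50)
xgb = XGBClassifier(n_estimators=100)   # multiclass handled automatically

for clf in (rf, ada, xgb):
    clf.fit(train_feats, train_labels)
    print(type(clf).__name__, clf.score(test_feats, test_labels))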
<ns0:div><ns0:head n='4.'>Experimental Setup</ns0:head><ns0:p>This study aims at classifying Bangla handwritten characters. For creating the convnets, Keras ('Keras') on top of TensorFlow ('TensorFlow'), a Python library, has been used. For a few of the experiments, the scikit-learn ('scikit-learn') library of Python has also been utilized. The models have been trained on a computer with a Ryzen 5 1600 CPU, 8 GB of RAM, and an Nvidia GeForce 1050 Ti GPU, running the Linux Mint 19.3 operating system.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1.'>Dataset</ns0:head><ns0:p>We have used two publicly available Bangla handwritten character datasets for the experiments: Ekush <ns0:ref type='bibr' target='#b40'>(Rabby et al., 2019)</ns0:ref> and BanglaLekha-Isolated <ns0:ref type='bibr' target='#b8'>(Biswas et al., 2017)</ns0:ref>. Primarily, the experiments have been conducted on the Ekush dataset. For comparative analysis purposes, the experiments have been carried out on the BanglaLekha-Isolated dataset too. There are other datasets as well, like ISI <ns0:ref type='bibr' target='#b10'>(Chaudhuri, 2006)</ns0:ref>, NumtaDB <ns0:ref type='bibr' target='#b1'>(Alam et al., 2018)</ns0:ref>, and CMATERdb <ns0:ref type='bibr' target='#b46'>(Sarkar et al., 2012)</ns0:ref>, but they do not have as many classes as the Ekush and BanglaLekha-Isolated datasets. The Ekush dataset has the highest number of classes among all Bangla handwritten character datasets. It has 122 classes of characters of four types: modifiers, basic characters, compound characters, and numerals. This dataset contains 734,036 different images. The creators of this dataset have collected handwritten images in a form from 3,086 people, among which 50% are male and 50% are female. Table <ns0:ref type='table'>3</ns0:ref> shows the details of the Ekush dataset. On the other hand, the BanglaLekha-Isolated dataset has 84 classes distributed over 50 basic characters, 10 numerals, and 24 compound characters. The images of both datasets are greyscale images. The images of the Ekush dataset have a fixed dimension of 28×28 pixels, whereas the dimension of the images of the BanglaLekha-Isolated dataset varies from 36×36 pixels to 191×191 pixels. During the experiments, images from both datasets have been kept at a fixed dimension of 28×28. In Fig. <ns0:ref type='figure'>8</ns0:ref>(a), a few vowel modifiers from the Ekush dataset are displayed. The vowel modifiers are labeled from 0 to 9 in the dataset. These characters cannot be found in isolation; they appear before, after, or both before and after consonants or compound characters, and they are the representative or short forms of actual vowels. There are eleven vowels in the Bangla language, labeled from 10 to 20. These characters can appear in a word independently, but when they are used as phonetics, their corresponding vowel modifiers act on the consonants and compound characters. A few of the vowels are displayed in Fig. <ns0:ref type='figure'>8(b</ns0:ref>). In Fig. <ns0:ref type='figure'>8(c</ns0:ref>), a few of the 39 consonants of the Bangla language are displayed; they are labeled from 21 to 59. A few of the 52 compound characters of the Ekush dataset are shown in Fig. <ns0:ref type='figure'>8(d</ns0:ref>). They are labeled from 60 to 111. Compound characters in Bangla can be constructed by joining two or more consonants. Finally, in Fig. <ns0:ref type='figure'>8(e)</ns0:ref>, four of the ten Bangla handwritten digits of this dataset are shown; they are labeled from 112 to 121.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.2.'>Training, Testing, and Validating</ns0:head><ns0:p>The datasets have been divided into three sets: train, validation, and test. The train set has approximately 75%, the validation set approximately 10%, and the test set approximately 15% of the data. For the Ekush dataset, 547,131 of the 729,750 images in the whole dataset are used for training, and for validation and test, the numbers of images are 72,842 and 109,777, respectively. For the BanglaLekha-Isolated dataset, 224,211 images have been used for training the convnets, and the validation and test sets contain 24,881 and 33,310 images, respectively. For training the first-level classifiers of the stacked generalization model, the training set has been divided into ten folds using 10-fold cross-validation, and a classifier has been fitted on each fold. After combining the first-level classifiers, the second-level classifier has been trained with the validation set of the original dataset. The reason for not reusing the training set is that the first-level classifiers have already seen it; that is why the original validation set, which is completely unknown to the first-level classifiers, has been used. After training the second-level classifier, it has been tested against the original test images, the same test images that have been used for testing throughout the study. For the bootstrap aggregating method, the train and validation sets have been combined. From the combined set, ten bags of data have been sampled with replacement using the bootstrap aggregating method. After fitting a convnet on each of the bags, the final classifier (based on the results of the ten convnets) has been tested with the original test set.</ns0:p></ns0:div> <ns0:div><ns0:head n='4.3.'>Optimizers, Loss Function, Batch Size, and Evaluation Metrics</ns0:head><ns0:p>To select an appropriate optimizer, a few models have been trained with various optimizers, and the optimizer with the best performance has been used for model training. Similarly, pre-experiments with various batch sizes have been conducted, and the best-performing batch size has been used for the actual model creation. As the models have to accomplish a multi-class classification task, the categorical cross-entropy loss function has been used. Moreover, four evaluation metrics have been used throughout the whole working process of this study: precision, recall, F1-score, and accuracy. A sketch of the dataset split described above is given below.</ns0:p></ns0:div>
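<ns0:p>The following is a minimal sketch of the approximate 75/10/15 split; images and labels are assumed arrays with integer class indices, and the stratification and random seed are added assumptions rather than stated choices of this study.</ns0:p>

# A sketch of the train/validation/test split: 25% is held out first,
# then divided so that 10% of the data becomes validation and 15% test.
from sklearn.model_selection import train_test_split

x_train, x_tmp, y_train, y_tmp = train_test_split(
    images, labels, test_size=0.25, stratify=labels, random_state=42)
x_val, x_test, y_val, y_test = train_test_split(
    x_tmp, y_tmp, test_size=0.60, stratify=y_tmp, random_state=42)
# 0.60 of the 25% hold-out = 15% test; the remaining 10% is validation.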
<ns0:div><ns0:head n='5.'>Results</ns0:head><ns0:p>The overall performances of the experiments are generally good and competitive with other existing works. In the previous section, the three stages of the experimental setup have been described. In the first stage, the different convnets have demonstrated results ranging from 96.25% to 97.81% accuracy on the Ekush dataset. For the BanglaLekha-Isolated dataset, accuracies have varied from 88.88% to 93.55%. The most accurate performance that has been recorded is 98.68% accuracy for the Ekush dataset, obtained by the stacked generalization ensemble method. For the BanglaLekha-Isolated dataset, the best performer has been the bootstrap aggregating method with 93.55% test accuracy. Apart from the classifiers' performances, other aspects need to be discussed too. In this section, the results of the study along with these various aspects are presented elaborately with appropriate discussion.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1.'>Result Comparison</ns0:head><ns0:p>The result of this study has outperformed most of the existing works in the domain of Bangla handwritten character classification. In Table <ns0:ref type='table'>4</ns0:ref>, a comparison of this work with other existing works is presented after an extensive literature review. The comparison shows that our work has exemplary and outstanding results, both in terms of better performance and in classifying Bangla handwritten characters on a large scale. Most of the current works have classified fewer than 122 characters; very few of them have classified 122 or more Bangla handwritten characters. In particular, this study has beaten <ns0:ref type='bibr' target='#b4'>(Azad Rabby et al., 2018)</ns0:ref>, which uses the same dataset as ours. They have the same train-test split as our experiments, with 15% of the images in the test set. They have reported a 97.73% test accuracy obtained by their EkushNet architecture, whereas our stacked generalization method has achieved 98.68% test accuracy. Our bootstrap aggregating method has also outperformed the EkushNet model. <ns0:ref type='bibr' target='#b6'>(Bhattacharya et al., 2016)</ns0:ref> and <ns0:ref type='bibr' target='#b35'>(Pal, Wakabayashi & Kimura, 2007)</ns0:ref> have classified 152 and 138 handwritten characters, respectively, but have not reached performance like our results. We have achieved a maximum of 98.68% accuracy for the classification task, while the other works' performances have not exceeded 98% accuracy. To further validate our obtained results, the proposed methods have also been applied to the BanglaLekha-Isolated dataset <ns0:ref type='bibr' target='#b8'>(Biswas et al., 2017)</ns0:ref>, which has the second-highest number of Bangla isolated characters among all publicly available datasets in this domain. Our proposed models have outperformed other works on this dataset as well. The stacked generalization and bootstrap aggregating methods have yielded 92.67% and 93.55% test accuracy, whereas <ns0:ref type='bibr' target='#b39'>(Purkaystha, Datta & Islam, 2017</ns0:ref>) have obtained 88.93% test accuracy. They also have not considered all 84 classes of the dataset but rather built their model on 80 classes, which further underlines the strength of our methods, as all 84 classes have been used for classification in our work.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.2.'>Performances of Convnets</ns0:head><ns0:p>During training, after each epoch, the training and validation accuracies of the convnets with their respective losses have been observed to check the stability of the models. The models have been trained for a hundred epochs, and at the end of each epoch, they have been validated against the validation images. In Fig. <ns0:ref type='figure'>9</ns0:ref>, the validation accuracies over the number of epochs for each convnet on the Ekush dataset are compared. The figure depicts that all seven models have generally smooth learning curves. The comparison of the validation accuracies indicates that the ResNet50 model has been better than the other models, while the DenseNet model has been worse than the others. The Xception model performs almost like ResNet50, and AlexNet performs close to DenseNet. The other three models, small CNN, VGG16, and VGG19, are somewhere in between. On the other hand, Fig. <ns0:ref type='figure' target='#fig_3'>10</ns0:ref> depicts the validation loss over the epochs of each convnet for the Ekush dataset. Unlike the validation accuracy curves, the validation loss curves of the convnets are not smooth. The validation loss of the small CNN model is lower than that of the other models at a given epoch, while DenseNet has a higher loss than the others. The small CNN and DenseNet models have minimum losses of 0.0459 and 0.1841, respectively. These experimental values indicate that the small CNN model has been more stable than the other models in terms of validation loss, the DenseNet model has been the most unstable, and the other models lie between the two. Moreover, the ResNet50 model has outperformed all the other six models with 97.81% test accuracy. It also has 97.82%, 97.81%, and 97.81% precision, recall, and F1-score, respectively. All the convnets' precision, recall, F1-score, and accuracy on the test set of the Ekush dataset are given in Table <ns0:ref type='table'>5</ns0:ref>. From there, it can be observed that the Xception model has come close to ResNet50 with 97.63% accuracy, 97.64% precision, and 97.63% recall. While these two models have performed better, the VGG16, VGG19, and small CNN models have average performances with accuracies of 96.97%, 97.05%, and 97.30%, respectively. On the contrary, DenseNet and AlexNet have comparatively poor performances with accuracies of 96.25% and 96.80%, respectively. The reason behind the good performance of ResNet50 and Xception is that they have very deep and complex structures; these architectures have been able to extract the patterns of Bangla handwritten characters efficiently. It is also to be noted that the poor-performing models such as DenseNet and AlexNet have taken more time to train than the models with better performance, and they have also been slower than the better-performing models like ResNet50 and Xception in the testing phase. The performances of the convnets on the BanglaLekha-Isolated dataset are also presented in Table <ns0:ref type='table'>5</ns0:ref>. Among the convnets, the ResNet50 model has been the top performer with 92.63% test accuracy, while the AlexNet model has yielded only 88.88% accuracy.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.3.'>Performances of ResNet as Feature Extractor</ns0:head><ns0:p>In the second phase of the study, the best-performing convnet has been used as a feature extractor for the classification task. The goal of this experiment has been to explore the applicability of regular classification algorithms in image classification. Images carry contextual information; therefore, simply passing the original pixel values of images into a classifier without any additional information would not yield better performance. To use those classifiers, contextual features from the images need to be extracted. For feature extraction purposes, the best-performing pre-trained convnet, in this case ResNet50, has been used after removing its classification layers. As a convnet has been used, the performance of the classifiers depends on how well the convnet can extract features from images; a better convnet helps the classifiers to obtain better performance. In Table <ns0:ref type='table'>5</ns0:ref>, for both datasets, the performances of the logistic regression, support vector machine, naïve Bayes, and decision tree classifiers with pre-trained ResNet50 as the feature extractor have been presented. Among them, the SVM classifier has performed better than the other classifiers with 97.75% accuracy, 97.76% precision, and 97.75% recall on the Ekush dataset. On the other hand, logistic regression has been the top performer on the extracted features with 92.17% accuracy, 92.24% precision, and 92.17% recall. Although the shallow classifiers could not beat the original ResNet50 classifier, which has 97.81% accuracy, they have worked exceptionally well with a very small feature set for the Ekush dataset. The same minor downgrade in performance of the shallow classifiers compared to the original ResNet50 model has been observed for the BanglaLekha-Isolated dataset.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.4.'>Results of Ensemble Methods</ns0:head><ns0:p>In the final stage of the experiments, a few ensemble methods have been employed for the classification task. The employed ensemble methods fall into two broad types: one where the classifiers have been built from scratch on the original training images, and one where the classifiers have been trained on features extracted from the training set by the ResNet50 model. The classifiers of the first type have comparatively better performances than the second; they have even demonstrated better performances than all the other methods employed during this study.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.4.1.'>Performance of Stacked Generalization Ensemble Method</ns0:head><ns0:p>To create the second-level ensemble classifier, ten different classifiers have been trained initially.</ns0:p><ns0:p>After training the first-level classifiers, they have been tested with the original test images. In Fig. <ns0:ref type='figure' target='#fig_3'>11</ns0:ref>, their performances on the Ekush dataset are presented. The performances of the first-level classifiers resemble those of their original convnets: the ResNet50 and Xception models have better performances than the others, and AlexNet has the lowest performance. The best first-level classifier has been the ResNet50 model trained on the 7th fold with rotation range = 14, height shift range = 0.10, width shift range = 0.10, and zoom range = 0.10. It has a test accuracy of 97.90%, which is better than its ResNet50 counterpart trained on the whole training set; this better-performing trend has only been observed in the ResNet models. The other models have lost some performance for being trained on a subset of the original training set. The stacked generalization model has outperformed the individual classifiers with 98.68% test accuracy. Using different models, different image augmentations, and a different subset of the training set for each first-level classifier has helped the second-level classifier achieve better performance. The precision, recall, F1-score, and accuracy comparison of the first- and second-level classifiers on the Ekush dataset is given in Fig. <ns0:ref type='figure' target='#fig_3'>11</ns0:ref>. From the figure, it can be seen that the stacked model has the highest performance on all the evaluation metrics: 98.69% precision, 98.68% recall, 98.68% F1-score, and 98.68% accuracy on the Ekush dataset, while the first-level models do not reach the same level of performance. On the other hand, from Table <ns0:ref type='table'>5</ns0:ref>, the stacked model has achieved 92.78% precision, 92.67% recall, 92.72% F1-score, and 92.67% accuracy on the BanglaLekha-Isolated dataset.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.4.2.'>Performance of Bootstrap Aggregating Ensemble Method</ns0:head><ns0:p>Like the stacked generalization method, the better-performing convnets have been utilized to create ten individual classifiers, each of them trained on one of the ten bags of training images. Figure <ns0:ref type='figure' target='#fig_3'>12</ns0:ref> shows these ten models' test performances on the Ekush dataset. Like the first-level classifiers of the stacked generalization method, the individual classifiers of the bootstrap aggregating method have similar performances. The ResNet50 model with image augmentation of rotation range=10, height shift range=0.1, width shift range=0.1, and zoom range=0.1 has performed better than the others.
The AlexNet model has worse performance than the other models. It can also be seen from the figure that the ResNet18, VGG16, and Xception models have come close to ResNet50 in terms of test accuracy. After finishing the training of these convnets, the test images have been classified using majority voting on the outputs of the ten convnets. The predictions based on the ten models have outperformed all the individual models. The metric-wise comparison of the individual models and the bagged model is shown in Fig. <ns0:ref type='figure' target='#fig_3'>12</ns0:ref> as well. The figure depicts that the bagging method has yielded 98.38%, 98.37%, 98.37%, and 98.37% precision, recall, F1-score, and accuracy, respectively, while none of the individual models has any evaluation metric larger than 97.5%. This increase in model performance for both the stacked generalization and bagging methods justifies the efficiency of these classification methods. The bootstrap aggregating method has also obtained superior performances over the individual convnets for the BanglaLekha-Isolated dataset. From Table <ns0:ref type='table'>5</ns0:ref>, it can be seen that the bagging method has achieved 93.60% precision, 93.55% recall, 93.55% F1-score, and 93.55% accuracy. In fact, the bagging method has achieved the best result among all the experiments on the BanglaLekha-Isolated dataset.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.4.3.'>Performance of Boosting and Random Forest</ns0:head><ns0:p>Among the other ensemble classifiers, the random forest has performed better than the other two for both datasets. The precision, recall, F1-score, and accuracy of these models are demonstrated in Table <ns0:ref type='table'>5</ns0:ref>. The random forest has secured first place among them with 97.32% accuracy, 97.33% precision, and 97.32% recall on the Ekush dataset. AdaBoost has performed worse than XGBoost, with 96.37% accuracy, 96.42% precision, and 96.36% recall, while XGBoost has 96.58% accuracy, 96.59% precision, and 96.58% recall on the Ekush dataset.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.5.'>Ablation Studies</ns0:head><ns0:p>Batch size and optimizer are two important parameters when it comes to large-scale image classification. As image classification is computationally very expensive, choosing an appropriate batch size and optimizer can play a significant role in model performance. For this purpose, a few pre-experiments with batch sizes and optimizers have been carried out. For choosing a batch size, a small CNN model has been developed and trained for 40 epochs with various batch sizes. From these experiments, it has been observed that a larger batch size usually helps the model obtain better test accuracy: the experiment with a batch size of 1024 has yielded better performance than that with 512 on the Ekush dataset. However, larger batch sizes do not always yield better performance; classifiers with batch sizes of 1536 or 2048 have performed less accurately than with 1024. In the second ablation study, a ResNet50 model has been developed and trained for ten epochs with five optimizers: Adam, RMSprop, stochastic gradient descent (SGD), Adagrad, and Adadelta. Among them, SGD has performed worse than the other optimizers in terms of validation accuracy (89.52%), while RMSprop has the highest validation accuracy (96.48%).</ns0:p><ns0:p>As the batch size of 1024 and the RMSprop optimizer have performed best in the pre-experiments, this combination of batch size and optimizer has been used for the rest of the experiments. Time is another important factor that must be considered in model evaluation. Training time is often overlooked as long as the model can classify an instance quickly in a real-world environment, but with a lack of available computational resources, training a convnet can become time-consuming. With the experimental setup of this study, the VGG19 model has taken on average 0.3271 milliseconds to process an image while training, the lowest time among the seven convnets. In Table <ns0:ref type='table' target='#tab_2'>6</ns0:ref>, the time to process an image by each model during training and the time to predict an image are given. It can be seen in the table that the DenseNet model has taken more time than the other models to process an image during training; AlexNet has also taken a comparatively long time to train. On the other hand, the VGG16 model has the lowest average prediction time per image, taking 0.1572 milliseconds to classify an input image, whereas the DenseNet model needs more time than any other model to predict an image (0.4307 ms). Another interesting observation is that all the models tend to work faster in the testing phase than in the training phase except VGG19, which has taken 0.336 milliseconds to test an image against 0.3271 milliseconds per image during training. The most significant improvement is seen in the prediction time of DenseNet, which predicts about 1 millisecond faster than its training time per image; the other models have shown improvements in testing time too. However, in terms of the performance-time trade-off, ResNet50 and Xception have been the better classifiers; they have achieved better test accuracy than the other models with relatively fast predictions. It is also to be noted that all the ablation experiments have only been applied to the Ekush dataset, under the assumption that a similar pattern of performance will be found for the BanglaLekha-Isolated dataset. A sketch of the optimizer comparison is given below.</ns0:p></ns0:div>
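<ns0:p>The following is a minimal sketch of the optimizer pre-experiment; build_resnet50 is a hypothetical factory returning a fresh ResNet50-based model, and the data arrays are assumed to exist from earlier steps.</ns0:p>

# A sketch of the optimizer ablation: the same architecture is trained
# briefly with each candidate optimizer and compared on validation
# accuracy, as described above.
for opt in ('adam', 'rmsprop', 'sgd', 'adagrad', 'adadelta'):
    model = build_resnet50()
    model.compile(optimizer=opt, loss='categorical_crossentropy',
                  metrics=['accuracy'])
    hist = model.fit(x_train, y_train, batch_size=1024, epochs=10,
                     validation_data=(x_val, y_val), verbose=0)
    print(opt, max(hist.history['val_accuracy']))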
<ns0:div><ns0:head n='6.'>Discussion</ns0:head><ns0:p>Among the individual convnets, the ResNet50 model has the best performance for the Bangla handwritten recognition task on both datasets. Among the regular classifiers using ResNet50 as a feature extractor, the SVM classifier has the best performance for both datasets. Finally, among the ensemble methods, stacked generalization has the best performance for the Ekush dataset, while the bootstrap aggregating model has the best performance on the BanglaLekha-Isolated dataset. Among all the classifiers, the stacked generalization and bootstrap aggregating methods have produced the best classifiers for the two datasets, respectively. Two of the seven convnets, the ResNet50 and the Xception network, have performed with the best accuracies on the primary dataset of our experimental setup. Similarly, two of the ensemble methods have the highest performances: the stacked generalization ensemble and bootstrap aggregating methods have proven to be exemplary in the field of Bangla handwritten recognition. These ensemble methods had not been used in this domain-specific classification task before. Additionally, the convnets have been used as feature extractors, and various experiments have been carried out to tune the different aspects related to this study. These types of classifiers can be used for handwriting automation in various real-world applications. In the first phase of the study, on the primary dataset, the reason for the ResNet50 and Xception models having better results is their complex and sophisticated structures. With more than 120 classes in the dataset, and with the complexity of various handwritten Bangla characters, the classification requires more robust and advanced architectures. As these two models have more robust and deeper architectures than most of the other convnets used, they have been able to yield outstanding performances; their training has also been faster, as they utilize their shortcut connections. Another aspect to discuss is the excellent performance of the stacked generalization ensemble and bootstrap aggregating methods. To build classifiers using these two methods, individual convnets have been trained separately. When the final classifier assigns an image to a specific class, it uses all the knowledge learned by several convnets instead of a single model. Those convnets are also trained differently from one another, with different architectures and different image augmentations. Drawing on this wide range of model learning has allowed the stacking and bagging methods to perform substantially better than single convnets. However, the other ensemble methods have not performed as well as stacking and bagging. For the other ensemble methods, the dataset of features extracted using ResNet50 has been used, and the performance generally relies on the feature extraction process: better feature extraction helps yield better model performance. As the other ensemble classifiers have been trained on the features extracted by ResNet50, it is expected that they will not be able to beat the original ResNet50 model. A similar justification applies to the weaker performances of the logistic regression, SVM, naïve Bayes, and decision tree classifiers.
Nevertheless, achieving a respectable result with fewer features (80 instead of 1024) is a positive outcome.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.1.'>Misclassification</ns0:head><ns0:p>After testing the convnets, their confusion matrices and class-wise classification performances have been observed. In this large-scale classification, the classifiers have performed poorly on a few of the 122 classes of Bangla handwritten characters in the Ekush dataset. Although the classifiers have good performance overall, they have not been able to perform well for a few classes, and all convnets have demonstrated a similar trend in misclassifying instances from a few specific classes. In Table <ns0:ref type='table'>7</ns0:ref>, a few examples of classes where the ResNet50 classifier has shown poor, average, and good performance are presented with their supports, the number of false negatives, and the number of false positives. There are some reasons behind the failure to perform better for some specific classes. One of them is the lack of a sufficient number of training and testing images in a class. For example, the handwritten character labeled 111 has fewer training and testing instances; the low support value for this class in Table <ns0:ref type='table'>7</ns0:ref> indicates the insufficient number of instances during testing. This class has only 1237 training images, while the other classes have more than 4500 images each on average. However, this is not the only reason for poor performance: similar patterns of Bangla handwritten characters across different classes are also responsible. There are a few characters in Bangla that have almost the same pattern; a few examples are given in Fig. <ns0:ref type='figure' target='#fig_3'>13</ns0:ref>. The character labeled 111 has similarity with the character labeled 69, class 19 has similarity with class 84, and so on. These close resemblances between two classes make a classifier predict one class as the other, and the empirical data supports that: among the 47 false-negative instances for class 111, 23 are misclassified as class 69, and among the 32 false-positive instances, 15 are from class 69. Similar misclassifications have happened for the other classes presented in Table <ns0:ref type='table'>7</ns0:ref> under the 'poor' category. Because of these similar patterns, people often mistake one character for another, which results in characters being written wrongly; for this reason, there are also some wrongly labeled images in the dataset. For example, in Fig. <ns0:ref type='figure' target='#fig_3'>13</ns0:ref>, the image on the left labeled 19 and the corresponding image on the right labeled 84 are the same character but labeled as different classes. Moreover, for the classes categorized under average and good performance, the ResNet50 classifier does not have a majority portion of its misclassified instances in one specific class; rather, the misclassified instances are evenly distributed.</ns0:p></ns0:div>
<ns0:div><ns0:head n='7.'>Conclusion and Future Works</ns0:head><ns0:p>Through various experiments, the applicability of convolutional neural network-based ensemble learning methods for handwritten character recognition has been demonstrated. The empirical evidence shows that predictions are more accurate with models that combine the results of multiple convnets than with a single convnet. Single convnets may not have achieved the best performance of this work, but the robust architectures among them have surely demonstrated impressive results; the ResNet50 and the Xception models have standout performances in the image recognition tasks. The outstanding results that have been attained are also among the top performances reported in the domain of Bangla handwritten character recognition. Another significant aspect is the classification of more than 120 handwritten characters; recognizing them on such a large scale and with such good performance is the major contribution of this work. However, there are some limitations associated with this study. Only six of the popular convolutional neural networks have been used; other convnets like Inception <ns0:ref type='bibr' target='#b55'>(Szegedy et al., 2015)</ns0:ref>, FractalNet <ns0:ref type='bibr' target='#b28'>(Larsson, Maire & Shakhnarovich, 2019)</ns0:ref>, NiN <ns0:ref type='bibr' target='#b29'>(Lin, Chen & Yan, 2014)</ns0:ref>, etc. can be used for the classification task too. Additionally, training models from scratch can be a tedious job; exploring the applicability of pre-trained transfer learning models for the classification task can be a new research direction. Bangla is a very rich language with more than 200 compound characters, of which only 52 are classified in this study; as there is a lack of an available compound character dataset, a dataset with more compound characters could be curated. In the Ekush <ns0:ref type='bibr' target='#b40'>(Rabby et al., 2019)</ns0:ref> dataset, there are a few wrongly labeled images that need to be labeled properly. Additionally, only five ensemble methods have been used for the classification challenge; other ensemble methods can be applied to this classification task as well. An important aspect of the ensemble methods is that for boosting and random forest, the reduced feature dataset has been used, and for this reason, the expected performance has not been achieved. To achieve better performance, classifiers using these methods need to be trained from scratch directly on the training images, like the stacked generalization and bagging ensemble methods. Moreover, the applicability of the classifiers developed in this study is limited to isolated handwritten characters. Systems that can recognize words and sentences from images need to be created, as extracting words and sentences from images is also very important; such systems can help us obtain a truly autonomous experience in the field of handwriting recognition and extraction.</ns0:p></ns0:div>
Training, Testing, and ValidatingThe datasets have been divided into three sets -train, validation, and test.The train set has approximately 75%, the validation set has approximately 10%, and the test set has approximately 15% of the data. For the Ekush dataset, 547,131 of 729,750 images in the whole dataset are used PeerJ Comput. Sci. reviewing PDF | (CS-2021:01:57252:1:1:NEW 27 Apr 2021)Manuscript to be reviewedComputer Sciencefor training. And for validation and test, the number of images is 72,842 and 109,777, respectively. For the BanglaLekha-Isolated dataset, for training convnets, 224, 211 images have been used. And in the validation and test sets, there have been 24,881 and 33,310 images, respectively. For training the first level classifiers of the stacked generalization model, the training set has been divided into ten folds using 10-fold cross-validation and a classifier on each fold has been fitted. After combining the first-level classifiers, the second-level classifier has been trained with the validation set of the original dataset. The reason behind not using the training set again is that the first-level classifiers have already known the training set. That is why the original validation set which is completely unknown for the first level classifiers has been used. After training the second level classifier, it has been tested against the original test images -the same test images that have been used for testing throughout the study. For the bootstrap aggregating method, train and validation sets have been combined. From the combined set, ten bags of datasets have been sampled with replacement using the bootstrap aggregating method. After fitting a convnet on each of the bags, the final classifier (based on the results of ten convnets) has been tested with the original test set.4.3. Optimizers, Loss Function, Batch Size, and Evaluation MetricsTo select appropriate optimizers, few models have been trained with various optimizers. The optimizer that has the best performance has been used for model training. Similarly, preexperiments with various batch sizes have also been conducted and the best performing batch size has been used for actual model creation. As the models have to accomplish a multi-class classification task, the categorical cross-entropy loss function has been used. Moreover, four evaluation metrics have been used throughout the whole working process of this studyprecision, recall, F1-score, and accuracy.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>a comparison of this work with other existing works has been presented after an extensive literature review. The comparison has proven that</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,178.87,525.00,294.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,178.87,525.00,320.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,178.87,525.00,249.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,178.87,525.00,161.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,178.87,525.00,258.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,178.87,525.00,258.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='30,42.52,178.87,525.00,354.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='31,42.52,178.87,525.00,228.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='32,42.52,178.87,525.00,228.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='33,42.52,178.87,525.00,245.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='34,42.52,178.87,525.00,243.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>. But they do not have as many classes as the Ekush and BanglaLekha-Isolated datasets. The Ekush dataset has the highest number of classes among all Bangla handwritten character datasets: it has 122 classes of characters of four types (modifiers, basic characters, compound characters, and numerals). This dataset contains 734,036 different images. The creators of this dataset have collected handwritten images in a form from 3,086 people, among which 50% are male and</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 6 (on next page)</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Training and testing time of different models on Ekush dataset</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "Rebuttal letter for the article entitled “Convolutional neural network-based ensemble methods to recognize Bangla handwritten character”
April 27, 2021
Dear Editors,
We thank the reviewers for their feedback on our article. We have tried to address all the comments of the reviewers and incorporate them into the revised manuscript. In particular, a few new experiments have been conducted to prove the validity of our methods. We have applied our methods to another publicly available Bangla handwritten character dataset and presented the results in the manuscript. The results obtained on the other dataset have been excellent, which validates the major findings of our work.
I hope that, with the revisions we have made, our manuscript will now be accepted for publication in PeerJ Computer Science.
Thank you
Mir Moynuddin Ahmed Shibly
Email: shiblygnr@gmail.com, 2016-3-60-057@std.ewubd.edu
On behalf of all authors
Reviewer 1: Lijia Deng
Basic reporting
There are too many long paragraphs, and the text is not aligned at both ends, which affects reading.
The long paragraphs have been broken into smaller paragraphs throughout the manuscript. Also, we have been instructed to use “left justify” for the manuscript, which is why we did not align the text at both ends.
Experimental design
The experimental design is good, and a large number of experiments have been conducted to verify the model.
Thank you for your compliment.
Validity of the findings
The curves of images 3 and 4 are not very distinguishable, and the viewpoints mentioned in the article are not well drawn.
The curves have been redrawn to make them distinguishable (now Fig. 9 and Fig. 10 in the revised manuscript). The article has been reorganized, and a few more experiments have been conducted to establish that the CNN-based ensemble methods can improve the performance of Bangla handwritten character recognition.
Table 8 for comparison with other studies hopes to be sorted for easy comparison.
The table has been sorted as per the suggestion (now Table 4 in the revised manuscript).
Comments for the author
Very meaningful work, but some details need to be revised. Great work!
Thank you for your compliment and we have revised the manuscript.
Reviewer 2
Basic reporting
This paper explores new methods of using ensemble on top of CNN-based feature extraction to improve Bangla character recognition (or classification in machine learning terms). The authors employ datasets used by previous work in literature and obtain better results as reported, with more classes considered. This work, along with others that consider other languages are very good contributions to many communities. In this first round of review, however, several flaws in the methods and experiments, if not also in others, are identified as below, and improvements are suggested to be made to make the paper better and more convincing.
English and format: While the use of English throughout the manuscript is professional, I suggest changing the following:
- Use of enumeration without using “etc”, maybe by replacing it with “such as”.
The manuscript has been rewritten to avoid the use of “etc”.
- Line 84, change to the format as instructed: (Smith, 2001a; Smith, 2001b)
The reference issues have been taken care of (see line numbers 70, 76, 77, 127, and so on).
- Line 481, use 500,000 instead of 500000. Similarly for lines 465 and 466, and other places with big numbers.
Big numbers have been changed according to the prescribed format (line numbers 333, 357, 358, and so on).
- Inconsistent uses of metric names, such as F1 in line 506 and f1 in 515 or in other places.
The consistency of the evaluation metric names has been maintained (line numbers 379, 439, 440, 500, and so on).
Introduction: while the authors provide much information, some suggestions to improve are made as follows:
- Reference for the first paragraph should be made about how significant Bangla is
Two references have been added about the significance of Bangla (line numbers 35 and 38).
- When stating Bangla is more difficult than other languages (which is a very strong claim indeed), the authors should present the comparison clearly, not assuming by looking at only the statistics of Bangla, readers should be able to agree. Such claims might or might not be true with various readers, for example, those are from English-, Chinese-, or Arabic-speaking countries.
- Another suggestion for comparison is to present a figure to illustrate how complex Bangla characters can be.
The suggestion of the reviewer has been noted, and a new figure has been added to demonstrate the complexity of Bangla handwriting by comparing it with other languages (Fig. 1). In the revised manuscript, the comparison is described in lines 48 to 54.
Literature:
- References should be added for well-known CNN architectures such as Resnet50 (from Kaiming He).
References to the popular CNN architectures have been added in the revised manuscript (lines 96 to 99).
- Literature review does not clearly state the position of this paper: how this paper, in particular, is different from or similar to the previous work. This is important because it reinforces the contribution.
The reviewer’s comment has been duly noted, and a paragraph stating the position of this work relative to the existing literature has been added at the end of the literature review (lines 141 to 148 of the revised manuscript).
- When it comes to classifying characters only, this work is similar to the vast literature of recognizing, say English characters using HMM or LSTM with IAM, SD19, or other datasets. See for example https://arxiv.org/pdf/2008.08148.pdf. So I suggest the authors make a little review along this line of work to connect more of your paper to the community.
To connect the paper more closely to the community, and based on the reviewer’s suggestions, a few studies on other languages have been added, such as classifying Baybayin scripts, Hebrew handwriting, and English handwriting recognition. This newly added literature can be found in the revised manuscript in lines 119 to 127.
Experimental design
Methods (Section 3):
- Your model is very important. As a result, I suggest you describe the motivation as to why you choose to use such components, e.g, why CNN, why specific ensemble methods? Likewise, the authors need to answer many questions “why” about the choices of architectures.
Explanations of why we have used the various CNN architectures and the various ensemble methods have been added to the manuscript. Lines 155 to 171 describe the overall reasons for choosing the CNN architectures, and lines 257 to 269 explain the reasons for the ensemble methods. The reasons for choosing the individual CNN architectures and ensemble methods have also been described; e.g., the reason for choosing AlexNet, VGG16, VGG19, and the small CNN is explained in lines 174-175. Similarly, appropriate explanations are presented for the others, e.g., ResNets (lines 193 to 196), Xception (lines 208 to 211), stacked generalization (lines 281 to 283), bootstrap aggregating (lines 295 and 296), and so on.
- After that, the authors can describe the architecture in detail.
As per the reviewer’s suggestion, the CNN architectures have been described in a detailed manner. In table 1, AlexNet, VGG16, VGG19, and our developed small CNN architectures have been described. ResNets, Xception, and DenseNet have been described and their corresponding figures have been presented in Fig. 3, Fig. 4, and Fig. 5, respectively. Stacked generalization and bootstrap aggregating methods have also been presented with figures e.g., Fig. 6 and Fig. 7, respectively.
- However, sections 3.2 and 3.3 should be purged because they are common knowledge
Sections 3.2 and 3.3 from the previous version of the manuscript have been removed as the reviewer has suggested.
- Section 3.1 should be placed in section 4 (at the beginning of it). Section 3 should be devoted to describing the model.
The dataset section has been removed from section 3.1 and placed at the beginning of section 4 (see line 324 of the revised manuscript).
- Sections 3.5.1 and 3.5.2 have very long enumerations of components. Maybe the authors should trim it down to a small paragraph each or a small table each.
The long enumerations have been removed and are now presented with the help of a table (Table 2) and the respective figures of the stacked generalization and bootstrap aggregating methods (Fig. 6 and Fig. 7).
The research question of Bangla character recognition is useful and well-defined. However, there are the following suggestions to improve:
- First, section 3.1 should be moved to here to describe the datasets. Please state clearly which datasets you are using. For each such dataset, you need to compare it with previous work using it (at least one state-of-the-art baseline).
Section 3.1 of the original manuscript has been moved to section 4. A clear description of the datasets we have used has been added in section 4.1 (starting from line 324). Quoting from the revised manuscript, lines 325 to 328: “We have used two publicly available Bangla handwritten character datasets for the experiments – Ekush (Rabby et al., 2019) and BanglaLekha Isolated (Biswas et al., 2017)”. The reasons why these two specific datasets were chosen are also described (section 4.1 of the revised manuscript).
- Section 4.1 is not clear because the authors refer to only 1 dataset.
This issue has been resolved.
- References to Tensorflow/Keras need to be made.
References to Keras, TensorFlow, and the other libraries have been added (lines 318 to 322).
- The use of PCA seems not well-motivated. This linear method would probably make the extracted features lose some information. To answer this question clearly, the authors should investigate the use of current architecture with and without PCA (e.g. in an ablation study). The reasons are two-fold:
1/ The drawback of PCA as said. Why don’t you use an FC layer to narrow down the classes, e.g. 2048 -> 1024 -> 112 as in normal usages of CNN? This is probably the easiest and most straightforward.
2/ Can PCA be implemented end-to-end along with your model, and how much slower it could be when combining with CNN? Usually, with high dimensional data, PCA is slow.
As per the reviewer’s comments, we have eliminated PCA from our experimental design and carried out the experiments with the shallow machine learning classifiers without using PCA. Following the reviewer’s suggestion, the output of ResNet50 has been narrowed down through fully connected layers (2048 -> 1024 -> 512 -> 80 features), and the shallow classifiers have been fitted on those extracted features.
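For concreteness, a minimal Keras sketch of the fully connected narrowing described above follows; the layer sizes match the response (2048 -> 1024 -> 512 -> 80), while the input shape, activations, and frozen backbone are illustrative assumptions rather than the authors' exact configuration.

```python
import tensorflow as tf

# Frozen ResNet50 backbone; global average pooling yields a 2048-d vector.
backbone = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(224, 224, 3))
backbone.trainable = False

# Narrow the pooled features down to 80 dimensions for the shallow classifiers.
feature_extractor = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(80, activation="relu"),
])
```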
- Section 4.3, the formula in lines 513 to 516, and descriptions of related metrics are common knowledge and should be trimmed. If needed, make related references to those metrics.
The formulae of the evaluation metrics have been removed.
- Table 8 is probably the most important result to present and should be made first and foremost. Other ablation studies such as with different architectures, optimizers, methods should be placed later.
Table 8 in the original manuscript (now Table 4 in the revised manuscript) has been made the first table of the results section (section 5). The two ablation-study tables for batch size and optimizers (Table 3 and Table 4 of the original manuscript) have been removed from the revised manuscript, because their descriptions are given in the text (lines 542 to 559). The descriptions of all the ablation studies have been moved to the end of the results section (section 5.5, lines 542 to 580).
- However, the results in Table 8 are questionable because it is needed to make a fair comparison between your methods and others, based on the same set of datasets (including splits between train/val/test). Important changes should be made:
1/ Compare your methods with others on their datasets (at least one good baselines, explain why you choose the baseline(s))
The comparison in Table 4 (Table 8 in the original manuscript) has been explained properly. To evaluate our methods on another dataset, we have conducted the experiments on the BanglaLekha-Isolated dataset. We compare the performance of our methods with a good baseline (Purkaystha, Datta & Islam, 2017), which has one of the best reported performances on the BanglaLekha-Isolated dataset. This comparison of our methods on a different dataset can be found in lines 409 to 412.
2/ Compare other methods with your current datasets (again, with a good baseline).
We have also compared our methods with other work on the primary dataset that we used (Azad Rabby et al., 2018). This comparison can be found in lines 400 to 405. Azad Rabby et al. (2018) have reported the highest accuracy on the Ekush dataset so far, and our methods (stacked generalization and bootstrap aggregating) have demonstrated better performance than theirs with a similar train/val/test split.
Validity of the findings
The authors employ widely used components of CNN, Ensemble, and PCA. However, they claim their combination is novel, which was made clear in the literature review. Nonetheless, they need to motivate their choices more convincingly. Experimental results should be made more clearly and in a more convincing way as suggested below.
Dataset: The authors adhere to PeerJ regulations by providing citations to datasets being used. However, the descriptions of datasets lack important characteristics such as:
- the resolutions and the format (RGB or not)
- the authors enumerate Bangla datasets without stating directly which datasets they are going to use in specific, which they should heavily focus on
The resolution and format of the images have been added to the revised manuscript (see lines 337 to 339). As per the reviewer’s comment, we have also clearly stated the datasets used for the various experiments (see lines 325 to 328). In accordance with PeerJ policy, we have also updated the Figshare repository with the code for the new experiments that we have conducted.
The repository link: https://figshare.com/projects/Convolutional_Neural_Network-based_Ensemble_Methods_to_Recognize_Bangla_Handwritten_Character/96599
Comments for the author
The research question of Bangla character recognition is useful for the community and well-defined. However, your choices of architectures, their descriptions, and experiments should be further improved to make the paper more convincing.
We have tried our best to address every comment and suggestion of the reviewers.
References
Rabby AKMSA, Haque S, Islam MS, Abujar S, Hossain SA. 2019. Ekush: A Multipurpose and Multitype Comprehensive Database for Online Off-Line Bangla Handwritten Characters. In: Communications in Computer and Information Science. DOI: 10.1007/978-981-13-9187-3_14.
Azad Rabby AKMS, Haque S, Abujar S, Hossain SA. 2018. Ekushnet: Using convolutional neural network for Bangla handwritten recognition. Procedia Computer Science 143:603–610. DOI: 10.1016/j.procs.2018.10.437.
Biswas M, Islam R, Shom GK, Shopon M, Mohammed N, Momen S, Abedin A. 2017. BanglaLekha-Isolated: A multi-purpose comprehensive dataset of Handwritten Bangla Isolated characters. Data in Brief. DOI: 10.1016/j.dib.2017.03.035.
Purkaystha B, Datta T, Islam MS. 2017. Bengali Handwritten Character Recognition Using Deep Convolutional Neural Network. In: 22–24. DOI: 10.1109/ICCITECHN.2017.8281853.
" | Here is a paper. Please give your review comments after reading it. |
121 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Cloud computing is one of the evolving fields of technology, which allows the storage and access of data and programs, and their execution, over the internet, offering a variety of information-related services. With cloud information services, it is essential that information is saved securely and distributed safely across numerous users. Cloud information storage has suffered from issues related to information integrity, data security, and information access by unauthenticated users. The distribution and storage of data among several users are highly scalable and cost-efficient but result in data redundancy and security issues. In this article, a biometric authentication scheme is proposed to grant access permission to requesting users in a cloud-distributed environment and, at the same time, to alleviate data redundancy. To achieve this, a cryptographic technique is used by service providers to generate the bio-key for authentication, which will be accessible only to authenticated users. A Gabor filter with distributed security and encryption using XOR operations is used to generate the proposed bio-key (biometric generated key) and avoid data deduplication in the cloud, ensuring both the avoidance of data redundancy and data security. The proposed method is compared with existing algorithms, such as convergent encryption (CE), leakage resilient (LR), randomized convergent encryption (RCE), and the secure de-duplication scheme (SDS), to evaluate the de-duplication performance. Our comparative analysis shows that our proposed scheme results in smaller computation and communication costs than existing schemes.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Data redundancy is indirectly proportional to the data authentication. When redundancy increases, then possibility of authentication of certain redundant data is very less. If data redundancy is reduced, possibility of data authentication is high. Deduplication is hot topic in recent years, inspite of rapid growth in cloud computing and big data. Cost of cloud storage is highly reduced by using deduplication process which avoids storing of same data at multiple times in real time. Secured deduplication is provided by encrypting client data in server. It must bring confident among client to believe service provider. Usually traditional techniques does not support deduplication with security. In this article, we implemented biometric techniques to ensure deduplication with data security.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.1'>Cloud authentication issues:</ns0:head><ns0:p>The advancement in data sharing and processing over the cloud is a consequence of innovative technologies such as smart mobile devices, mobile applications with sensors, the spread and usage of the internet, and the availability of data on social media. These technologies exert a wide influence on the big data involved in our day-to-day activities. Many organizations, such as Amazon, Flipkart, and Netflix, perform data collection, mining, and analysis from various sources. The sharing of large volumes of data over the network has been made easily accessible through cloud storage. The increasing need for networked storage, together with the need to authenticate the stored data, has resulted in security concerns in cloud and distributed storage. Large amounts of storage in cloud infrastructure are occupied by duplicate data records. Aiming to address these technical concerns, researchers have focused on techniques for data de-duplication combined with biometric user authentication. In biometric data recognition, the link between intrinsic individual characteristics (behavioral, physical, and physiological) is used to authenticate an individual. Compared with knowledge-based authentication, biometrics can provide stronger security guarantees. Building biometric-enabled technology on the cloud is important for both safety and security enhancement, and such security benefits areas including forensics, surveillance, defense, banking, and personal authentication.</ns0:p><ns0:p>Furthermore, biometric-based authentication has been proven to provide stronger security guarantees and robustness than traditional methods for sensitive applications <ns0:ref type='bibr' target='#b0'>Ah Kioon et al. (2013)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.2'>Cloud data redundancy issues:</ns0:head><ns0:p>Data de-duplication is the process of avoiding redundant data copies, reducing overhead by eliminating duplicate stored data. Today's world is hyper-connected through online communication, payments, and ticketing services, using managed or unmanaged networks and devices that scale across several endpoints. These devices are currently protected with security technologies and enabled with cryptographic single-factor, two-factor, and multi-factor authentication methods <ns0:ref type='bibr' target='#b2'>Alomar et al. (2017)</ns0:ref>. In every communication, humans use multi-factor methods for fast, reliable, and user-friendly authentication while accessing online services. The digital information age allows the distribution and replication of data across the network, and during this process the same data may be shared, circulated, and stored multiple times. This highlights the need for smart technologies that tackle the challenge of data deduplication along with authentication methods for users to access the data.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.3'>Cloud data authentication for deduplication:</ns0:head><ns0:p>This smart, connected world keeps data secure for users through authentication processes <ns0:ref type='bibr' target='#b2'>Alomar et al. (2017)</ns0:ref>. A user needs to identify himself to the system by sending authentication messages: when a message 'A' is sent by the user, the system computes a one-way function F(A) and checks it against the stored data 'B'.</ns0:p><ns0:p>A single authentication password alone cannot ensure the authentication of the user Benarous et al. (2017), <ns0:ref type='bibr' target='#b33'>Mohsin et al. (2017)</ns0:ref>. Accessing sensitive data offline or online requires a fundamental security system for authentication <ns0:ref type='bibr' target='#b33'>Mohsin et al. (2017)</ns0:ref>, <ns0:ref type='bibr' target='#b6'>Balloon (2001)</ns0:ref>, <ns0:ref type='bibr' target='#b37'>Ometov et al. (2018)</ns0:ref> (Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>). Traditional authenticated transactions, such as applying seals and wax seals, are physical security systems <ns0:ref type='bibr' target='#b28'>Konoth et al. (2016)</ns0:ref>. Sender-based information validation alone cannot provide standard authentication <ns0:ref type='bibr' target='#b25'>Ibrokhimov et al. (2019)</ns0:ref>. Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> shows how the technology has evolved from single security techniques to multiple security techniques.</ns0:p><ns0:p>Initially, a single data factor was used for authentication, which was eventually shown to be insecure by the research community <ns0:ref type='bibr' target='#b27'>Kim and Hong (2011)</ns0:ref>, <ns0:ref type='bibr' target='#b15'>Dasgupta et al. (2016)</ns0:ref>. Examples of single-factor security are user IDs and passwords; password authentication is considered the weakest level of security <ns0:ref type='bibr' target='#b10'>Bonneau et al. (2015)</ns0:ref>, <ns0:ref type='bibr' target='#b48'>Wang and Wang (2015)</ns0:ref>. Sharing security information such as a password can easily lead to attacks (2013), rainbow attacks <ns0:ref type='bibr' target='#b22'>Heartfield and Loukas (2015)</ns0:ref>, and other social engineering attacks <ns0:ref type='bibr' target='#b17'>Grassi et al. (2016)</ns0:ref>. When users choose password-based authentication, the complexity of the authentication must be ensured <ns0:ref type='bibr' target='#b19'>Gunson et al. (2011)</ns0:ref>. High protection of accounts cannot be ensured using a single authentication factor <ns0:ref type='bibr' target='#b43'>Sun et al. (2014)</ns0:ref>. The next level is two-factor authentication, which is achieved with an identity or security question <ns0:ref type='bibr' target='#b11'>Bruun et al. (2014)</ns0:ref>, <ns0:ref type='bibr' target='#b21'>Harini and Padmanabhan (2013)</ns0:ref>.</ns0:p><ns0:p>At present, three categories are available for connecting individuals with security credentials <ns0:ref type='bibr' target='#b40'>Scheidt and Domangue (2006)</ns0:ref>:</ns0:p><ns0:p>• Ownership authentication -requires ID cards, smartphones.</ns0:p><ns0:p>• Knowledge authentication -requires passwords, secret keys.</ns0:p><ns0:p>• Biometric authentication -requires biometric data (fingerprints, iris scans, face scans).</ns0:p><ns0:p>Multi-factor authentication provides and ensures higher safety levels by combining two or more credentials <ns0:ref type='bibr' target='#b9'>Bhargav-Spantzel et al. (2007)</ns0:ref>, <ns0:ref type='bibr' target='#b7'>Banyal et al. (2014)</ns0:ref>, <ns0:ref type='bibr' target='#b13'>Council and Committee (2010)</ns0:ref>. Biometrics is widely used in the multi-factor authentication process, based on an individual's biological and behavioral characteristics <ns0:ref type='bibr' target='#b23'>Huang et al. (2014)</ns0:ref>. A higher level of security is explicitly offered by biometric recognition using additional security factors <ns0:ref type='bibr' target='#b46'>Tahir and Tahir (2008)</ns0:ref>, and the evolutionary history of authentication is described in Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>.</ns0:p><ns0:p>High-security infrastructures utilize multiple authentication factors to protect information. For example, ATM cash withdrawal combines an ownership factor (the card), which is accessed using a knowledge factor such as a PIN, to transfer money and manage accounts <ns0:ref type='bibr' target='#b14'>Coventry et al. (2003</ns0:ref><ns0:ref type='bibr' target='#b37'>), Ometov et al. (2018)</ns0:ref>. To make this system even more robust, a card with a PIN is further authenticated using a one-time password when accessing sensitive data <ns0:ref type='bibr' target='#b3'>Aloul et al. (2009)</ns0:ref>. Facial recognition methods are also suggested for user authentication purposes <ns0:ref type='bibr' target='#b37'>Ometov et al. (2018)</ns0:ref>. A recent survey also points out that most business enterprises choose multi-factor security and authentication for the secure processing of transactions. At present, most enterprises use biometric systems for employee verification, bank management, lockers, vehicles, and so on, which makes the multi-factor approach stronger and more interesting and has paved the way for the research community to introduce innovative biometrics.</ns0:p><ns0:p>Most electronic devices use multiple authentication techniques for security-based access and allow only authenticated owners to use the devices <ns0:ref type='bibr' target='#b44'>Symeonidis et al. (2016)</ns0:ref>. One such potential application is the use of biometrics in a vehicle to authenticate its owner. With respect to market-based applications, authentication techniques can be broadly categorized into commercial, governmental, and forensic applications. Commercial purposes may include account access and ATM banking; government needs may include document identification, government ID processing, social security applications, the military, and border security controls; and forensic applications may include investigations, evidence identification, and criminal identification <ns0:ref type='bibr' target='#b30'>Liu et al. (2018)</ns0:ref>, <ns0:ref type='bibr' target='#b35'>Nor et al. (2015)</ns0:ref>, <ns0:ref type='bibr' target='#b18'>Grigoras (2009)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>3/16</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_0'>2021:03:59044:1:2:NEW 5 May 2021)</ns0:ref> Manuscript to be reviewed </ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head n='1.4'>Contribution</ns0:head><ns0:p>In this research work, a multi-factor authentication technique with biometrics is proposed for the verification of users in cloud environments. The bio key of the data owner is first generated for the data stored in the cloud. Contributions on security and data redundancy are addressed in this paper through the following components:</ns0:p><ns0:p>1. A multi-factor authentication technique is used for bio key generation. Finger print of owner is processed for selecting appropriate features using edge detection with hashing function. Bio key generated based on extracted feature set.</ns0:p><ns0:p>2. We newly introduce bio key from data owner's fingerprint. It is chosen for the generation of bio keys which helps data to be shared with owners knowledge.</ns0:p><ns0:p>3. The encrypted data is stored in the cloud so that the data can be accessed by the user only when the bio key shared for the authentication process is satisfied.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.'>The redundancy of stored data is eliminated while keeping data security in mind.</ns0:head><ns0:p>This article is organized into 5 sections. Section 2 presents a literature survey and section 3 presents the proposed methodology and its design. Section 4 describes the evaluation results of the proposed method and finally, section 5 concludes our paper and discusses future work. <ns0:ref type='bibr'>[Kathrineet al, 2017]</ns0:ref> proposed a secure biometric authentication scheme for user identification and mutual authentication using elliptic curve cryptography for key generation and key exchange with optimal communication cost. <ns0:ref type='bibr'>[Farhana et al,m 2017]</ns0:ref> Wong and <ns0:ref type='bibr' target='#b49'>Kim (2012)</ns0:ref> provided the concepts used in biometric-based authentication in cloud computing. They discussed the challenges and limitations of traditional methods, along with attack scenarios, such as misuse of biometric data, to track individuals and leak confidential information related to health, gender, ethnicity, etc. At the same, they also argued that the privacy of cloud-based biometric authentication is not going to resolve the authentication issues technically, rather new legislation to enforce privacy-aware measures on cloud service providers related to the biometric collection, data processing, and template storage is opined, leading secure authentication in cloud environments. <ns0:ref type='bibr' target='#b16'>Dulari and Bhushan (2019)</ns0:ref> proposed a authentication method for user in cloud computing based distributed environment. In this process users biometric information are stored as template in cloud server. Further user verification is done with several participants. Here users feature vector query is compared with template saved in cloud server. In this method, homomorphic based encryption is used for matching the protocol. This matching protocol helps to compare the queried vector with template available as encrypted file . The metrics used for measuring output are Square of Euclidean distance, sensitive preservation of information and processing of authentication with high security .</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>LITERATURE REVIEW</ns0:head><ns0:p>Dulari and Bhushan (2019) proposed a security system called TORDES for cloud storage. This work uses legion containers in cloud storage. The data is stacked in TORDES for authentication with cryptobiometric systems to avoid unauthorized access. <ns0:ref type='bibr' target='#b26'>Indu et al. (2018)</ns0:ref>. surveyed the different biometrics applied on cloud security issues to identify malware. <ns0:ref type='bibr' target='#b51'>Ziyad and Kannammal (2014)</ns0:ref> analyzed cloud security and threat possibilities. They analyzed several security mechanisms and suggested the robust ones for both academic and industry environments. <ns0:ref type='bibr' target='#b51'>Ziyad and Kannammal (2014)</ns0:ref> proposed a multifactor biometric authentication system for cloud computing. The biometric features considered in this work were palm vein and fingerprints, where palm vein biometric data was stored in multicomponent smart cards and fingerprint data in the central database of a cloud server. <ns0:ref type='bibr' target='#b50'>Zahrouni et al. (2017)</ns0:ref> developed an application that serves as an extra layer of security on top of a pre-existing banking application by using a Biometric lock application, which allows a user to add a layer of security to their credit and debit cards, at the expense of minimal time overhead. <ns0:ref type='bibr' target='#b32'>Malathi and Raj.R (2016)</ns0:ref> proposed a biometrics-based user identification scheme using the features of a palm print, fingerprint, and iris for providing accurate personal identifications. <ns0:ref type='bibr' target='#b47'>Wang et al. (2018)</ns0:ref> carried over <ns0:ref type='bibr' target='#b5'>Amin et al. (2018)</ns0:ref> protocol as a case study to provide ideas for designing secure protocols for cloud environments in order to overcome existing security weaknesses in the protocol. They further improved the protocol using BAN logic and also used heuristic analysis to prove the security of the protocol. <ns0:ref type='bibr' target='#b36'>Obergrusberger et al. (2012)</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>PROPOSED METHODOLOGY: BIO KEY WITH GABOR -XOR</ns0:head><ns0:p>In distributed cloud computing, biometric-based authentication plays a vital role in current research.</ns0:p><ns0:p>Distributed denial of service is the main threat in the cloud nowadays: several users trying to access a single cloud server leads to an increase in response time and complicates security. There are several methods to solve these issues in the cloud, even though there is a lack of confidentiality, reliability, and consistency of the data. To resolve that, we propose an approach called secure biometric authentication on distributed storage in the cloud. Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref> shows the overview of the proposed architecture. The owner's biometric information is recorded for authentication. Once the owner registration is complete, the data is encrypted using a distributed approach and is stored in the cloud. While the user tries to access the content, a cloud server authenticates the user validity and contacts the data owner for the bio key to access the data. In the proposed architecture, the data is stored securely and with the owner's reference, users can access the content avoiding duplicate copies of the same content and enhancing security through the bio key. The flow of the proposed design can be described as:</ns0:p><ns0:p>• The owner of the data can upload content onto the cloud that is encrypted through a distributed model.</ns0:p><ns0:p>• The duplication of the content can be avoided by the cloud server. The features of the fingerprint of the owner are extracted and stored in a database so that the original content can be referred by the user with the owner's permission.</ns0:p><ns0:p>• Finger print of the user is converted in to bio key using edge detection and hash functions.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Biometric based authentication with de-duplication</ns0:head><ns0:p>In this work, a fingerprint-based authentication along with a hash-based deduplication approach is used.</ns0:p><ns0:p>Among the various biometric techniques, fingerprints have been widely accepted and implemented for secure authentication. The input fingerprint image of the owner is normalized before processing and then the features of the fingerprint image are extracted using an optimized self-learning method and are stored in a database for authentication. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Step 1: read the input image with the pixels as I(x,y);</ns0:p><ns0:p>Step 2: The input image can be preprocessed (e.g., binarization, thinning) and enhanced using the following equations. During binarization, the grey level image is converted into a black and white image, where black represents the ridges and white represents the valleys. Then the Binarized image is thinned using three conditions. Thinning is the process of transforming the ridge pixels into one pixel:;</ns0:p><ns0:formula xml:id='formula_0'>I(x, y) = 1 |w| |h| ∑ i=0 ∑ j=0 (x, y)<ns0:label>(1)</ns0:label></ns0:formula><ns0:formula xml:id='formula_1'>g(x, y; θ , f ) = exp − 1 2 xθ σ 2 cos(2π f x 2 ))<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>Where, x θ = xcos θ + ysin θ and y θ = −xsin θ + ycos θ , θ = Orientation, f=frequency and σ x , σ y =standard deviation of gausian envelop;</ns0:p><ns0:p>Step 3: The features like ridge end and bifurcation are extracted using the following equation;</ns0:p><ns0:p>7</ns0:p><ns0:formula xml:id='formula_2'>∑ i=0 N i = thenridgeend (3) 7 ∑ i=0 N i > 2thenbi f urcation<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>Where N 0 , N 1 ..N 7 are the eight neighbors of the pixel (x, y) of Image I. ;</ns0:p><ns0:p>Step 4: the binary pattern code of the processed image is generated as follows;</ns0:p><ns0:formula xml:id='formula_3'>BC(g(x c , y c )) = 7 ∑ i=0 f (gx i ) − g(x c )x2 n<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>Where, f (m) = 1 for m >= 0 and f (m) =0 for m < 0;</ns0:p><ns0:p>Step 5: The bio key for each binary pattern is generated with QR decomposition formula [2];</ns0:p><ns0:p>Step 6: De-duplication using a hash-based technique of the key as;</ns0:p><ns0:formula xml:id='formula_4'>KH i = hash(KH i |G k ), KH j<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>Where G k -Gaussian random number.;</ns0:p><ns0:p>Step 7: Store the bio key of each owner into the cloud data base and content storage using algorithm 2. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>server. The two components of the data packets are considered as X , Y and Z is the random number.</ns0:p><ns0:p>To encrypt the data packet, an data packet an XOR operation is performed and the data is encrypted that is sent to the cloud server. This approach is shown in fig 4. Hence, our proposed biometric security-based-distributed storage encryption with a de-duplication approach can obtain secure data transfer between the user and the cloud without duplication. This proposed approach can avoid duplication of content by sharing the key to the user for access. This will leads to reduce the storage of the cloud. One method can prove the secured authentication using biometric and de-duplication.</ns0:p><ns0:p>Algorithm 2: P Result: Encrypted data packet Input: Data packet with name label NL and pre-defined label PL;</ns0:p><ns0:p>Step 1: The name label of the data packet is { n 1 ,n 2 , , n l } and pre-defined label are declared as { p 1 , p 2 ,. . . , p l }; Step 2; while NL do while each packet of the input data do if lable <> PL then Initialized and J,γ, δ = 0;</ns0:p><ns0:p>Generate key k at random;</ns0:p><ns0:formula xml:id='formula_5'>if (I&&Z! = 0) then J=Z-I; γ=I XOR k; δ =J XOR k end else</ns0:formula><ns0:p>Encrypt the data packet using XOR operation; end end end Step 3: Output the encrypted data packet. That will store into the cloud server;</ns0:p><ns0:p>Step 4: De-duplication: algorithm 1 and algorithm 2 are combined.; Note: The step 4 combines two algorithms and computes when user request for data, cloud server authenticate it and search for owner of the data. Finally it shared the bio key to the user to access the content. ; seudo code : 2 (Distribute storage -security based encryption)</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>EXPERIMENTAL RESULTS AND DISCUSSIONS</ns0:head><ns0:p>This proposed work experimented in AWS cloud services. Different type of services of AWS is initiated to each owner and user for a secure authentication process. The proposed work is implemented in three stages. The first stage is to generate the bio key of the owner of the data before upload. In the second stage, the hash function of this key is used for the de-duplication process. The third stage is the encryption of the data that are uploaded to the cloud server. Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div>
<ns0:div><ns0:head n='5'>CONCLUSION</ns0:head><ns0:p>Cloud storage has abundance data at every second for processing and storing in server by CSP. Data deduplication in the cloud brings concern on security of stored data. There is lot of possibilities for unauthorized access. In our proposed work, security using bio key generation makes users access data securely. Here, the main task of accomplishment is avoiding redundancy in a cloud server and biometric authentication using Gabor filters with XOR operation. This is a very complex fact due to biometric scanning and matching for authentication. The evolution of high technologies over time assures the security and integrity of the system. Also, data de-duplication is reducing multiple storages of the same data over the cloud network. In this article, we have initiated security with the de-duplication of the data in the cloud servers. Security of the data is computed using the user's biometric parameters and a bio Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>Computer Science </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Security development stages</ns0:figDesc><ns0:graphic coords='4,163.65,63.78,369.75,171.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Evolution of various authentication levels</ns0:figDesc><ns0:graphic coords='5,141.73,63.78,413.57,230.48' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>proposed a cloud-based mobile biometric authentication framework (BAM Cloud) using dynamic signatures and user authentication. The data is captured with a handheld mobile device and, subsequently, storage, preprocessing, and training are done in a distributed manner on the cloud. The proposed method was implemented using MapReduce on the Hadoop platform, and a Levenberg-Marquardt backpropagation neural network model was used for training, achieving a speedup of 8.5x and an accuracy of 96.23%. Al-Assam et al. (2019) surveyed various biometric-based authentication methods in cloud environments. Traditional password-based authentication lacks security when it comes to cloud data; to this end, multifactor authentication is suggested, which allows two or more authentication parameters alongside password-based security. This review focuses on the various available biometric authentication models and their advantages and disadvantages.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>proposed biometric observer techniques and provided an idea of fundamental trust rules on the web as a prototype for implementation. The enrollment of users is supervised by an observer, and observers in turn enhance the authentication of the biometric template, providing a more trustworthy model for biometric identities. Strong trust is built between observers and others due to the close relation between the observers and the individuals observed in the system database. Chandramohan et al. (2017) proposed a privacy-preserving model to prevent digital data loss in the cloud, helping cloud requesters/users trust that their proprietary information and data are stored safely in the cloud. Table 1 below summarizes recent deduplication research problems and the methodologies used to overcome them; it shows that biometric-based deduplication and authentication of data has seen little recent use, whereas our proposed work offers an advanced deduplication process with high security. Shabbir et al. (2021) noted the benefits of Mobile Cloud Computing (MCC) in medical healthcare, which faces substantial challenges regarding the security and privacy of customer data; they implement layered security modeling using the Modular Encryption Standard (MES) to increase the security of MCC, with performance better than other encryption techniques. Rehman et al. (2021) addressed in-vehicle communication problems involving the controller area network and electronic control units. Security while communicating inside the vehicle is tackled using a novel approach, CANintelliIDS, which detects vehicle intrusion attacks</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Fig 3 shows the biometric authentication process. A Gabor filter-based technique is used to enhance image quality by removing noise. Subsequently, the enhanced image is</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Overview of the proposed architecture</ns0:figDesc><ns0:graphic coords='8,162.41,186.61,372.22,396.86' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>3.2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Security based -distributed storage encryption. This proposed algorithm is taken from <ns0:ref type='bibr' target='#b29'>Kumar and Begum (2011)</ns0:ref>, where the input data packet is divided into two substrings. The substrings are further processed and then merged to store in the cloud server.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>In figure 5, (a) represents the original input fingerprint image of the owner, while (b) and (c) represent the equivalent orientation and frequency images of the input image. Figure 6 illustrates the binarized form of the input image, used to identify the ridges and valleys. The binarized image is then thinned to a single-pixel width to identify features such as ridge endings and bifurcations, as shown in Fig. 6(c). These variations of the fingerprint patterns are used to build the bio key with a QR code, as shown in Fig. 6(d). The comparative analysis of time is shown in Fig. 7. Data redundancy keeps memory occupied and increases the time needed to upload and download data: during upload, redundancy leaves no space for new data and resource allocation takes more time; during download, redundancy confuses the system about which data to access and memory space is scarce, so downloading data from the cloud server takes longer. Figure 8 shows the time taken by the proposed model in data encryption and decryption. To evaluate the performance of this fingerprint authentication de-duplication, the proposed</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Security based distributed storage encryption</ns0:figDesc><ns0:graphic coords='11,183.08,63.78,330.88,226.77' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. (a) Original fingerprint image (b) ridge orientation (c) ridge frequency</ns0:figDesc><ns0:graphic coords='11,291.86,326.33,109.19,114.15' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 6 .Figure 7 .</ns0:head><ns0:label>67</ns0:label><ns0:figDesc>Figure 6. (a) binarization (b) thinned image (c) feature (Ridge & bifurcation) extraction (d) QR code of the fingerprint input image</ns0:figDesc><ns0:graphic coords='12,162.41,233.74,372.23,228.34' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. proposed method evaluation in terms of encryption and decryption time</ns0:figDesc><ns0:graphic coords='13,162.41,120.87,372.25,187.62' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 10 .Figure 11 .</ns0:head><ns0:label>1011</ns0:label><ns0:figDesc>Figure 10. Performance comparison in terms of Data uploads time</ns0:figDesc><ns0:graphic coords='14,162.41,109.46,372.23,204.25' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Literature Survey of Existing Techniques</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Authors</ns0:cell><ns0:cell>Problem</ns0:cell><ns0:cell>Methodology</ns0:cell><ns0:cell>Advantages</ns0:cell><ns0:cell>Integrity</ns0:cell><ns0:cell>Confidentiality</ns0:cell></ns0:row><ns0:row><ns0:cell>Youngjoo Shin et al [62] (2020)</ns0:cell><ns0:cell>Data deduplication and security in MEC</ns0:cell><ns0:cell>Serverless efficient encrypted deduplication (SEED) + lazy encryption</ns0:cell><ns0:cell>It takes 1000 ms for processing 128 MB</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>Yes</ns0:cell></ns0:row><ns0:row><ns0:cell>Shangping Wang et al [63] (2020)</ns0:cell><ns0:cell>Double payment avoidance and data deduplication</ns0:cell><ns0:cell>Blockchain technology</ns0:cell><ns0:cell>Avoids a third party</ns0:cell><ns0:cell>Yes</ns0:cell><ns0:cell>Yes</ns0:cell></ns0:row><ns0:row><ns0:cell>Jiaojiao Wu et al [64] (2020)</ns0:cell><ns0:cell>File deduplication and integrity</ns0:cell><ns0:cell>Confidentiality-preserving public auditing deduplication (CPDA)</ns0:cell><ns0:cell>Computation of the authentication tag is done by the CSP</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>Yes</ns0:cell></ns0:row><ns0:row><ns0:cell>Wenting Shen et al [65] (2020)</ns0:cell><ns0:cell>Deduplication and brute-force dictionary attacks</ns0:cell><ns0:cell>Lightweight cloud storage auditing for deduplication</ns0:cell><ns0:cell>Lightweight computation on the user side</ns0:cell><ns0:cell>Yes</ns0:cell><ns0:cell>No</ns0:cell></ns0:row><ns0:row><ns0:cell>Xueyan Liu et al [66] (2020)</ns0:cell><ns0:cell>Data deduplication in files</ns0:cell><ns0:cell>Verifiable ABKS over encrypted cloud data is proposed</ns0:cell><ns0:cell>Realizes indistinguishability of keywords, unforgeability of signatures, and confidentiality of ciphertexts</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>Yes</ns0:cell></ns0:row><ns0:row><ns0:cell>Jianli Bai et al [67] (2020)</ns0:cell><ns0:cell>Cloud storage auditing and deduplication literature fails to support modifications of ownership</ns0:cell><ns0:cell>Identity-based broadcast encryption technology and a secure re-encryption algorithm</ns0:cell><ns0:cell>Ownership modification is supported and integrity is maintained</ns0:cell><ns0:cell>Yes</ns0:cell><ns0:cell>No</ns0:cell></ns0:row><ns0:row><ns0:cell>Shynu P. G. et al [68] (2020)</ns0:cell><ns0:cell>Data deduplication</ns0:cell><ns0:cell>Modified Elliptic Curve Cryptography (MECC) algorithms</ns0:cell><ns0:cell>Recognizes data redundancy at the block level</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>Yes</ns0:cell></ns0:row></ns0:table><ns0:note>across the controller area network; as a result, it gains a 10.79% improvement. Naeem et al. (2021) discuss energy-efficient WSNs for a longer network lifetime. In that article, a hybrid technique called the distance-aware residual-energy-efficient stable election protocol is used together with an energy-efficient election protocol to find optimal transmission routes; as an outcome, energy efficiency is increased by 10%.</ns0:note></ns0:figure>
</ns0:body>
" | "SECURE BIOMETRIC AUTHENTICATION WITH DE-DUPLICATION ON DISTRIBUTED CLOUD STORAGE
Esteemed reviewer,
The article has been formatted as per the given norms. Thank you for your valuable comments, which have made the article more professional and will help me produce standard articles in the future.
Comment 1: Corresponding Authorship
The corresponding author entered online does not match the corresponding author listed on your manuscript's Author Cover Page. We only use the information in the metadata provided in the submission system. Please make them match by either A) Selecting the correct corresponding author online and/or B) Uploading your manuscript with an updated Author Cover Page.
Response: Esteemed reviewer, as per the given suggestion, the corresponding author has been checked against the list and updated in the manuscript. The final updated file has been uploaded to the system.
Comment 2: Figure/Table Citation
The submission appears to be missing a citation for Figures 4 and 5 in the text. Please can you add a citation for Figures 4 and 5 in your manuscript and re-upload the document.
Note: Citations must be organized, and cited for the first time, in ascending numerical order, meaning Figure 1 must always be cited first, Figure 2 must always be cited second, and so on. The same applies to Tables.
Response: Esteemed reviewer, the missing citations have been noted and cited in the article. After the changes, the file has been re-uploaded and compiled.
Comment 3: References
In the reference section, please provide the full author name lists for any references with 'et al.' including, but not limited to, these references:
• Ibrokhimov, S., Hui, K. L., Al-Absi, A. A., Sain, M., et al. (2019).
• Malathi, R. et al. (2016).
If you have used EndNote, you can change the references using the steps provided on our author instructions.
Response: Esteemed reviewer, as per the suggestion, the references have been modified with full author lists.
Comment 4: Tracked Changes Manuscript File
We note you've manually tracked your changes. Please could you upload the manuscript with computer-generated tracked changes to the Revision Response Files section. The reviewers and Academic Editor will want to see all of the changes documented and will normally request it if some changes appear to be missing. Please use latexdiff to show changes between latex documents https://www.overleaf.com/learn/latex/Articles/Using_Latexdiff_For_Marking_Changes_To_Tex_Documents.
Response: Esteemed reviewer, the changes have been made using latexdiff and uploaded.
Comment 5: Re-used Text
We noticed that your manuscript contains a high level of apparently re-used text shared with:
• https://link.springer.com/chapter/10.1007%2F978-3-319-04519-1_6 (lines 171-177)
• https://link.springer.com/chapter/10.1007%2F978-3-319-03874-2_13 (lines 196-200)
We are unable to consider your submission unless you address this issue and submit a new version of the manuscript.
Response: Esteemed reviewer, the flagged text has been rewritten, checked, and modified as per the given information.
Comment 6: LaTeX Submission: .bib and .tex Files
We see that you've provided multiple versions of your LaTeX source files. Please supply the LaTeX source files used to generate Peerj_paper_clean.pdf and remove all other file versions from your submission.
Please provide the BIB and TEX files for your LaTeX manuscript file here. Please upload using the Primary Files, LaTeX Source File category.
Response: Esteemed reviewer, as per the given suggestion, the multiple versions have been deleted. We generated one LaTeX source file, but the conclusion statement does not align correctly.
Comment 7: Tables
• We ask for tables in Word (composed in Word, not images pasted in Word docs), but if you have them composed in the LaTeX source file we can use that instead at the time of production.
• Please leave a note in the Confidential Information for PeerJ Staff if you choose to provide Table 1 in the LaTex source file so that staff will know that's where it can be found. (example note to staff: Table 1 is in the source file manuscript and can be found in the main.tex file).
Note: Do not include this in your rebuttal letter to the Editor and reviewers.
Response: Esteemed reviewer, this article does not contain any tables.
Comment 8: Figures
• The figures must be uploaded in the Primary Files section, or be contained in the .tex file, not included in the zip file for the LaTeX source files (only the LaTeX source files (.tex, .bib, .cls, etc.) can be contained in a zip file folder). Please remove the figure files from the zip file, and upload them in the Primary Files section.
• Please upload ONE copy of each figure in either EPS, PNG, or PDF (vector PDFs only), measuring minimum 900 pixels and maximum 3000 pixels on all sides and eliminating excess white space around the images, as primary files here.
• Please use numbers to name your files, example: Fig1.eps, Fig2.png.
Response: Esteemed reviewer, the figures have been uploaded as per the guidelines. All figures have been formatted to the required pixel dimensions.
Comment 9: Figure Style
Where 3-D doesn't add substantive value to a figure, we ask that you remove it. Please format Figures 7, 10, and 11 as 2-D images. Please provide replacement figures measuring minimum 900 pixels and maximum 3000 pixels on all sides, saved as PNG, EPS or PDF (vector images) file format without excess white space around the images.
Response: Esteemed reviewer, as per the given suggestion, the figures have been converted into 2-D images.
" | Here is a paper. Please give your review comments after reading it. |
122 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Question classification is one of the essential tasks for automatic question answering implementation in natural language processing (NLP). Recently, several text-mining issues such as text classification, document categorization, web mining, sentiment analysis, and spam filtering have been successfully addressed by deep learning approaches. In this study, we illustrated and investigated deep learning approaches for question classification tasks in the highly inflected Turkish language, training and testing the deep learning architectures on a Turkish question dataset. We used three main deep learning approaches, the Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), and Convolutional Neural Networks (CNN), and we also applied two deep learning combinations, the CNN-GRU and CNN-LSTM architectures. Furthermore, we applied the Word2vec technique with both the skip-gram and CBOW methods for word embedding, with various vector sizes, on a large corpus composed of user questions. By comparative analysis, we evaluated the deep learning architectures based on test accuracy and 10-fold cross-validation accuracy. The experimental results illustrate that the choice of Word2vec technique has a considerable impact on the accuracy rate across the different deep learning approaches. We attained an accuracy of 93.7% by using these techniques on the question dataset.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>With the rapid development of computer technology and the internet, a huge amount of textual data in digital form is generated every day <ns0:ref type='bibr' target='#b0'>(Wang and Qu, 2017)</ns0:ref>, and retrieving the desired content from this large amount of information rapidly and accurately has become a common issue. Textual data is high-dimensional and contains many irrelevant and unwanted features that are difficult to manage and maintain <ns0:ref type='bibr' target='#b1'>(Sharif et al., 2017)</ns0:ref>. In NLP, the role of a question classification system is to predict the form of the precise response expected for a given query; in most cases, the user simply desires the right answer to the question that was asked. One of the most appealing areas for both companies and organizations is the question answering (QA) system, which provides more appropriate access to information than a conventional search engine. Question answering is one of the main technologies in a QA system; it automatically seeks correct answers in natural language to arbitrary questions. QA systems have been successfully applied to information retrieval in NLP and have achieved significant results.</ns0:p><ns0:p>Furthermore, the task of a question classification system is to identify the category of the questions asked in NLP. A QA system usually consists of four steps: semantic understanding, question classification, text retrieval, and answer extraction (Le, Phan & Nguyen, 2015). The most useful step is question classification, which provides useful information for the subsequent execution and involves the answer category, the intent of the question, and so on. The correlation between the questions and the categories can be illustrated through a corresponding mapping: F : X → {C_1, C_2, …, C_n} (1), where X represents the Turkish questions, {C_1, C_2, …, C_n} refers to the set of categories, and F defines the mapping by which a question X is classified into a specific category C_i through some rules. In this significant part of the QA system, there are two key dialog implementations for determining the questions for credible classification of the answers <ns0:ref type='bibr' target='#b3'>(Mohd & Hashmy, 2018)</ns0:ref>. The first identifies questions according to their nature. For instance, a question about comparison is 'What is the difference between music and noisiness?' After categorization, the QA system can perform the subsequent execution to improve the accuracy of the answer according to the question intent. The other dialog implementation identifies the questions based on the user requirement. For example, the question 'Who is the chief of army staff of Malaysia?' is a question about a character, i.e., of type human (person). Based on this categorization, the QA system should use a search technique specific to the human (person) type. Consequently, more efficient question identification can enhance the performance of the QA system.</ns0:p><ns0:p>Question classification is a problem closely related to document categorization <ns0:ref type='bibr' target='#b16'>(Ehsan & Mojgan, 2014)</ns0:ref>. In recent times, document categorization has received a large amount of scientific contribution.
Question classification in the Turkish language, however, is still a novel academic issue. The most important difference between question classification and document categorization is that the document dimension is much longer than the question dimension; therefore, each character and word in question classification can be meaningful. Various approaches, including machine learning, rule-based, and hybrid approaches (Razzaghnoori, Sajedi & Reghuraj, 2018), have been used in question classification tasks. In our research, deep learning approaches such as LSTM, GRU, CNN, and their combinations are utilized to classify user questions based on the Word2vec techniques, both Continuous Bag of Words (CBOW) and skip-gram. Most articles related to question classification concentrate on the English language and have not studied an agglutinative language, in which words are formed by attaching suffixes (morphemes) to the root of the word. The Turkish language has some distinctive features, which make it problematic and have proved challenging for NLP. The majority of the challenges derive from the complex Turkish morphology and how it deals with syntax. For example, the Turkish language does not have grammatical gender and noun classes (Le, Phan & Nguyen, 2015). Another issue with the Turkish language is the lack of tools required to extract text information. Moreover, we could not find any Turkish question dataset at the start of this study, so we decided to convert an English question dataset into a Turkish question dataset to see how well the proposed approaches performed.</ns0:p><ns0:p>On the other hand, <ns0:ref type='bibr' target='#b5'>Mikolov et al. (2013)</ns0:ref> introduced a new method of feature representation, applied in the feature extraction step, known as distributed representations of words, or Word2vec <ns0:ref type='bibr' target='#b5'>(Mikolov et al., 2013)</ns0:ref>. The concept behind this word representation approach is that terms with a semantic or syntactic connection are used with higher probability in similar contexts <ns0:ref type='bibr' target='#b6'>(Liu et al., 2017)</ns0:ref>. Therefore, if word1 and word2 share similar contexts, their vectors should be close to each other.</ns0:p><ns0:p>The learning algorithms of Word2vec represent the words in a vector space and achieve superior results in NLP mechanisms for identifying relevant words. With distributed representations of words, the neural network is remarkably able to compute vector codes for several linguistic regularities and patterns <ns0:ref type='bibr' target='#b5'>(Mikolov et al., 2013)</ns0:ref>. In many cases, these regularities can be illustrated as linear relationships. For instance, the outcome of the vector computation vec(''king'') - vec(''man'') + vec(''woman'') is closer to vec(''queen'') than to any other word vector <ns0:ref type='bibr' target='#b5'>(Mikolov et al., 2013)</ns0:ref>. The main purpose of our research is a comparative analysis between the deep learning architectures and the Word2vec methods.
Therefore, the major contributions of this study are presented as follows: Most articles on question classification focus on the English language and have not addressed an agglutinative language, in which words are constructed by assigning suffixes (morphemes) to the root of the word. In NLP, the Turkish language poses notable preprocessing problems. As a non-Indo-European language, Turkish has several unique features that make NLP challenging.</ns0:p></ns0:div>
<ns0:div><ns0:p>Another contribution is the analysis of the impact of employing various Word2vec pre-trained word embeddings on different deep learning approaches. In our study, the first step is to use the Word2vec methods, Continuous Bag of Words and skip-gram, to cluster words in the corpus and convert all words into vectors in the vector space. For feature extraction, the Word2vec method is applied to extract the word vectors of the query words. After that, the deep learning approaches CNN, GRU, and LSTM and their combinations, CNN-LSTM and CNN-GRU, are applied for question classification. Using these approaches over the Turkish question dataset, the average accuracy of CNN is 92.46%, LSTM achieved 90.89%, GRU obtained 91%, and CNN-LSTM and CNN-GRU reached 91.7% and 92.36%, respectively. Moreover, since no labeled Turkish question dataset existed, in this study we also contribute a new Turkish question dataset translated from the UIUC English question dataset.</ns0:p></ns0:div>
<ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>Question classification is an essential task within text classification. In the early 1950s, IBM provided an environment for leading work in text identification. Later, in the 1960s, Maron and Kuhns introduced the keyword technique to automatically categorize chosen texts.</ns0:p><ns0:p>There are three individual stages in a classic question answering system <ns0:ref type='bibr' target='#b7'>(Ehrentraut et al., 2018)</ns0:ref>: 1. Question processing: this is the initial stage in question answering systems, where questions are asked by users <ns0:ref type='bibr' target='#b8'>(Madabushi, Lee & Barnden, 2018)</ns0:ref>. The aim of this stage is to understand the question and apply logical calculations for the representation and categorization of the questions. 2. Extraction and processing of documents: a set of relevant documents is selected in this stage, and a set of paragraphs is captured depending on the focus of the issue. 3. Answer processing: the purpose of this stage is to compose a response based on the relevant fragments of the documents. The preprocessing of the data requires pairing an answer with the similar contexts of the question asked. The general architecture of a natural language question answering system is shown in Figure 1.</ns0:p><ns0:p>Several different approaches have been successfully used for the question classification issue. Most of these approaches fall into four groups: rule-based approaches, machine learning approaches, deep learning approaches, and hybrid approaches (Razzaghnoori, Sajedi & Reghuraj, 2018; <ns0:ref type='bibr' target='#b11'>Hao, Xie & Xu, 2015)</ns0:ref>. Galitsky worked on rule-based approaches to classify pairs of questions with manually written rules according to the provided contents <ns0:ref type='bibr' target='#b14'>(Galitsky, 2017)</ns0:ref>. However, determining the particular rules consumes massive time and struggles to cover the variety of questions. In contrast, unlike rule-based techniques, deep learning and machine learning approaches are capable of</ns0:p></ns0:div>
<ns0:div><ns0:p>automatically constructing a precise classification implementation utilizing different features of the questions <ns0:ref type='bibr' target='#b16'>(Ehsan & Mojgan, 2014)</ns0:ref>.</ns0:p><ns0:p>The superiority of machine learning techniques over manual techniques is discussed by <ns0:ref type='bibr' target='#b15'>(Sarrouti & Alaoui, 2017)</ns0:ref>. They describe that machine learning techniques give a reasonably easy way to classify questions compared to manual techniques; with this kind of implementation, the system can learn easily from the data and can be adapted to a new system. On the other hand, only a limited number of studies have used hybrid approaches for question classification; we discuss some of this research here. The authors of <ns0:ref type='bibr' target='#b16'>(Ehsan & Mojgan, 2014)</ns0:ref> introduced a hybrid approach for a Persian closed-domain question classification system, in which the researchers built a dataset consisting of 9500 questions with the help of several colleagues. They achieved a reasonable performance, with an accuracy of 80.5%, over a large number of question classes.</ns0:p><ns0:p>Moreover, to resolve question classification issues, several different machine learning approaches such as Neural Networks, Random Forest, SVM, Decision Trees, Naive Bayes, and KNN have been applied. However, for the question classification task in NLP, the SVM ('Support Vector Machine') is considered the key machine learning approach <ns0:ref type='bibr' target='#b18'>(Sherkat & Farhoodi, 2014)</ns0:ref>. To attain their objectives, Sherkat and Farhoodi <ns0:ref type='bibr' target='#b18'>(Sherkat & Farhoodi, 2014)</ns0:ref> used the SVM and a dimension reduction method, which uses a few linguistic features with a bag-of-n-grams feature vector. Similarly, <ns0:ref type='bibr' target='#b19'>(Huang et al., 2017)</ns0:ref> applied a tree kernel with an SVM to identify question answers and achieved an accuracy of 87.4%, although they did not use semantic and syntactic features in their experiments. In the same way, <ns0:ref type='bibr' target='#b20'>(Chitra & Kalpana, 2013)</ns0:ref> implemented an integrated kernel function known as the Hierarchical Directed Acyclic Graph (HDAG), which directly processes several levels of chunks and their relations on a structured natural language database. Also, on a question answering corpus, Xu et al. introduced a hierarchical technique based on the SNoW learning algorithm for the classification of questions <ns0:ref type='bibr' target='#b21'>(Xu et al., 2019)</ns0:ref>. In their research, they used a two-phase classification process: in the first phase, they selected the four most probable coarse-grained question classes; in the second phase, the question is classified into one of the child classes of these four coarse-grained classes, with an accuracy of 84.2%. Furthermore, <ns0:ref type='bibr' target='#b22'>(Merchant & Pande, 2018)</ns0:ref> used roughly the same features as introduced by the authors of <ns0:ref type='bibr' target='#b3'>(Mohd & Hashmy, 2018)</ns0:ref>.
As an enhancement, they applied a dimensionality reduction method close to Principal Component Analysis (PCA), known as Latent Semantic Analysis (LSA), to reduce the feature space for accurate classification. In their study, Back-Propagation Neural Networks (BPNN) and SVM were used, and they showed that BPNN achieves superior results to SVM. On the other hand, to cluster words in the vocabulary, (Razzaghnoori, Sajedi & Reghuraj, 2018) applied several clustering algorithms to convert every question into a vector space. After that, a Multi-Layered Perceptron (MLP) and an SVM were used for the classification of the questions. Evaluating the performance of these approaches on three different datasets, they obtained a reasonable accuracy of 73% using SVM and an accuracy of 72.52% using MLP. Besides, they prepared the UTQD-2016 dataset ('University of Tehran Question Dataset 2016'). This corpus contains many different types of questions taken from a jeopardy game shown on official Iranian TV. In their third approach, they used the Word2vec method to convert each question into a matrix where every row presents the Word2vec representation of a word. After that, the authors used an LSTM (Razzaghnoori, Sajedi & Reghuraj, 2018) approach to classify the questions and reported 81.77% accuracy on the three question databases. A detailed summary of the related research in question classification is illustrated in Table1.doc.</ns0:p></ns0:div>
<ns0:div><ns0:head>DEEP LEARNING APPROACHES</ns0:head><ns0:p>Deep learning approaches were derived from artificial neural networks; nowadays they form a principal area of machine learning and have been successfully applied to achieve excellent performance in various research areas. In this section, we explain the deep learning models evaluated for solving question classification issues.</ns0:p></ns0:div>
<ns0:div><ns0:head>Convolution Neural Network (CNN)</ns0:head></ns0:div>
<ns0:div><ns0:head>Long Short Term Memory (LSTM)</ns0:head><ns0:p>A Long Short Term Memory (LSTM) unit is a type of recurrent neural network. It was initially introduced by the German researchers Sepp Hochreiter and Juergen Schmidhuber in 1997 (Hochreiter & Schmidhuber; 1997). The LSTM approach is a variant of the traditional RNN that can learn long sequential data and maintain the propagation of error through all layers (Samarawickrama & Fernando, 2018). The LSTM contains special internal memory blocks and a gated mechanism that helps to solve the two popular drawbacks of the conventional RNN, the vanishing and exploding gradient problems. In the LSTM, the memory blocks consist of memory cells with self-connections and particular multiplicative units to handle the flow of information. An LSTM unit contains an input gate i_t, an output gate o_t, and a forget gate f_t at the time step t, respectively; c_t denotes the memory cell content, and ã_t is the candidate state calculated in equation (5). x_t, h_t, and h_{t-1} are the input, the final output of the LSTM, and the hidden state of the previous time step. The update of the cell state vector is calculated as in equation (6). To obtain the hidden state (h_t) of an LSTM unit, which is passed to the next sample in a sequence, the output of the output gate o_t in equation (3) is multiplied by the cell state c_t squashed through the tanh function in equation (7), where W_xo and U_ho are weight matrices, b_o is a bias term, and Sigm(x) = 1 / (1 + e^{-x}).</ns0:p><ns0:formula xml:id='formula_1'>i_t = Sigm(W_xi x_t + U_hi h_{t-1} + b_i) (2)
o_t = Sigm(W_xo x_t + U_ho h_{t-1} + b_o) (3)
f_t = Sigm(W_xf x_t + U_hf h_{t-1} + b_f) (4)
ã_t = tanh(W_xã x_t + U_hã h_{t-1} + b_ã) (5)
c_t = f_t * c_{t-1} + i_t * ã_t (6)
h_t = o_t * tanh(c_t) (7)</ns0:formula><ns0:p>The weights and biases computed during the training process are W_i, W_o, W_f, W_ã ∈ R^{m×p}, U_i, U_o, U_f, U_ã ∈ R^{m×m}, and b_i, b_o, b_f, b_ã ∈ R^{m×1}. * is the element-wise multiplication of two vectors. Here 'Sigm' is an element-wise logistic sigmoid activation function and 'tanh' is an element-wise hyperbolic tangent activation function.</ns0:p></ns0:div>
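As an illustration, here is a minimal NumPy sketch of one LSTM step implementing equations (2)-(7); the dictionaries W, U, and b holding the per-gate weights and biases are assumptions of this sketch, not notation from the paper.

import numpy as np

def sigm(x):
    # element-wise logistic sigmoid, Sigm(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    # W[g]: (m, p), U[g]: (m, m), b[g]: (m,) for each gate g
    i_t = sigm(W['i'] @ x_t + U['i'] @ h_prev + b['i'])       # eq. (2), input gate
    o_t = sigm(W['o'] @ x_t + U['o'] @ h_prev + b['o'])       # eq. (3), output gate
    f_t = sigm(W['f'] @ x_t + U['f'] @ h_prev + b['f'])       # eq. (4), forget gate
    a_t = np.tanh(W['a'] @ x_t + U['a'] @ h_prev + b['a'])    # eq. (5), candidate state
    c_t = f_t * c_prev + i_t * a_t                            # eq. (6), cell state update
    h_t = o_t * np.tanh(c_t)                                  # eq. (7), hidden state
    return h_t, c_t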
<ns0:div><ns0:head>Gated Recurrent Unit (GRU)</ns0:head><ns0:p>The GRU is an advanced and simplified variant of the LSTM that was initially proposed by <ns0:ref type='bibr' target='#b27'>(Cho et al., 2014)</ns0:ref> for statistical machine translation. The GRU is inspired by the LSTM and controls the information flow inside the unit through an update gate z_t and a reset gate r_t, without a separate memory cell.</ns0:p>
<ns0:p>The GRU stores and filters information through its internal memory capability and integrates the input gate and forget gate into a single update gate, with the previous activation h_{t-1} and the candidate state represented by h̃_t. The three major components of the GRU are the update gate, the reset gate, and the candidate state, and its equations are as follows:</ns0:p><ns0:formula xml:id='formula_3'>z_t = φ(V_xz x_t + U_hz h_{t-1} + B_z) (8)
r_t = φ(V_xr x_t + U_hr h_{t-1} + B_r) (9)
h̃_t = tanh(V_xh x_t + U_hh (r_t * h_{t-1}) + B_h) (10)
h_t = (1 - z_t) * h_{t-1} + z_t * h̃_t (11)</ns0:formula><ns0:p>where V_xz, V_xr, and V_xh refer to the weight matrices between the input layer and the update gate, reset gate, and candidate state, while the recurrent connection weight matrices are represented by U_hz, U_hr, and U_hh, respectively. x_t is the time series sample input and the hidden output is denoted by h_t. φ is the sigmoid activation function of the update and reset gates, * performs the element-wise multiplication operation, and B_z, B_r, and B_h are the corresponding biases.</ns0:p></ns0:div>
<ns0:div><ns0:head>PROPOSED METHODOLOGY</ns0:head><ns0:p>In the research methodology phase, the feature extraction methods are briefly explained; all of these approaches are very important for identifying the nature of the questions. After that, the question classification approaches and classifiers are examined. Besides, we illustrate the modified deep learning architecture utilized in this phase. In our proposed deep learning framework, we demonstrate the process of transforming words into vectors and assigning questions to their relevant classes. For the question classification algorithm, we used the Word2vec technique, both skip-gram and Continuous Bag of Words, to classify questions. In order to extract features, we used a mathematical representation of words: we allocate every word x to a vector f(x) such that if x and y have syntactic and semantic similarity, then f(x) and f(y) will become nearby vectors.</ns0:p></ns0:div>
<ns0:div><ns0:p>In this study, we performed multi-fusion of the CNN- and RNN-generated features to conduct Turkish question classification. In the proposed methodology, we utilized two modified variants of recurrent neural network models, LSTM and GRU, in combination with a CNN.</ns0:p></ns0:div>
<ns0:div><ns0:head>Modified LSTM</ns0:head><ns0:p>The LSTM differs from the standard RNN; it was initially proposed by the German researchers Sepp Hochreiter and Juergen Schmidhuber in 1997 <ns0:ref type='bibr' target='#b24'>(Hochreiter & Schmidhuber, 1997)</ns0:ref> to learn long-term dependencies. Its gates and cell update are defined as follows:</ns0:p>
<ns0:formula xml:id='formula_4'>i_t = φ(W_xi × [C_{t-1}, h_{t-1}, x_t] + b_i) (2)
f_t = φ(W_xf × [C_{t-1}, h_{t-1}, x_t] + b_f) (4)
C̃_t = tanh(W_xc × [h_{t-1}, x_t] + b_c) (5)
C_t = f_t * C_{t-1} + i_t * C̃_t (6)</ns0:formula><ns0:p>In these equations, W is the input transfer matrix of x_t, C_{t-1} is the memory cell, component-wise multiplication is presented as ×, h_{t-1} is the hidden state vector, and φ shows the sigmoid function.</ns0:p><ns0:p>The output gate o_t controls the present hidden state value h_t, which uses the memory cell content for the nonlinear system result:</ns0:p><ns0:formula xml:id='formula_6'>o_t = φ(W_xo × [C_t, h_{t-1}, x_t] + b_o) (3)
h_t = o_t * tanh(C_t) (7)</ns0:formula><ns0:p>Following these steps, the current hidden state h_t is used for the acquisition of h_{t+1}. In other words, the long short-term memory processes the word series recursively by computing its internal hidden state h_t at each time step. The hidden activation of the final time step can be considered the linguistic description of the complete sequence and is fed into the classification layer as input.</ns0:p></ns0:div>
<ns0:div><ns0:head>Modified GRU</ns0:head><ns0:p>The mathematical expression of the GRU is defined as follows:</ns0:p><ns0:formula xml:id='formula_7'>z_t = φ(V_xz x_t + U_hz h_{t-1} + B_z) (8)
r_t = φ(V_xr x_t + U_hr h_{t-1} + B_r) (9)
h_t = (1 - z_t) ⊙ h_{t-1} + z_t ⊙ h̃_t (10)
h̃_t = tanh(V x_t + U(r_t ⊙ h_{t-1}) + B_h) (11)</ns0:formula><ns0:p>where h_t and x_t are the output and input vectors at time t, z_t and r_t are the update and reset gate vectors, φ and ⊙ are the sigmoid activation function and the element-wise multiplication operation, and B denotes the corresponding biases.</ns0:p></ns0:div>
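For comparison, a minimal NumPy sketch of one GRU step implementing equations (8)-(11); as in the LSTM sketch above, the dictionaries V, U, and B holding the weights and biases are illustrative assumptions.

import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, V, U, B):
    z_t = sigm(V['z'] @ x_t + U['z'] @ h_prev + B['z'])                # eq. (8), update gate
    r_t = sigm(V['r'] @ x_t + U['r'] @ h_prev + B['r'])                # eq. (9), reset gate
    h_cand = np.tanh(V['h'] @ x_t + U['h'] @ (r_t * h_prev) + B['h'])  # eq. (11), candidate state
    h_t = (1.0 - z_t) * h_prev + z_t * h_cand                          # eq. (10), hidden state
    return h_t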
<ns0:div><ns0:head>CNN-LSTM/GRU Models</ns0:head><ns0:p>In this section, we propose a deep learning hybrid architecture that includes the following parts: word embedding with the Word2vec methods, a convolutional neural network, and a recurrent neural network with its variants LSTM and GRU, while a fully connected layer is employed for the softmax output. The word embedding method is applied to convert the input text into numerical word vectors that are fed into the CNN and RNN models. Several convolution kernels of various dimensions are used to capture more helpful features for question classification. With this technique, the CNN preserves the temporal data and generates compact values using the max-pooling layer. Similarly, the RNN layer is utilized to obtain the temporal features at the input level and to capture long-term dependencies. In word embedding, every question is illustrated as a word embedding matrix to create a classifier using a CNN. Given a question consisting of n words v_1, v_2, v_3, …, v_n, each word is swapped with its pre-trained d-dimensional word embedding vector, and these are stacked row-wise to produce occurrence matrices V_i ∈ R^{n×d}.</ns0:p></ns0:div>
<ns0:div><ns0:head>Input Layer</ns0:head><ns0:p>The input layer is selected as the initial point of the network. For all questions, the input layer transmits the data samples as a sequence of unique word indices of the same dimension.</ns0:p></ns0:div>
<ns0:div><ns0:head>CNN LAYERS</ns0:head><ns0:p>The convolution layer is the most useful and basic layer of CNNs; it performs the convolution process over the row representation of the word vectors obtained from the embedding layer. CNN layers contain a set of learnable filters or kernels that map the input to produce two-dimensional activations. Consider a window of h words at position i with a filter weight matrix w ∈ R^{h×m}; the following convolution computation is performed:</ns0:p><ns0:formula xml:id='formula_8'>c_i = f(X_{i:i+h-1} * w + b_i) (12)</ns0:formula><ns0:p>where f refers to the non-linear ReLU activation function, the generated feature map is represented by c ∈ R^{n-h+1}, produced by sliding the window of h words each time, and the bias term is denoted by b_i. After that, the max-pooling layer receives the features created by the convolution and reduces the feature map to its maximum activation value, as follows:</ns0:p><ns0:formula xml:id='formula_10'>P_i = max(C_i) (13)</ns0:formula><ns0:p>where P_i ∈ R^{(n-h+1)/2} refers to the new feature map, used in order to obtain the various levels of features.</ns0:p></ns0:div>
<ns0:div><ns0:head>Feature mapping of CNNs layer with RNNs layer</ns0:head><ns0:p>Each input is a vector series, which is scanned with filters of fixed sizes. In this technique, filter sizes of 3, 4, and 5 are used to carry the features of the words. The CNN layers efficiently reduce the input feature vectors and, through the max-pooling layer, give a better compressed representation than the original raw features; the output generated by the CNN layers is further processed as input to the RNN layers, passing through the gating mechanism to learn highly informative features. Furthermore, in order to classify the questions, every layer processes different features of a question with the ReLU activation function in the feature map.</ns0:p></ns0:div>
<ns0:div><ns0:head>RNN Layers</ns0:head><ns0:p>RNN layers exhibit temporal dynamic behavior <ns0:ref type='bibr'>(Choi et al., 2017)</ns0:ref> and process the sequential data within the network. The recurrent layer has the capability to capture long-term dependencies; therefore, we feed the original word embeddings as input to the RNN layer instead of the features generated by the CNN. The purpose of selecting RNN layers is to process the sequence data while utilizing the previous information. In these RNN layers, the final output has an equivalent number of units.</ns0:p><ns0:p>Because the data are sequential, the RNN layers can learn temporal features from them. After this, we combined the CNN- and RNN-acquired features in different ways to carry out question classification. With this technique, the sequential features are perfectly maintained, and the max-pooling layer creates a sequence instead of a single value. Following this process, the data are fed into an RNN layer with a many-to-one mechanism and a fully connected layer with a softmax output.</ns0:p></ns0:div>
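The architecture described above can be sketched as follows in Keras; the filter count and recurrent units (128) and the vocabulary, sequence-length, and class-count arguments are illustrative assumptions, while the three kernel sizes (3, 4, 5), the max-pooling that keeps a sequence, and the many-to-one recurrent layer with a softmax output follow the description above.

from tensorflow.keras import layers, models

def build_cnn_rnn(vocab_size, seq_len, embed_dim, n_classes, rnn=None):
    rnn = rnn or layers.LSTM            # swap in layers.GRU for the CNN-GRU variant
    inp = layers.Input(shape=(seq_len,))
    # pre-trained Word2vec vectors would be loaded here via the weights argument
    emb = layers.Embedding(vocab_size, embed_dim)(inp)
    branches = []
    for k in (3, 4, 5):                 # three parallel convolution channels
        c = layers.Conv1D(128, k, activation='relu', padding='same')(emb)
        c = layers.MaxPooling1D(pool_size=2)(c)   # pooling keeps a sequence
        branches.append(c)
    x = layers.Concatenate()(branches)
    x = layers.Dropout(0.5)(x)          # dropout after the convolution block
    x = rnn(128, dropout=0.2)(x)        # many-to-one recurrent layer
    out = layers.Dense(n_classes, activation='softmax')(x)
    return models.Model(inp, out)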
<ns0:div><ns0:head>EXPERIMENTAL DESIGN & DATASET</ns0:head></ns0:div>
<ns0:div><ns0:head>Question database description</ns0:head><ns0:p>We evaluate the performance of the proposed deep learning approaches on a Turkish dataset for question classification. Compared with English, there is an absence of Turkish question databases. In this research, we used a Turkish question dataset adapted from the English question dataset used by Li and Roth in 2002.</ns0:p><ns0:p>They referred to a two-layered classification, which is extensively applied for question categorization. This dataset includes six coarse classes and fifty fine-grained classes, reported as 'coarse:fine' labels such as ''LOCATION:city''. The dataset is divided into two parts, training and testing. In our experiments, we used 5400 questions as training data and the remaining 600 questions as testing data. The distribution of this dataset (Le, Phan & Nguyen, 2015), categorized into main classes and sub-classes, is reported in Table2.doc. For our experiments, we reconstructed the Turkish dataset from the English dataset.</ns0:p></ns0:div>
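As an aside, a minimal sketch of reading a file in this layout is shown below; it assumes the standard UIUC/TREC convention of one question per line, prefixed by its 'coarse:fine' label, and the file name is hypothetical.

def load_questions(path):
    # each line looks like: 'LOC:city Hangi sehir ...' (label, then question text)
    questions, coarse, fine = [], [], []
    with open(path, encoding='utf-8') as f:
        for line in f:
            label, text = line.strip().split(' ', 1)
            c, fgrained = label.split(':', 1)
            questions.append(text)
            coarse.append(c)
            fine.append(fgrained)
    return questions, coarse, fine

train_q, train_c, train_f = load_questions('train_tr.label')  # hypothetical file name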
<ns0:div><ns0:head>Experimental setting and Hyperparameters</ns0:head><ns0:p>Deep learning based approaches have the ability to acquire complex relationships between inputs and outputs <ns0:ref type='bibr' target='#b38'>(Srivastava et al., 2014)</ns0:ref>. In our experiments, we applied the Adam optimizer with its default optimal parameter settings, a learning rate of 0.005, and a decay factor of 0.9. For the CNN layers, we applied three channels, where each one uses a convolutional layer with a kernel window size of 3, 4, and 5, respectively. We used the rectified linear unit (ReLU) activation function for each convolutional layer. For each iteration of the training procedure, we fixed the batch size to 32. For a fair comparative analysis, a few preprocessing steps were performed to improve the quality of the dataset. However, during the training process, many connections are learned as a result of sampling noise that does not exist in the test data. This problem may lead to overfitting and reduce the prediction ability of the network <ns0:ref type='bibr'>(Srivastava et al., 2014)</ns0:ref>. For this issue, we applied the dropout method to reduce overfitting, with a dropout probability of 0.2 for the recurrent layers and 0.5 after the convolution layers. Moreover, for training the proposed models, we used cross entropy with L_2 regularization as the loss function to be minimized, which is defined as follows:</ns0:p><ns0:formula xml:id='formula_11'>J(w, b) = -(1/m) Σ_{i=1}^{m} [y_i log ŷ_i + (1 - y_i) log(1 - ŷ_i)] + (λ/2m) Σ_{l=1}^{m} ǁwǁ²_F (23)</ns0:formula><ns0:p>where y_i refers to the ground truth and the classification probability for each class is represented by ŷ_i. We set the coefficient of the L_2 (Frobenius norm) regularization term, which compresses w, to 0.001. During the training process, the results show that the L_2 regularization and dropout methods perform well in avoiding overfitting. Table3.doc provides the optimal values of the hyperparameters applied for the training of the proposed framework.</ns0:p></ns0:div>
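A sketch of this training configuration is given below, reusing the hypothetical build_cnn_rnn helper from the model sketch above; note that the stated decay factor of 0.9 is assumed here to correspond to Adam's first-moment decay beta_1, and the L_2 penalty of equation (23) can be realized by attaching a kernel regularizer to the dense layer.

from tensorflow.keras import optimizers, regularizers

model = build_cnn_rnn(20000, 30, 300, 6)   # sizes are illustrative
# regularizers.l2(0.001) on Dense layers would realize the L2 term of eq. (23)
model.compile(optimizer=optimizers.Adam(learning_rate=0.005, beta_1=0.9),
              loss='categorical_crossentropy',   # the cross-entropy of eq. (23)
              metrics=['accuracy'])
# model.fit(x_train, y_train, batch_size=32, validation_data=(x_test, y_test))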
<ns0:div><ns0:head>Word2vec models</ns0:head><ns0:p>The Word2vec architecture consists of an input layer, a projection layer, and an output layer, where W_n shows the words. The projection layer is a multidimensional vector array that stores the sums of various vectors. The output layer outputs the results computed from the vectors of the projection layer. Word2vec is a shallow two-layer neural network that is trained to perform the word embedding method. In particular, CBOW is similar to the feedforward Neural Network Language Model (NNLM) <ns0:ref type='bibr' target='#b4'>(Armeni, Willems & Frank, 2017)</ns0:ref> and predicts the output word from the nearby word vectors. The algorithm of the Word2vec model extracts features from a provided text corpus without any intervention from a human expert; most essentially, it performs quite well even if the text is very small or a single word. Given a big corpus, it generates word vectors from a large number of texts and refines them by comparing the contextual data in which the input words appear similarly, as shown in (https://github.com/akoksal/turkish-word2Vec). In the Word2vec space, every unique word in the text is allocated a corresponding vector (https://israelg99.github.io/2017-03-23-Word2Vec-Explained/). The meaning of words is one of the most significant concerns in deep learning, and it is effectively addressed by employing Word2vec for classifying major entities <ns0:ref type='bibr' target='#b5'>(Mikolov et al., 2013)</ns0:ref>. For learning word embeddings from raw data, it is a computationally efficient predictive framework. There are two different techniques of Word2vec, as follows.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.'>Skip gram</ns0:head><ns0:p>Let V_W be the word vocabulary and V_C the context vocabulary, with sizes S_W and S_C, respectively. The learning procedure of skip-gram (SG) attempts to train the contextual distribution of each word by optimizing the following likelihood function:</ns0:p><ns0:formula xml:id='formula_15'>SG(c|w; E, F) = ∏_{i=1}^{n} SG(c_i|w_i; E, F) = ∏_{i=1}^{n} exp(w_i^T E F c_i) / Σ_{c∈C} exp(w_i^T E F c) (15)</ns0:formula><ns0:p>where E and F are parameter matrices of shapes (S_W × d) and (d × S_C), respectively, and d is the dimensionality of the embedding vector space. Conversely, P(x_io = 1 | w_i, c_i; E, F) indicates whether specific contexts appeared close to a word or not, while the skip-gram training objective P(D;θ) is demonstrated by P_SG(c|w; E, F).</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>CBOW (Continuous Bag of Words)</ns0:head><ns0:p>Algorithmically, these two approaches are close to each other. Computationally, Continuous Bag of Words (CBOW) is a continuous distributed word representation approach that predicts the core (target) word based on the neighboring words. The fundamental principle of CBOW is identifying a given word from the analysis of its neighboring words. The CBOW architecture has the advantage that the information in the dataset is organized uniformly. Furthermore, the CBOW method derives the dictionary ǁVǁ ∈ R^m by mapping the words in the corpus to the projection layer. The corpus terms c_1, c_2, …, c_t are read sequentially; each corpus word c_t is mapped to a unique position w_t in the projection layer, and k refers to the context size. CBOW reads the corpus terms [c_{t-k}, c_{t-k+1}, …, c_{t+k}] sequentially and obtains the corresponding word positions [w_{t-k}, w_{t-k+1}, …, w_{t+k}] in the projection layer through a hash table. Finally, it performs the following operation on the context of w_t, where V_t is the accumulative sum of Context(w_t), as mentioned in equation (14); as one observation, CBOW processes entire contexts sequentially.</ns0:p><ns0:formula xml:id='formula_14'>V_t = Σ_{t-N}^{t+N} Context(w_t) (14)</ns0:formula></ns0:div>
<ns0:div><ns0:head>Training Word2vec embedding model</ns0:head><ns0:p>As part of this research, we selected the most widely applied deep learning architectures together with the Word2vec model, focusing on question classification. For word embedding, we experimented with various parameters in order to train Word2vec, in both the skip-gram and CBOW variants, on Wikipedia corpora. We explored and trained our Word2vec models on a Turkish Wikipedia dataset for question classification; as the largest encyclopedia on the Internet, in which documents are well organized by topic, we preferred it as the dataset. Wikipedia corpora are therefore well suited to the analysis of the Word2vec approach. In our study, we eliminated words occurring fewer than 5 times during training on the Wikipedia corpus, because these words carry little information and are usually unhelpful for training the Word2vec model (e.g., stop words, stemming artifacts, and emoticons). Moreover, using the Wikipedia corpus, we built our skip-gram and CBOW models with various vector lengths of 100, 200, 300, and 400. In addition, to avoid overfitting issues, we used the dropout technique <ns0:ref type='bibr' target='#b41'>(Hinton, 2014)</ns0:ref>, with a dropout rate of 0.5 for the convolutional layers and 0.2 for the recurrent layers.</ns0:p><ns0:p>For all the experiments, we used Gensim <ns0:ref type='bibr' target='#b36'>(Liu et al., 2018)</ns0:ref> to train our Word2vec models and generate the sets of word embeddings, setting the context window size W, the dimensionality D, the number of negative samples ns, and the skip-gram flag sg. The predefined context window size was selected as W = {5}. Similarly, for this window size, to explore both high and low dimensions, we applied four different dimensionality sizes D = {100, 200, 300, 400} for the Word2vec vectors. We chose the default parameters for training the deep learning approaches with the Word2vec model and fixed the negative sampling to 5, the batch words to 10,000, the minimum count of words to 5, and the iterations to 5. In addition, we analyzed the effect of the dimensionality of the vectors on the Turkish question dataset; the explored Turkish Wikipedia includes nearly one million Turkish articles, and after removing the noisy words occurring fewer than 5 times, more than 200 thousand Turkish words were collected in the database. The parameters of the deep learning approaches employing the Word2vec models are presented in Table3.doc.</ns0:p><ns0:p>Additionally, accuracy was selected as the evaluation metric in this study. It evaluates the correctness of the model and is calculated as the ratio of correctly classified instances (TP + TN) divided by the total number of instances (TP + TN + FP + FN) over the entire dataset. The formula of accuracy in question classification is defined as follows:</ns0:p><ns0:formula xml:id='formula_13'>Accuracy = (TP_(m)(n) + TN_(m)(n)) / (TP_(m)(n) + FP_(m)(n) + FN_(m)(n) + TN_(m)(n)) (16)</ns0:formula><ns0:p>where TP and TN are the True Positive and True Negative, respectively, which indicate correct classification for the relevant class, while FP and FN refer to the False Positive and False Negative, which indicate false classification for the relevant class. In word embedding, we utilized both the skip-gram and CBOW Word2vec models as feature extraction models. Furthermore, we also compared the results in terms of 10-fold cross-validation accuracy.</ns0:p></ns0:div>
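A minimal sketch of this Word2vec training setup with Gensim is shown below; parameter names follow Gensim 4.x (vector_size, epochs), while older versions use size and iter instead, and the placeholder token lists stand in for the preprocessed Turkish Wikipedia corpus.

from gensim.models import Word2Vec

# placeholder corpus; replace with the tokenized Turkish Wikipedia sentences
sentences = [['ornek', 'cumle', 'bu']] * 10

for dim in (100, 200, 300, 400):        # the four vector sizes tested
    for sg in (0, 1):                   # 0 = CBOW, 1 = skip-gram
        model = Word2Vec(sentences, vector_size=dim, window=5, sg=sg,
                         negative=5, min_count=5, epochs=5, batch_words=10000)
        model.save('tr_w2v_%s_%d.model' % ('sg' if sg else 'cbow', dim))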
<ns0:div><ns0:head>EXPERIMENTAL RESULTS</ns0:head><ns0:p>In this part, we illustrate the performance of the different deep learning algorithms using Word2vec embedding vectors from both the CBOW and skip-gram methods on the question dataset. The semantic and syntactic connections between words can be efficiently captured by these techniques. Initially, the Word2vec model computes word vectors for the vocabulary words, starting from random initialization. Then, the algorithm attempts to increase the cosine similarity between terms and their contexts, as defined by the respective method. Consequently, these algorithms are able to learn word vectors from the large amount of text in the Wikipedia corpus, such that their nearness in the vector space reflects the association of the corresponding words.</ns0:p><ns0:p>The deep learning techniques applied for question classification are based upon the Word2vec models of skip-gram and CBOW, along with random vector approaches. To the best of our knowledge, this is the first time question classification has been studied in detail in an agglutinative language. As a performance analysis, we experimented with the Word2vec models and achieved satisfactory results in terms of accuracy. Table4.doc to Table8.doc compare the results of the deep learning models with both Word2vec word embedding techniques as a function of training volume, based on a fixed number of epochs, in question classification.</ns0:p></ns0:div>
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>This section analyzes the competitiveness and effectiveness of our proposed results across the various deep learning approaches with both Word2vec word embedding techniques, taking accuracy into consideration. Table4.doc to Table8.doc show the accuracy comparison of the deep learning models based on various numbers of feature vectors with the Word2vec methods. Previous research on question classification focused on different tasks, such as relevant class categorization or named entity recognition <ns0:ref type='bibr' target='#b37'>(Derici et al., 2015)</ns0:ref>. However, they integrated a rule-based method employing an HMM-based sequential categorization method (Donmez & Adali, 2017), and their results could not generalize to question classification tasks in an agglutinative language. Furthermore, some alternative studies performed on similar languages for question classification <ns0:ref type='bibr' target='#b10'>(Razzaghnoori et al., 2018)</ns0:ref>; (https://github.com/thtrieu/qclass_dl/blob/master/ProjectDescription.pdf, https://github.com/thtrieu/qclass_dl/blob/master/Project Presentation.pdf) have not examined the influence of the</ns0:p></ns0:div>
<ns0:div><ns0:p>Word2vec models in both variants, skip-gram and CBOW. In particular, to evaluate the performance of question classification, they applied only a few parameters, such as the feature vector size and the window size. In this study, the experimental results demonstrate that the factors mentioned above can certainly affect the performance of the question classification system.</ns0:p><ns0:p>In our research, we examined five different deep learning models, CNN, GRU, LSTM, CNN-LSTM, and CNN-GRU, based on both the CBOW and skip-gram Word2vec models. By comparison, the CNN, CNN-LSTM, and CNN-GRU models achieved significantly superior results in terms of accuracy when using the skip-gram model on the Turkish question classification dataset, compared to the CBOW model (Tables 4.doc, 7.doc, 8.doc). On the other hand, CNN, CNN-LSTM, and CNN-GRU commonly perform better than the LSTM and GRU architectures when using the CBOW model. In most of the cases, the CNN-LSTM and CNN-GRU approaches achieved better results based on skip-gram than on CBOW. Moreover, we observed an excellent result for the CNN approach, an accuracy of 93.7%, based on skip-gram with 300 feature vectors. Furthermore, we found that utilizing the correct form of the dataset can probably incorporate more vocabulary into the question classification database; for this reason, the correlation between the corpus and the classification dataset provides a better question-level representation. Finally, on the same dataset, we compared the performance of our proposed approaches with a similar study (https://github.com/thtrieu/qclass_dl/blob/master/ProjectDescription.pdf), in which the researchers used the LSTM approach to obtain 94.4% accuracy in the English language; we noticed that our results were lower than those of this study carried out in English. The main cause of this is the construction of the Turkish language, as we already described in the introduction section; in addition, there is a lack of efficient Turkish lemmatization tools compared to the English language.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>Question classification is an important area in natural language processing (NLP). Recently, several deep learning models have been used to solve these issues and have shown remarkable results in NLP. In this study, we applied different deep learning approaches to the question dataset using Word2vec embedding vectors from both the skip-gram and CBOW models. We noticed that employing Word2vec models, which efficiently learn the semantic and syntactic connections between words, significantly improved the performance of the classification models. In this research, the Word2vec methods initially compute the word vectors of the vocabulary words, starting from random initialization. By training these algorithms on the huge amount of text in the Wikipedia corpus, they become capable of placing the word vectors in the vector space such that their closeness corresponds to the association of their words. By comparative analysis, all the deep learning models revealed superior performance with the Word2vec models on the question classification tasks; in most of the cases, we observed that the skip-gram model performed better than the CBOW model.</ns0:p><ns0:p>As a future direction, to further improve the accuracy, hybrid feature extraction techniques that apply more than one word embedding method together could be used for the question classification system. Hopefully, the system will be capable of acquiring the advantages of all the embedding techniques when using this hybrid method.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 1</ns0:head><ns0:p>The general architecture of NLQA system </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>to learn long-term dependencies. We PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54597:1:3:NEW 21 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>𝑛) PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54597:1:3:NEW 21 Mar 2021) , True Negative respectively, which indicate the correct TP 𝑇𝑁 classification for the relevant class while and refer to the False Positive, and False 𝐹𝑃 𝐹𝑁 Negative which determine the false classification for the relevant class. Particularly, in word embedding, we utilized the Word2vec model both of skip-gram and CBOW as feature extraction models. Furthermore, in terms of 10-cross validation accuracy, we also compared the results.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54597:1:3:NEW 21 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,178.87,525.00,230.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,178.87,525.00,291.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,178.87,525.00,291.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,178.87,525.00,254.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,204.37,525.00,291.00' type='bitmap' /></ns0:figure>
</ns0:body>
" | "
Universiti Tun Hussein Onn Malaysia
Batu Pahat, 86400, Johor, Malaysia
Tel: +601137631213
https://www.uthm.edu.my March 8th, 2021
Dear Editors
We thank the reviewers for their generous comments on the manuscript and have edited the manuscript to address their concerns.
In particular, all of the code we wrote is available, and we have included multiple links throughout the paper to the appropriate code repositories.
We believe that the manuscript is now suitable for publication in PeerJ.
Dr. Muhammad Zulqarnain
Faculty of Computer Science and Information Technology
On behalf of all authors.
Note: A point-by-point response to every change follows. New text, new sections or sub-sections, and replaced sentences are highlighted in RED, while moved sentences are highlighted in GREEN.
Reviewer 1 (Anonymous)
Basic reporting
In this manuscript, the authors investigate three to five deep learning approaches based on 10-fold cross-validation for question classification tasks in the Turkish language. Kind suggestions:
1) Comment: It would be better to move the problem statement (lines 61-65, 78-80) to the appropriate section related to the methodology of this study.
Response: As per your guidance, we have moved the problem statement (lines 61-65 and 78-80) to the appropriate place in the research methodology section (lines 303-306) and highlighted it in green.
2) Comment: Equations 2-7 regarding LSTM (long short-term memory) and equations 8-11 regarding GRU (gated recurrent unit) might contribute limited knowledge.
Response: Equations 2-7 for LSTM and equations 8-11 for GRU are the standard formulations of these models. Our contributions, the modified LSTM and the modified GRU, are presented in the research methodology section and illustrated in the corresponding equations (lines 321-324 and 330 for LSTM; lines 345-348 for GRU).
3) Comment: The setup of the section “METHODOLOGY & RESULTS” might appear a bit unreasonable, because the proposed approach and its key novelties should be presented more clearly to facilitate recognition by other scholars.
Response: The setup of the “METHODOLOGY & RESULTS” section has been modified as per your comments. Moreover, we have further clarified the research novelties and presented them in the research methodology section.
4) Comment: The first technique, Word2vec, might not have been well introduced in the context. Further clarification would be appreciated.
Response: Done. The introduction of the Word2vec technique has been further improved and clarified in the context (lines 487-496), highlighted in red.
Experimental design
5) Comment: It's hard to ignore that the experiments section might not have been well structured. That is to say, the arrangement of this article ought to be further enhanced before the next submission.
Response: Agreed. The experimental section has been reorganized and rearranged as per your valuable comments. Furthermore, we have added a new Experimental Design section to this study.
Validity of the findings
6) Comment: The novelties of the presented study should be further highlighted in preparing the future version of this manuscript, because innovation is the gold criterion for recognizing the most promising studies.
Response: Done. In this revised manuscript, the novelties of this study have been further enhanced in the methodology section and highlighted in the sub-sections on the modified LSTM, GRU, and Word2vec (lines 312-351 for LSTM & GRU, 487-551 for Word2vec). The research methodology and experimental design have also been restructured accordingly.
Comments for the Author
The authors have told us a story about their focus on the basis of their deep literature investigation.
Reviewer 2 (Anonymous)
Basic reporting
1) Comment: The writing needs to be substantially and thoroughly improved and proofread so that it can properly deliver the messages of this work to the reviewers and the readers. There are scattered grammar issues and typos across this manuscript in its current form.
To list a few:
Line 20: There are... have been ... -> ... that have been ...
Line 44: ... and how to ... -> ... (becomes an open question.)
Line 68: ... ... to the credible classify of the answers -> ... credible classification of ...
Line 71: After categories...
Line 96: Most articles focus on English language has ... -> ... focused on ...
Line 123: stag -> stage
Response: The whole manuscript has been rechecked grammatically as per your valuable guidance. The writing has been thoroughly improved and proofread accordingly.
2) Comment: The related work part is somewhat outdated; the authors may refer to more recent works.
Response: Agreed. The related work section has been updated with more recent works as per your comment.
Experimental design
3) Comment: The descriptions of the experimental settings are missing, including the data preprocessing steps, and the training policies and hyper-parameter settings of the deep learning models.
Response: Agreed. The experimental settings, training policies, and hyper-parameter settings have been described (lines 459-480 and 557-582) as per your comment. A summary of the major parameters used in the deep learning approaches with Word2vec is given in Table 3.
4) Comment: The supplementary code files cannot be opened properly. You may use UTF-8 for the file encoding. The publication and organization of a Turkish dataset is definitely meaningful and positive.
Response: Agreed. The supplementary file has been replaced with a version re-encoded in UTF-8.
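For reference, the re-encoding is a simple transformation of the kind sketched below; the file name and the assumed source encoding (cp1254, a common Turkish codepage) are illustrative only.

    # read with the (assumed) original Turkish encoding, write back as UTF-8
    with open("supplementary.txt", encoding="cp1254") as f:
        text = f.read()
    with open("supplementary.txt", "w", encoding="utf-8") as f:
        f.write(text)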
Validity of the findings
5) Comment: When saying ``there are several unique features in Turkish languages that make NLP challenging'', please explain it in more detail, as it is the main motivation of this work.
Response: Noted. The Turkish dataset has been explained in more detail in two different sections. One description has been added to the introduction (lines 82-102, highlighted in red), while further details are given in the “Question database” section (lines 451-457).
Reviewer 3 (A Lamurias)
Basic reporting
1) Comment:
The whole manuscript should be revised because the quality of the written language is very poor, making it difficult to understand. It has many grammatical issues and repeated nonsensical text. Even the first sentence of the abstract is an example of this repetition: 'Question classification is one of the important tasks for automatic question classification in natural language processing (NLP).' Throughout the text there are too many examples to list. Some technical terms used are also not very clear; for example, the authors say that Turkish is a 'strongly inflective' language but they also say that it is agglutinative, so this issue should be clarified for the readers.
Response: The whole manuscript has been revised accordingly. The technical issue regarding the Turkish language has also been resolved and clarified for the readers.
2) Comment: I would also advise dismissing most of the text about Word2vec and the deep learning models, since these are established and widely used algorithms. Instead, more examples about the problem at hand (Turkish question classification) should be given.
Response: Agreed. The repeated text has been removed accordingly.
3) Comment: The manuscript also has an organization issue: 'DEEP LEARNING APPROACHES' starts with an overview of deep learning models but then starts describing the proposed model. Then on 'METHODOLOGY & RESULTS', the authors provide a full explanation of the word2vec algorithm, as well as experimental details (but no results since the next section is called 'Experimental Results').
Response: The manuscript has been reorganized accordingly. In addition, the experimental results of the Word2vec model with the deep learning approaches are now provided in the Experimental Results and Discussion section.
Experimental design
4) Comment: The experimental design seems to be an application of existing architectures to Turkish question classification. I was unable to understand what adaptations were made to work with Turkish questions. Furthermore, very few details are given about the dataset. A dataset was created by translating an English-language dataset, but it was not clear how and by whom it was translated, or whether this dataset is or will be made publicly available. As neither the code nor the data used to train the models is provided, it would be difficult to replicate the results.
Response: Noted. The Turkish dataset has been explained in more detail in two different sections: the introduction (lines 82-102) and the “Question database” section (lines 451-457).
At the beginning of the study, we were unable to find any Turkish question dataset, so we decided to translate an English question dataset into Turkish in order to properly assess the performance of the proposed approaches. After this study, we plan to share this Turkish question dataset with researchers who wish to work in this area. Some of the difficulties regarding the Turkish language are discussed in the introduction (lines 94-102). Moreover, the code and data are available at https://github.com/zunimalik777/DeepLearning-Turkish-Word2vec-Analysis
Validity of the findings
5) Comment: The authors provide a comparison between all the architectures considered and different embeddings. They claim that, in comparison to English questions, they have lower results due to 'an absence of effectiveness lemmatization methods for Turkish Language compared to English language.' However, this is the first time they mention lemmatization, so I don't know why and how this would impact the results.
Response: As per your comment, we have stated the reason in lines 639-648 and referred back to the introduction. The most important reason is the language structure of Turkish, as we mentioned earlier in the introduction section (lines 94-102).
Comments for the Author
This manuscript presents a deep learning approach to question classification applied to Turkish questions. The authors compare several deep learning architectures and mention the challenge of performing this type of task on Turkish texts. They provide a strong related work section on this task and describe the models used in detail; however, due to language issues, most of the manuscript is difficult to understand.
" | Here is a paper. Please give your review comments after reading it. |
124 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>In the last decade, Deep Learning is applied in a wide range of problems with tremendous success. This success mainly comes from large data availability, increased computational power, and theoretical improvements in the training phase. As the dataset grows, the real world is better represented, making it possible to develop a model that can generalize. However, creating a labeled dataset is expensive, time-consuming, and sometimes not likely in some domains if not challenging. Therefore, researchers proposed data augmentation methods to increase dataset size and variety by creating variations of the existing data. For image data, variations can be obtained by applying color or spatial transformations, only one or a combination. Such color transformations perform some linear or nonlinear operations in the entire image or in the patches to create variations of the original image. The current color-based augmentation methods are usually based on image processing methods that apply color transformations such as equalizing, solarizing, and posterizing. Nevertheless, these color-based data augmentation methods do not guarantee to create plausible variations of the image. This paper proposes a novel distribution-preserving data augmentation method that creates plausible image variations by shifting pixel colors to another point in the image color distribution. We achieved this by defining a regularized density decreasing direction to create paths from the original pixels' color to the distribution tails. The proposed method provides superior performance compared to existing data augmentation methods which is shown using a transfer learning scenario on the UC Merced Land-use, Intel Image Classification, and Oxford-IIIT Pet datasets for classification and segmentation tasks.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Since the first study conducted by <ns0:ref type='bibr' target='#b19'>Krizhevsky (Krizhevsky et al. (2012)</ns0:ref>) in the ImageNet competition in 2012, deep learning (DL) has been highly successful in image recognition problems. Today convolutional neural networks (CNN) are well-understood tools for image classification as a heavily employed DL approach. CNN's main strength comes from its ability to extract features automatically from regularly structured data such as speech signals, images, or medical volumes <ns0:ref type='bibr' target='#b11'>(Georgiou et al. (2020)</ns0:ref>), or even unstructured data such as point clouds <ns0:ref type='bibr' target='#b4'>(Charles et al. (2017)</ns0:ref>). However, training a DL network with high accuracy and generalization capability requires a large dataset representing the real world. Thus, deep learning algorithms' performance relies heavily on the variety and the size of the available training data.</ns0:p><ns0:p>Unfortunately, it may be challenging to obtain a sufficiently large amount of labeled samples <ns0:ref type='bibr' target='#b37'>(Wang et al. (2020)</ns0:ref>; <ns0:ref type='bibr' target='#b17'>Kemker et al. (2018)</ns0:ref>). In some cases, gathering the data is complicated or even hardly possible.</ns0:p><ns0:p>Therefore, training DL becomes challenging due to insufficient training data or uneven class balance within the datasets (Yi-Min <ns0:ref type='bibr' target='#b41'>Huang and Shu-Xin Du (2005)</ns0:ref>).</ns0:p><ns0:p>One way to deal with an insufficient training data problem is using so-called data augmentation techniques to enlarge the training data by adding artificial variations of it <ns0:ref type='bibr' target='#b31'>(Simard et al. (2003)</ns0:ref>). Such enlarged training dataset can be even further extended by adding synthetically generated data <ns0:ref type='bibr' target='#b38'>Wong et al. (2019)</ns0:ref>. Data augmentation can be applied directly to the features, or it can be applied to the data source, which will be used to extract the features <ns0:ref type='bibr' target='#b35'>(Volpi et al. (2018)</ns0:ref>), e.g., CNN can extract features from the enlarged image dataset <ns0:ref type='bibr' target='#b29'>(Shorten and Khoshgoftaar (2019)</ns0:ref>). The most challenging work is to improve the generalization ability of the trained model to avoid overfitting. If correctly done, data augmentation techniques can improve the performance and generalization ability of the trained model. Therefore, due to their success, data augmentation techniques are used in many studies that employs machine learning <ns0:ref type='bibr' target='#b1'>(Ali et al. (2020)</ns0:ref>, <ns0:ref type='bibr' target='#b16'>Islam et al. (2019)</ns0:ref>, and <ns0:ref type='bibr' target='#b44'>Zheng et al. (2019)</ns0:ref>).</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57948:1:2:NEW 17 Apr 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Data augmentation strategies can be divided into three groups as color transformations, geometric transformations, and techniques using neural networks. Methods using color transformations manipulate the pixels' spectral values by doing operations such as changing the contrast, brightness, color, injecting noise <ns0:ref type='bibr' target='#b33'>(Takahashi et al. (2020)</ns0:ref>), or applying some filtering techniques <ns0:ref type='bibr' target='#b46'>(Zhu et al. (2020)</ns0:ref>). As a colorbased approach, <ns0:ref type='bibr' target='#b45'>Zhong et al. (2020)</ns0:ref> proposed a random erasing technique that either randomly puts a filled rectangle or puts a random-sized mask into a random position. Methods using geometric transformations are manipulating pixel positions by doing operations such as scaling, rotation, flipping, or cropping <ns0:ref type='bibr' target='#b29'>(Shorten and Khoshgoftaar (2019)</ns0:ref>). In particular, methods based on geometric transformation should be selected according to the target dataset. For example, in CIFAR-10, horizontal flipping is an efficient data enlargement method, but it can corrupt data due to different symmetries in the MNIST dataset <ns0:ref type='bibr' target='#b8'>(Cubuk et al. (2018)</ns0:ref>). Similarly, as in face recognition samples, if there is a dataset where each face is centered in the frame, geometric transformations give outstanding results <ns0:ref type='bibr' target='#b39'>(Xia et al. (2017)</ns0:ref>). Otherwise, one should ensure he/she does not alter the label of the image while using these augmentation variants.</ns0:p><ns0:p>Moreover, the possibility of distancing the training data from the test data should also be considered <ns0:ref type='bibr' target='#b29'>(Shorten and Khoshgoftaar (2019)</ns0:ref>). As with geometric transformations, some color transformations can also distort important color information, changing the image label <ns0:ref type='bibr' target='#b29'>(Shorten and Khoshgoftaar (2019)</ns0:ref>). Augmentation methods can also be combined to increase the variety in the resulting augmented images <ns0:ref type='bibr' target='#b8'>(Cubuk et al. (2018)</ns0:ref>). With a careful setting, these color and geometric transformations help generate a new dataset covering the span of image variations <ns0:ref type='bibr' target='#b14'>(Howard (2013)</ns0:ref>). As recent approaches, <ns0:ref type='bibr' target='#b21'>Mun et al. (2017), and</ns0:ref><ns0:ref type='bibr' target='#b24'>Perez and</ns0:ref><ns0:ref type='bibr' target='#b24'>Wang (2017)</ns0:ref> proposed to generate synthetic images which retain similar features to the original images samples using various types of Generative Adversarial Networks (GAN). However, <ns0:ref type='bibr' target='#b5'>Chen et al. (2020)</ns0:ref> observed that the cost of training is time-consuming while the variability of data produced is often limited. Data augmentation methods can produce good results with different parameters in different types of problems. Even a single augmentation method is employed, the best parameters should be determined. For a combination of augmentation methods, determining which data augmentation methods to use and their execution order in addition to their optimal parameters is challenging. Thereby, in <ns0:ref type='bibr' target='#b8'>Cubuk et al. 
(2018)</ns0:ref>, the auto augmentation method is proposed that searches many augmentation algorithms to find the highest validation accuracy automatically. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>(a), an image that contains a blue lake and sky and green trees is shown. Plausible variations of this lake image with some red trees and color changes in clouds and lake are presented in Figure <ns0:ref type='figure' target='#fig_0'>1(b-c</ns0:ref>).</ns0:p><ns0:p>Note that, there are differences in these 2 plausible images, i.e. there are more red trees in the Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> These plausible images are more probable to occur in the real world than the unplausible images given in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>(e-f). The color distribution of the lake image is shown as 3D scatter plot and color channel histograms in Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>. As seen in Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>, some colors frequently occur in an image, while some are rare. For example, the blue lake (mode in distribution and its surroundings) is always seen, but a lake that goes into purple (tails of the distribution) is rare. Mainly images at the tail of the color distribution, in particular, are also infrequent, yet they are plausible. We were inspired by the mean-shift process by <ns0:ref type='bibr' target='#b10'>Fukunaga and Hostetler (1975)</ns0:ref> and <ns0:ref type='bibr' target='#b6'>Comaniciu and Meer (2002)</ns0:ref> to obtain a distribution-preserving data augmentation mechanism. The mean-shift process seeks the local mode without estimating the global density, hence avoiding a computationally intensive task.</ns0:p><ns0:p>Unlike the mean-shift process, we get a path towards the tails in a density decreasing manner in our method. In the original mean-shift, the data gets denser, and the mean-shift path becomes smooth as it goes towards the distribution mode. However, as we go in the opposite direction, the data becomes increasingly sparse, which can cause the obtained path to act chaotically. We developed a regularized density decreasing direction to create paths from colors of the original image pixels to the image data distribution tails to prevent this. Then we shift the pixel colors to any point in the obtained path so that modified pixel colors will be in alignment with the color distribution of the original image. Thus, the proposed data augmentation method considers the color distribution of the image to produce plausible images by changing colors in this way. Since we shift the color in the augmented image, we also increase the training data diversity. Source code of the proposed method is shared 1 to facilitate reproducibility. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>MATERIALS & METHODS</ns0:head><ns0:p>To obtain a density decreasing path, we used the opposite of the mean-shift direction. As we go in the density decreasing direction, the data becomes increasingly sparse, which can cause the obtained path to act chaotically. We enforce the density decreasing path's smoothness by implementing a regularization on the reverse mean-shift direction to prevent this. Such regularized density decreasing paths for 3 pixels are shown in Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref> as examples. The colors of these 3 pixels are chosen to be close to the tail of the distribution to demonstrate the behavior of the density decreasing path generation. One can easily see that paths are smooth even if they move to extremely sparse regions of the image color distribution. Density decreasing path may contain varying numbers of points where the distance between the consecutive points can also be different. We want to have the same number of points, L, in each density decreasing path (L = 64). We also want to equalize the distance between consecutive points in the density decreasing path.</ns0:p><ns0:p>We construct a refined density decreasing path while satisfying these two objectives using cubic spline interpolation on the density decreasing path we found. we further enriched the feature space using the image pyramid approach <ns0:ref type='bibr' target='#b0'>Adelson et al. (1984)</ns0:ref>. During the image pyramid generation, we halved the original image in width and height three times. This creates an image pyramid with four levels where each level contains four times fewer pixels than the higher level in the pyramid. We used Lanczos interpolation over 8 × 8 neighborhood for the down-sampling operation <ns0:ref type='bibr' target='#b34'>(Turkowski (1990)</ns0:ref>). Using an image pyramid with four levels increases the number of features in X by 32.8%. In Figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref>(a), there are structural missing regions in feature space. This is due to quantization error since decimal parts of colors are quantized in 8 bits RGB images. However, refined feature space in Figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref>(b) is denser, and the effect of image quantization errors is reduced, which demonstrates another benefit of the employed image pyramid approach.</ns0:p></ns0:div>
<ns0:div><ns0:head>4/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57948:1:2:NEW 17 Apr 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div>
<ns0:div><ns0:head>Creation of Density Decreasing Paths</ns0:head><ns0:p>Let x be the color of a pixel in the image as a starting point of a path we aim to find. Then probability density function (PDF) on the color feature space X constructed from the image is given in Equation (1).</ns0:p><ns0:formula xml:id='formula_0'>P(x) = 1 n n ∑ j=1 K(x − x j ) (1)</ns0:formula><ns0:p>where K( <ns0:ref type='formula'>.</ns0:ref>) is a kernel function and x j are data points in X where we used Epanechnikov kernel.</ns0:p><ns0:p>We define a density decreasing path T = {x (0) , x (1) , . . . , x (i) , . . . } (Figure <ns0:ref type='figure'>6</ns0:ref>) where x (i+1) = x (i) + s (i) for i ≥ 0. Here, x (0) is the starting point of the path and s (i) is a density decreasing direction at point x (i) .</ns0:p><ns0:p>Also, x (i) is only defined in color space domain where 0 ≤ x <ns0:ref type='formula' target='#formula_5'>4</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_1'>(i) r , x (i) g , x (i) b ≤ 255. • • • • • • • • S(0) X(1) X(2) S(1) S(2) X(3) X(0) X(4) X(5) S(3) S(</ns0:formula><ns0:formula xml:id='formula_2'>X(�-1) X(�) S(�-1)</ns0:formula></ns0:div>
<ns0:div><ns0:head>Figure 6. Density decreasing path</ns0:head><ns0:p>Now, we can define the pdf for the point x (i+1) as below:</ns0:p><ns0:formula xml:id='formula_3'>P(x (i+1) ) = 1 n n ∑ j=1 K(x (i+1) − x j ) = 1 n n ∑ j=1 K(x (i) + s (i) − x j )<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>Since the points x (i) and x j are constant, we can rewrite above equations as below:</ns0:p><ns0:formula xml:id='formula_4'>P(x (i+1) ) = P(x (i) + s (i) ) = 1 n n ∑ j=1 K(s (i) − x j ) where x j = x j − x (i)<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>So, x j points are centered to x (i) where x (i) is shifted to the origin; thus, s (i) becomes a direction vector.</ns0:p><ns0:p>Finally, we define a gradient descent direction as below that will lead to a density decreasing path:</ns0:p><ns0:formula xml:id='formula_5'>x (i+1) = x (i) − ∇J(x (i) + s (i) )<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>where J(x (i) + s (i) ) is the cost function to minimize which is defined as below:</ns0:p><ns0:formula xml:id='formula_6'>J(x (i) + s (i) ) → P(x (i) + s (i) ) subject to s (i) ∈ Ω Ω = { s (i) ≤ S length and 1 − s (i) S length , ŝ(prior) ≤ S angle } with ŝ(i) = s (i) S length</ns0:formula><ns0:p>and ŝ(prior) = s (i−1) s (i−1)</ns0:p><ns0:p>(5) Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In Equation <ns0:ref type='formula'>5</ns0:ref>, S length is the maximum length and S angle is the maximum angle for the direction vector s (i) . Here, second constraint is only defined for i > 0 where ŝ(prior) is the prior direction in unit length and considered as constant. Owing to Equation <ns0:ref type='formula'>5</ns0:ref>, density decreasing direction s (i) is regularized in length and orientation to avoid chaotic shifts in sparse data regions.</ns0:p><ns0:p>The gradient descent method is not practical to minimize cost functions with constraints. In such cases, one can use the projected gradient descent (PGD) method, which can minimize a cost function subject to a constraint where this constraint defines a domain <ns0:ref type='bibr' target='#b2'>(Boyd and Vandenberghe (2004)</ns0:ref>). Although PGD works fine for cost functions with a single constraint, we can still use it efficiently since our cost function's length, and orientation constraints only form a single domain, Ω. Therefore, we use PGD to obtain a gradient descent path on the cost function J as defined in Equation <ns0:ref type='formula'>6</ns0:ref>. min</ns0:p><ns0:formula xml:id='formula_7'>s (i) P(x (i) + s (i) ) subject to s (i) ∈ Ω x (i+1) = P Ω (x (i) − ∇P(x (i) + s (i) ))</ns0:formula><ns0:p>P Ω (x new ) = arg min</ns0:p><ns0:formula xml:id='formula_8'>s (i) ∈Ω (x (i) + s (i) ) − x new (6)</ns0:formula><ns0:p>After doing some algebraic manipulations one can see that s (i) equals to the opposite of the mean-shift direction m (i) such that s (i) = −m (i) . First, we will limit the number of iterations in the PGD to L/2 since we aim to find a density decreasing path with a limited number of points. Next, we will stop the PGD iteration if (a) the norm of mean-shift direction is becoming smaller than a tolerance value (C tolerance ) or i+1) exiting from the image color space domain. Also, we set default value for C tolerance as 10 −2 d. The final density decreasing path generation method is presented in Algorithm 1.</ns0:p><ns0:formula xml:id='formula_9'>(b) next point x (</ns0:formula><ns0:p>Algorithm 1 Find Density Decreasing Path using PGD 1: Inputs:</ns0:p><ns0:formula xml:id='formula_10'>x (0) , h, I FLANN , L, C tolerance 2: for i = 0 : L/2 do 3: m (i) ← calculateMeanShiftDirection(x (i) , h, I FLANN ) 4: if ( m (i) < C tolerance ) then 5: break ⊲ Converged, exits loop 6: end if 7: x (i+1) = P Ω (x (i) − ∇P(x (i) + s (i) )) ⊲ Move to new point with PGD 8: if (x (i+1) is out of domain) then 9:</ns0:formula><ns0:p>break ⊲ Converged, exits loop 10:</ns0:p><ns0:p>end if 11: end for 12: T ← regularize({x (0) , x (1) , . . . , x (i) }) ⊲ regularize to equidistant L points 13: Return T Calculation of the mean-shift direction m (i) at point x is as given as below:</ns0:p><ns0:formula xml:id='formula_11'>m (i) = ∑ x j ∈N (x) K(x j − x)x j ∑ x j ∈N (x) K(x j − x) − x (7)</ns0:formula><ns0:p>where K(.) is the kernel function and x j are k nearest neighbours of x. We used FLANN proposed by <ns0:ref type='bibr' target='#b20'>Muja and Lowe (2014)</ns0:ref> to have fast k nearest neighbor search operations for efficiency. In this study, we used 256 as the default value of k. Note that, one need to put value of x (i) into the point x in the Equation <ns0:ref type='formula'>7</ns0:ref>given in the Algorithm 1. However, kernel functions require the selection of the bandwidth parameter h. 
Since each image's characteristic is different, we estimate bandwidth parameter h from the image to balance differences between images as an approximation to median pair-wise distances to closest points.</ns0:p><ns0:p>First, we find the Euclid distances of each pixel with its 4 neighbors. Then, we use Quick Select <ns0:ref type='bibr' target='#b7'>(Cormen et al. (2009)</ns0:ref>) algorithm to find the median value of these distances as our bandwidth h. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>We used the PGD method, which first does a gradient descent step, then back-projection of gradient descent result to the domain Ω. In Figure <ns0:ref type='figure'>7</ns0:ref>, blue vectors are prior directions; green vectors are new directions in the domain, and red vectors are new directions out of the domain. Domain Ω is the region between dotted gray lines, determined by the constraints in each gradient descent step.</ns0:p><ns0:formula xml:id='formula_12'>(a) s (1) ∈ Ω (b) s (2) / ∈ Ω (c) P Ω (s (2) ) ∈ Ω (d) s (3) / ∈ Ω (e) P Ω (s (3) ) ∈ Ω Figure 7.</ns0:formula><ns0:p>Example cases for directions and back-projections to domain Note that prior direction and new direction form a plane where its normal is the cross product of these two vectors. Therefore, a rotation matrix can be formed, which aligns this normal vector to the canonical z-axis where the prior direction and new direction vectors transform onto xy-plane. Once prior and new directions are rotated, all the back-projection operations can be done in 2D easily then back-projected direction can be rotated back to the original space. We used the method proposed by <ns0:ref type='bibr' target='#b22'>Möller and Hughes (1999)</ns0:ref> to construct a rotation matrix that aligns normal vector to z-axis as given in Equation <ns0:ref type='formula' target='#formula_13'>8</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_13'>v = f × t, u = v v c = f • t, r = (1 − c)/(1 − c 2 ) →   c + rv 2 x rv x v y − v z rv x v z + v y rv x v y + v z c + rv 2 y rv y v z − v x rv x v z − v y rv y v z + v x c + rv 2 z   (<ns0:label>8</ns0:label></ns0:formula><ns0:formula xml:id='formula_14'>)</ns0:formula><ns0:p>where f is plane normal calculated by cross product of prior direction and new direction, and t is z-axis.</ns0:p></ns0:div>
<ns0:div><ns0:head>Creation of Augmented Images</ns0:head><ns0:p>Each pixel has its corresponding density decreasing path with L colors. 0 th color (first path node) has the largest color deviation from the original pixel color towards the tail of image color distribution. (L − 1) th color (last path node) equals to original image pixel color. So, we can take a different node (color) from the corresponding path for each pixel of an augmented image. Here all 0 indices will yield to augmented image with the most perturbation, while L − 1 indices will yield to the original image. For each pixel, we can randomly choose an index number between 0 and L − 1, which will lead to different augmented images that allow the generation of any number of augmented images. However, utterly random selection will result in unnatural results. Thus, we want to sample from path nodes in a random but spatially smooth manner. We modified the Perlin noise generator, which is proposed by <ns0:ref type='bibr' target='#b26'>Perlin (1985)</ns0:ref> to obtain a smooth but random index map as seen in Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div>
<ns0:div><ns0:head>EXPERIMENTS & RESULTS</ns0:head><ns0:p>We Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>Qualitative Results</ns0:head><ns0:p>To demonstrate our data augmentation results qualitatively, we first downloaded sample images from</ns0:p><ns0:p>Pxfuel, which provides high-quality royalty-free stock photos. Note that augmented images' brightness is slightly increased to emphasize the difference between original images and augmented images. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In Figure <ns0:ref type='figure' target='#fig_2'>12</ns0:ref>, augmentation results are shown for man, forest, food, car, and urban images. In the first row, in the augmented male image, the male's skin color becomes lighter, and the eye color shifts to green. Also, there are some color changes in the background and t-shirt of the man. In the second row, the augmented forest image contains red trees, although there are no red trees in the original image.</ns0:p><ns0:p>Here, red trees occur in the augmented image because the original image's color distribution contains colors towards the red tones in the distribution's tail. In the third row, in the augmented image, each olive type's colors in the original image are changed differently while the background is not changed. In the fourth row, the old car's rust tones in the original image are changed naturally in the augmented image.</ns0:p><ns0:p>In the fifth row, the trees and the building roof's colors become greener with slight color changes in the buildings and the road in the augmented urban image.</ns0:p><ns0:p>DPDA results shown in Figure <ns0:ref type='figure' target='#fig_2'>12</ns0:ref> are all plausible image augmentations. In all these visible results, some image pixel colors are shifted to the tail of the image's color data distribution. Thus, the image is transformed into a less occurring version of itself. This is quite useful to increase data variability of the training dataset since the proposed data augmentation approach generates fewer occurring images, and thus original dataset is enriched. Therefore, the over-fitting problem is reduced while increasing the training accuracy. Since the image color data guides data augmentation, the algorithm does not require different parameter selections for different images, i.e., images with different content, resolution, or camera characteristics. Accordingly, default DPDA parameters are used for the data augmentations in Figure <ns0:ref type='figure' target='#fig_2'>12</ns0:ref> (as qualitative experiments) and also for all the quantitative experiments.</ns0:p></ns0:div>
<ns0:div><ns0:head>Quantitative Results</ns0:head><ns0:p>Training a DL network from scratch requires a considerable amount of data and computational power.</ns0:p><ns0:p>Therefore, researchers and practitioners with limited data and computational resources prefer to reuse existing DL architectures, which are trained with millions of data and using server farms. This reuse methodology employs a transfer learning approach where a well-proven DL model is fine-tuned with a new dataset <ns0:ref type='bibr' target='#b28'>(Shao et al. (2015)</ns0:ref>). Pre-training a DL network with transfer learning yields successful results, even with a small train dataset. However, transfer learning provides excellent results if the data and pre-trained model are on a similar domain <ns0:ref type='bibr' target='#b42'>(Yosinski et al. (2014)</ns0:ref>). A model pre-trained with the Imagenet dataset gives better outcomes for the datasets in the same domain, such as CIFAR-10 or Caltech-101. On the other hand, if the model is tuned using a small amount of training data that is not in a similar domain, the performance benefits of transferring features decrease. So, data augmentation helps increase dataset size and variety to remedy such problems <ns0:ref type='bibr' target='#b28'>(Shao et al. (2015)</ns0:ref>).</ns0:p><ns0:p>Note that our aim is not to give an extensive study of the architecture of CNNs as done by Szegedy <ns0:ref type='formula'>2019</ns0:ref>)). For all the experiments, we used an SGD solver with a momentum of 0.9.</ns0:p><ns0:p>Weights are initialized from a Gaussian distribution N (µ, σ ) for µ = 0 and σ = 10 −2 . We found 20 epochs and a batch size of 32 typically sufficient for convergence.</ns0:p><ns0:p>The following methodology was utilized to create train and validation sets for all datasets used in this study. First, we randomly selected 20 images from each class as a validation set and used the same valida- As seen in Table <ns0:ref type='table' target='#tab_3'>1</ns0:ref>, average accuracy improvement ranges from 1.51% to 4.43%. All data augmentation methods provide performance increase compared to baseline performance for all the train set sizes.</ns0:p><ns0:p>However, the DPDA method and the DPDA combined with the flip image consistently provide the best performances in every test. The results also show that data augmentation in data sets with fewer elements contributes more to accuracy. For example, the highest accuracy increase is 6.98% in the training set consisting of 20 images per class, which is obtained with DPDA+FI augmentation. Next, we compared the performance increase with various data augmentation methods, including the DPDA, using ResNet50 architecture. As seen in Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref>, average accuracy improvement ranges from 2.35% to 5.84%. All data augmentation methods provide performance increase compared to baseline performance for all the train set sizes. However, the DPDA method itself and in combination with the flip image method, consistently provide the best performances in every single test. The results also show that data augmentation in data sets with fewer elements contributes more to accuracy. For example, the highest accuracy increase is 8.49% in the training set consisting of 20 images per class, which is obtained with DPDA+FI augmentation. 
Intel Image Classification Dataset: Like the UC Merced Land-use dataset, first, we compare the performance increase obtained with various data augmentation methods, including DPDA, on the Intel Image Classification dataset using DenseNet201 architecture. As seen in Table <ns0:ref type='table' target='#tab_5'>3</ns0:ref>, average accuracy improvement ranges from 2.25% to 4.94%. All data augmentation methods provide performance increase compared to baseline performance for all the train set sizes. However, the DPDA method provides the best performances in every single test. The results also reveal that data augmentation in data sets with fewer elements contributes more to accuracy. For instance, the highest accuracy increase is 7.23% in the training set consisting of 30 images per class, which is obtained with DPDA augmentation. Next, we compare the performance increase with various data augmentation methods, including the DPDA, using ResNet50 architecture. As seen in Table <ns0:ref type='table' target='#tab_6'>4</ns0:ref>, average accuracy improvement ranges from 1.82% to 4.80%. Every data augmentation methods provide a performance increase compared to baseline performance for all the train set sizes. However, the DPDA method provides the best performances in every single test. The results also indicate that data augmentation in data sets with fewer elements contributes more to accuracy. For instance, the highest accuracy increase is 7.27% in the training set consisting of 20 images per class, which is obtained with DPDA augmentation. This result is also in compliance with the DenseNet comparison study. Oxford-IIIT Pet Dataset: First, we compare the performance increase with various data augmentation methods, including DPDA, on the Oxford-IIIT Pet dataset using DenseNet201 architecture. As seen in Table <ns0:ref type='table' target='#tab_7'>5</ns0:ref>, average accuracy improvement ranges from 2.34% to 3.34%. All data augmentation methods provide performance increase compared to baseline performance for all the train set sizes while DPDA being superior. The results also reveal that data augmentation in data sets with fewer elements contributes more to accuracy. For example, the highest accuracy increase is 6.68% in the training set consisting of 20 images per class, which is obtained with DPDA+FI augmentation.</ns0:p></ns0:div>
<ns0:div><ns0:head>12/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57948:1:2:NEW 17 Apr 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Next, we compare the performance increase with various data augmentation methods, including the DPDA, using ResNet50 architecture. As seen in Table <ns0:ref type='table' target='#tab_8'>6</ns0:ref>, average accuracy improvement ranges from 3.03% to 6.33%. Every data augmentation methods provide a performance increase compared to baseline performance for all the train set sizes. However, the DPDA+FI method provides the best performances in every single test. The results also show that data augmentation in data sets with fewer elements contributes more to accuracy. For instance, the highest accuracy increase is 12.66% in the training set consisting of 20 images per class, which is again obtained with DPDA+FI augmentation. <ns0:ref type='table' target='#tab_9'>7</ns0:ref>). The accuracy improvement obtained with DPDA is 8.49%. The second highest accuracy improvement achieved with the FI augmentation is 0.81%, which is much less than the DPDA accuracy improvement. On the other hand, every data augmentation method does not provide a performance increase compared to baseline performance. For instance, the GC decreases the accuracy by −2.20%. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>Execution Time Analysis</ns0:head><ns0:p>There are n pixels in an image. During the execution of DPDA, for each pixel, we find a path with a length of up to L/2. For each point in a path, the nearest neighbor search using FLANN method <ns0:ref type='bibr' target='#b20'>(Muja and Lowe (2014)</ns0:ref>) is done to retrieve k neighbor points. Our data is 3 channel so dimension d is 3. For each image, the FLANN tree is constructed for once where tree construction has a computational complexity of O(ndKI(logn/logK)), where I is a maximum number of iterations, and K is the branching factor.</ns0:p><ns0:p>We used exact search in FLANN, which leads to O(Md(logn/logK)) for single nearest neighbor search where M is a maximum number of points to examine. However, we need to do a separate neighbor search for L/2 times for n pixels, which leads to nL/2 neighbor search operations. Thus, computational complexity of the all neighbour search operations is O(nLMd(logn/logK)). Considering FLANN tree construction operation and all neighbor search operations, we end up with O(nd(KI + LM)(logn/logK))</ns0:p><ns0:p>as our computational complexity due to neighbor search operations in DPDA.</ns0:p><ns0:p>Actual execution time with respect to image size (# of pixels) is shown in Figure <ns0:ref type='figure' target='#fig_15'>13</ns0:ref>. This figure shows that execution times for large images are pretty long. This is due to the nearest neighbor search operations' complexity even if FLANN is used for efficient nearest neighbor search operations. Fortunately, in DL architectures, images are generally in small size, i.e., 256 × 256. Note that, the execution time of compared methods are in the order of milliseconds, so we did not add them to Figure <ns0:ref type='figure' target='#fig_15'>13</ns0:ref>. </ns0:p></ns0:div>
<ns0:div><ns0:head>DISCUSSION & FUTURE WORKS</ns0:head><ns0:p>The proposed DPDA method employs a distribution preserving approach to create plausible variants of a given image, as shown in qualitative and quantitative results. These augmented images enrich the training dataset so that the over-fitting problem is reduced while higher training accuracies are obtained.</ns0:p><ns0:p>Obtained augmentation performance is demonstrated on UC Merced Land-use, Intel Image Classification, and Oxford-IIIT Pet datasets for classification and segmentation tasks. These experiments show the superiority of the proposed DPDA method compared to commonly used data augmentation methods such as image flipping, histogram equalization, gamma correction, and random erasing. We also combined our DPDA method with a geometric data augmentation method (flip), and in most cases, the performance of DPDA is slightly increased. This shows that the DPDA method can be combined with other data augmentation methods to increase performance further. Therefore, it is evident that DPDA is a good candidate for data augmentation tasks in different scenarios. This is consistent with the research outcomes in the literature where various data augmentation methods provide performance improvement in numerous machine learning tasks and datasets <ns0:ref type='bibr' target='#b29'>(Shorten and Khoshgoftaar (2019)</ns0:ref>). Although the proposed method provides outstanding data augmentation capabilities, there is still room for further improvements. These improvements can be divided into three groups: computational efficiency improvements, augmentation performance improvements (reflecting on DL training), and usage dissemination improvements.</ns0:p><ns0:p>Data augmentation methods are generally fast, while the DPDA method is not as fast as its competitors.</ns0:p><ns0:p>The main reason for this speed bottleneck is the computational burden of neighbor search, which is also the reason for the slowness of mean-shift-based clustering or filtering methods. This bottleneck can be alleviated by changing FLANN with a faster or a specifically designed neighbor search method. Additional speed-ups can be obtained using CPU and GPU parallelization techniques since an image contains lots of pixels, and finding density decreasing path for each pixel is independent of other pixels that can be done in parallel. Since using GPU is a common approach for DL training, GPU parallelized DPDA method will not cause extra hardware procurement on its user. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Performance of the DPDA method can be increased using spatial regularization, i.e., using graph-cut, dealing with blocking artifacts due to JPEG compression. Similar images can be retrieved, and their color data can be added to the image color data to be augmented, which may increase the quality and variety of the color data distribution especially if the image size is small. DPDA uses Perlin noise to create different augmentations from a single density decreasing path per pixel. However, Perlin noise is spatially smooth approach but still a purely random one. Instead, an image can be segmented into background and foreground objects then randomization can be done in an object-wise manner.</ns0:p><ns0:p>DPDA code can be extended to multispectral and hyperspectral images, which have 4 or more channels.</ns0:p><ns0:p>Additionally, DPDA is not limited to the augmentation of images and can be easily adapted to augment</ns0:p><ns0:p>any training data since it already works in a feature space. This is quite useful for training traditional machine learning methods that generally work on data with already extracted features. Furthermore, DPDA can be ported to Python for easy integration with current Python-based DL frameworks.</ns0:p><ns0:p>As a future study, in addition to various performance improvements and support for augmentation in feature space, we plan to improve computational efficiency using special techniques and data structures with a parallelized implementation in Python.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>In this paper, a novel distribution-preserving data augmentation (DPDA) method that creates plausible variations of the given image is presented. There is no study using a distribution-preserving approach that creates plausible image variations to the best of our knowledge. The proposed method employs density decreasing direction to create paths from colors of the original pixels to the tails of the image data distribution. We achieved this by regularizing the opposite of the mean-shift direction with length and orientation constraints. Finally, we developed efficient mechanisms to obtain these density decreasing paths, fused with Perlin noise results to create as many augmented images as desired.</ns0:p><ns0:p>The proposed method's performance is presented in a transfer learning scenario using three different the other hand, the UC Merced land-use dataset is obtained from nadir as over-head imagery (that can be acquired using airborne and spaceborne platforms). Also, the resolution and camera characteristics of the ImageNet dataset are pretty different from the resolution and camera characteristics of the UC Merced Land-use dataset. Nevertheless, transfer learning able to cope with this challenging adaptation.</ns0:p><ns0:p>However, the UC Merced land-use dataset's size is small, limiting the applied transfer learning schema's adaptation performance. This is a common scenario since companies or institutions develop pre-trained models with large datasets and substantial computational resources. Despite this, researchers who use these pre-trained models with transfer learning to adapt them to their problem domain generally have small datasets and scarce computational resources. In this study, the transfer learning performance is further increased using data augmentation methods such as the proposed DPDA, image flipping, histogram equalization, gamma correction, and random erasing. On the other hand, for image classification and segmentation tasks, the proposed DPDA method consistently shows superior performance compared to commonly used data augmentation methods on different datasets and different training sizes using three different DL architectures. Therefore, we concluded that the proposed DPDA method provides successful data augmentation performance.</ns0:p><ns0:p>Although the proposed method provides superior data augmentation capabilities, there is still room for further improvements. However, we did not implement these improvements since we want to present our novel density-preserving data augmentation idea's baseline performance in its simplest form. Nevertheless, possible improvements and future studies are shared in 'Discussion & Future Works' section. Among these possible future studies, improving the computational efficiency of the proposed DPDA is the most important one since high computational complexity seems to be the most significant disadvantage of the proposed method. As a final remark, although we presented our DPDA method as an image augmentation study, it is not limited to images and can work for all kinds of the dataset with already extracted features since it works in feature space. This is an excellent property of the proposed DPDA method since most image data augmentation methods are only limited to the image domain. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. 
Example plausible and unplausible images</ns0:figDesc><ns0:graphic coords='3,153.07,566.56,126.43,126.43' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>(c) compared to Figure 1(b). Although Figure 1(d) is also a plausible image, it contains a limited variation.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Color distribution for the image in Figure 1(a)</ns0:figDesc><ns0:graphic coords='4,146.89,195.60,259.66,120.90' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Density decreasing paths for Figure 1(a)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. DPDA Process Diagram</ns0:figDesc><ns0:graphic coords='5,181.84,468.58,330.85,71.01' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Feature space of the image in Figure 1(a), without and with image pyramid</ns0:figDesc><ns0:graphic coords='6,63.19,14.23,452.82,227.12' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Figure 8. Here, we choose parameter values randomly from a predefined range where parameters are roughness (N roughness ), noise scale (N scale ), and noise center (N center ). Finally, modified Perlin noise is generated using C x,y = 0.5(tanh(N scale * (N x,y − N center )) + 1) where N x,y = Perlin.generate(xN roughness , yN roughness , 1) is original Perlin noise generation function.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Effects of different Perlin noises (top row) on augmented images (bottom row)</ns0:figDesc><ns0:graphic coords='8,155.29,641.59,72.84,62.95' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>conducted qualitative and quantitative experiments using different datasets and DL networks to evaluate the effectiveness of the proposed DPDA method. Training and testing are carried out on a server running Ubuntu Linux with Intel i9 CPU (3.7 GHz), 128 GB RAM, Nvidia RTX 3070 GPU. Python using the Keras API and TensorFlow DL libraries are utilized for training the models. This section describes the datasets and experiments used to obtain qualitative and quantitative results. Datasets We used Pxfuel 2 for qualitative experiments, and three different datasets for quantitative experiments, namely the UC Merced Land-use Yang and Newsam (2010), the Intel Image Classification 3 , and the Oxford-IIIT Pet datasets Parkhi et al. (2012). UC Merced Land-use dataset consists of satellite images of size 256 × 256 and 0.3-meter resolution that are open to the public. There are a total of 21 classes and 100 images in each class (see Figure 9).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Example image classes from UCMerced land-use dataset</ns0:figDesc><ns0:graphic coords='9,153.43,241.51,60.87,60.87' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. Example image classes from Intel Classification dataset</ns0:figDesc><ns0:graphic coords='9,153.43,405.57,60.87,60.87' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11. Example images from Oxford IIIT Pet dataset 2 https://www.pxfuel.com/ (Royalty-free stock photos free & unlimited download) 3 https://www.kaggle.com/puneet6060/intel-image-classification/</ns0:figDesc><ns0:graphic coords='9,359.23,591.87,97.58,73.18' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>Figure 12. Plausible Image Augmentations</ns0:figDesc><ns0:graphic coords='10,205.91,601.00,136.87,91.08' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>et al. (2015), or<ns0:ref type='bibr' target='#b12'>He et al. (2016)</ns0:ref> but to briefly use them for evaluating the performance of the proposed DPDA method in transfer learning settings. Resnet50<ns0:ref type='bibr' target='#b12'>(He et al. (2016)</ns0:ref>) and DenseNet201<ns0:ref type='bibr' target='#b15'>(Huang et al. (2016)</ns0:ref>) network weights trained on ImageNet are used as starting weights in the classification task since they are widely used in current studies<ns0:ref type='bibr' target='#b18'>(Khan et al. (2020))</ns0:ref>. Then the models are fine-tuned during training<ns0:ref type='bibr' target='#b36'>(Vrbančič and Podgorelec (2020)</ns0:ref>) since the initial layers of CNNs preserve more abstract, generic features. We copy only the weights of the convolutional layers rather than the entire network, excluding the fully connected layers. MobileNetV2<ns0:ref type='bibr' target='#b27'>(Sandler et al. (2018)</ns0:ref>) network weights trained on ImageNet are used as starting weights in the segmentation task as a base model and trained with a CNN architecture based on U-Net(Silburt et al. (</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc>tion set in all tests. Then, we created different train sets in various sizes (N = 20, 30, 40, 50, 60, 70, 80) to investigate the effect of training dataset size on classification performance using the data augmentation approaches. For segmentation tests, we only used the train set size of 80. To avoid sample imbalance in the training datasets, we randomly selected the training datasets in equal numbers from each class. We evaluated the final classification performance of each dataset with the average accuracy over 10 runs. We increased the original training dataset size 5-fold, utilizing random erase (RE), flip image (FI), gamma correction (GC), histogram equalization combined with gamma correction (HE+GC), the proposed DPDA method, and the DPDA method combined with the flip image (DPDA+FI) separately. We implemented color-based augmentation methods as done in the CLoDSA library (Casado-García et al. (2019)). We conducted a performance comparison study using transfer learning with three different DL architectures, namely DenseNet, ResNet, and MobileNetV2. These architectures are trained using transfer learning on original and augmented versions of the 3 datasets. In the experiments, the DPDA, DPDA+FI, FI, RE, GC, and HE+GC methods are used for the augmentation of images. Baseline performances are obtained by training on the original datasets using the transfer learning approach. Augmentation performances for classification on the UC Merced Land-use, Intel Image Classification, and Oxford-IIIT Pet datasets and for segmentation on the Oxford-IIIT Pet dataset, compared to the baseline performances, are presented. UC Merced Land-use Dataset: First, we compare the performance increase obtained with various data augmentation methods and DPDA on the UC Merced Land-use dataset using the DenseNet201 architecture.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure 13 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Figure 13. Execution time with respect to image size</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head /><ns0:label /><ns0:figDesc>DL architectures: DenseNet, ResNet, and MobileNetV2. These DL architectures are trained with millions of color images, where we used transfer learning to adapt these models to different problem domains. We tested the DPDA for classification on the UC Merced Land-use, Intel Image Classification, and Oxford-IIIT Pet datasets and image segmentation on the Oxford-IIIT Pet dataset. Note that, DenseNet, ResNet, and MobileNetV2 are trained with side-view commodity camera images, namely ImageNet. On</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Data augmentation accuracy comparisons (%) in different sizes of datasets (N) using</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='3'>DenseNet201 on UC Merced Land-use dataset</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>N</ns0:cell><ns0:cell cols='4'>Baseline DPDA+FI DPDA RE</ns0:cell><ns0:cell>FI</ns0:cell><ns0:cell cols='2'>HE+GC GC</ns0:cell></ns0:row><ns0:row><ns0:cell>20</ns0:cell><ns0:cell>76.35</ns0:cell><ns0:cell>83.33</ns0:cell><ns0:cell>82.86</ns0:cell><ns0:cell cols='3'>80.23 80.74 79.52</ns0:cell><ns0:cell>78.96</ns0:cell></ns0:row><ns0:row><ns0:cell>30</ns0:cell><ns0:cell>82.46</ns0:cell><ns0:cell>88.25</ns0:cell><ns0:cell>88.09</ns0:cell><ns0:cell cols='3'>86.00 85.55 84.52</ns0:cell><ns0:cell>84.76</ns0:cell></ns0:row><ns0:row><ns0:cell>40</ns0:cell><ns0:cell>84.12</ns0:cell><ns0:cell>88.33</ns0:cell><ns0:cell>88.25</ns0:cell><ns0:cell cols='3'>86.19 86.11 86.51</ns0:cell><ns0:cell>85.56</ns0:cell></ns0:row><ns0:row><ns0:cell>50</ns0:cell><ns0:cell>86.27</ns0:cell><ns0:cell>90.16</ns0:cell><ns0:cell>90.00</ns0:cell><ns0:cell cols='3'>88.41 88.57 87.62</ns0:cell><ns0:cell>87.93</ns0:cell></ns0:row><ns0:row><ns0:cell>60</ns0:cell><ns0:cell>87.61</ns0:cell><ns0:cell>91.34</ns0:cell><ns0:cell>91.43</ns0:cell><ns0:cell cols='3'>89.92 89.60 89.76</ns0:cell><ns0:cell>88.65</ns0:cell></ns0:row><ns0:row><ns0:cell>70</ns0:cell><ns0:cell>89.28</ns0:cell><ns0:cell>92.62</ns0:cell><ns0:cell>92.54</ns0:cell><ns0:cell cols='3'>91.19 90.71 90.24</ns0:cell><ns0:cell>90.16</ns0:cell></ns0:row><ns0:row><ns0:cell>80</ns0:cell><ns0:cell>89.60</ns0:cell><ns0:cell>92.70</ns0:cell><ns0:cell>92.54</ns0:cell><ns0:cell cols='3'>90.95 90.72 90.16</ns0:cell><ns0:cell>90.24</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Average 85.10</ns0:cell><ns0:cell>89.53</ns0:cell><ns0:cell>89.39</ns0:cell><ns0:cell cols='3'>87.56 87.43 86.90</ns0:cell><ns0:cell>86.61</ns0:cell></ns0:row><ns0:row><ns0:cell>Increase</ns0:cell><ns0:cell /><ns0:cell>4.43</ns0:cell><ns0:cell>4.29</ns0:cell><ns0:cell>2.46</ns0:cell><ns0:cell>2.33</ns0:cell><ns0:cell>1.81</ns0:cell><ns0:cell>1.51</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Data</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='8'>augmentation accuracy comparisons (%) in different sizes of datasets (N) using ResNet50</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>on UC Merced Land-use Dataset</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>N</ns0:cell><ns0:cell cols='4'>Baseline DPDA+FI DPDA RE</ns0:cell><ns0:cell>FI</ns0:cell><ns0:cell cols='2'>HE+GC GC</ns0:cell></ns0:row><ns0:row><ns0:cell>20</ns0:cell><ns0:cell>74.76</ns0:cell><ns0:cell>83.25</ns0:cell><ns0:cell>82.86</ns0:cell><ns0:cell cols='3'>80.87 80.71 79.84</ns0:cell><ns0:cell>79.52</ns0:cell></ns0:row><ns0:row><ns0:cell>30</ns0:cell><ns0:cell>80.07</ns0:cell><ns0:cell>86.67</ns0:cell><ns0:cell>86.11</ns0:cell><ns0:cell cols='3'>83.33 82.78 82.38</ns0:cell><ns0:cell>81.90</ns0:cell></ns0:row><ns0:row><ns0:cell>40</ns0:cell><ns0:cell>82.62</ns0:cell><ns0:cell>89.05</ns0:cell><ns0:cell>88.99</ns0:cell><ns0:cell cols='3'>85.95 85.55 84.68</ns0:cell><ns0:cell>84.12</ns0:cell></ns0:row><ns0:row><ns0:cell>50</ns0:cell><ns0:cell>83.81</ns0:cell><ns0:cell>88.97</ns0:cell><ns0:cell>88.29</ns0:cell><ns0:cell cols='3'>86.32 86.19 86.97</ns0:cell><ns0:cell>85.95</ns0:cell></ns0:row><ns0:row><ns0:cell>60</ns0:cell><ns0:cell>85.00</ns0:cell><ns0:cell>90.48</ns0:cell><ns0:cell>90.32</ns0:cell><ns0:cell cols='3'>87.62 87.93 87.85</ns0:cell><ns0:cell>87.69</ns0:cell></ns0:row><ns0:row><ns0:cell>70</ns0:cell><ns0:cell>86.66</ns0:cell><ns0:cell>91.27</ns0:cell><ns0:cell>91.19</ns0:cell><ns0:cell cols='3'>89.87 89.46 88.96</ns0:cell><ns0:cell>88.57</ns0:cell></ns0:row><ns0:row><ns0:cell>80</ns0:cell><ns0:cell>87.62</ns0:cell><ns0:cell>91.74</ns0:cell><ns0:cell>91.67</ns0:cell><ns0:cell cols='3'>90.47 90.31 90.18</ns0:cell><ns0:cell>89.21</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Average 82.93</ns0:cell><ns0:cell>88.78</ns0:cell><ns0:cell>88.49</ns0:cell><ns0:cell cols='3'>86.35 86.13 85.84</ns0:cell><ns0:cell>85.28</ns0:cell></ns0:row><ns0:row><ns0:cell>Increase</ns0:cell><ns0:cell /><ns0:cell>5.84</ns0:cell><ns0:cell>5.56</ns0:cell><ns0:cell>3.41</ns0:cell><ns0:cell>3.20</ns0:cell><ns0:cell>2.90</ns0:cell><ns0:cell>2.35</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Data</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='7'>augmentation accuracy comparisons (%) in different sizes of datasets (N) using</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>DenseNet201 on Intel Image Classification Dataset</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>N</ns0:cell><ns0:cell cols='3'>Baseline DPDA RE</ns0:cell><ns0:cell>GC</ns0:cell><ns0:cell cols='2'>HE+GC FI</ns0:cell></ns0:row><ns0:row><ns0:cell>20</ns0:cell><ns0:cell>82.00</ns0:cell><ns0:cell>88.89</ns0:cell><ns0:cell cols='3'>87.83 87.00 87.50</ns0:cell><ns0:cell>86.33</ns0:cell></ns0:row><ns0:row><ns0:cell>30</ns0:cell><ns0:cell>83.33</ns0:cell><ns0:cell>90.56</ns0:cell><ns0:cell cols='3'>89.16 89.50 88.33</ns0:cell><ns0:cell>88.00</ns0:cell></ns0:row><ns0:row><ns0:cell>40</ns0:cell><ns0:cell>86.50</ns0:cell><ns0:cell>90.00</ns0:cell><ns0:cell cols='3'>88.50 88.33 88.61</ns0:cell><ns0:cell>86.83</ns0:cell></ns0:row><ns0:row><ns0:cell>50</ns0:cell><ns0:cell>86.66</ns0:cell><ns0:cell>91.39</ns0:cell><ns0:cell cols='3'>90.16 90.00 89.16</ns0:cell><ns0:cell>90.33</ns0:cell></ns0:row><ns0:row><ns0:cell>60</ns0:cell><ns0:cell>88.69</ns0:cell><ns0:cell>92.50</ns0:cell><ns0:cell cols='3'>91.83 90.83 90.83</ns0:cell><ns0:cell>90.16</ns0:cell></ns0:row><ns0:row><ns0:cell>70</ns0:cell><ns0:cell>88.92</ns0:cell><ns0:cell>93.33</ns0:cell><ns0:cell cols='3'>91.50 90.83 91.00</ns0:cell><ns0:cell>90.17</ns0:cell></ns0:row><ns0:row><ns0:cell>80</ns0:cell><ns0:cell>90.16</ns0:cell><ns0:cell>94.16</ns0:cell><ns0:cell cols='3'>92.50 90.50 90.50</ns0:cell><ns0:cell>90.17</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Average 86.61</ns0:cell><ns0:cell>91.55</ns0:cell><ns0:cell cols='3'>90.21 89.57 89.42</ns0:cell><ns0:cell>88.86</ns0:cell></ns0:row><ns0:row><ns0:cell>Increase</ns0:cell><ns0:cell /><ns0:cell>4.94</ns0:cell><ns0:cell>3.60</ns0:cell><ns0:cell>2.96</ns0:cell><ns0:cell>2.81</ns0:cell><ns0:cell>2.25</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Data</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='7'>augmentation accuracy comparisons (%) in different sizes of datasets (N) using ResNet50</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>on Intel Image Classification Dataset</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>N</ns0:cell><ns0:cell cols='3'>Baseline DPDA RE</ns0:cell><ns0:cell>FI</ns0:cell><ns0:cell>GC</ns0:cell><ns0:cell>HE+GC</ns0:cell></ns0:row><ns0:row><ns0:cell>20</ns0:cell><ns0:cell>79.67</ns0:cell><ns0:cell>86.94</ns0:cell><ns0:cell cols='4'>85.33 83.83 84.22 84.61</ns0:cell></ns0:row><ns0:row><ns0:cell>30</ns0:cell><ns0:cell>82.50</ns0:cell><ns0:cell>88.96</ns0:cell><ns0:cell cols='4'>87.50 86.16 86.44 85.83</ns0:cell></ns0:row><ns0:row><ns0:cell>40</ns0:cell><ns0:cell>84.67</ns0:cell><ns0:cell>90.28</ns0:cell><ns0:cell cols='4'>88.67 88.66 87.50 86.66</ns0:cell></ns0:row><ns0:row><ns0:cell>50</ns0:cell><ns0:cell>86.83</ns0:cell><ns0:cell>90.83</ns0:cell><ns0:cell cols='4'>89.67 88.66 88.33 88.22</ns0:cell></ns0:row><ns0:row><ns0:cell>60</ns0:cell><ns0:cell>88.16</ns0:cell><ns0:cell>91.39</ns0:cell><ns0:cell cols='4'>90.00 89.67 88.33 89.16</ns0:cell></ns0:row><ns0:row><ns0:cell>70</ns0:cell><ns0:cell>88.94</ns0:cell><ns0:cell>92.78</ns0:cell><ns0:cell cols='4'>92.00 90.83 90.00 89.33</ns0:cell></ns0:row><ns0:row><ns0:cell>80</ns0:cell><ns0:cell>89.33</ns0:cell><ns0:cell>92.50</ns0:cell><ns0:cell cols='4'>92.00 90.33 88.67 89.00</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Average 85.73</ns0:cell><ns0:cell>90.53</ns0:cell><ns0:cell cols='4'>89.31 88.31 87.64 87.54</ns0:cell></ns0:row><ns0:row><ns0:cell>Increase</ns0:cell><ns0:cell /><ns0:cell>4.80</ns0:cell><ns0:cell>3.58</ns0:cell><ns0:cell>2.58</ns0:cell><ns0:cell>1.91</ns0:cell><ns0:cell>1.82</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Data augmentation accuracy comparisons (%) in different sizes of datasets (N) using DenseNet201 on Oxford-IIIT Pet Dataset</ns0:figDesc><ns0:table><ns0:row><ns0:cell>N</ns0:cell><ns0:cell cols='5'>Baseline DPDA DPDA+FI HE+GC FI</ns0:cell><ns0:cell>GC</ns0:cell><ns0:cell>RE</ns0:cell></ns0:row><ns0:row><ns0:cell>20</ns0:cell><ns0:cell>81.67</ns0:cell><ns0:cell>87.89</ns0:cell><ns0:cell>88.35</ns0:cell><ns0:cell>86.91</ns0:cell><ns0:cell cols='3'>87.43 88.27 86.32</ns0:cell></ns0:row><ns0:row><ns0:cell>30</ns0:cell><ns0:cell>85.26</ns0:cell><ns0:cell>90.95</ns0:cell><ns0:cell>89.05</ns0:cell><ns0:cell>89.91</ns0:cell><ns0:cell cols='3'>89.21 88.78 88.13</ns0:cell></ns0:row><ns0:row><ns0:cell>40</ns0:cell><ns0:cell>87.65</ns0:cell><ns0:cell>90.14</ns0:cell><ns0:cell>90.27</ns0:cell><ns0:cell>89.67</ns0:cell><ns0:cell cols='3'>89.62 88.83 89.10</ns0:cell></ns0:row><ns0:row><ns0:cell>50</ns0:cell><ns0:cell>88.36</ns0:cell><ns0:cell>91.13</ns0:cell><ns0:cell>90.94</ns0:cell><ns0:cell>90.54</ns0:cell><ns0:cell cols='3'>90.67 90.67 90.62</ns0:cell></ns0:row><ns0:row><ns0:cell>60</ns0:cell><ns0:cell>89.85</ns0:cell><ns0:cell>92.24</ns0:cell><ns0:cell>92.27</ns0:cell><ns0:cell>91.21</ns0:cell><ns0:cell cols='3'>91.62 91.54 91.81</ns0:cell></ns0:row><ns0:row><ns0:cell>70</ns0:cell><ns0:cell>90.73</ns0:cell><ns0:cell>92.43</ns0:cell><ns0:cell>92.51</ns0:cell><ns0:cell>92.19</ns0:cell><ns0:cell cols='3'>92.12 92.51 92.29</ns0:cell></ns0:row><ns0:row><ns0:cell>80</ns0:cell><ns0:cell>90.79</ns0:cell><ns0:cell>93.10</ns0:cell><ns0:cell>94.02</ns0:cell><ns0:cell>93.27</ns0:cell><ns0:cell cols='3'>92.91 92.83 92.56</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Average 87.78</ns0:cell><ns0:cell>91.13</ns0:cell><ns0:cell>91.06</ns0:cell><ns0:cell>90.53</ns0:cell><ns0:cell cols='3'>90.51 90.49 90.12</ns0:cell></ns0:row><ns0:row><ns0:cell>Increase</ns0:cell><ns0:cell /><ns0:cell>3.34</ns0:cell><ns0:cell>3.27</ns0:cell><ns0:cell>2.75</ns0:cell><ns0:cell>2.73</ns0:cell><ns0:cell>2.71</ns0:cell><ns0:cell>2.34</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Data</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='8'>augmentation accuracy comparisons (%) in different sizes of datasets (N) using ResNet50</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>on Oxford-IIIT Pet Dataset</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>N</ns0:cell><ns0:cell cols='4'>Baseline DPDA+FI DPDA RE</ns0:cell><ns0:cell>FI</ns0:cell><ns0:cell>GC</ns0:cell><ns0:cell>HE+GC</ns0:cell></ns0:row><ns0:row><ns0:cell>20</ns0:cell><ns0:cell>67.74</ns0:cell><ns0:cell>80.40</ns0:cell><ns0:cell>79.05</ns0:cell><ns0:cell cols='4'>77.08 78.27 72.94 72.51</ns0:cell></ns0:row><ns0:row><ns0:cell>30</ns0:cell><ns0:cell>75.46</ns0:cell><ns0:cell>82.83</ns0:cell><ns0:cell>82.48</ns0:cell><ns0:cell cols='4'>82.27 81.08 81.40 79.78</ns0:cell></ns0:row><ns0:row><ns0:cell>40</ns0:cell><ns0:cell>78.51</ns0:cell><ns0:cell>85.54</ns0:cell><ns0:cell>84.70</ns0:cell><ns0:cell cols='4'>82.75 82.70 82.91 82.99</ns0:cell></ns0:row><ns0:row><ns0:cell>50</ns0:cell><ns0:cell>80.72</ns0:cell><ns0:cell>86.62</ns0:cell><ns0:cell>86.48</ns0:cell><ns0:cell cols='4'>84.64 85.67 84.29 84.16</ns0:cell></ns0:row><ns0:row><ns0:cell>60</ns0:cell><ns0:cell>84.01</ns0:cell><ns0:cell>86.70</ns0:cell><ns0:cell>88.73</ns0:cell><ns0:cell cols='4'>87.54 85.81 86.75 85.43</ns0:cell></ns0:row><ns0:row><ns0:cell>70</ns0:cell><ns0:cell>84.91</ns0:cell><ns0:cell>90.08</ns0:cell><ns0:cell>90.00</ns0:cell><ns0:cell cols='4'>89.94 88.24 88.40 86.57</ns0:cell></ns0:row><ns0:row><ns0:cell>80</ns0:cell><ns0:cell>85.85</ns0:cell><ns0:cell>89.32</ns0:cell><ns0:cell>89.34</ns0:cell><ns0:cell cols='4'>88.83 89.00 88.51 87.00</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Average 79.60</ns0:cell><ns0:cell>85.93</ns0:cell><ns0:cell>85.83</ns0:cell><ns0:cell cols='4'>84.72 84.40 83.60 82.63</ns0:cell></ns0:row><ns0:row><ns0:cell>Increase</ns0:cell><ns0:cell /><ns0:cell>6.33</ns0:cell><ns0:cell>6.23</ns0:cell><ns0:cell>5.12</ns0:cell><ns0:cell>4.80</ns0:cell><ns0:cell>4.00</ns0:cell><ns0:cell>3.03</ns0:cell></ns0:row><ns0:row><ns0:cell cols='8'>We used the U-Net architecture on top of the MobileNetV2 architecture for the Oxford-IIIT Pet</ns0:cell></ns0:row><ns0:row><ns0:cell cols='8'>Dataset segmentation experiments. DPDA provides by far the best performance in these experiments</ns0:cell></ns0:row><ns0:row><ns0:cell>(see Table</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Segmentation performance comparisons using MobileNetV2+U-Net on Oxford-IIIT Pet Dataset The above results indicate that models trained with the augmented UC Merced Land-use, Intel Image Classification, Oxford-IIIT Pet datasets, with the DPDA method, significantly improve classification performance. Besides, DPDA also provides superior performance in an image segmentation task. Thus, we can infer that the proposed DPDA method can improve DL performance for different datasets, different DL architectures (ResNet, DenseNet, and MobileNetV2), and different image analysis tasks.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='3'>Baseline DPDA FI</ns0:cell><ns0:cell>RE</ns0:cell><ns0:cell cols='2'>HE+GC GC</ns0:cell></ns0:row><ns0:row><ns0:cell>Average Accuracy 80.82</ns0:cell><ns0:cell>89.31</ns0:cell><ns0:cell cols='3'>81.64 80.59 80.44</ns0:cell><ns0:cell>78.63</ns0:cell></ns0:row><ns0:row><ns0:cell>Accuracy Increase</ns0:cell><ns0:cell>8.49</ns0:cell><ns0:cell>0.81</ns0:cell><ns0:cell>0.13</ns0:cell><ns0:cell>-0.38</ns0:cell><ns0:cell>-2.20</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Response to the Editor and the Reviewers
April 13, 2021
It is my opinion as the Academic Editor for your article ’Distribution-preserving data augmentation’ that it requires a number of Major Revisions.
Comment to authors: Please consider including other data augmentation methods in the comparison.
Response:
We are grateful for the fruitful comments and suggestions of the reviewers and the editor.
We tried our best to address these comments and suggestions, which we believe improved the
paper's quality significantly. Among all the reviews we received, the most critical ones are addressed by adding two more datasets, a segmentation experiment, and more augmentation methods to compare against. Moreover, the refined and enlarged experiments all comply with our initial findings on the UC Merced dataset in our first submission.
Note: While revising our study, we re-ran the tests from the first submission from scratch on a system with a new, up-to-date graphics card. In these new tests, we increased the number of epochs from 10 to 20. Besides, we used the SGD optimizer instead of the Adam optimizer. As a result of these new tests, there have been slight changes (improvements) in the previously reported accuracy rates.
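For concreteness, the revised optimizer and epoch settings can be expressed as the following minimal Keras sketch; the learning rate and momentum shown are illustrative assumptions, since only the optimizer type (SGD) and epoch count (20) are fixed by the note above.

```python
import tensorflow as tf

def train(model, train_ds, val_ds):
    # Revised setup from the note above: SGD instead of Adam, 20 epochs
    # instead of 10. Learning rate and momentum are illustrative assumptions.
    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model.fit(train_ds, validation_data=val_ds, epochs=20)
```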
Reviewer 1
Basic reporting
The paper presents a color-based data augmentation method based on distribution-preserving
color that creates plausible image variation by shifting pixel colors to another point in the image
distribution. Although, this research may be interesting, in my opinion the paper has several
drawbacks that should be addressed.
Response: We appreciate the time and effort of the reviewer and the high-quality feedback we received. We considered all feedback with great care and made the necessary changes to the paper.
Note: While revising our study, we re-ran the tests from the first submission from scratch on a system with a new, up-to-date graphics card. In these new tests, we increased the number of epochs from 10 to 20. Besides, we used the SGD optimizer instead of the Adam optimizer. As a result of these new tests, there have been changes in the previously reported accuracy rates.
The introduction is lengthy and does not focus on the contributions of the paper. Moreover,
the paper is sometimes repetitive. For instance, the idea of using the mean-shift process by
Fukunaga and Hostetler and Comaniciu and Meer is mentioned in Line 108, in Line 134, in
Line 169, and in Line 199. Another example is the ”experimental setup” section which repeats
ideas about the usefulness of transfer learning instead of directly explaining the ”experimental
setup”. Besides, the experimental setup is repeated in the Test Settings and Quantitative Results
sections.
Response: We shortened the introduction as suggested by the reviewer, and we also eliminated
various duplications in the paper. For that purpose, we also reorganized some parts of the paper,
e.g., changing the order of sections or merging sections. We highlight our contributions at the end of the introduction, in addition to the abstract and conclusion sections.
The sentence ”the idea also fits in the feature domain, so it can be applied to the features
directly”, in Line 119, should be removed from the introduction and, if considered necessary,
explained in more detail in the material and methods sections.
Response: Like mean-shift clustering, our method works in feature space, so we flatten image RGB color values into feature vectors and then apply the DPDA method to these features. In the literature, many datasets are formed of feature vectors extracted from various data sources (e.g., as tabular data in CSV format), and the DPDA method, in essence, can work directly on such data. This capability allows our method to operate on both images and such datasets. This is a unique characteristic for a data augmentation method, since data augmentation methods usually work either in the image domain or in the feature-space domain, but not both. However, we decided to keep this as a separate study: classical machine learning methods use such datasets, whereas we want to focus on deep learning approaches in this paper. Thus, we removed this statement from all sections of the paper and only briefly mention it in 'Discussion and Future Works'. We plan to write a conference paper that investigates this capability (working on non-image data) of the DPDA method.
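As a minimal sketch of this point, the helpers below flatten an RGB image into an n × 3 feature matrix, apply a feature-space augmentation, and restore the image layout; `augment_features` is a hypothetical placeholder for DPDA's density-decreasing color shift, and the same code path would accept rows of a tabular (e.g., CSV) feature dataset.

```python
import numpy as np

def image_to_features(img):
    # (H, W, 3) uint8 RGB image -> (H*W, 3) float feature matrix.
    return img.reshape(-1, 3).astype(np.float32)

def features_to_image(feats, shape):
    # Restore augmented features to the original image layout.
    return np.clip(feats, 0, 255).astype(np.uint8).reshape(shape)

def augment_features(feats):
    # Hypothetical placeholder for DPDA's density-decreasing color shift;
    # any augmentation defined on feature vectors (image pixels or, e.g.,
    # rows loaded from a CSV file) could be plugged in here.
    return feats

img = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
augmented = features_to_image(augment_features(image_to_features(img)), img.shape)
```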
Experimental design
My main concern is that the method is compared with a unique augmentation method (random
erasing) on a unique dataset (UC Merced Land-use). But, on the one hand, the authors do not
explain why this dataset is chosen and why this dataset can be beneficial for the proposed method.
On the other hand, the authors do not explain why the random erasing augmentation method
is used to be compared with the proposed method. Random erasing is a method in which training images are generated with various levels of occlusion, which reduces the risk of over-fitting
and makes the model robust to occlusion. The goals of both methods seem to be very different.
Response: In our first submission, we used the UC Merced Land-use dataset since it has significantly different characteristics from the ImageNet dataset on which DenseNet201 and ResNet50 are trained. Also, the dataset contains a limited number of samples, making it a good testbed for data (image) augmentation methods. We deliberately increased the challenge for transfer learning by adapting to new data with insufficient samples, where data augmentation methods (e.g., DPDA) can exhibit their effectiveness. However, as suggested by both reviewers and the editor, we added two more datasets and more data augmentation methods to show the proposed DPDA method's effectiveness in different scenarios and to demonstrate its wide applicability.
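For reference, the random erasing baseline described in the comment above can be sketched in a few lines; the area and aspect-ratio ranges are illustrative assumptions rather than the exact settings of Zhong et al. (2020).

```python
import numpy as np

def random_erase(img, area_range=(0.02, 0.2), aspect_range=(0.3, 3.3)):
    # Occlude a random rectangle of the image with random noise.
    h, w = img.shape[:2]
    area = np.random.uniform(*area_range) * h * w
    aspect = np.random.uniform(*aspect_range)
    rh, rw = int(np.sqrt(area * aspect)), int(np.sqrt(area / aspect))
    if rh >= h or rw >= w:
        return img  # skip implausible rectangles
    y, x = np.random.randint(0, h - rh), np.random.randint(0, w - rw)
    out = img.copy()
    out[y:y + rh, x:x + rw] = np.random.randint(0, 256, (rh, rw, 3))
    return out
```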
In my opinion, the authors should try to strive to find some dataset in which the proposed
augmentation method could be essential. Furthermore, the results of the proposed method
should be compared with more similar methods based on color transformations. That is proposed in the abstract (Line 25): "... superior performance of existing color-based data augmentation methods". Libraries such as albumentations or CLoDSA implement several color-based transformations which can be used in such a comparison.
Response: In our revised paper, we added the Intel Image Classification and Oxford-IIIT Pet datasets in addition to the UC Merced Land-use dataset. For these three datasets, we performed a classification-based performance analysis; we also added a segmentation-based performance analysis for the Oxford-IIIT Pet dataset, since it also contains segmentation masks (trimaps). Based on the CLoDSA library, we incorporated flip image (FI), gamma correction (GC), and histogram equalization (HE) data augmentations in our comparison studies, where we also used combinations of data augmentation methods.
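A minimal sketch of these color-based baselines, implemented directly with OpenCV rather than through the CLoDSA API (whose exact calls we do not reproduce here); the gamma value is an illustrative assumption.

```python
import cv2
import numpy as np

def flip_image(img):
    # FI: horizontal flip.
    return cv2.flip(img, 1)

def gamma_correction(img, gamma=1.5):
    # GC: per-channel gamma correction via a lookup table.
    lut = (((np.arange(256) / 255.0) ** (1.0 / gamma)) * 255).astype(np.uint8)
    return cv2.LUT(img, lut)

def hist_eq_gamma(img, gamma=1.5):
    # HE+GC: equalize the luminance channel, then apply gamma.
    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    return gamma_correction(cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR), gamma)
```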
The authors propose an image pyramid to increase the number of extracted features. Then,
two figures are included showing the effect of this increase in features. The authors explain
(lines 166-167) that ”the effect of image quantization errors are reduced, which demonstrates
the benefit of the image pyramid approach”. Can the authors explain to what extent these errors
are reduced?
Response: The performance of DPDA slightly decreased once we removed the image pyramid from the feature extraction part. Therefore, we decided to keep this process in the feature extraction part, since image pyramid generation is extremely fast and does not cause extra overhead. Note that the image pyramid approach does not reduce quantization errors on the original image pixel values. Instead, it samples new data points in the feature space that are not covered by the image, since pixel values in the original image are discrete (quantized to integer values). However, the image pyramid creates new pixels with floating-point values (due to the smoothing applied during pyramid generation), so in feature space we have new data samples between these discrete values (as seen in Figure 5). Thereby, the quantization error is reduced in the feature space used by the proposed DPDA method.
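The sketch below illustrates this effect under the assumptions stated in the paper (three halvings, Lanczos interpolation over an 8 × 8 neighborhood): the smoothed pyramid levels contain fractional intensities, so they populate feature-space positions that 8-bit quantization leaves empty in the original image.

```python
import cv2
import numpy as np

def pyramid_features(img, levels=4):
    # Stack the original pixels and three halved, Lanczos-smoothed levels
    # into one floating-point feature set, as in DPDA's feature extraction.
    feats = [img.reshape(-1, 3).astype(np.float32)]
    level = img.astype(np.float32)
    for _ in range(levels - 1):
        level = cv2.resize(level, None, fx=0.5, fy=0.5,
                           interpolation=cv2.INTER_LANCZOS4)
        feats.append(level.reshape(-1, 3))
    return np.vstack(feats)

img = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
feats = pyramid_features(img)
# Original pixels are integer-valued; the pyramid levels contribute
# fractional colors lying between the 8-bit quantized values.
print(np.mean(feats != np.round(feats)))
```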
Figure 11 shows that the execution time of the method is very large for large images. A
comparison of execution times of the proposed method and other color-based data augmentation
methods used in the Performance Analysis section should be included.
Response: We added flip image (FI), gamma correction (GC), and histogram equalization (HE) in addition to the random erasing (RE) method in our comparison studies. All these methods have linear computational complexity and execute in less than a second, so we excluded them from the execution-time analysis. Note that input image sizes in common deep learning architectures are generally small, e.g., 224 × 224 or 256 × 256, so the high computational complexity of DPDA when processing large images is not a big issue in practice.
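A simple timing harness of the kind used to verify the sub-second claim above is sketched below; it times the gamma-correction baseline on a 256 × 256 image. The iteration count and gamma value are illustrative assumptions, and the DPDA executable itself is timed separately.

```python
import time
import numpy as np
import cv2

img = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
lut = (((np.arange(256) / 255.0) ** (1.0 / 1.5)) * 255).astype(np.uint8)

t0 = time.perf_counter()
for _ in range(100):
    _ = cv2.LUT(img, lut)  # gamma correction, a linear-time baseline
elapsed = (time.perf_counter() - t0) / 100
print(f"{elapsed:.6f} seconds per image")  # well below one second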
Validity of the findings
There is not an actual Discussion section in which the DPDA method is compared with other similar methods proposed in the literature. The first paragraph of this section only includes
a repetition of some characteristics of the DPDA method. Furthermore, it is suggested that
”the method obtain superior performance compared to other data augmentation approaches,
due to plausible augmentation”. But, as I previously mentioned, the method is only tested on
a unique dataset and compared with a unique data augmentation method. Moreover, it is not
explained why this dataset can benefit from such a plausible augmentation. Finally, the rest of the Discussion section proposes future work. In my opinion, some of this future work could try
to be addressed before the method is published in a journal. In particular, methods for speeding
up the proposal.
Response: In our revised paper, we added the Intel Image Classification and Oxford-IIIT Pet datasets in addition to the UC Merced Land-use dataset. For these three datasets, we performed a classification-based performance analysis. We also added a segmentation-based performance analysis for the Oxford-IIIT Pet dataset, since it also contains segmentation masks (trimaps). Based on the CLoDSA library, we incorporated flip image (FI), gamma correction (GC), and histogram equalization (HE) data augmentations in our comparison studies, where we also used combinations of data augmentation methods. Thereby, in this revision, we already implemented some of the future work that we mentioned in our first submission.
Comments for the authors
Minor points:
- Line 171: change pdf to PDF.
- Last sentence in line 221 -> 'moidified'.
- The link to the developed library should be included in the paper.
Response: As suggested by the reviewer, we published the Windows and Linux code bases of the DPDA method in a GitHub repository linked at the end of the introduction section (see footnote 1).
Reviewer 2 (Dr. Jonathan Heras)
Basic reporting:
The paper is well-written and the explanations are clear. In general, the paper provides enough
references. However, I think that in the introduction (paragraph starting in Line 56) some
recent data augmentation methods such as MixUp (Zhang et al, 2017), CutMix (Yun et al.
2019), CutOut (De Vries and Taylor. 2017) or AugMix (Hendrycks et al. 2019) should also be
included.
Response: We sincerely thank the reviewer for these motivating and constructive comments.
We added the references suggested by the reviewer in the introduction section, which fills the gaps in our literature survey.
References:
- Zhang et al. 2017. mixup: Beyond empirical risk minimization. https://arxiv.org/pdf/1710.09412.pdf
- De Vries and Taylor. 2017. Improved Regularization of Convolutional Neural Networks with Cutout. https://arxiv.org/pdf/1708.04552v2.pdf
- Yun et al. 2019. CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features. https://arxiv.org/abs/1905.04899
- Hendrycks et al. 2019. AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty. https://arxiv.org/abs/1912.02781
Note: While revising our study, we re-ran the tests from the first submission from scratch on a system with a new, up-to-date graphics card. In these new tests, we increased the number of epochs from 10 to 20. Besides, we used the SGD optimizer instead of the Adam optimizer. As a result of these new tests, there have been changes in the previously reported accuracy rates.
Experimental design
My main issue with this paper is that the experiments are quite narrow. The authors have
focused on just one dataset, so it is not possible to know if this data augmentation regime
serves to several contexts. It would be interesting to know if the data-augmentation procedure
presented in the paper works not only for image classification but also for other computer vision
problems such as object detection and instance segmentation.
Response: In our revised paper, we added the Intel Image Classification and Oxford-IIIT Pet datasets in addition to the UC Merced Land-use dataset. For these three datasets, we performed a classification-based performance analysis. We also added a segmentation-based performance analysis (as suggested by the reviewer) for the Oxford-IIIT Pet dataset, since it also contains segmentation masks (trimaps).
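As a small data-preparation sketch for the added segmentation experiment: the Oxford-IIIT Pet trimaps label pet, background, and border pixels with the values 1, 2, and 3, and the 0-based mapping below reflects our assumption of a typical softmax U-Net setup.

```python
import numpy as np

def trimap_to_mask(trimap):
    # Oxford-IIIT Pet trimap encoding: 1 = pet, 2 = background, 3 = border.
    # Shift to 0-based class ids {0, 1, 2} for a softmax segmentation head.
    return trimap.astype(np.int64) - 1
```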
Related to the previous point, the authors only compare their approach with the random erase
augmentation method since they claimed that this is the state-of-the-art. But, this augmentation
method is not the state-of-the-art for all datasets. So, the authors should clarify this. Moreover,
it would be necessary to compare the results with other augmentation techniques, especially with
color transformations since the method presented in the paper could be framed in this kind of
augmentation. Finally, the authors claim in Line 388 that their method can be easily combined
with other color or geometric augmentations to increase the performance of models, but this is
not proved at all.
Response: Based on the CLoDSA library, we incorporated flip image (FI), gamma correction (GC), and histogram equalization (HE) data augmentations in our comparison studies, where we
also used combinations of data augmentation methods. As stated by the reviewer, the random
erase method does not perform second best in our new experiments. Thus, we removed the
’state-of-the-art’ statement for the random erase data augmentation method.
Validity of the findings
The results achieved by the authors are not conclusive since they have only focused on a particular dataset and task. Moreover, a thorough comparison with other methods is necessary.
Response: As just explained, we added more datasets, more augmentation methods, and also a segmentation task, so our comparison studies are now more complete and definitive. We believe the validity of our findings on the superiority of DPDA is now clearer.
Comments for the authors
A plus of this work is that they provide the code for their work. However, there is not any
instruction about how to install their program, or to reproduce the experiments of the authors.
An additional issue is that the code is implemented in C++ and it seems to require Visual Studio.
This has two problems. First, most people working nowadays in deep learning use Python as
programming language, so it is not possible to directly use your code (this is admitted by the
authors, but it is important to work in this direction). Similarly, I think that it is not possible
to install Visual Studio in Linux so many users will not be able to use it. At least the authors
should indicate the requirements to run your code. Moreover, it will be helpful if the authors
provide the link to the code in the manuscript. Finally, it would be interesting to integrate your
methods in libraries such as Albumentations (https://albumentations.readthedocs.io/) or
Clodsa (https://github.com/joheras/clodsa).
Response: We implemented the DPDA project in ANSI C++, but, as noted by the reviewer, providing only the Visual Studio project limits the usage of the proposed method to Windows platforms. Thus, we ported our code to the Linux environment, which further extends the usage of the DPDA method. Detailed explanations and the project code can be found in the GitHub repository given at the end of the introduction (footnote 1). We also provided the experimental settings and environment in our paper so that the experiments can be reproduced. As suggested by the reviewer, Python and Linux are essential and common tools in machine learning, specifically in deep learning. We also considered porting the DPDA code to Python, but creating an efficient Python implementation requires a considerable amount of time. We also want to make our method faster using optimization techniques and parallelization. However, all these efforts take time and may cause the current paper to become complex and unfocused. Therefore, we decided to publish another study on these execution-time improvement and parallelization efforts in Python (using PyCUDA) and submit it to a journal specialized in real-time image processing: https://www.springer.com/journal/11554/. With these efforts completed, we can integrate this version of DPDA into CLoDSA.
" | Here is a paper. Please give your review comments after reading it. |
125 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>In the last decade, Deep Learning is applied in a wide range of problems with tremendous success. This success mainly comes from large data availability, increased computational power, and theoretical improvements in the training phase. As the dataset grows, the real world is better represented, making it possible to develop a model that can generalize. However, creating a labeled dataset is expensive, time-consuming, and sometimes not likely in some domains if not challenging. Therefore, researchers proposed data augmentation methods to increase dataset size and variety by creating variations of the existing data. For image data, variations can be obtained by applying color or spatial transformations, only one or a combination. Such color transformations perform some linear or nonlinear operations in the entire image or in the patches to create variations of the original image. The current color-based augmentation methods are usually based on image processing methods that apply color transformations such as equalizing, solarizing, and posterizing. Nevertheless, these color-based data augmentation methods do not guarantee to create plausible variations of the image. This paper proposes a novel distribution-preserving data augmentation method that creates plausible image variations by shifting pixel colors to another point in the image color distribution. We achieved this by defining a regularized density decreasing direction to create paths from the original pixels' color to the distribution tails. The proposed method provides superior performance compared to existing data augmentation methods which is shown using a transfer learning scenario on the UC Merced Land-use, Intel Image Classification, and Oxford-IIIT Pet datasets for classification and segmentation tasks.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Since the first study conducted by <ns0:ref type='bibr' target='#b19'>Krizhevsky (Krizhevsky et al. (2012)</ns0:ref>) in the ImageNet competition in 2012, deep learning (DL) has been highly successful in image recognition problems. Today convolutional neural networks (CNN) are well-understood tools for image classification as a heavily employed DL approach. CNN's main strength comes from its ability to extract features automatically from regularly structured data such as speech signals, images, or medical volumes <ns0:ref type='bibr' target='#b11'>(Georgiou et al. (2020)</ns0:ref>), or even unstructured data such as point clouds <ns0:ref type='bibr' target='#b4'>(Charles et al. (2017)</ns0:ref>). However, training a DL network with high accuracy and generalization capability requires a large dataset representing the real world. Thus, deep learning algorithms' performance relies heavily on the variety and the size of the available training data.</ns0:p><ns0:p>Unfortunately, it may be challenging to obtain a sufficiently large amount of labeled samples <ns0:ref type='bibr' target='#b37'>(Wang et al. (2020)</ns0:ref>; <ns0:ref type='bibr' target='#b17'>Kemker et al. (2018)</ns0:ref>). In some cases, gathering the data is complicated or even hardly possible.</ns0:p><ns0:p>Therefore, training DL becomes challenging due to insufficient training data or uneven class balance within the datasets (Yi-Min <ns0:ref type='bibr' target='#b41'>Huang and Shu-Xin Du (2005)</ns0:ref>).</ns0:p><ns0:p>One way to deal with an insufficient training data problem is using so-called data augmentation techniques to enlarge the training data by adding artificial variations of it <ns0:ref type='bibr' target='#b31'>(Simard et al. (2003)</ns0:ref>). Such enlarged training dataset can be even further extended by adding synthetically generated data <ns0:ref type='bibr' target='#b38'>Wong et al. (2019)</ns0:ref>. Data augmentation can be applied directly to the features, or it can be applied to the data source, which will be used to extract the features <ns0:ref type='bibr' target='#b35'>(Volpi et al. (2018)</ns0:ref>), e.g., CNN can extract features from the enlarged image dataset <ns0:ref type='bibr' target='#b29'>(Shorten and Khoshgoftaar (2019)</ns0:ref>). The most challenging work is to improve the generalization ability of the trained model to avoid overfitting. If correctly done, data augmentation techniques can improve the performance and generalization ability of the trained model. Therefore, due to their success, data augmentation techniques are used in many studies that employs machine learning <ns0:ref type='bibr' target='#b1'>(Ali et al. (2020)</ns0:ref>, <ns0:ref type='bibr' target='#b16'>Islam et al. (2019)</ns0:ref>, and <ns0:ref type='bibr' target='#b44'>Zheng et al. (2019)</ns0:ref>).</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57948:2:0:NEW 4 May 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Data augmentation strategies can be divided into three groups as color transformations, geometric transformations, and techniques using neural networks. Methods using color transformations manipulate the pixels' spectral values by doing operations such as changing the contrast, brightness, color, injecting noise <ns0:ref type='bibr' target='#b33'>(Takahashi et al. (2020)</ns0:ref>), or applying some filtering techniques <ns0:ref type='bibr' target='#b46'>(Zhu et al. (2020)</ns0:ref>). As a colorbased approach, <ns0:ref type='bibr' target='#b45'>Zhong et al. (2020)</ns0:ref> proposed a random erasing technique that either randomly puts a filled rectangle or puts a random-sized mask into a random position. Methods using geometric transformations are manipulating pixel positions by doing operations such as scaling, rotation, flipping, or cropping <ns0:ref type='bibr' target='#b29'>(Shorten and Khoshgoftaar (2019)</ns0:ref>). In particular, methods based on geometric transformation should be selected according to the target dataset. For example, in CIFAR-10, horizontal flipping is an efficient data enlargement method, but it can corrupt data due to different symmetries in the MNIST dataset <ns0:ref type='bibr' target='#b8'>(Cubuk et al. (2018)</ns0:ref>). Similarly, as in face recognition samples, if there is a dataset where each face is centered in the frame, geometric transformations give outstanding results <ns0:ref type='bibr' target='#b39'>(Xia et al. (2017)</ns0:ref>). Otherwise, one should ensure he/she does not alter the label of the image while using these augmentation variants.</ns0:p><ns0:p>Moreover, the possibility of distancing the training data from the test data should also be considered <ns0:ref type='bibr' target='#b29'>(Shorten and Khoshgoftaar (2019)</ns0:ref>). As with geometric transformations, some color transformations can also distort important color information, changing the image label <ns0:ref type='bibr' target='#b29'>(Shorten and Khoshgoftaar (2019)</ns0:ref>). Augmentation methods can also be combined to increase the variety in the resulting augmented images <ns0:ref type='bibr' target='#b8'>(Cubuk et al. (2018)</ns0:ref>). With a careful setting, these color and geometric transformations help generate a new dataset covering the span of image variations <ns0:ref type='bibr' target='#b14'>(Howard (2013)</ns0:ref>). As recent approaches, <ns0:ref type='bibr' target='#b21'>Mun et al. (2017), and</ns0:ref><ns0:ref type='bibr' target='#b24'>Perez and</ns0:ref><ns0:ref type='bibr' target='#b24'>Wang (2017)</ns0:ref> proposed to generate synthetic images which retain similar features to the original images samples using various types of Generative Adversarial Networks (GAN). However, <ns0:ref type='bibr' target='#b5'>Chen et al. (2020)</ns0:ref> observed that the cost of training is time-consuming while the variability of data produced is often limited. Data augmentation methods can produce good results with different parameters in different types of problems. Even a single augmentation method is employed, the best parameters should be determined. For a combination of augmentation methods, determining which data augmentation methods to use and their execution order in addition to their optimal parameters is challenging. Thereby, in <ns0:ref type='bibr' target='#b8'>Cubuk et al. 
(2018)</ns0:ref>, the auto augmentation method is proposed that searches many augmentation algorithms to find the highest validation accuracy automatically. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>(a), an image that contains a blue lake and sky and green trees is shown. Plausible variations of this lake image with some red trees and color changes in clouds and lake are presented in Figure <ns0:ref type='figure' target='#fig_0'>1(b-c</ns0:ref>).</ns0:p><ns0:p>Note that, there are differences in these 2 plausible images, i.e. there are more red trees in the Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> These plausible images are more probable to occur in the real world than the unplausible images given in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>(e-f). The color distribution of the lake image is shown as 3D scatter plot and color channel histograms in Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>. As seen in Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>, some colors frequently occur in an image, while some are rare. For example, the blue lake (mode in distribution and its surroundings) is always seen, but a lake that goes into purple (tails of the distribution) is rare. Mainly images at the tail of the color distribution, in particular, are also infrequent, yet they are plausible. We were inspired by the mean-shift process by <ns0:ref type='bibr' target='#b10'>Fukunaga and Hostetler (1975)</ns0:ref> and <ns0:ref type='bibr' target='#b6'>Comaniciu and Meer (2002)</ns0:ref> to obtain a distribution-preserving data augmentation mechanism. The mean-shift process seeks the local mode without estimating the global density, hence avoiding a computationally intensive task.</ns0:p><ns0:p>Unlike the mean-shift process, we get a path towards the tails in a density decreasing manner in our method. In the original mean-shift, the data gets denser, and the mean-shift path becomes smooth as it goes towards the distribution mode. However, as we go in the opposite direction, the data becomes increasingly sparse, which can cause the obtained path to act chaotically. We developed a regularized density decreasing direction to create paths from colors of the original image pixels to the image data distribution tails to prevent this. Then we shift the pixel colors to any point in the obtained path so that modified pixel colors will be in alignment with the color distribution of the original image. Thus, the proposed data augmentation method considers the color distribution of the image to produce plausible images by changing colors in this way. Since we shift the color in the augmented image, we also increase the training data diversity. Source code of the proposed method is shared 1 to facilitate reproducibility. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>MATERIALS & METHODS</ns0:head><ns0:p>To obtain a density decreasing path, we used the opposite of the mean-shift direction. As we go in the density decreasing direction, the data becomes increasingly sparse, which can cause the obtained path to act chaotically. We enforce the density decreasing path's smoothness by implementing a regularization on the reverse mean-shift direction to prevent this. Such regularized density decreasing paths for 3 pixels are shown in Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref> as examples. The colors of these 3 pixels are chosen to be close to the tail of the distribution to demonstrate the behavior of the density decreasing path generation. One can easily see that paths are smooth even if they move to extremely sparse regions of the image color distribution. Density decreasing path may contain varying numbers of points where the distance between the consecutive points can also be different. We want to have the same number of points, L, in each density decreasing path (L = 64). We also want to equalize the distance between consecutive points in the density decreasing path.</ns0:p><ns0:p>We construct a refined density decreasing path while satisfying these two objectives using cubic spline interpolation on the density decreasing path we found. we further enriched the feature space using the image pyramid approach <ns0:ref type='bibr' target='#b0'>Adelson et al. (1984)</ns0:ref>. During the image pyramid generation, we halved the original image in width and height three times. This creates an image pyramid with four levels where each level contains four times fewer pixels than the higher level in the pyramid. We used Lanczos interpolation over 8 × 8 neighborhood for the down-sampling operation <ns0:ref type='bibr' target='#b34'>(Turkowski (1990)</ns0:ref>). Using an image pyramid with four levels increases the number of features in X by 32.8%. In Figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref>(a), there are structural missing regions in feature space. This is due to quantization error since decimal parts of colors are quantized in 8 bits RGB images. However, refined feature space in Figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref>(b) is denser, and the effect of image quantization errors is reduced, which demonstrates another benefit of the employed image pyramid approach.</ns0:p></ns0:div>
<ns0:div><ns0:head>4/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57948:2:0:NEW 4 May 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div>
<ns0:div><ns0:head>Creation of Density Decreasing Paths</ns0:head><ns0:p>Let x be the color of a pixel in the image as a starting point of a path we aim to find. Then probability density function (PDF) on the color feature space X constructed from the image is given in Equation (1).</ns0:p><ns0:formula xml:id='formula_0'>P(x) = 1 n n ∑ j=1 K(x − x j ) (1)</ns0:formula><ns0:p>where K( <ns0:ref type='formula'>.</ns0:ref>) is a kernel function and x j are data points in X where we used Epanechnikov kernel.</ns0:p><ns0:p>We define a density decreasing path T = {x (0) , x (1) , . . . , x (i) , . . . } (Figure <ns0:ref type='figure'>6</ns0:ref>) where x (i+1) = x (i) + s (i) for i ≥ 0. Here, x (0) is the starting point of the path and s (i) is a density decreasing direction at point x (i) .</ns0:p><ns0:p>Also, x (i) is only defined in color space domain where 0 ≤ x <ns0:ref type='formula' target='#formula_5'>4</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_1'>(i) r , x (i) g , x (i) b ≤ 255. • • • • • • • • S(0) X(1) X(2) S(1) S(2) X(3) X(0) X(4) X(5) S(3) S(</ns0:formula><ns0:formula xml:id='formula_2'>X(�-1) X(�) S(�-1)</ns0:formula></ns0:div>
<ns0:div><ns0:head>Figure 6. Density decreasing path</ns0:head><ns0:p>Now, we can define the pdf for the point x (i+1) as below:</ns0:p><ns0:formula xml:id='formula_3'>P(x (i+1) ) = 1 n n ∑ j=1 K(x (i+1) − x j ) = 1 n n ∑ j=1 K(x (i) + s (i) − x j )<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>Since the points x (i) and x j are constant, we can rewrite above equations as below:</ns0:p><ns0:formula xml:id='formula_4'>P(x (i+1) ) = P(x (i) + s (i) ) = 1 n n ∑ j=1 K(s (i) − x j ) where x j = x j − x (i)<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>So, x j points are centered to x (i) where x (i) is shifted to the origin; thus, s (i) becomes a direction vector.</ns0:p><ns0:p>Finally, we define a gradient descent direction as below that will lead to a density decreasing path:</ns0:p><ns0:formula xml:id='formula_5'>x (i+1) = x (i) − ∇J(x (i) + s (i) )<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>where J(x (i) + s (i) ) is the cost function to minimize which is defined as below:</ns0:p><ns0:formula xml:id='formula_6'>J(x (i) + s (i) ) → P(x (i) + s (i) ) subject to s (i) ∈ Ω Ω = { s (i) ≤ S length and 1 − s (i) S length , ŝ(prior) ≤ S angle } with ŝ(i) = s (i) S length</ns0:formula><ns0:p>and ŝ(prior) = s (i−1) s (i−1)</ns0:p><ns0:p>(5) Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In Equation <ns0:ref type='formula'>5</ns0:ref>, S length is the maximum length and S angle is the maximum angle for the direction vector s (i) . Here, second constraint is only defined for i > 0 where ŝ(prior) is the prior direction in unit length and considered as constant. Owing to Equation <ns0:ref type='formula'>5</ns0:ref>, density decreasing direction s (i) is regularized in length and orientation to avoid chaotic shifts in sparse data regions.</ns0:p><ns0:p>The gradient descent method is not practical to minimize cost functions with constraints. In such cases, one can use the projected gradient descent (PGD) method, which can minimize a cost function subject to a constraint where this constraint defines a domain <ns0:ref type='bibr' target='#b2'>(Boyd and Vandenberghe (2004)</ns0:ref>). Although PGD works fine for cost functions with a single constraint, we can still use it efficiently since our cost function's length, and orientation constraints only form a single domain, Ω. Therefore, we use PGD to obtain a gradient descent path on the cost function J as defined in Equation <ns0:ref type='formula'>6</ns0:ref>. min</ns0:p><ns0:formula xml:id='formula_7'>s (i) P(x (i) + s (i) ) subject to s (i) ∈ Ω x (i+1) = P Ω (x (i) − ∇P(x (i) + s (i) ))</ns0:formula><ns0:p>P Ω (x new ) = arg min</ns0:p><ns0:formula xml:id='formula_8'>s (i) ∈Ω (x (i) + s (i) ) − x new (6)</ns0:formula><ns0:p>After doing some algebraic manipulations one can see that s (i) equals to the opposite of the mean-shift direction m (i) such that s (i) = −m (i) . First, we will limit the number of iterations in the PGD to L/2 since we aim to find a density decreasing path with a limited number of points. Next, we will stop the PGD iteration if (a) the norm of mean-shift direction is becoming smaller than a tolerance value (C tolerance ) or i+1) exiting from the image color space domain. Also, we set default value for C tolerance as 10 −2 d. The final density decreasing path generation method is presented in Algorithm 1.</ns0:p><ns0:formula xml:id='formula_9'>(b) next point x (</ns0:formula><ns0:p>Algorithm 1 Find Density Decreasing Path using PGD 1: Inputs:</ns0:p><ns0:formula xml:id='formula_10'>x (0) , h, I FLANN , L, C tolerance 2: for i = 0 : L/2 do 3: m (i) ← calculateMeanShiftDirection(x (i) , h, I FLANN ) 4: if ( m (i) < C tolerance ) then 5: break ⊲ Converged, exits loop 6: end if 7: x (i+1) = P Ω (x (i) − ∇P(x (i) + s (i) )) ⊲ Move to new point with PGD 8: if (x (i+1) is out of domain) then 9:</ns0:formula><ns0:p>break ⊲ Converged, exits loop 10:</ns0:p><ns0:p>end if 11: end for 12: T ← regularize({x (0) , x (1) , . . . , x (i) }) ⊲ regularize to equidistant L points 13: Return T Calculation of the mean-shift direction m (i) at point x is as given as below:</ns0:p><ns0:formula xml:id='formula_11'>m (i) = ∑ x j ∈N (x) K(x j − x)x j ∑ x j ∈N (x) K(x j − x) − x (7)</ns0:formula><ns0:p>where K(.) is the kernel function and x j are k nearest neighbours of x. We used FLANN proposed by <ns0:ref type='bibr' target='#b20'>Muja and Lowe (2014)</ns0:ref> to have fast k nearest neighbor search operations for efficiency. In this study, we used 256 as the default value of k. Note that, one need to put value of x (i) into the point x in the Equation <ns0:ref type='formula'>7</ns0:ref>given in the Algorithm 1. However, kernel functions require the selection of the bandwidth parameter h. 
Since each image's characteristic is different, we estimate bandwidth parameter h from the image to balance differences between images as an approximation to median pair-wise distances to closest points.</ns0:p><ns0:p>First, we find the Euclid distances of each pixel with its 4 neighbors. Then, we use Quick Select <ns0:ref type='bibr' target='#b7'>(Cormen et al. (2009)</ns0:ref>) algorithm to find the median value of these distances as our bandwidth h. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>We used the PGD method, which first does a gradient descent step, then back-projection of gradient descent result to the domain Ω. In Figure <ns0:ref type='figure'>7</ns0:ref>, blue vectors are prior directions; green vectors are new directions in the domain, and red vectors are new directions out of the domain. Domain Ω is the region between dotted gray lines, determined by the constraints in each gradient descent step.</ns0:p><ns0:formula xml:id='formula_12'>(a) s (1) ∈ Ω (b) s (2) / ∈ Ω (c) P Ω (s (2) ) ∈ Ω (d) s (3) / ∈ Ω (e) P Ω (s (3) ) ∈ Ω Figure 7.</ns0:formula><ns0:p>Example cases for directions and back-projections to domain Note that prior direction and new direction form a plane where its normal is the cross product of these two vectors. Therefore, a rotation matrix can be formed, which aligns this normal vector to the canonical z-axis where the prior direction and new direction vectors transform onto xy-plane. Once prior and new directions are rotated, all the back-projection operations can be done in 2D easily then back-projected direction can be rotated back to the original space. We used the method proposed by <ns0:ref type='bibr' target='#b22'>Möller and Hughes (1999)</ns0:ref> to construct a rotation matrix that aligns normal vector to z-axis as given in Equation <ns0:ref type='formula' target='#formula_13'>8</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_13'>v = f × t, u = v v c = f • t, r = (1 − c)/(1 − c 2 ) →   c + rv 2 x rv x v y − v z rv x v z + v y rv x v y + v z c + rv 2 y rv y v z − v x rv x v z − v y rv y v z + v x c + rv 2 z   (<ns0:label>8</ns0:label></ns0:formula><ns0:formula xml:id='formula_14'>)</ns0:formula><ns0:p>where f is plane normal calculated by cross product of prior direction and new direction, and t is z-axis.</ns0:p></ns0:div>
<ns0:div><ns0:head>Creation of Augmented Images</ns0:head><ns0:p>Each pixel has its corresponding density decreasing path with L colors. The 0 th color (first path node) has the largest color deviation from the original pixel color, towards the tail of the image color distribution, while the (L − 1) th color (last path node) equals the original image pixel color. So, we can take a different node (color) from the corresponding path for each pixel of an augmented image. Here, all-0 indices yield the augmented image with the most perturbation, while all-(L − 1) indices yield the original image. For each pixel, we can randomly choose an index number between 0 and L − 1, which leads to different augmented images and allows the generation of any number of augmented images. However, an utterly random selection produces unnatural results. Thus, we want to sample from the path nodes in a random but spatially smooth manner. We modified the Perlin noise generator proposed by <ns0:ref type='bibr' target='#b26'>Perlin (1985)</ns0:ref> to obtain a smooth but random index map, as seen in Figure 8.</ns0:p></ns0:div>
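A minimal sketch of this step is given below. It assumes the per-pixel paths are stacked into an array of shape (H, W, L, 3), uses pnoise2 from the third-party noise package for the underlying Perlin noise, and applies the tanh-based modification described in the caption of Figure 8; the parameter defaults are illustrative assumptions.

```python
import numpy as np
from noise import pnoise2  # classic Perlin noise generator

def augment(paths, roughness=0.02, n_scale=3.0, n_center=0.0):
    """paths: (H, W, L, 3) density decreasing paths; returns one augmentation."""
    H, W, L, _ = paths.shape
    n = np.array([[pnoise2(x * roughness, y * roughness)
                   for x in range(W)] for y in range(H)])
    c = 0.5 * (np.tanh(n_scale * (n - n_center)) + 1.0)  # modified noise in [0, 1]
    idx = np.clip(np.rint(c * (L - 1)).astype(int), 0, L - 1)
    rows, cols = np.indices((H, W))
    return paths[rows, cols, idx]  # pick one path node (color) per pixel
```

Redrawing the noise parameters from their ranges yields a different smooth index map, and hence a different augmented image, from the same set of paths.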
<ns0:div><ns0:head>EXPERIMENTS & RESULTS</ns0:head><ns0:p>We conducted qualitative and quantitative experiments using different datasets and DL networks to evaluate the effectiveness of the proposed DPDA method. Training and testing are carried out on a server running Ubuntu Linux with an Intel i9 CPU (3.7 GHz), 128 GB RAM, and an Nvidia RTX 3070 GPU. Python, with the Keras API and TensorFlow DL libraries, is utilized for training the models. This section describes the datasets and experiments used to obtain qualitative and quantitative results.</ns0:p><ns0:p>Datasets: We used Pxfuel 2 for qualitative experiments, and three different datasets for quantitative experiments, namely the UC Merced Land-use (Yang and Newsam (2010)), the Intel Image Classification 3 , and the Oxford-IIIT Pet (Parkhi et al. (2012)) datasets. The UC Merced Land-use dataset consists of satellite images of size 256 × 256 and 0.3-meter resolution that are open to the public. There are a total of 21 classes and 100 images in each class (see Figure 9).</ns0:p></ns0:div>
<ns0:div><ns0:head>Qualitative Results</ns0:head><ns0:p>To demonstrate our data augmentation results qualitatively, we first downloaded sample images from Pxfuel, which provides high-quality royalty-free stock photos. Note that the brightness of the augmented images is slightly increased to emphasize the difference between the original and augmented images.</ns0:p></ns0:div>
<ns0:div><ns0:p>In Figure <ns0:ref type='figure' target='#fig_2'>12</ns0:ref>, augmentation results are shown for man, forest, food, car, and urban images. In the first row, the man's skin color becomes lighter in the augmented image, and his eye color shifts to green. There are also some color changes in the background and in the man's t-shirt. In the second row, the augmented forest image contains red trees, although there are no red trees in the original image.</ns0:p><ns0:p>Here, red trees occur in the augmented image because the original image's color distribution contains colors towards the red tones in the distribution's tail. In the third row, the colors of each olive type in the original image are changed differently in the augmented image, while the background is unchanged. In the fourth row, the old car's rust tones in the original image are changed naturally in the augmented image.</ns0:p><ns0:p>In the fifth row of the augmented urban image, the trees and the building roofs become greener, with slight color changes in the buildings and the road.</ns0:p><ns0:p>The DPDA results shown in Figure <ns0:ref type='figure' target='#fig_2'>12</ns0:ref> are all plausible image augmentations. In all these visual results, some image pixel colors are shifted to the tail of the image's color data distribution. Thus, the image is transformed into a less frequently occurring version of itself. This is quite useful for increasing the data variability of the training dataset, since the proposed data augmentation approach generates less frequently occurring images and thus enriches the original dataset. Therefore, the over-fitting problem is reduced while the training accuracy increases. Since the image color data guides the data augmentation, the algorithm does not require different parameter selections for different images, i.e., images with different content, resolution, or camera characteristics. Accordingly, the default DPDA parameters are used for the data augmentations in Figure <ns0:ref type='figure' target='#fig_2'>12</ns0:ref> (as qualitative experiments) and also for all the quantitative experiments.</ns0:p></ns0:div>
<ns0:div><ns0:head>Quantitative Results</ns0:head><ns0:p>Training a DL network from scratch requires a considerable amount of data and computational power.</ns0:p><ns0:p>Therefore, researchers and practitioners with limited data and computational resources prefer to reuse existing DL architectures, which are trained with millions of samples on server farms. This reuse methodology employs a transfer learning approach where a well-proven DL model is fine-tuned with a new dataset <ns0:ref type='bibr' target='#b28'>(Shao et al. (2015)</ns0:ref>). Pre-training a DL network with transfer learning yields successful results, even with a small train dataset. However, transfer learning provides excellent results only if the data and the pre-trained model are from a similar domain <ns0:ref type='bibr' target='#b42'>(Yosinski et al. (2014)</ns0:ref>). A model pre-trained with the Imagenet dataset gives better outcomes for datasets in the same domain, such as CIFAR-10 or Caltech-101. On the other hand, if the model is tuned using a small amount of training data that is not from a similar domain, the performance benefits of transferring features decrease. So, data augmentation helps increase dataset size and variety to remedy such problems <ns0:ref type='bibr' target='#b28'>(Shao et al. (2015)</ns0:ref>).</ns0:p><ns0:p>Note that our aim is not to give an extensive study of the architecture of CNNs, as done by Szegedy et al. (2015) or He et al. (2016), but to briefly use them for evaluating the performance of the proposed DPDA method in transfer learning settings. Resnet50 <ns0:ref type='bibr' target='#b12'>(He et al. (2016)</ns0:ref>) and DenseNet201 <ns0:ref type='bibr' target='#b15'>(Huang et al. (2016)</ns0:ref>) network weights trained on the ImageNet are used as starting weights in the classification task since they are widely used in current studies <ns0:ref type='bibr' target='#b18'>(Khan et al. (2020)</ns0:ref>). Then the models are fine-tuned during training <ns0:ref type='bibr' target='#b36'>(Vrbančič and Podgorelec (2020)</ns0:ref>) since the initial layers of CNNs preserve more abstract, generic features. We just copy the weights in the convolutional layers rather than the entire network, excluding the fully connected layers. MobileNetV2 <ns0:ref type='bibr' target='#b27'>(Sandler et al. (2018)</ns0:ref>) network weights trained on the ImageNet are used as starting weights in the segmentation task as a base model and trained with a CNN architecture based on U-Net (Silburt et al. (2019)). For all the experiments, we used an SGD solver with a momentum of 0.9.</ns0:p><ns0:p>Weights are initialized from a Gaussian distribution N (µ, σ ) for µ = 0 and σ = 10 −2 . We found 20 epochs and a batch size of 32 typically sufficient for convergence.</ns0:p><ns0:p>The following methodology was utilized to create train and validation sets for all datasets used in this study. First, we randomly selected 20 images from each class as a validation set and used the same validation set in all tests. Then, we created different train sets in various sizes (N = 20, 30, 40, 50, 60, 70, 80) to investigate the effect of training dataset size on classification performance using the data augmentation approaches. For segmentation tests, we only used the train set size of 80. To avoid sample imbalance in the training datasets, we randomly selected the training datasets in equal numbers from each class. We evaluated the final classification performance of each dataset with the average accuracy over 10 runs. We increased the original training dataset size 5-fold, utilizing random erase (RE), flip image (FI), gamma correction (GC), histogram equalization combined with gamma correction (HE+GC), the proposed DPDA method, and the DPDA method combined with the flip image (DPDA+FI) separately. We implemented the color-based augmentation methods as done in the CLoDSA library <ns0:ref type='bibr'>(Casado-García et al. (2019)</ns0:ref>). We conducted a performance comparison study using transfer learning with three different DL architectures, which are trained on original and augmented versions of the 3 datasets. In the experiments, the DPDA, DPDA+FI, FI, RE, GC, and HE+GC methods are used for the augmentation of images. Baseline performances are obtained by training on the original datasets using the transfer learning approach. Augmentation performances for classification on the UC Merced Land-use, Intel Image Classification, and Oxford-IIIT Pet datasets, and for segmentation on the Oxford-IIIT Pet dataset, are presented relative to the baseline performances.</ns0:p><ns0:p>UC Merced Land-use Dataset: First, we compare the performance increase obtained with various data augmentation methods, including DPDA, on the UC Merced Land-use dataset using the DenseNet201 architecture. As seen in Table <ns0:ref type='table' target='#tab_3'>1</ns0:ref>, average accuracy improvement ranges from 1.51% to 4.43%. All data augmentation methods provide a performance increase compared to the baseline performance for all the train set sizes.</ns0:p><ns0:p>However, the DPDA method and the DPDA combined with the flip image consistently provide the best performances in every test. The results also show that data augmentation contributes more to accuracy in data sets with fewer elements. For example, the highest accuracy increase is 6.98% in the training set consisting of 20 images per class, which is obtained with DPDA+FI augmentation. Next, we compared the performance increase with various data augmentation methods, including the DPDA, using the ResNet50 architecture. As seen in Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref>, average accuracy improvement ranges from 2.35% to 5.84%. All data augmentation methods provide a performance increase compared to the baseline performance for all the train set sizes. However, the DPDA method, by itself and in combination with the flip image method, consistently provides the best performances in every single test. The results also show that data augmentation contributes more to accuracy in data sets with fewer elements. For example, the highest accuracy increase is 8.49% in the training set consisting of 20 images per class, which is obtained with DPDA+FI augmentation.</ns0:p></ns0:div>
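A minimal Keras sketch of this fine-tuning setup follows (ImageNet-pretrained ResNet50, SGD with momentum 0.9, 20 epochs, batch size 32); the classification head and the learning rate are illustrative assumptions, and data loading plus augmentation are assumed to happen upstream.

```python
import tensorflow as tf

def build_classifier(num_classes, input_shape=(224, 224, 3)):
    """ResNet50 backbone with ImageNet weights, fine-tuned end to end."""
    base = tf.keras.applications.ResNet50(
        weights="imagenet", include_top=False, input_shape=input_shape)
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3, momentum=0.9),
        loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# model = build_classifier(num_classes=21)  # e.g., UC Merced has 21 classes
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=20, batch_size=32)
```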
<ns0:div><ns0:p>Intel Image Classification Dataset: As with the UC Merced Land-use dataset, we first compare the performance increase obtained with various data augmentation methods, including DPDA, on the Intel Image Classification dataset using the DenseNet201 architecture. As seen in Table <ns0:ref type='table' target='#tab_5'>3</ns0:ref>, average accuracy improvement ranges from 2.25% to 4.94%. All data augmentation methods provide a performance increase compared to the baseline performance for all the train set sizes. However, the DPDA method provides the best performances in every single test. The results also reveal that data augmentation contributes more to accuracy in data sets with fewer elements. For instance, the highest accuracy increase is 7.23% in the training set consisting of 30 images per class, which is obtained with DPDA augmentation. Next, we compare the performance increase with various data augmentation methods, including the DPDA, using the ResNet50 architecture. As seen in Table <ns0:ref type='table' target='#tab_6'>4</ns0:ref>, average accuracy improvement ranges from 1.82% to 4.80%. Every data augmentation method provides a performance increase compared to the baseline performance for all the train set sizes. However, the DPDA method provides the best performances in every single test. The results also indicate that data augmentation contributes more to accuracy in data sets with fewer elements. For instance, the highest accuracy increase is 7.27% in the training set consisting of 20 images per class, which is obtained with DPDA augmentation. This result is also in compliance with the DenseNet comparison study. Oxford-IIIT Pet Dataset: First, we compare the performance increase with various data augmentation methods, including DPDA, on the Oxford-IIIT Pet dataset using the DenseNet201 architecture. As seen in Table <ns0:ref type='table' target='#tab_7'>5</ns0:ref>, average accuracy improvement ranges from 2.34% to 3.34%. All data augmentation methods provide a performance increase compared to the baseline performance for all the train set sizes, with DPDA being superior. The results also reveal that data augmentation contributes more to accuracy in data sets with fewer elements. For example, the highest accuracy increase is 6.68% in the training set consisting of 20 images per class, which is obtained with DPDA+FI augmentation.</ns0:p></ns0:div>
<ns0:div><ns0:p>Next, we compare the performance increase with various data augmentation methods, including the DPDA, using the ResNet50 architecture. As seen in Table <ns0:ref type='table' target='#tab_8'>6</ns0:ref>, average accuracy improvement ranges from 3.03% to 6.33%. Every data augmentation method provides a performance increase compared to the baseline performance for all the train set sizes. However, the DPDA+FI method provides the best performances in every single test. The results also show that data augmentation contributes more to accuracy in data sets with fewer elements. For instance, the highest accuracy increase is 12.66% in the training set consisting of 20 images per class, which is again obtained with DPDA+FI augmentation. For the segmentation experiments on the Oxford-IIIT Pet dataset, we used the U-Net architecture on top of MobileNetV2, and DPDA provides by far the best performance (see Table <ns0:ref type='table' target='#tab_9'>7</ns0:ref>). The accuracy improvement obtained with DPDA is 8.49%. The second highest accuracy improvement, achieved with the FI augmentation, is 0.81%, which is much less than the DPDA accuracy improvement. On the other hand, not every data augmentation method provides a performance increase compared to the baseline performance in this task. For instance, GC decreases the accuracy by 2.20%.</ns0:p></ns0:div>
<ns0:div><ns0:head>Execution Time Analysis</ns0:head><ns0:p>There are n pixels in an image. During the execution of DPDA, for each pixel, we find a path with a length of up to L/2. For each point in a path, a nearest neighbor search using the FLANN method <ns0:ref type='bibr' target='#b20'>(Muja and Lowe (2014)</ns0:ref>) is done to retrieve k neighbor points. Our data has 3 channels, so the dimension d is 3. For each image, the FLANN tree is constructed once, where tree construction has a computational complexity of O(ndKI(logn/logK)), where I is the maximum number of iterations and K is the branching factor. We used exact search in FLANN, which leads to O(Md(logn/logK)) for a single nearest neighbor search, where M is the maximum number of points to examine. However, we need to do a separate neighbor search L/2 times for each of the n pixels, which leads to nL/2 neighbor search operations. Thus, the computational complexity of all the neighbor search operations is O(nLMd(logn/logK)), and the overall computational complexity of DPDA is O(nd(KI + LM)(logn/logK)), including tree construction and neighbor search operations.</ns0:p><ns0:p>The average execution time for 10 augmentations with the DPDA, RE, GC, and FI methods with respect to image size (# of pixels) is shown in Figure <ns0:ref type='figure' target='#fig_15'>13</ns0:ref>. Although FLANN provides efficient nearest neighbor search operations, as shown in Figure <ns0:ref type='figure' target='#fig_15'>13</ns0:ref>, the execution times of the DPDA method are longer than those of the RE, GC, and FI methods. Fortunately, in DL training, images are generally small, i.e., 435 × 387 for the Oxford dataset, 256 × 256 for the UCMerced dataset, and 150 × 150 for the Intel dataset (see Figure <ns0:ref type='figure' target='#fig_15'>13</ns0:ref>).</ns0:p></ns0:div>
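For a concrete sense of these counts, a small worked example follows; k = 256 is the paper's default, while L = 16 is an illustrative assumption.

```python
n = 150 * 150            # pixels in one Intel dataset image
L, k = 16, 256
searches = n * (L // 2)  # each pixel needs up to L/2 k-NN searches
print(f"{n} pixels -> up to {searches} k-NN searches, each returning {k} neighbors")
# 22500 pixels -> up to 180000 k-NN searches, each returning 256 neighbors
```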
<ns0:div><ns0:head>DISCUSSION & FUTURE WORKS</ns0:head><ns0:p>The proposed DPDA method employs a distribution preserving approach to create plausible variants of a given image, as shown in the qualitative and quantitative results. These augmented images enrich the training dataset so that the over-fitting problem is reduced while higher training accuracies are obtained.</ns0:p><ns0:p>The obtained augmentation performance is demonstrated on the UC Merced Land-use, Intel Image Classification, and Oxford-IIIT Pet datasets for classification and segmentation tasks. These experiments show the superiority of the proposed DPDA method compared to commonly used data augmentation methods such as image flipping, histogram equalization, gamma correction, and random erasing. We also combined our DPDA method with a geometric data augmentation method (flip), and in most cases, the performance of DPDA is slightly increased. This shows that the DPDA method can be combined with other data augmentation methods to increase performance further. Therefore, it is evident that DPDA is a good candidate for data augmentation tasks in different scenarios. This is consistent with the research outcomes in the literature, where various data augmentation methods provide performance improvement in numerous machine learning tasks and datasets <ns0:ref type='bibr' target='#b29'>(Shorten and Khoshgoftaar (2019)</ns0:ref>). Although the proposed method provides outstanding data augmentation capabilities, there is still room for further improvements. These improvements can be divided into three groups: computational efficiency improvements, augmentation performance improvements (reflecting on DL training), and usage dissemination improvements.</ns0:p><ns0:p>Data augmentation methods are generally fast, while the DPDA method is not as fast as its competitors. The main reason for this speed bottleneck is the computational burden of the neighbor search, which is also the reason for the slowness of mean-shift-based clustering or filtering methods. This bottleneck can be alleviated by replacing FLANN with a faster or specifically designed neighbor search method. Additional speed-ups can be obtained using CPU and GPU parallelization techniques: an image contains many pixels, and finding the density decreasing path for each pixel is independent of the other pixels, so the paths can be computed in parallel. Since using a GPU is a common approach for DL training, a GPU-parallelized DPDA method will not require extra hardware procurement from its users.</ns0:p></ns0:div>
<ns0:div><ns0:p>The performance of the DPDA method can be increased using spatial regularization (e.g., using graph-cut) to deal with blocking artifacts due to JPEG compression. Similar images can be retrieved, and their color data can be added to the color data of the image to be augmented, which may increase the quality and variety of the color data distribution, especially if the image size is small. DPDA uses Perlin noise to create different augmentations from a single density decreasing path per pixel. However, Perlin noise is a spatially smooth approach but still a purely random one. Instead, an image can be segmented into background and foreground objects, and then randomization can be done in an object-wise manner.</ns0:p><ns0:p>The DPDA code can be extended to multispectral and hyperspectral images, which have 4 or more channels.</ns0:p><ns0:p>Additionally, DPDA is not limited to the augmentation of images and can be easily adapted to augment any training data since it already works in a feature space. This is quite useful for training traditional machine learning methods that generally work on data with already extracted features. Furthermore, DPDA can be ported to Python for easy integration with current Python-based DL frameworks.</ns0:p><ns0:p>As a future study, in addition to various performance improvements and support for augmentation in feature space, we plan to improve computational efficiency using special techniques and data structures with a parallelized implementation in Python.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>In this paper, a novel distribution-preserving data augmentation (DPDA) method that creates plausible variations of a given image is presented. To the best of our knowledge, there is no other study using a distribution-preserving approach that creates plausible image variations. The proposed method employs density decreasing directions to create paths from the colors of the original pixels to the tails of the image data distribution. We achieved this by regularizing the opposite of the mean-shift direction with length and orientation constraints. Finally, we developed efficient mechanisms to obtain these density decreasing paths, fused with Perlin noise results, to create as many augmented images as desired.</ns0:p><ns0:p>The proposed method's performance is presented in a transfer learning scenario using three different DL architectures: DenseNet, ResNet, and MobileNetV2. These DL architectures are trained with millions of color images, and we used transfer learning to adapt these models to different problem domains. We tested DPDA for classification on the UC Merced Land-use, Intel Image Classification, and Oxford-IIIT Pet datasets and for image segmentation on the Oxford-IIIT Pet dataset. Note that DenseNet, ResNet, and MobileNetV2 are trained with side-view commodity camera images, namely ImageNet. On the other hand, the UC Merced land-use dataset is obtained from nadir as over-head imagery (which can be acquired using airborne and spaceborne platforms). Also, the resolution and camera characteristics of the ImageNet dataset are quite different from those of the UC Merced Land-use dataset. Nevertheless, transfer learning is able to cope with this challenging adaptation.</ns0:p><ns0:p>However, the UC Merced land-use dataset's size is small, limiting the applied transfer learning schema's adaptation performance. This is a common scenario since companies or institutions develop pre-trained models with large datasets and substantial computational resources, while researchers who use these pre-trained models with transfer learning to adapt them to their problem domain generally have small datasets and scarce computational resources. In this study, the transfer learning performance is further increased using data augmentation methods such as the proposed DPDA, image flipping, histogram equalization, gamma correction, and random erasing. For image classification and segmentation tasks, the proposed DPDA method consistently shows superior performance compared to commonly used data augmentation methods on different datasets and different training sizes using three different DL architectures. Therefore, we conclude that the proposed DPDA method provides successful data augmentation performance.</ns0:p><ns0:p>Although the proposed method provides superior data augmentation capabilities, there is still room for further improvements. However, we did not implement these improvements since we want to present the baseline performance of our novel density-preserving data augmentation idea in its simplest form. Nevertheless, possible improvements and future studies are shared in the 'Discussion & Future Works' section. Among these possible future studies, improving the computational efficiency of the proposed DPDA is the most important one, since high computational complexity seems to be the most significant disadvantage of the proposed method. As a final remark, although we presented our DPDA method as an image augmentation study, it is not limited to images and can work for all kinds of datasets with already extracted features since it works in feature space. This is an excellent property of the proposed DPDA method since most image data augmentation methods are limited to the image domain.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1.
Example plausible and implausible images</ns0:figDesc><ns0:graphic coords='3,153.07,566.56,126.43,126.43' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>(c) compared to Figure 1(b). Although Figure 1(d) is also a plausible image, it contains a limited variation.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Color distribution for the image in Figure 1(a)</ns0:figDesc><ns0:graphic coords='4,146.89,195.60,259.66,120.90' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Density decreasing paths for Figure 1(a)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. DPDA Process Diagram</ns0:figDesc><ns0:graphic coords='5,181.84,468.58,330.85,71.01' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Feature space of the image in Figure 1(a), without and with image pyramid</ns0:figDesc><ns0:graphic coords='6,63.19,14.23,452.82,227.12' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Figure 8. Here, we choose parameter values randomly from a predefined range where parameters are roughness (N roughness ), noise scale (N scale ), and noise center (N center ). Finally, modified Perlin noise is generated using C x,y = 0.5(tanh(N scale * (N x,y − N center )) + 1) where N x,y = Perlin.generate(xN roughness , yN roughness , 1) is original Perlin noise generation function.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Effects of different Perlin noises (top row) on augmented images (bottom row)</ns0:figDesc><ns0:graphic coords='8,155.29,641.59,72.84,62.95' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Example image classes from UCMerced land-use dataset</ns0:figDesc><ns0:graphic coords='9,153.43,241.51,60.87,60.87' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. Example image classes from Intel Classification dataset</ns0:figDesc><ns0:graphic coords='9,153.43,405.57,60.87,60.87' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11. Example images from Oxford IIIT Pet dataset 2 https://www.pxfuel.com/ (Royalty-free stock photos free & unlimited download) 3 https://www.kaggle.com/puneet6060/intel-image-classification/</ns0:figDesc><ns0:graphic coords='9,359.23,591.87,97.58,73.18' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>Figure 12. Plausible Image Augmentations</ns0:figDesc><ns0:graphic coords='10,205.91,601.00,136.87,91.08' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure 13 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Figure 13. Execution time (in log-scale) with respect to image size (# of pixels, n)</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Data augmentation accuracy comparisons (%) in different sizes of datasets (N) using</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='3'>DenseNet201 on UC Merced Land-use dataset</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>N</ns0:cell><ns0:cell cols='4'>Baseline DPDA+FI DPDA RE</ns0:cell><ns0:cell>FI</ns0:cell><ns0:cell cols='2'>HE+GC GC</ns0:cell></ns0:row><ns0:row><ns0:cell>20</ns0:cell><ns0:cell>76.35</ns0:cell><ns0:cell>83.33</ns0:cell><ns0:cell>82.86</ns0:cell><ns0:cell cols='3'>80.23 80.74 79.52</ns0:cell><ns0:cell>78.96</ns0:cell></ns0:row><ns0:row><ns0:cell>30</ns0:cell><ns0:cell>82.46</ns0:cell><ns0:cell>88.25</ns0:cell><ns0:cell>88.09</ns0:cell><ns0:cell cols='3'>86.00 85.55 84.52</ns0:cell><ns0:cell>84.76</ns0:cell></ns0:row><ns0:row><ns0:cell>40</ns0:cell><ns0:cell>84.12</ns0:cell><ns0:cell>88.33</ns0:cell><ns0:cell>88.25</ns0:cell><ns0:cell cols='3'>86.19 86.11 86.51</ns0:cell><ns0:cell>85.56</ns0:cell></ns0:row><ns0:row><ns0:cell>50</ns0:cell><ns0:cell>86.27</ns0:cell><ns0:cell>90.16</ns0:cell><ns0:cell>90.00</ns0:cell><ns0:cell cols='3'>88.41 88.57 87.62</ns0:cell><ns0:cell>87.93</ns0:cell></ns0:row><ns0:row><ns0:cell>60</ns0:cell><ns0:cell>87.61</ns0:cell><ns0:cell>91.34</ns0:cell><ns0:cell>91.43</ns0:cell><ns0:cell cols='3'>89.92 89.60 89.76</ns0:cell><ns0:cell>88.65</ns0:cell></ns0:row><ns0:row><ns0:cell>70</ns0:cell><ns0:cell>89.28</ns0:cell><ns0:cell>92.62</ns0:cell><ns0:cell>92.54</ns0:cell><ns0:cell cols='3'>91.19 90.71 90.24</ns0:cell><ns0:cell>90.16</ns0:cell></ns0:row><ns0:row><ns0:cell>80</ns0:cell><ns0:cell>89.60</ns0:cell><ns0:cell>92.70</ns0:cell><ns0:cell>92.54</ns0:cell><ns0:cell cols='3'>90.95 90.72 90.16</ns0:cell><ns0:cell>90.24</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Average 85.10</ns0:cell><ns0:cell>89.53</ns0:cell><ns0:cell>89.39</ns0:cell><ns0:cell cols='3'>87.56 87.43 86.90</ns0:cell><ns0:cell>86.61</ns0:cell></ns0:row><ns0:row><ns0:cell>Increase</ns0:cell><ns0:cell /><ns0:cell>4.43</ns0:cell><ns0:cell>4.29</ns0:cell><ns0:cell>2.46</ns0:cell><ns0:cell>2.33</ns0:cell><ns0:cell>1.81</ns0:cell><ns0:cell>1.51</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Data</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='8'>augmentation accuracy comparisons (%) in different sizes of datasets (N) using ResNet50</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>on UC Merced Land-use Dataset</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>N</ns0:cell><ns0:cell cols='4'>Baseline DPDA+FI DPDA RE</ns0:cell><ns0:cell>FI</ns0:cell><ns0:cell cols='2'>HE+GC GC</ns0:cell></ns0:row><ns0:row><ns0:cell>20</ns0:cell><ns0:cell>74.76</ns0:cell><ns0:cell>83.25</ns0:cell><ns0:cell>82.86</ns0:cell><ns0:cell cols='3'>80.87 80.71 79.84</ns0:cell><ns0:cell>79.52</ns0:cell></ns0:row><ns0:row><ns0:cell>30</ns0:cell><ns0:cell>80.07</ns0:cell><ns0:cell>86.67</ns0:cell><ns0:cell>86.11</ns0:cell><ns0:cell cols='3'>83.33 82.78 82.38</ns0:cell><ns0:cell>81.90</ns0:cell></ns0:row><ns0:row><ns0:cell>40</ns0:cell><ns0:cell>82.62</ns0:cell><ns0:cell>89.05</ns0:cell><ns0:cell>88.99</ns0:cell><ns0:cell cols='3'>85.95 85.55 84.68</ns0:cell><ns0:cell>84.12</ns0:cell></ns0:row><ns0:row><ns0:cell>50</ns0:cell><ns0:cell>83.81</ns0:cell><ns0:cell>88.97</ns0:cell><ns0:cell>88.29</ns0:cell><ns0:cell cols='3'>86.32 86.19 86.97</ns0:cell><ns0:cell>85.95</ns0:cell></ns0:row><ns0:row><ns0:cell>60</ns0:cell><ns0:cell>85.00</ns0:cell><ns0:cell>90.48</ns0:cell><ns0:cell>90.32</ns0:cell><ns0:cell cols='3'>87.62 87.93 87.85</ns0:cell><ns0:cell>87.69</ns0:cell></ns0:row><ns0:row><ns0:cell>70</ns0:cell><ns0:cell>86.66</ns0:cell><ns0:cell>91.27</ns0:cell><ns0:cell>91.19</ns0:cell><ns0:cell cols='3'>89.87 89.46 88.96</ns0:cell><ns0:cell>88.57</ns0:cell></ns0:row><ns0:row><ns0:cell>80</ns0:cell><ns0:cell>87.62</ns0:cell><ns0:cell>91.74</ns0:cell><ns0:cell>91.67</ns0:cell><ns0:cell cols='3'>90.47 90.31 90.18</ns0:cell><ns0:cell>89.21</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Average 82.93</ns0:cell><ns0:cell>88.78</ns0:cell><ns0:cell>88.49</ns0:cell><ns0:cell cols='3'>86.35 86.13 85.84</ns0:cell><ns0:cell>85.28</ns0:cell></ns0:row><ns0:row><ns0:cell>Increase</ns0:cell><ns0:cell /><ns0:cell>5.84</ns0:cell><ns0:cell>5.56</ns0:cell><ns0:cell>3.41</ns0:cell><ns0:cell>3.20</ns0:cell><ns0:cell>2.90</ns0:cell><ns0:cell>2.35</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Data</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='8'>augmentation accuracy comparisons (%) in different sizes of datasets (N) using</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>DenseNet201 on Intel Image Classification Dataset</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>N</ns0:cell><ns0:cell cols='3'>Baseline DPDA RE</ns0:cell><ns0:cell cols='2'>DPDA+FI GC</ns0:cell><ns0:cell cols='2'>HE+GC FI</ns0:cell></ns0:row><ns0:row><ns0:cell>20</ns0:cell><ns0:cell>82.00</ns0:cell><ns0:cell>88.89</ns0:cell><ns0:cell cols='2'>87.83 89.17</ns0:cell><ns0:cell cols='2'>87.00 87.50</ns0:cell><ns0:cell>86.33</ns0:cell></ns0:row><ns0:row><ns0:cell>30</ns0:cell><ns0:cell>83.33</ns0:cell><ns0:cell>90.56</ns0:cell><ns0:cell cols='2'>89.16 89.34</ns0:cell><ns0:cell cols='2'>89.50 88.33</ns0:cell><ns0:cell>88.00</ns0:cell></ns0:row><ns0:row><ns0:cell>40</ns0:cell><ns0:cell>86.50</ns0:cell><ns0:cell>90.00</ns0:cell><ns0:cell cols='2'>88.50 89.50</ns0:cell><ns0:cell cols='2'>88.33 88.61</ns0:cell><ns0:cell>86.83</ns0:cell></ns0:row><ns0:row><ns0:cell>50</ns0:cell><ns0:cell>86.66</ns0:cell><ns0:cell>91.39</ns0:cell><ns0:cell cols='2'>90.16 89.67</ns0:cell><ns0:cell cols='2'>90.00 89.16</ns0:cell><ns0:cell>90.33</ns0:cell></ns0:row><ns0:row><ns0:cell>60</ns0:cell><ns0:cell>88.69</ns0:cell><ns0:cell>92.50</ns0:cell><ns0:cell cols='2'>91.83 90.00</ns0:cell><ns0:cell cols='2'>90.83 90.83</ns0:cell><ns0:cell>90.16</ns0:cell></ns0:row><ns0:row><ns0:cell>70</ns0:cell><ns0:cell>88.92</ns0:cell><ns0:cell>93.33</ns0:cell><ns0:cell cols='2'>91.50 90.83</ns0:cell><ns0:cell cols='2'>90.83 91.00</ns0:cell><ns0:cell>90.17</ns0:cell></ns0:row><ns0:row><ns0:cell>80</ns0:cell><ns0:cell>90.16</ns0:cell><ns0:cell>94.16</ns0:cell><ns0:cell cols='2'>92.50 92.50</ns0:cell><ns0:cell cols='2'>90.50 90.50</ns0:cell><ns0:cell>90.17</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Average 86.61</ns0:cell><ns0:cell>91.55</ns0:cell><ns0:cell cols='2'>90.21 90.14</ns0:cell><ns0:cell cols='2'>89.57 89.42</ns0:cell><ns0:cell>88.86</ns0:cell></ns0:row><ns0:row><ns0:cell>Increase</ns0:cell><ns0:cell /><ns0:cell>4.94</ns0:cell><ns0:cell>3.60</ns0:cell><ns0:cell>3.54</ns0:cell><ns0:cell>2.96</ns0:cell><ns0:cell>2.81</ns0:cell><ns0:cell>2.25</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Data</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='8'>augmentation accuracy comparisons (%) in different sizes of datasets (N) using ResNet50</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>on Intel Image Classification Dataset</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>N</ns0:cell><ns0:cell cols='3'>Baseline DPDA RE</ns0:cell><ns0:cell cols='2'>DPDA+FI FI</ns0:cell><ns0:cell>GC</ns0:cell><ns0:cell>HE+GC</ns0:cell></ns0:row><ns0:row><ns0:cell>20</ns0:cell><ns0:cell>79.67</ns0:cell><ns0:cell>86.94</ns0:cell><ns0:cell cols='2'>85.33 86.57</ns0:cell><ns0:cell cols='3'>83.83 84.22 84.61</ns0:cell></ns0:row><ns0:row><ns0:cell>30</ns0:cell><ns0:cell>82.50</ns0:cell><ns0:cell>88.96</ns0:cell><ns0:cell cols='2'>87.50 86.67</ns0:cell><ns0:cell cols='3'>86.16 86.44 85.83</ns0:cell></ns0:row><ns0:row><ns0:cell>40</ns0:cell><ns0:cell>84.67</ns0:cell><ns0:cell>90.28</ns0:cell><ns0:cell cols='2'>88.67 89.17</ns0:cell><ns0:cell cols='3'>88.66 87.50 86.66</ns0:cell></ns0:row><ns0:row><ns0:cell>50</ns0:cell><ns0:cell>86.83</ns0:cell><ns0:cell>90.83</ns0:cell><ns0:cell cols='2'>89.67 89.17</ns0:cell><ns0:cell cols='3'>88.66 88.33 88.22</ns0:cell></ns0:row><ns0:row><ns0:cell>60</ns0:cell><ns0:cell>88.16</ns0:cell><ns0:cell>91.39</ns0:cell><ns0:cell cols='2'>90.00 89.87</ns0:cell><ns0:cell cols='3'>89.67 88.33 89.16</ns0:cell></ns0:row><ns0:row><ns0:cell>70</ns0:cell><ns0:cell>88.94</ns0:cell><ns0:cell>92.78</ns0:cell><ns0:cell cols='2'>92.00 90.00</ns0:cell><ns0:cell cols='3'>90.83 90.00 89.33</ns0:cell></ns0:row><ns0:row><ns0:cell>80</ns0:cell><ns0:cell>89.33</ns0:cell><ns0:cell>92.50</ns0:cell><ns0:cell cols='2'>92.00 90.83</ns0:cell><ns0:cell cols='3'>90.33 88.67 89.00</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Average 85.73</ns0:cell><ns0:cell>90.53</ns0:cell><ns0:cell cols='2'>89.31 88.90</ns0:cell><ns0:cell cols='3'>88.31 87.64 87.54</ns0:cell></ns0:row><ns0:row><ns0:cell>Increase</ns0:cell><ns0:cell /><ns0:cell>4.80</ns0:cell><ns0:cell>3.58</ns0:cell><ns0:cell>3.17</ns0:cell><ns0:cell>2.58</ns0:cell><ns0:cell>1.91</ns0:cell><ns0:cell>1.82</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Data augmentation accuracy comparisons (%) in different sizes of datasets (N) using DenseNet201 on Oxford-IIIT Pet Dataset</ns0:figDesc><ns0:table><ns0:row><ns0:cell>N</ns0:cell><ns0:cell cols='5'>Baseline DPDA DPDA+FI HE+GC FI</ns0:cell><ns0:cell>GC</ns0:cell><ns0:cell>RE</ns0:cell></ns0:row><ns0:row><ns0:cell>20</ns0:cell><ns0:cell>81.67</ns0:cell><ns0:cell>87.89</ns0:cell><ns0:cell>88.35</ns0:cell><ns0:cell>86.91</ns0:cell><ns0:cell cols='3'>87.43 88.27 86.32</ns0:cell></ns0:row><ns0:row><ns0:cell>30</ns0:cell><ns0:cell>85.26</ns0:cell><ns0:cell>90.95</ns0:cell><ns0:cell>89.05</ns0:cell><ns0:cell>89.91</ns0:cell><ns0:cell cols='3'>89.21 88.78 88.13</ns0:cell></ns0:row><ns0:row><ns0:cell>40</ns0:cell><ns0:cell>87.65</ns0:cell><ns0:cell>90.14</ns0:cell><ns0:cell>90.27</ns0:cell><ns0:cell>89.67</ns0:cell><ns0:cell cols='3'>89.62 88.83 89.10</ns0:cell></ns0:row><ns0:row><ns0:cell>50</ns0:cell><ns0:cell>88.36</ns0:cell><ns0:cell>91.13</ns0:cell><ns0:cell>90.94</ns0:cell><ns0:cell>90.54</ns0:cell><ns0:cell cols='3'>90.67 90.67 90.62</ns0:cell></ns0:row><ns0:row><ns0:cell>60</ns0:cell><ns0:cell>89.85</ns0:cell><ns0:cell>92.24</ns0:cell><ns0:cell>92.27</ns0:cell><ns0:cell>91.21</ns0:cell><ns0:cell cols='3'>91.62 91.54 91.81</ns0:cell></ns0:row><ns0:row><ns0:cell>70</ns0:cell><ns0:cell>90.73</ns0:cell><ns0:cell>92.43</ns0:cell><ns0:cell>92.51</ns0:cell><ns0:cell>92.19</ns0:cell><ns0:cell cols='3'>92.12 92.51 92.29</ns0:cell></ns0:row><ns0:row><ns0:cell>80</ns0:cell><ns0:cell>90.79</ns0:cell><ns0:cell>93.10</ns0:cell><ns0:cell>94.02</ns0:cell><ns0:cell>93.27</ns0:cell><ns0:cell cols='3'>92.91 92.83 92.56</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Average 87.78</ns0:cell><ns0:cell>91.13</ns0:cell><ns0:cell>91.06</ns0:cell><ns0:cell>90.53</ns0:cell><ns0:cell cols='3'>90.51 90.49 90.12</ns0:cell></ns0:row><ns0:row><ns0:cell>Increase</ns0:cell><ns0:cell /><ns0:cell>3.34</ns0:cell><ns0:cell>3.27</ns0:cell><ns0:cell>2.75</ns0:cell><ns0:cell>2.73</ns0:cell><ns0:cell>2.71</ns0:cell><ns0:cell>2.34</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Data</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='8'>augmentation accuracy comparisons (%) in different sizes of datasets (N) using ResNet50</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>on Oxford-IIIT Pet Dataset</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>N</ns0:cell><ns0:cell cols='4'>Baseline DPDA+FI DPDA RE</ns0:cell><ns0:cell>FI</ns0:cell><ns0:cell>GC</ns0:cell><ns0:cell>HE+GC</ns0:cell></ns0:row><ns0:row><ns0:cell>20</ns0:cell><ns0:cell>67.74</ns0:cell><ns0:cell>80.40</ns0:cell><ns0:cell>79.05</ns0:cell><ns0:cell cols='4'>77.08 78.27 72.94 72.51</ns0:cell></ns0:row><ns0:row><ns0:cell>30</ns0:cell><ns0:cell>75.46</ns0:cell><ns0:cell>82.83</ns0:cell><ns0:cell>82.48</ns0:cell><ns0:cell cols='4'>82.27 81.08 81.40 79.78</ns0:cell></ns0:row><ns0:row><ns0:cell>40</ns0:cell><ns0:cell>78.51</ns0:cell><ns0:cell>85.54</ns0:cell><ns0:cell>84.70</ns0:cell><ns0:cell cols='4'>82.75 82.70 82.91 82.99</ns0:cell></ns0:row><ns0:row><ns0:cell>50</ns0:cell><ns0:cell>80.72</ns0:cell><ns0:cell>86.62</ns0:cell><ns0:cell>86.48</ns0:cell><ns0:cell cols='4'>84.64 85.67 84.29 84.16</ns0:cell></ns0:row><ns0:row><ns0:cell>60</ns0:cell><ns0:cell>84.01</ns0:cell><ns0:cell>86.70</ns0:cell><ns0:cell>88.73</ns0:cell><ns0:cell cols='4'>87.54 85.81 86.75 85.43</ns0:cell></ns0:row><ns0:row><ns0:cell>70</ns0:cell><ns0:cell>84.91</ns0:cell><ns0:cell>90.08</ns0:cell><ns0:cell>90.00</ns0:cell><ns0:cell cols='4'>89.94 88.24 88.40 86.57</ns0:cell></ns0:row><ns0:row><ns0:cell>80</ns0:cell><ns0:cell>85.85</ns0:cell><ns0:cell>89.32</ns0:cell><ns0:cell>89.34</ns0:cell><ns0:cell cols='4'>88.83 89.00 88.51 87.00</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Average 79.60</ns0:cell><ns0:cell>85.93</ns0:cell><ns0:cell>85.83</ns0:cell><ns0:cell cols='4'>84.72 84.40 83.60 82.63</ns0:cell></ns0:row><ns0:row><ns0:cell>Increase</ns0:cell><ns0:cell /><ns0:cell>6.33</ns0:cell><ns0:cell>6.23</ns0:cell><ns0:cell>5.12</ns0:cell><ns0:cell>4.80</ns0:cell><ns0:cell>4.00</ns0:cell><ns0:cell>3.03</ns0:cell></ns0:row><ns0:row><ns0:cell cols='8'>We used the U-Net architecture on top of the MobileNetV2 architecture for the Oxford-IIIT Pet</ns0:cell></ns0:row><ns0:row><ns0:cell cols='8'>Dataset segmentation experiments. DPDA provides by far the best performance in these experiments</ns0:cell></ns0:row><ns0:row><ns0:cell>(see Table</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Segmentation performance comparisons using MobileNetV2+U-Net on Oxford-IIIT Pet Dataset The above results indicate that models trained with the augmented UC Merced Land-use, Intel Image Classification, Oxford-IIIT Pet datasets, with the DPDA method, significantly improve classification performance. Besides, DPDA also provides superior performance in an image segmentation task. Thus, we can infer that the proposed DPDA method can improve DL performance for different datasets, different DL architectures (ResNet, DenseNet, and MobileNetV2), and different image analysis tasks.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='4'>Baseline DPDA DPDA+FI FI</ns0:cell><ns0:cell>RE</ns0:cell><ns0:cell cols='2'>HE+GC GC</ns0:cell></ns0:row><ns0:row><ns0:cell>Average Accuracy 80.82</ns0:cell><ns0:cell>89.31</ns0:cell><ns0:cell>82.48</ns0:cell><ns0:cell cols='3'>81.64 80.59 80.44</ns0:cell><ns0:cell>78.63</ns0:cell></ns0:row><ns0:row><ns0:cell>Accuracy Increase</ns0:cell><ns0:cell>8.49</ns0:cell><ns0:cell>1.66</ns0:cell><ns0:cell>0.81</ns0:cell><ns0:cell>0.13</ns0:cell><ns0:cell>-0.38</ns0:cell><ns0:cell>-2.20</ns0:cell></ns0:row></ns0:table><ns0:note>13/17PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57948:2:0:NEW 4 May 2021)</ns0:note></ns0:figure>
</ns0:body>
" | "Response to the Editor and the Reviewers
May 4, 2021
In general, reviewers agree this new version of the paper has many improvements and they
have only minor suggestions to accept it. Please take them into account in the next version of
the paper.
Response: We are grateful for the valuable comments and positive feedback of the reviewers
and the editor. We see that the final issues about our paper are the consistency of the tables
and more analysis on execution time. We tried our best to resolve both of these issues.
Reviewer: Jonathan Heras
Basic reporting : The issues reported in the previous version have been addressed.
Response: We are glad that we were able to improve the quality of the paper with your guidance.
Experimental design : A minor issue with the Intel Image Classification dataset is that the
authors did not include the results obtained using DPDA+FI. The same happens in Table 7.
Please be consistent when presenting the results from Tables 1 to Table 7.
Response: Thanks for this point. We added DPDA+FI to Tables 3, 4, and 7, and now all
7 tables in the results section are consistent.
Validity of the findings : The approach presented by the authors seems to consistently improve
the results in several datasets.
Response: Thanks. We are also pleased to have good results for our enriched experiments.
Comments for the author : The authors have addressed all my concerns, and it is great that
they provide the code for both Windows and Linux. As the authors suggest in the response,
porting their code to Python will be quite time-consuming and is independent of the presented method.
Response: Thanks for all these motivating comments. Both reviewers strongly emphasize the
importance of porting code to Python since Python is the de facto development environment
for Machine Learning. This motivates us to combine our planned future efforts, parallelization
and porting code to Python, into one study. As pointed out by the reviewer, this will require some
time and effort, so we plan to realize these efforts as soon as possible as a new research study
on parallelizing the proposed DPDA method in Python using PyCUDA.
Reviewer 3
Basic reporting :
- The authors have improved the manuscript overall, following the suggestions made by the
reviewers.
- Although source code and data source links were provided, authors should consider Python to
reach a bigger audience.
Response:
Thanks for the positive feedback. We did our best to improve the paper with the suggestions we
received from the reviewers and the editor.
Both reviewers strongly emphasize the importance of porting code to Python since Python is the
de facto development environment for Machine Learning. This motivates us to combine our
planned future efforts, parallelization and porting the code to Python, into one study. However, this
will require some time and effort, so we plan to realize it as soon as possible as a new research
study on parallelizing the proposed DPDA method in Python using PyCUDA.
Experimental design :
- The experimental study was extended by including 2 datasets and a segmentation analysis,
which was also suggested in the previous revision.
- There is a concern in Figure 13. The authors should have plotted the execution time of the other methods as well. According to the authors, there is a great gap between the compared methods
and the proposal. A section where a balance is made between performance and runtime metrics
is needed; e.g., the authors could elaborate a new metric where both mentioned metrics are merged.
Response:
Thanks for the positive comments. Now, the performance analysis section provides more
findings to the readers.
Augmentation methods in the literature usually have linear and low computational complexity;
hence they are fast, while DPDA has higher computational complexity than the other methods.
Thus, the execution time gap increases as the image size increases. However, image sizes are
not that large in the datasets in the literature. Also, in our last submission, the execution time in
Figure 13 was given for a single data augmentation, where DPDA seems slow. Nevertheless, the
execution time of the DPDA does not get any longer for multiple augmentations, since the heavy
computations (finding density decreasing paths) are only done once per image. We can then
create an augmented image (using Perlin noise) in the order of milliseconds, and this can be done
several times (e.g., 10 times). In the revised paper, the image sizes (in pixels) of the 3 datasets
we used are shown in the updated Figure 13 (the image size on the x-axis is in the order of 10^6):
OxfordPet Dataset:  435 × 387 → 0.168345 × 10^6 pixels
UCMerced Dataset:   256 × 256 → 0.065536 × 10^6 pixels
Intel Dataset:      150 × 150 → 0.022500 × 10^6 pixels
Figure 13: Execution time with respect to image size
So, the actual execution time of DPDA is in the leftmost part of the x-axis of Figure 13.
We improved Figure 13. The improved figure compares the execution time of the DPDA method
with the other augmentation methods (RE, GC, and FI) for the common image sizes in
the literature (i.e., the OxfordPet, UC Merced, and Intel datasets). Note that the execution times in
Figure 13 are given in log-scale to allow a better visual comparison of the DPDA method with the
other methods, which have millisecond execution times. We also marked these 3 datasets
in Figure 13 so that readers can see their execution times. We tried our best
to come up with a 'performance improvement versus execution time' metric, but we could not
develop such a metric. This is because data augmentation scenarios and requirements are quite
different for each application and each case, and the importance of speed versus performance
is entirely subjective.
Validity of the findings :
- The authors provided an extended discussion about the proposal and the possibilities to combine it with other data augmentation methods.
- Finally, the authors should consider the execution time as an important issue. So far, the time
reported in section “Execution Time Analysis” was for 1 image. An analysis about how long a
complete execution takes in every dataset should be included.
Response:
Thanks. We found a chance to extend the discussion section since the revised paper has more
findings than the first submission. Also, we carefully improved the discussion section as suggested by the reviewer.
As we explained in the 'Experimental design' part above, we improved
Figure 13 and the related explanations to give the detailed information suggested by the reviewer.
" | Here is a paper. Please give your review comments after reading it. |
126 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The development of correct and effective software defect prediction (SDP) models is one of the utmost needs of the software industry. Statistics of many defect-related open-source data sets depict the class imbalance problem in object-oriented projects. Models trained on imbalanced data lead to inaccurate future predictions owing to biased learning, and hence to ineffective defect prediction. In addition, a large number of software metrics degrades model performance. This study aims at (1) identification of useful metrics in the software using correlation feature selection, (2) extensive comparative analysis of 10 resampling methods to generate effective machine learning models for imbalanced data, (3) inclusion of stable performance evaluators (AUC, GMean, and Balance), and (4) integration of statistical validation of results. The impact of 10 resampling methods is analyzed on selected features of 12 object-oriented Apache datasets using 15 machine learning techniques. The performances of the developed models are analyzed using AUC, GMean, Balance, and sensitivity. Statistical results advocate the use of resampling methods to improve SDP. Random oversampling yields the best predictive capability among the developed defect prediction models. The study provides a guideline for identifying metrics that are influential for SDP. The performances of oversampling methods are superior to those of undersampling methods.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1.'>Introduction</ns0:head><ns0:p>Software Defect Prediction (SDP) deals with uncovering probable future defects. Efficient defect prediction helps in the timely identification of areas in software that can lead to defects. Statistically, ensemble methods and nearest neighbor methods performed better than neural networks and statistical techniques. The answers to these questions are explored by building ML models on CFS-selected features using ten-fold cross-validation. The predictive performances of the developed models are evaluated using stable performance evaluators like sensitivity, GMean, Balance, and AUC. <ns0:ref type='bibr' target='#b38'>Kitchenham et al. (2017)</ns0:ref> reported that the results of many studies are biased because they lack statistical validation. They recommend using a robust statistical test to examine whether performance differences are significant or not. Statistical validation is carried out using the Friedman test, followed by post hoc analysis performed using the Nemenyi test. The conducted study will acquaint developers with useful resampling methods and performance evaluators that will assist them in solving CIP. This study also guides developers and software practitioners about the important metrics that affect the SDP potential of ML models. The result examination ascertained ROS-based and AHC-based ML models as the best defect predictors for datasets related to the software engineering domain. With ROS as the resampling method, nearest neighbors and ensembles exhibited comparable performance in SDP. These models were statistically better than other ML models. The rest of the paper is organized as follows. Section 2 'Related Work' deals with research work done in SDP for imbalanced data. The empirical study design is explained in Section 3 'Materials and Methods'. Section 4 'Results' expounds on the empirical findings and provides answers to the set RQs. Next, Section 5 'Discussions' summarizes the results and provides a comparison of this study with related studies. Section 6 'Validity Threats' uncovers the validity threats of this study. Finally, Section 7 'Conclusions' presents the concluding remarks with potential future directions.</ns0:p></ns0:div>
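For illustration, a minimal sketch of this validation step follows; the score matrix is hypothetical, and scikit-posthocs is a third-party package assumed to be available.

```python
import numpy as np
from scipy.stats import friedmanchisquare
import scikit_posthocs as sp

# rows = datasets (blocks), columns = competing models (e.g., AUC per dataset)
scores = np.array([
    [0.78, 0.84, 0.81],
    [0.72, 0.83, 0.79],
    [0.75, 0.86, 0.80],
    [0.70, 0.82, 0.77],
])
stat, p = friedmanchisquare(*scores.T)  # one sample per model
print(f"Friedman statistic = {stat:.3f}, p = {p:.4f}")
if p < 0.05:                            # differences exist; locate them pairwise
    print(sp.posthoc_nemenyi_friedman(scores))
```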
<ns0:div><ns0:head n='2'>Related Work</ns0:head><ns0:p>This section presents related work on feature selection and on the resampling solutions proposed in the SDP field.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1'>Feature Selection in SDP</ns0:head><ns0:p>Apache datasets have 20 OO metrics, and models developed using all of these metrics can hamper the defect prediction capability of ML models. The reason is the presence of redundant or irrelevant metrics. This curse of dimensionality can be reduced by using feature reduction strategies, which involve either feature selection (reducing the number of features) or feature extraction (deriving new features from existing ones). This study focuses on feature selection using the widely accepted CFS technique. <ns0:ref type='bibr' target='#b25'>Ghotra, McIntosh & Hassan (2017)</ns0:ref> explored 30 feature selection techniques and concluded that CFS was the best one. They used NASA and PROMISE datasets with 21 ML techniques. <ns0:ref type='bibr' target='#b4'>Balogun et al. (2019)</ns0:ref> explored feature selection and feature reduction methods for five NASA datasets over four ML techniques and experimentally concluded that FS techniques did not show consistent behavior across the datasets or ML techniques. Recent studies <ns0:ref type='bibr' target='#b3'>(Arar & Ayan, 2017;</ns0:ref><ns0:ref type='bibr' target='#b49'>Lingden et al. 2019</ns0:ref>) have emphasized the importance of feature selection and the impact of CFS in building efficient models with reduced complexity and computation time. <ns0:ref type='bibr' target='#b5'>Balogun et al. (2020)</ns0:ref> empirically investigated the effect of 46 FS methods over 25 datasets from different sources using Naïve Bayes and decision trees. Based on accuracy and AUC performance, they concluded that CFS was the best performer in the FSS category. Therefore, in this study, CFS is used to reveal the most relevant features.</ns0:p></ns0:div>
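For illustration, a minimal sketch of the merit score underlying CFS (Hall's correlation-based feature selection) follows; Pearson correlation is used here for simplicity, whereas Hall's formulation uses symmetrical uncertainty for discrete attributes, and a full CFS adds a subset search on top of this score.

```python
import numpy as np

def cfs_merit(X, y):
    """X: (n_samples, k) metrics of a candidate subset; y: defect labels.
    merit = k*r_cf / sqrt(k + k*(k-1)*r_ff), where r_cf is the mean
    feature-class correlation and r_ff the mean feature-feature correlation."""
    k = X.shape[1]
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(k)])
    r_ff = (np.mean([abs(np.corrcoef(X[:, i], X[:, j])[0, 1])
                     for i in range(k) for j in range(i + 1, k)])
            if k > 1 else 0.0)
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)
```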
<ns0:div><ns0:head n='2.2'>Solutions Proposed for CIP in SDP</ns0:head><ns0:p>Only 20% of software classes are accountable for the defects in software <ns0:ref type='bibr' target='#b39'>(Koru & Tian, 2005)</ns0:ref>. This principle explains the uneven distribution of minority (defective) and majority (non-defective) classes. The areas of SDP and software change prediction <ns0:ref type='bibr' target='#b53'>(Malhotra & Khanna, 2017;</ns0:ref><ns0:ref type='bibr' target='#b71'>Tan et al., 2015)</ns0:ref> have been explored to handle CIP, with promising outcomes. To solve this class imbalance issue, a variety of resampling methods have been proposed in the literature, among which oversampling and undersampling techniques are the most widely used. <ns0:ref type='bibr' target='#b50'>Liu, An & Huang (2006)</ns0:ref> used a combination of oversampling and undersampling techniques for predicting software defects with a support vector machine; the performance of the developed models was evaluated using F-Measure, GMean, and the ROC curve. Experimentation by <ns0:ref type='bibr' target='#b58'>Pelayo & Dick (2007)</ns0:ref> showed an improvement in GMean values for SDP when the oversampling technique SMT was used with a C4.5 decision tree classifier. <ns0:ref type='bibr' target='#b33'>Kamei et al. (2007)</ns0:ref> evaluated the effect of ROS, RUS, SMT, and OSS on industrial software; their experimental analysis proved that resampling methods improved the performance of LDA and LR models in terms of F1-measure. <ns0:ref type='bibr' target='#b35'>Khoshgoftaar & Gao (2009)</ns0:ref> used RUS to handle CIP together with a wrapper-based feature selection technique for attribute selection; they investigated four different combinations of sampling techniques and feature selection to evaluate which model has better predictive capability in terms of accuracy and AUC. <ns0:ref type='bibr' target='#b24'>Galar et al. (2011)</ns0:ref> performed SDP for imbalanced data using bagging- and boosting-based ensemble techniques with C4.5 as the base classifier, evaluating performance using the AUC measure. Some other studies support the application of resampling methods for handling CIP <ns0:ref type='bibr' target='#b63'>(Riquelme et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b59'>Pelayo & Dick, 2012;</ns0:ref><ns0:ref type='bibr' target='#b66'>Seiffert et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b64'>Rodriguez, 2014)</ns0:ref>. Shatwani (2012) also performed an empirical comparison of defect prediction models built using oversampling techniques with three different classifiers on the Eclipse dataset. <ns0:ref type='bibr' target='#b75'>Wang & Yao (2013)</ns0:ref> investigated ML models built using five different resampling methods on 10 PROMISE datasets, and their findings confirmed that models based on resampled data result in better SDP; their experimentation showed that effective models could be developed with the AdaBoost.NC ensemble. <ns0:ref type='bibr' target='#b30'>Jindaluang, Chouvatut & Kantabutra (2014)</ns0:ref> proposed an undersampling technique with the k-centers clustering algorithm, which proved effective in terms of sensitivity and F-measure, but they did not use any stable metric for imbalanced data such as GMean or AUC. <ns0:ref type='bibr' target='#b72'>Tantithamthavorn, Hassan & Matsumoto (2018)</ns0:ref> used four resampling methods with seven ML techniques and concluded that RUS performed better than the others.
They also proposed an optimized SMT whose performance was comparable to RUS on 101 datasets. <ns0:ref type='bibr'>Agrawal & Menzies (2018)</ns0:ref> proposed a modified SMT by tuning the SMT parameters and emphasized that preprocessing, i.e., resampling, is more important than the ML technique used to build the model: if the data could be made better (less skewed), the results would be more reliable. <ns0:ref type='bibr' target='#b52'>Malhotra & Kamal (2019)</ns0:ref> inspected the impact of oversampling techniques on ML models built with 12 NASA datasets; they demonstrated the improvement of ML models with oversampling and proposed a new resampling method, SPIDER3. Though many studies have been conducted, there is still no particular set of resampling methods that can be considered the overall winner, and these techniques certainly need more replicated studies with different classifiers and different datasets. NASA datasets are most often exploited by researchers for investigating CIP; we have instead used Apache datasets to visualize the effect of sampling techniques. Moreover, apart from decision tree-based and ensemble-based classifiers, this study uses rule-based, neural network-based, and statistical machine learners for assessing the predictive capability of models.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>Materials & Methods</ns0:head><ns0:p>This section describes the components involved in this empirical study and the framework established to build a classification model for defect prediction, from dataset collection to model validation. Figure <ns0:ref type='figure'>1</ns0:ref> explains the experimental setup for the study.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Step 1: Dataset Collection</ns0:head><ns0:p>Datasets mined from the PROMISE repository are used for empirical predictive modeling and validation. These datasets were donated by <ns0:ref type='bibr' target='#b32'>(Jureczko & Madeyski, 2010)</ns0:ref> and downloaded from the PROMISE repository (http://openscience.us/repo). The datasets consist of 20 software metrics and a dependent variable indicating the number of defects in a particular class. The selection of projects is based on the percentage of data imbalance in them. Details of the datasets are provided in Table <ns0:ref type='table'>1</ns0:ref>. #Classes denotes the total number of classes in the project, #DClasses denotes the number of defective classes, and #%ageDefects represents the percentage of defective classes in a project. The percentage of defective classes in the addressed projects varies from 9.85% to 33.9%; this low percentage reflects the imbalanced ratio of defective to non-defective classes in the datasets.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Step 2: Data Preprocessing and Identification of Variables</ns0:head><ns0:p>The independent variables used in this study are different OO metrics characterizing a software system from various aspects. Chidamber and Kemerer (CK) metric suite <ns0:ref type='bibr' target='#b15'>(Chidamber & Kemerer, 1994</ns0:ref>): six popularly used metrics, namely Weighted Methods of a Class (WMC), Depth of Inheritance Tree (DIT), Number of Children (NOC), Lack of Cohesion in Methods (LCOM), Response For a Class (RFC), and Coupling Between Objects (CBO), are incorporated in this metric suite. This metric suite has been validated in many empirical studies for developing SDP models.</ns0:p></ns0:div>
<ns0:div><ns0:head>Quality Model for Object-Oriented Design metric suite (QMOOD) (Bansiya & Davis, 2002):</ns0:head><ns0:p>The study uses a few metrics from this metric suite, namely Number of Public Methods (NPM), Data Access Metric (DAM), Measure of Functional Abstraction (MFA), Measure of Aggregation (MOA), and Cohesion Among Methods of a class (CAM), along with the CK metrics for developing defect prediction models. These metrics are also well exploited in related studies to develop effective software quality prediction models. Other metrics: a few additional metrics widely used by researchers have also been included as independent variables. These are Efferent Coupling (Ce), Afferent Coupling (Ca), Lines of Code (LOC), Coupling Between Methods of a Class (CBM), Average Method Complexity (AMC), and the variant of LCOM (LCOM3). Ce and Ca were proposed by <ns0:ref type='bibr' target='#b55'>Martin (1994)</ns0:ref>, and LOC, CBM, AMC, and LCOM3 were proposed by <ns0:ref type='bibr' target='#b28'>Henderson-Sellers (1995)</ns0:ref>. Inheritance Coupling (IC), maximum cyclomatic complexity (Max_cc), and average cyclomatic complexity (Avg_cc) are also used in addition to the above metrics. Details of the metrics used can be found at http://gromit.iiar.pwr.wroc.pl/p_inf/ckjm/metric.html. The datasets are checked for inconsistencies such as missing or redundant data; if data inconsistencies remained, the model predictions would be biased, so it is important to clean the data before using it for model development. The collected datasets contain a continuous variable representing the number of defects. It is converted into a binary variable by replacing '0' with 'no' and any natural number with 'yes'. The binary dependent variable 'defect', with the two possible values 'yes' and 'no', reflects whether the software class is defective or not.</ns0:p></ns0:div>
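The label construction described above is straightforward to reproduce. The following is a minimal sketch in Python/pandas, assuming a PROMISE-style CSV whose defect-count column is named "bug"; the file and column names are illustrative, not taken from the study.

```python
# A minimal sketch of the label binarization step described above.
import pandas as pd

df = pd.read_csv("ant-1.7.csv")  # hypothetical PROMISE-style export

# Convert the defect count into the binary dependent variable:
# 0 defects -> "no", any natural number of defects -> "yes".
df["defect"] = df["bug"].apply(lambda n: "yes" if n > 0 else "no")

# Drop the raw count so only the 20 OO metrics and the label remain.
df = df.drop(columns=["bug"])
print(df["defect"].value_counts())
```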
<ns0:div><ns0:head n='3.3'>Step 3: Feature Selection</ns0:head><ns0:p>All of these metrics may not be important for predicting defects in the early stages of project development. This study employs CFS for the identification of significant metrics: a review by <ns0:ref type='bibr'>Malhotra [56]</ns0:ref> revealed that CFS is the most commonly used feature selection technique, and it is the most preferred FS technique in the SDP literature <ns0:ref type='bibr' target='#b25'>(Ghotra, McIntosh & Hassan, 2017;</ns0:ref><ns0:ref type='bibr' target='#b3'>Arar & Ayan, 2017;</ns0:ref><ns0:ref type='bibr' target='#b49'>Lingden et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b5'>Balogun et al., 2020)</ns0:ref>. CFS ranks features based on information gain and identifies an optimal subset of features: the features in the subset are highly correlated with the class label 'defect' and are uncorrelated or only weakly correlated with each other. CFS is incorporated to minimize the multicollinearity effect. A list of the metrics selected by CFS for each dataset is presented in Table <ns0:ref type='table'>2</ns0:ref>.</ns0:p></ns0:div>
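For illustration, a simplified greedy version of CFS is sketched below. Weka's CFS ranks candidate subsets using an entropy-based (symmetrical uncertainty) correlation; Pearson correlation is substituted here for brevity, so this is only an approximation of the technique used in the study. The merit function is the standard CFS formulation.

```python
# Approximate CFS: greedy forward search over the standard merit
#   Merit_S = k * r_cf / sqrt(k + k * (k - 1) * r_ff)
# where r_cf is the mean feature-class correlation and r_ff the mean
# feature-feature correlation of the candidate subset S of size k.
import numpy as np
import pandas as pd

def merit(df, features, label="defect"):
    k = len(features)
    y = (df[label] == "yes").astype(float)
    r_cf = np.mean([abs(df[f].corr(y)) for f in features])
    if k == 1:
        return r_cf
    r_ff = np.mean([abs(df[a].corr(df[b]))
                    for i, a in enumerate(features)
                    for b in features[i + 1:]])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

def cfs_greedy(df, label="defect"):
    remaining = [c for c in df.columns if c != label]
    selected, best = [], 0.0
    while remaining:
        score, feat = max((merit(df, selected + [f], label), f)
                          for f in remaining)
        if score <= best:      # stop when merit no longer improves
            break
        best = score
        selected.append(feat)
        remaining.remove(feat)
    return selected
```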
<ns0:div><ns0:head n='3.4'>Step 4: Handling CIP</ns0:head><ns0:p>This step involves applying resampling methods for treating CIP. These methods are implemented using the knowledge extraction tool based on evolutionary learning (KEEL) <ns0:ref type='bibr' target='#b2'>(Alcalá-Fdez et al., 2011)</ns0:ref>. Six oversampling methods and four undersampling methods were applied to create a balance between the majority and minority classes. Details of these resampling methods are given in Table <ns0:ref type='table'>3</ns0:ref>.</ns0:p></ns0:div>
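The study runs these methods in KEEL with default parameters; an equivalent sketch using the imbalanced-learn package is shown below for readers who prefer Python. Not all ten methods have drop-in counterparts in imbalanced-learn (AHC, SLSMT, SPD, and CNNTL, for instance, do not), so only a subset is illustrated.

```python
# Balancing a defect dataset with several of the resampling methods
# from Table 3, using imbalanced-learn as a stand-in for KEEL.
from collections import Counter
from imblearn.over_sampling import RandomOverSampler, SMOTE, ADASYN
from imblearn.under_sampling import (RandomUnderSampler,
                                     NeighbourhoodCleaningRule,
                                     OneSidedSelection)

X, y = df.drop(columns=["defect"]), df["defect"]  # from the steps above

samplers = {
    "ROS": RandomOverSampler(random_state=42),
    "SMT": SMOTE(k_neighbors=5, random_state=42),  # k=5 as in the study
    "ADSYN": ADASYN(random_state=42),
    "RUS": RandomUnderSampler(random_state=42),
    "NCL": NeighbourhoodCleaningRule(),
    "OSS": OneSidedSelection(random_state=42),
}
for name, sampler in samplers.items():
    X_res, y_res = sampler.fit_resample(X, y)
    print(name, Counter(y_res))  # class counts after resampling
```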
<ns0:div><ns0:head n='3.5'>Step 5: Performance Evaluation and Model Development</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.5.1'>Performance Evaluators</ns0:head><ns0:p>When it comes to imbalanced data, the selection of appropriate performance evaluators plays a critical role. These measures can be calculated using the confusion matrix shown in Table <ns0:ref type='table'>4</ns0:ref>. TP is the number of defective classes predicted correctly; TN is the number of non-defective classes predicted correctly; FP is the number of non-defective classes wrongly predicted as defective; FN is the number of defective classes wrongly predicted as non-defective.</ns0:p><ns0:p>Using accuracy to evaluate performance is misleading when data is imbalanced. Instead, robust performance evaluators like AUC, GMean, and Balance should be used in the class imbalance framework. Sensitivity indicates the probability of correctly predicted defective classes out of the total defective classes, whereas specificity refers to the probability of identifying non-defective classes correctly. Sensitivity, or True Positive Rate (TPR), and specificity are defined as</ns0:p><ns0:formula xml:id='formula_0'>\mathrm{Sensitivity} = \frac{TP}{TP + FN} \quad (1) \qquad \mathrm{Specificity} = \frac{TN}{TN + FP} \quad (2)</ns0:formula><ns0:p>GMean maintains a balance between both of these accuracies <ns0:ref type='bibr' target='#b48'>(Li et al. 2012)</ns0:ref>, and is therefore a sound measure for assessing models on imbalanced data. GMean is defined as the geometric mean of sensitivity and specificity:</ns0:p><ns0:formula xml:id='formula_1'>\mathrm{GMean} = \sqrt{\mathrm{Sensitivity} \times \mathrm{Specificity}} \quad (3)</ns0:formula><ns0:p>Balance corresponds to the Euclidean distance between a pair of sensitivity and False Positive Rate (FPR) values <ns0:ref type='bibr' target='#b48'>(Li et al. 2012)</ns0:ref>. FPR is the probability of a false alarm: the proportion of non-defective classes misclassified as defective amongst the actual non-defective classes. With sensitivity and FPR expressed as percentages, Balance is computed as</ns0:p><ns0:formula xml:id='formula_2'>\mathrm{Balance} = 1 - \frac{\sqrt{\left(0 - \frac{FPR}{100}\right)^{2} + \left(1 - \frac{\mathrm{Sensitivity}}{100}\right)^{2}}}{\sqrt{2}} \quad (4) \qquad \text{where} \quad FPR = \frac{FP}{TN + FP} \quad (5)</ns0:formula><ns0:p>The area under the curve (AUC) is widely accepted as a consistent and robust performance evaluator for predictions in imbalanced data <ns0:ref type='bibr' target='#b19'>(Fawcett, 2006;</ns0:ref><ns0:ref type='bibr' target='#b53'>Malhotra & Khanna, 2017)</ns0:ref>. It is threshold independent, can handle skewed data, and measures how well the model distinguishes between the two classes. The range of AUC is (0, 1): the higher the AUC value, the better the prediction model. An AUC value of 0.5 signifies that the model cannot differentiate between the two classes; values from 0.7 to 0.8 are considered acceptable, and values greater than 0.8 are considered excellent.</ns0:p></ns0:div>
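A small sketch tying Eqs. (1)-(5) together, computed directly from confusion-matrix counts; the counts passed in the usage line are made up for illustration.

```python
import math

def evaluators(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)                  # Eq. (1), TPR
    specificity = tn / (tn + fp)                  # Eq. (2), TNR
    gmean = math.sqrt(sensitivity * specificity)  # Eq. (3)
    fpr = fp / (tn + fp)                          # Eq. (5)
    # Eq. (4), with sensitivity and FPR as proportions here; the study
    # reports Balance rescaled to a 0-100 range, hence the final * 100.
    balance = 1 - math.sqrt((0 - fpr) ** 2 +
                            (1 - sensitivity) ** 2) / math.sqrt(2)
    return sensitivity, gmean, 100 * balance

# Illustrative counts only.
print(evaluators(tp=30, fn=10, fp=20, tn=140))
```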
<ns0:div><ns0:head n='3.5.2'>Model Development using ML Techniques</ns0:head><ns0:p>This study developed models based on 15 ML techniques. Ten-fold within-project cross-validation was carried out to reduce partitioning bias: the data is divided into ten partitions, nine of which are used for training and the remaining one for testing, and the performance evaluators are then averaged across the ten folds. The ML parameters used in the experiments of this study are noted in Table <ns0:ref type='table'>5</ns0:ref>. The ML techniques used can be divided into the five major categories described below.</ns0:p></ns0:div>
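A sketch of this protocol is shown below, assuming (as is standard practice, though not spelled out in the text) that resampling is applied to the training folds only; the classifier, its parameters, and the sampler in the usage line are illustrative stand-ins for the study's WEKA/KEEL setup.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_auc_score
from imblearn.over_sampling import RandomOverSampler

def cross_validate_auc(X, y, sampler=None):
    """Ten-fold stratified CV; X, y are numpy arrays with y in {0, 1}."""
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
    aucs = []
    for train_idx, test_idx in skf.split(X, y):
        X_tr, y_tr = X[train_idx], y[train_idx]
        if sampler is not None:            # resample training folds only
            X_tr, y_tr = sampler.fit_resample(X_tr, y_tr)
        clf = AdaBoostClassifier(n_estimators=10).fit(X_tr, y_tr)
        prob = clf.predict_proba(X[test_idx])[:, 1]
        aucs.append(roc_auc_score(y[test_idx], prob))
    return float(np.mean(aucs))            # averaged across the ten folds

# Usage (illustrative):
# auc_ros = cross_validate_auc(X, y, RandomOverSampler(random_state=42))
```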
<ns0:div><ns0:head n='3.5.2.1'>Statistical Techniques</ns0:head><ns0:p>• Naive Bayes (NB) <ns0:ref type='bibr' target='#b31'>(John & Langley, 1995)</ns0:ref>: Naïve Bayes is a probability-based classifier that works on the Bayes theorem. It is an instance-based learner that computes class-wise conditional probabilities. Features are assumed to be conditionally independent of each other, though it provides fair results even when this assumption is violated. This ML technique works well for both categorical and numerical variables.</ns0:p><ns0:p>• Logistic Regression (LR) (Le Cessie & Van Houwelingen, 1992): Logistic Regression is also a probabilistic classifier, used for dichotomous variables. It assumes that the data follows a Gaussian distribution, but works reasonably well even when this assumption is violated. During training, the coefficient values are shrunk by a ridge estimator to mitigate multicollinearity, which makes the model simpler. The algorithm runs until it converges.</ns0:p><ns0:p>• Simple Logistic (SL) <ns0:ref type='bibr' target='#b70'>(Sumner, Frank & Hall, 2005)</ns0:ref>: SimpleLogistic uses LogitBoost to construct logistic regression models. LogitBoost uses the logit transform to predict probabilities; with each repetition, one simple regression model is added per class, and the process terminates when there is no further reduction in classification error.</ns0:p><ns0:p>• LogitBoost (LB) <ns0:ref type='bibr' target='#b22'>(Friedman, Hastie & Tibshirani, 2000)</ns0:ref>: LogitBoost is an additive logistic regression with a decision stump as the base classifier. It maximizes the likelihood and therefore generalizes the linear logistic model. The base decision stump considers entropy for classification.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.5.2.2'>Neural Networks</ns0:head><ns0:p>• MultiLayerPerceptron (MLP) <ns0:ref type='bibr' target='#b65'>(Rojas & Feldman, 2013)</ns0:ref>: MLP is a backpropagation neural network that uses the sigmoid function as its activation function. The number of hidden layers in the network is determined by the average of the number of attributes and the total number of classes for a particular dataset. The error is backpropagated in every epoch and reduced via gradient descent, and the network is then learned based on the revised weights.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.5.2.3'>Nearest Neighbors</ns0:head><ns0:p>• IBk <ns0:ref type='bibr'>(Aha, Kibler & Albert, 1991)</ns0:ref>: IBk is an instance-based k-nearest neighbor learner. It calculates the Euclidean distance of the test sample to all training samples to find its 'k' nearest neighbors, and assigns the class label to the testing instance based on the majority classification of those neighbors. With k=1, only one nearest neighbor is determined and its class label is assigned to the testing instance.</ns0:p><ns0:p>• Kstar <ns0:ref type='bibr' target='#b16'>(Cleary & Trigg, 1995)</ns0:ref>: Like IBk, Kstar is an instance-based learning algorithm. The difference between the two techniques lies in the similarity measure they use: IBk exploits Euclidean distance, whereas Kstar uses an entropy-based similarity measure. Kstar exhibits good classification competence for noisy and imbalanced data.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.5.2.4'>Ensembles</ns0:head><ns0:p>• AdaBoostM1 (ABM1) <ns0:ref type='bibr' target='#b61'>(Quinlan, 1993;</ns0:ref><ns0:ref type='bibr' target='#b21'>Freund & Schapire, 1996)</ns0:ref>: AdaBoostM1 is an ensemble technique in which a number of weak classifiers are used iteratively to improve the overall performance. It augments the performance of weak learners by adjusting the weak hypothesis returned by the weak learner. The base decision tree used in ABM1 is J48, which learns from previous trees about misclassified instances and calculates a weighted average. NumIterations in the parameters represents the number of classifiers involved in this ensemble. This technique helps in reducing the bias of the model.</ns0:p><ns0:p>• Bagging (Bag) <ns0:ref type='bibr' target='#b61'>(Quinlan, 1993;</ns0:ref><ns0:ref type='bibr' target='#b10'>Breiman, 1996)</ns0:ref>: Bagging, or bootstrap aggregation, is an ensemble technique that improves the predictive capability of base classifiers by making bags (bootstrap samples) of the training data. Models are trained in parallel and their results are averaged, which reduces variance. The number of bags used for experimentation is 10 and the base classifier used is J48.</ns0:p></ns0:div>
<ns0:div><ns0:head>Other ensemble learners</ns0:head><ns0:p>• Iterative Classifier Optimizer (ICO): LogitBoost is used as the iterative classifier in this technique, and cross-validation is utilized for its optimization. In the experiments conducted, it goes through 50 iterations to decide on the best cross-validated configuration.</ns0:p><ns0:p>• Logistic Model Tree (LMT) <ns0:ref type='bibr' target='#b42'>(Landwehr, Hall & Frank, 2005)</ns0:ref>: Logistic Model Tree is a meta-learning algorithm that uses logistic regression at the leaf nodes for classification. The combination of linear logistic regression and a decision tree helps in dealing with the bias-variance tradeoff. This technique is robust to missing values and can handle numeric as well as nominal attributes.</ns0:p><ns0:p>• Random Tree (RT) <ns0:ref type='bibr' target='#b46'>(Leo, 2001;</ns0:ref><ns0:ref type='bibr' target='#b70'>Sumner, Frank & Hall, 2005)</ns0:ref>: Random Tree is an ensemble-based supervised learner in which different trees are constructed from the same population. Random samples of the population are drawn to form different trees with a random selection of features. After the bags are constructed, models are developed and majority voting is performed to classify an instance.</ns0:p><ns0:p>• Random SubSpace method (RSS) <ns0:ref type='bibr' target='#b29'>(Ho, 1998)</ns0:ref>: Random SubSpace is used to construct random forests. Feature subsets are selected at random to generate multiple trees, and bagging is performed with REPTree. REPTree is faster than the basic decision tree; multiple trees are generated in each iteration, and the best-performing tree is selected.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.5.2.5'>Decision Trees</ns0:head><ns0:p>• Pruning rule-based classification tree (PART) <ns0:ref type='bibr' target='#b20'>(Frank & Witten, 1998)</ns0:ref>: PART is a rule-based learning algorithm that builds partial C4.5 decision trees and generates rules at each iteration; the rule that results in the best classification is selected. The minimum description length (MDL) principle is used to find the optimal split, and the smaller the confidence factor, the more pruning is done.</ns0:p><ns0:p>• J48 <ns0:ref type='bibr' target='#b61'>(Quinlan, 1993)</ns0:ref>: J48 is a Java implementation of the C4.5 decision tree. It follows a greedy technique to build the decision tree and uses the gain ratio as its splitting criterion. Leaf nodes carry the classification labels (defective and non-defective), and rules can be derived by traversing from the root to a leaf node. It generates a binary tree, and one-third of the data is used for reduced-error pruning.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.6'>Step 6: Statistical Validation</ns0:head><ns0:p>The results need to be statistically verified, because without statistical validation results may be misleading <ns0:ref type='bibr' target='#b20'>(Frank & Witten, 1998)</ns0:ref>. <ns0:ref type='bibr'>Kitchenham et al. (2018)</ns0:ref> emphasized employing robust statistical tests for validating experimental results. For statistical validation, either parametric or non-parametric tests can be used; the non-parametric Friedman test <ns0:ref type='bibr' target='#b23'>(Friedman, 1940)</ns0:ref> and the Nemenyi test are exercised in this study because software data do not follow a normal distribution <ns0:ref type='bibr' target='#b18'>(Demšar, 2006)</ns0:ref>. The Friedman test is executed for the different performance evaluators to establish whether there are statistical differences amongst the performances of the developed SDP models. Since several ML models built over several datasets must be compared, Friedman mean ranks are computed from the actual values of each performance measure, and these ranks are then exploited in the post hoc Nemenyi test. If the Friedman test detects a significant difference, post hoc analysis is carried out with the Nemenyi test to find pairwise significant differences, i.e., to determine which techniques statistically outperform others. The Friedman and Nemenyi tests are the non-parametric alternatives of the parametric ANOVA and Tukey tests. Both tests are performed using a 95% confidence interval; hypotheses are set for the corresponding test and accepted or refuted at α = 0.05.</ns0:p></ns0:div>
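A sketch of this validation step is shown below: the Friedman test from SciPy, followed by the Nemenyi post hoc test, for which the scikit-posthocs package is assumed. The AUC values in the array are made up purely to make the example runnable.

```python
# Friedman test across methods, then Nemenyi post hoc analysis.
import numpy as np
from scipy.stats import friedmanchisquare
import scikit_posthocs as sp

# rows = datasets, columns = compared methods (illustrative AUC values)
auc = np.array([
    [0.83, 0.81, 0.74, 0.70],   # dataset 1: ROS, AHC, RUS, NS
    [0.88, 0.86, 0.77, 0.76],   # dataset 2
    [0.79, 0.80, 0.71, 0.69],   # dataset 3
    [0.91, 0.89, 0.80, 0.78],   # dataset 4
])

stat, p = friedmanchisquare(*auc.T)     # one sample per method
print(f"Friedman chi-square = {stat:.3f}, p = {p:.4f}")

if p < 0.05:                            # reject H0 at alpha = 0.05
    nemenyi = sp.posthoc_nemenyi_friedman(auc)
    print(nemenyi)                      # pairwise p-values
```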
<ns0:div><ns0:head n='4'>Results</ns0:head></ns0:div>
<ns0:div><ns0:head n='4.1'>RQ1: Which features are repeatedly selected by CFS in software engineering datasets?</ns0:head><ns0:p>The 20 OO metrics of the datasets fundamentally define the IQAs of the software and can be grouped into cohesion, coupling, size, complexity, inheritance, encapsulation, and composition metrics. The OO metrics corresponding to each IQA are presented in Table <ns0:ref type='table'>6</ns0:ref>. #Selected denotes the number of times a particular metric was selected by CFS across all datasets. LCOM was selected for 11 datasets, whereas the Ce metric was selected for 10 datasets. This RQ examines the metrics that are important for SDP. The weightage of each metric selected by CFS over the 12 datasets was considered, and the proportion of selection was determined for each IQA. Among the cohesion metrics, LCOM, CAM, and LCOM3 were chosen for 91.67%, 66.67%, and 41.67% of the datasets respectively; the cumulative proportion of selection of cohesion metrics is 66.7%. This shows that cohesion metrics are important for SDP, and while developing software, developers can focus more on LCOM and CAM values. Similarly, the composition metric (MOA) was selected for 66.67% of the datasets. Exploring Table <ns0:ref type='table'>6</ns0:ref>, although the cumulative proportion of coupling metrics is 55.6%, their significance can be judged from the top three selected metrics: RFC, Ca, and Ce. RFC was picked for 10 datasets, whereas Ca and Ce were opted for 9 datasets each; considering only these three metrics, the proportion of selection of coupling metrics rises from 55.6% to 77.78%. Among the size metrics, LOC and AMC are the more preferred software metrics for defect prediction. Across all datasets, the least selected metrics belong to the inheritance category. Developers can therefore invest their resources wisely: the number of times any metric is selected across all the datasets guides developers and software practitioners in determining its worth for SDP.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>RQ2: What is the performance of ML techniques on imbalanced data while building SDP models?</ns0:head><ns0:p>A box and whisker diagram (boxplot) graphically represents a numerical data distribution using five statistics: a) the smallest observation, b) the lower quartile (Q1), c) the median, d) the upper quartile (Q3), and e) the largest observation. The box is constructed on the interquartile range (IQR) from Q1 to Q3, the median is represented by the line inside the box, and the whiskers at both ends indicate the smallest and largest observations. Figure 2 presents boxplot diagrams that visually depict the defect predictive capability of ML models on imbalanced data in terms of AUC, Balance, GMean, and sensitivity.</ns0:p><ns0:p>AUC Analysis: The AUC of ML models on imbalanced data varies from 0.52 to 0.86 with a median value of 0.76. Only 55.6% of models have an AUC greater than 0.75, and AUC is less than 0.85 in 97.8% of cases. Log4j1.1 depicted the best AUC value of 0.86, with NB. Analysis of the experimental results reveals that the statistical techniques NB and SL performed fairly well in terms of AUC, yielding the highest AUC values for Log4j1.1, Ant1.7, Jedit4.2, Log4j1.0, and Tomcat6.0. Similarly, LMT models produced the highest AUC values for five datasets. Overall, 25% of ML models have an AUC of less than 0.7.</ns0:p><ns0:p>Balance Analysis: The range of Balance in the No Resampling (NS) case is from 29.20 to 76.55. In Tomcat6.0, a value of only 59.01 was achieved as the maximum, by NB. Balance values achieved by RSS, PART, and LMT are comparatively lower than those of the other ML techniques. The median Balance over all models in the NS case is 56.35. Ivy2.0, Synapse1.0, and Tomcat6.0 attained their maximum Balance value with NB, while IBk produced the maximum Balance values for Jedit4.0, Synapse1.1, and Xerces1.3. 68.3% of cases have a Balance value of less than 65, and only 3.3% have a Balance value greater than 75.</ns0:p><ns0:p>GMean Analysis: When no resampling method is used, GMean ranges from 0 to 0.79, with a median value over all datasets of 0.6. NB achieved the highest GMean values for 58.33% of the datasets. Considering the models for all datasets with the 15 ML techniques, only 14.4% of models achieved a GMean greater than 0.7, and 30.6% of models have a GMean value of less than 0.5.</ns0:p><ns0:p>Sensitivity Analysis: Sensitivity is less than 60% in 93.9% of cases, and the median sensitivity is only 0.39. This reflects the low predictive capability of the developed models when CIP is not handled. Sensitivity values lie between 0 and 0.63 for almost all models, and only 0.6% of cases have a sensitivity greater than 70%, which is not an acceptable achievement for any prediction model. The maximum sensitivity value obtained in the NS case is 0.7, by IBk, the nearest neighbor technique, on the Log4j1.1 dataset. Thus, the overall performance of SDP models developed using machine learning algorithms on imbalanced data is not satisfactory for high-quality predictions.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3'>RQ3a: What is the comparative performance of various SDP models developed using resampling methods? RQ3b: Is there any improvement in the performance of SDP models developed using ML techniques on the application of resampling methods?</ns0:head><ns0:p>To answer these questions, we exploited the performance evaluators Sensitivity, GMean, Balance, and AUC, calculated from the confusion matrices obtained through ten-fold cross-validation of the models developed using resampling methods. Boxplot diagrams are generated and presented in Figure <ns0:ref type='figure'>3</ns0:ref> to visually depict the defect predictive capability of ML models in terms of AUC, Balance, GMean, and sensitivity on resampled data. Median values of the best resampling method and the NS scenario for AUC, Balance, GMean, and sensitivity are recorded in Figure <ns0:ref type='figure'>4</ns0:ref>, Figure <ns0:ref type='figure'>5</ns0:ref>, Figure <ns0:ref type='figure'>6</ns0:ref>, and Figure <ns0:ref type='figure'>7</ns0:ref> in the form of bar graphs. We analyzed the performance of the developed models based on the mean and median values obtained for the considered performance evaluators over the cumulative ML techniques. NS cases are included to provide a fair comparison with the resampling-based models.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3.1'>Comparative performance of various SDP models developed using resampling methods</ns0:head><ns0:p>For all the datasets, the bar graphs report the median values for NS and for the resampling method that yields the maximum median performance value. Comparing the boxplot diagrams in Figure <ns0:ref type='figure' target='#fig_6'>2</ns0:ref> with those in Figure <ns0:ref type='figure'>3</ns0:ref> shows the rise in the median line, the quartiles, and the highest value achieved by models built using resampled data.</ns0:p><ns0:p>AUC Analysis: On application of resampling methods, 51.4% of models achieved an AUC greater than 0.8. The bars in Figure <ns0:ref type='figure'>4</ns0:ref> make it visible that ROS performs best amongst the methods: ROS attained the highest mean and median values for 75% of the datasets (Ant1.7, Camel1.6, Ivy2.0, Jedit4.0, Jedit4.2, Log4j1.0, Synapse1.0, Tomcat6.0, and Xerces1.3) and also showed the highest mean value for Synapse1.1. Other resampling methods such as AHC, SMT, and SPD also performed well in terms of AUC; SPD obtained the highest median values for Log4j1.1 (0.92) and Synapse1.1 (0.87). Among the undersampling methods, only the NCL results can be considered progressive, and NCL managed to secure the highest median value for only one dataset, Synapse1.2. The highest AUC value of 1 was achieved on the Jedit4.2 and Tomcat6.0 datasets, and 13.9% of models have an AUC greater than 0.9, which is a remarkable improvement.</ns0:p><ns0:p>Balance Analysis: Referring to Figure <ns0:ref type='figure'>5</ns0:ref>, according to the Balance values achieved in predictive modeling, ROS again outperforms the other resampling methods. ADSYN achieved a maximum Balance value of 86.24, for Synapse1.0. Only three resampling methods (ROS, AHC, and SPD) achieved a highest Balance value greater than 90, with ROS generating a Balance value of 95.58 for Ivy2.0. Comparatively, the undersampling techniques RUS and OSS reached maximum Balance values of only 77.36 and 80.02 respectively. The highest median and mean values for all datasets except Synapse1.2 were attained by either SPD or ROS; Synapse1.2 obtained its highest mean value with the undersampling method NCL. 60.9% of cases have a Balance greater than 70, and there is a 357% growth in the median values of Balance when resampling is done as compared to NS.</ns0:p><ns0:p>GMean Analysis: Similar patterns are observed for GMean. The bar graphs in Figure <ns0:ref type='figure'>6</ns0:ref> portray the better predictive capability of ML techniques with ROS. The highest mean and median values were attained by ROS and SPD for all datasets except Synapse1.2, for which NCL gave the best mean and median values. 80.7% of cases have a GMean greater than 0.65 with resampling methods. AHC, though never attaining the maximum, performed consistently across all the datasets. With NS, only 3.9% of cases achieved a GMean value greater than 0.75; with resampling methods, the proportion of cases with a GMean greater than or equal to 0.75 rose to 42.7%, an improvement of 1182%. Comparing the oversampling and undersampling methods, the oversampling methods thrived in predicting defects efficiently.</ns0:p></ns0:div>
<ns0:div><ns0:p>Sensitivity Analysis: Sensitivity values are illustrated in the bar graph presented in Figure <ns0:ref type='figure'>7</ns0:ref>. CNNTL worked best for Log4j1.1 and Synapse1.2, with average sensitivity values of 0.85 and 0.83 respectively. For the other datasets, ROS outperformed the other resampling methods. After applying resampling methods, sensitivity rises above 0.9 for 11.9% of the developed models, and 53% of resampling-based models have a sensitivity greater than 0.7, as compared to only 0.6% of cases in the NS scenario. This accounts for a remarkable 9440% improvement in the proportion of ML models having a sensitivity value greater than 0.7. Therefore, it can be concluded that the predictive capability of ML models improves immensely after treating imbalanced data properly using resampling methods.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3.2'>Comparison of Resampling-based ML models with NS models</ns0:head><ns0:p>A comparison of the results of models developed without resampling methods and models developed using resampling methods answers RQ3b. In total, 1800 ML models were constructed for the 12 datasets with the ten resampling methods. The results are compared based on the maximum value attained and the averaged median value achieved for each dataset. The percentage improvement in the performance measures after resampling methods are used is presented in Table <ns0:ref type='table'>7</ns0:ref>. Analysis of Table <ns0:ref type='table'>7</ns0:ref> shows a positive increment in all the maximum and median values for the four performance evaluators when resampling methods are employed to build SDP models. This proves that there is a definite improvement in resampling-based ML models over the NS models when evaluated using AUC, Balance, GMean, and sensitivity. For AUC, the overall percentage growth of the median value is 6.3%. The maximum AUC value achieved in the NS case is 0.86, for Log4j1.1, which increases to 0.97 when resampling methods are applied. Jedit4.2 and Tomcat6.0 attained a maximum AUC value of 1, showing increases of 18.2% and 22.2%. For Synapse1.0, on application of resampling methods, the maximum value shows an incremental growth of 33.8% and the respective median growth corresponds to 24.3%. The increases in the median values of Balance and GMean over all the datasets are 29.1% and 21.2% respectively with resampling methods. Synapse1.0, Tomcat6.0, Ivy2.0, and Camel1.6 have shown more than 60% improvement in Balance median values and more than 70% improvement in GMean median values. The maximum Balance value gained by models with resampling methods is 95.58, for Ivy2.0, which was earlier 58.88. Similarly, the sensitivity median values show a remarkable improvement of 84.6%: the median value in the NS case was 0.39 when all datasets were considered together, and it rose to 0.71 on application of resampling methods. Answer to RQ3b: the results verify that there is an improvement in the performance of SDP models developed using ML techniques on the application of resampling methods.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.4'>RQ4: Which resampling method outperforms the addressed undersampling and oversampling techniques for building an efficient SDP model?</ns0:head><ns0:p>This RQ addresses the effectiveness of the 10 resampling methods investigated in this study for constructing good SDP models: for the experiments conducted, is there any particular resampling method that can be considered the best? In this direction, we conducted Friedman tests on the performance evaluators to rank the models built using the resampling methods; the case when no resampling method is used is also included. The Friedman test statistically assesses the difference between techniques. Four hypotheses are formed, one for each performance evaluator: H0i (Null Hypothesis): there is no significant statistical difference, in terms of PMj, between the performance of any of the defect prediction models developed after using resampling methods and the models developed using the original data. Hai (Alternate Hypothesis): there is a significant statistical difference, in terms of PMj, between the performance of any of the defect prediction models developed after using resampling methods and the models developed using the original data. Here, i = 1 to 4, denoting hypotheses H1, H2, H3, and H4, and j = 1 to 4, with PM1 = AUC, PM2 = Balance, PM3 = GMean, and PM4 = Sensitivity. Table <ns0:ref type='table'>8</ns0:ref> provides the ranking of the SDP models developed in this study for AUC, Balance, GMean, and Sensitivity; the mean ranks of each resampling method and NS are shown in parentheses. As discussed, NS represents the scenario when no sampling technique is used, so it represents the performance on original data. We evaluated the hypotheses at the 0.05 level of significance, i.e., a 95% confidence interval. Rank 1 is the best rank and rank 11 is the worst. The p-values achieved for all performance evaluators are 0.000, as recorded in Table <ns0:ref type='table'>8</ns0:ref>. As the p-values are less than 0.05, we reject the null hypotheses and conclude that there is a significant difference between the resampling methods applied to develop SDP models. Table <ns0:ref type='table'>8</ns0:ref> shows that ROS and AHC unanimously scored Rank 1 and Rank 2 respectively for all the performance evaluators. Of the four undersampling methods, only one, NCL, makes it into the first seven positions for the reliable performance evaluators (AUC, Balance, and GMean), while OSS, CNNTL, and RUS occupy the last four ranks on these evaluators. These rankings clearly indicate the supremacy of the oversampling methods over the undersampling methods. For Balance, GMean, and Sensitivity, the NS case is ranked last; the results therefore statistically confirm the observation in RQ3 that resampling methods improve the predictive power of SDP models. We used Kendall's coefficient of concordance to assess the effect size, a quantitative measure of the magnitude of the experimental effect; its value ranges from 0 to 1 and reflects the degree of agreement between rankings. The Kendall row in Table <ns0:ref type='table'>8</ns0:ref> gives values of 0.656, 0.589, 0.587, and 0.538 for AUC, Balance, GMean, and sensitivity respectively. As the values are greater than 0.5 but less than 0.7, the effect is moderate.
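Kendall's coefficient of concordance is directly related to the Friedman statistic; a standard formulation, stated here for completeness rather than quoted from the paper, is

```latex
W = \frac{\chi^2_F}{N\,(k-1)}
```

where \(\chi^2_F\) is the Friedman statistic, \(N\) is the number of blocks being ranked, and \(k\) is the number of compared methods.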
The Friedman test tells whether there is an overall difference in the model performances; if there is, it does not further identify the pairwise differences, i.e., exactly which technique is significantly different from the others. For this, a post hoc analysis was conducted using the Nemenyi test on the overall datasets, and the comparative pairwise performance of all the resampling methods was evaluated against ROS. The Nemenyi test was carried out at the α = 0.05 level of significance. Table <ns0:ref type='table'>9</ns0:ref> summarizes the Nemenyi test results for AUC, GMean, Balance, and sensitivity; 'S+' represents 'significantly better' results. If the difference between the mean ranks of two techniques is less than the value of the critical distance (CD), there is no significant difference at the 95% confidence interval; if the difference is greater than the CD value, the technique with the higher rank is considered statistically better. The computed CD value for the Nemenyi test conducted for resampling methods was 1.135. It can be inferred from Table <ns0:ref type='table'>9</ns0:ref> that ROS has comparable performance with AHC and exhibits statistically better performance than all other compared scenarios based on AUC, Balance, GMean, and sensitivity. Answer to RQ4: oversampling methods resulted in better SDP models than undersampling methods, and ROS and AHC emerged as the statistically better resampling methods in terms of AUC, Balance, GMean, and sensitivity.</ns0:p></ns0:div>
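The critical distance used above follows the standard Nemenyi formulation from Demšar (2006); the reported CD of 1.135 corresponds to this formula under the study's settings:

```latex
CD = q_{\alpha}\,\sqrt{\frac{k\,(k+1)}{6N}}
```

where \(q_{\alpha}\) is the Studentized-range-based critical value for \(k\) compared methods and \(N\) is the number of measurements per method.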
<ns0:div><ns0:head n='4.5'>RQ5: Which ML technique performs the best for SDP based on resampled data?</ns0:head><ns0:p>To answer this RQ, we performed the Friedman test for the 15 ML techniques, considering the ROS-based models. The Friedman rankings are recorded in Table <ns0:ref type='table'>10</ns0:ref>, with Rank 1 as the best rank. The Friedman test was conducted at the 0.05 significance level with 14 degrees of freedom. The null hypothesis states that there is no difference between the comparative performances of ROS-based models for the different ML techniques. The p-value for each performance evaluator was 0.000, so the results are significant at the 95% level, and we refute the null hypothesis: there is a significant difference amongst the performances of models built using different ML techniques. The Kendall values for AUC, Balance, GMean, and sensitivity for the ML techniques with ROS-based models are 0.876, 0.835, 0.844, and 0.838; all four performance evaluators have a Kendall value greater than or equal to 0.835. This signifies that the rankings for the different datasets are approximately 83.5% similar, which increases the reliability and credibility of the Friedman statistical results; the impact of the differences in results is high. It can be observed from Table <ns0:ref type='table'>10</ns0:ref> that the Kstar, IBk, ABM1, RT, and RSS techniques yielded better prediction models than the other ML techniques. Kstar and IBk are variants of the nearest neighbor technique; these ML techniques provide good results when there is a large number of samples, irrespective of the data distribution, and nearest neighbors are fast instance-based learners. Mean ranks are given in parentheses. The mean rank of Kstar is 14.41 for AUC, the highest in the pool. IBk ranks second when the models are evaluated based on Balance, GMean, and sensitivity, with mean ranks of 12.66, 12.91, and 12.33 respectively. ABM1, RT, Bag, and RSS have also attained high ranks: ABM1 ranks first on Balance and GMean, with mean ranks of 13.41 and 13.5 respectively, and takes 2nd rank on AUC and 4th rank on sensitivity. For the stable performance evaluators, ABM1, Bag, RT, and RSS appear in the first six ranks, proving their competence in the ML world. These four techniques are ensembles, which are considered robust in dealing with imbalanced data; the crux of ensemble techniques is to cover the weaknesses of the base ML technique and combine learners so as to reduce bias or variance and enhance predictive capability. The last three ranks for all the performance evaluators are taken by the statistical learners: Naïve Bayes, Simple Logistic, and Logistic Regression. These ML techniques, in contrast, were able to generate good prediction models when no resampling method was involved in model construction; balancing the two classes boosted the performance of the other classifiers, especially the ensembles and nearest neighbors, leaving the statistical learners with comparatively unacceptable performance. The post hoc analysis was carried out using the Nemenyi test, with results computed at CD = 6.191. The pairwise comparison of ML techniques using the Nemenyi test for AUC, Balance, GMean, and sensitivity is presented in Table <ns0:ref type='table' target='#tab_2'>11</ns0:ref>. ABM1 was paired with the other ML techniques and was found comparable with IBk, Kstar, RT, Bag, RSS, PART, LMT, and J48, and statistically significantly better than MLP, ICO, LB, SL, LR, and NB.
Answer to RQ5: Ensembles and nearest neighbors performed the best for SDP on imbalanced data with the random oversampling method.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>Discussions</ns0:head><ns0:p>Feature selection is an important activity in software quality predictive modeling, and the most widely accepted feature selection technique in the literature is CFS. Through this study, we provided the important subset of features to software practitioners. We identified coupling and cohesion as the most important internal quality attributes that developers should focus on while building software to avoid defects; defects in software were found to be highly correlated with LCOM, RFC, Ca, and Ce.</ns0:p><ns0:p>After applying resampling methods, and taking the datasets in the order Ant1.7, Camel1.6, Ivy2.0, Jedit4.0, Jedit4.2, Log4j1.0, Log4j1.1, Synapse1.0, Synapse1.1, Synapse1.2, Tomcat6.0, and Xerces1.3, the median values of sensitivity improved by 45.7%, 341.9%, 250%, 72.2%, 154.8%, 66.7%, 18.9%, 330%, 46.5%, 32.8%, 293.3%, and 88.8% respectively. For the same dataset order, the median values of Balance improved by 16.4%, 61.3%, 74.7%, 23.6%, 49.5%, 26%, 2.2%, 86.8%, 13.1%, 7.2%, 77.7%, and 31.9%; the median values of GMean improved by 13.1%, 73.7%, 73.5%, 16.9%, 44.7%, 20.1%, 0.4%, 88.3%, 9%, 5.5%, 83.5%, and 24.2%; and the median values of ROC-AUC improved by 1.9%, 3.8%, 14.3%, 1.9%, 1.3%, 8.7%, 0.6%, 24.3%, 0.7%, 3.3%, 4.7%, and 5.8%. For the datasets considered, ROS and AHC were significantly better than the other resampling methods, and better model predictions can be achieved by incorporating oversampling methods rather than undersampling methods.</ns0:p></ns0:div>
<ns0:div><ns0:p>Resampling methods did not improve the defect predictive capability of the statistical techniques; developers and researchers should prefer ensemble methods for software quality predictive modeling. Our findings are in agreement with the conclusions of <ns0:ref type='bibr' target='#b75'>Wang & Yao (2013)</ns0:ref>, who proved that resampling methods improve defect prediction and, in their settings, found that the AdaBoost ensemble gave the best performance; however, they were not able to conclude which resampling method software practitioners should select. We have provided a detailed statistical analysis and experimentally shown ROS and AHC to be the better options for researchers and other practitioners. <ns0:ref type='bibr' target='#b9'>Bennin et al. (2016)</ns0:ref> concluded that ROS outperformed the SMT approach when defective classes constitute less than 20% of the software; this study also empirically found ROS to be statistically better than SMT, supporting their conclusions. <ns0:ref type='bibr' target='#b72'>Tantithamthavorn et al. (2018)</ns0:ref> included a large number of datasets but provided a comparative study with only four resampling methods; they also optimized SMT and found its performance comparable with RUS. We created 1980 models over 12 datasets, as compared to their 4242 models over 101 datasets; we covered 10 resampling methods, providing breadth in the resampling dimension, and we compared models over 15 ML techniques, whereas <ns0:ref type='bibr' target='#b72'>Tantithamthavorn et al. (2018)</ns0:ref> used only 7. The results of our study differ from theirs, chiefly because model evaluation is highly dependent on the nature of the data and the behavior of the ML techniques, a problem that is amplified by adding the third dimension of resampling methods. Furthermore, <ns0:ref type='bibr'>Bennin et al. (2018)</ns0:ref> empirically concluded that AUC performance is not improved by SMOTE, which contradicts the results of <ns0:ref type='bibr' target='#b72'>Tantithamthavorn et al. (2018)</ns0:ref>. Therefore, replicated studies are required in the future for a fair comparison.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>Validity Threats</ns0:head></ns0:div>
<ns0:div><ns0:head n='6.1'>Conclusion Validity</ns0:head><ns0:p>The conclusion validity threat is a threat to statistical validity, indicating that the results of an empirical study are not properly analyzed and validated. To avoid this threat, ten-fold cross-validation is performed, and the Friedman test together with the Nemenyi test used during post hoc analysis strengthens conclusion validity further. Both statistical validation techniques used in this study are non-parametric in nature; non-parametric tests make no assumptions about the underlying data and are therefore applicable to the selected datasets. This reinforces the analysis of the relationship between the independent and dependent variables, hence enhancing conclusion validity.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.2'>Internal Validity</ns0:head><ns0:p>Applying resampling methods changes the original ratio of defective and non-defective classes. This could affect the causal relationship between the independent and dependent variables, introducing internal validity bias into our study. However, we used the stable performance evaluators GMean and AUC to assess the performance of the different SDP models, which helps reduce this threat. We also used sensitivity, which accounts for the proper classification of defective classes, one of the major requirements of our problem domain. This judicious selection of performance evaluators may have reduced some of the effect of the internal validity threat.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.3'>Construct Validity</ns0:head><ns0:p>Construct validity concerns the correctness of the way the independent and dependent variables of the study are measured, and whether the variables are correctly mapped to the concepts they represent. The independent variables of this study are object-oriented software metrics, and the dependent variable is 'defect', representing the absence or presence of a defect in a software module. The chosen independent and dependent variables are widely used in defect prediction studies, which builds confidence that construct validity threats are minimal in our study.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.4'>External Validity</ns0:head><ns0:p>External validity refers to the extent to which the results of the study are widely applicable, i.e., whether its conclusions can be generalized. The datasets used in this study are publicly available, and the software concerned is written in Java; the validity of the results therefore holds for similar situations only, and the results may not be valid for proprietary software. The resampling methods are implemented with default parameter settings in KEEL <ns0:ref type='bibr' target='#b2'>(Alcalá-Fdez et al., 2011)</ns0:ref>, and the ML parameter settings are provided, so this study can be reproduced without complications. This minimizes the external validity threat in this study.</ns0:p></ns0:div>
<ns0:div><ns0:head n='7'>Conclusions</ns0:head><ns0:p>This study evaluates the effect of resampling methods on various machine learning models for defect prediction using Apache software. In total, 1980 models were built, and their performances were empirically compared using stable performance evaluators such as GMean, Balance, and AUC. Important features were selected using CFS, and ten-fold cross-validation was executed while training the models. Six oversampling and four undersampling methods were analyzed with 15 ML techniques, and the results were statistically validated using the Friedman test followed by Nemenyi post hoc analysis; the use of statistical tests reinforces the correctness of the results. CFS is incorporated to minimize the multicollinearity effect, whereas the resampling methods reduce model bias and ensure that the predictions are not dominated by the majority class label. This study confirms that SDP models developed with resampling methods have enhanced predictive capability compared to models developed without them. The results of the Friedman test strongly advocate the use of the random oversampling method for improving the predictive capability of SDP classifiers on imbalanced data; apart from ROS, AHC and SMT also demonstrated good predictive capability for uncovering defects. The Nemenyi test, which controls the family-wise error, concluded that ROS and AHC are significantly and statistically better than the other resampling methods. Models developed using oversampling methods illustrated better defect prediction capability than those using undersampling methods. Handling CIP using ROS and AHC will aid developers and software practitioners in detecting defects effectively in the early stages of software development, reducing testing cost and effort. Resampling methods also greatly improved the performance of ensemble methods. Future directions involve the inclusion of more imbalanced datasets and investigating the impact of resampling methods on them; datasets can also be taken from other prevalent languages, such as C#. There is a dire need for a benchmark study that compares all the existing resampling solutions in the literature under common experimental settings, and optimized versions of base resampling methods like RUS or ROS can be proposed. Further, we would also like to explore the consequences of resampling methods with search-based techniques for the classification of software defects. Besides predicting defects, the framework can also be utilized to predict defect severity.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 3: Resampling Methods and Descriptions</ns0:head><ns0:p>Oversampling methods:</ns0:p><ns0:p>ADAptive SYNthetic Sampling (ADSYN) <ns0:ref type='bibr' target='#b27'>(He et al., 2008)</ns0:ref>: In ADASYN, synthetic samples are generated by finding the density distribution of the minority classes; the density distribution is computed using a k-nearest neighbor with Euclidean distance. It is an extension of the Synthetic Minority Oversampling Technique and focuses on the samples that are hard to classify.</ns0:p><ns0:p>Synthetic Minority Over-sampling Technique (SMT) <ns0:ref type='bibr' target='#b14'>(Chawla et al., 2002)</ns0:ref>: The number of minority class samples is increased by generating artificial samples in the direction of the k nearest neighbors of the minority class samples. If one neighbor is selected, one synthetic sample is generated for each original minority sample, resulting in 100% oversampling of the minority classes; in this study, k=5 results in 500% oversampling of the minority classes.</ns0:p><ns0:p>Safe Level Synthetic Minority Oversampling Technique (SLSMT) <ns0:ref type='bibr' target='#b12'>(Bunkhumpornpat, Sinapiromsaran & Lursinsap, 2009)</ns0:ref>: Unlike SMT, where synthetic samples are generated randomly, SLSMT first calculates safe levels that help determine safe positions for generating the synthetic samples. If the safe level value is close to 0, the sample is considered noise; if it is close to k (for a k nearest neighbor implementation), it is considered safe and synthetic minority samples are produced there.</ns0:p><ns0:p>Selective Preprocessing of Imbalanced Data (SPD) <ns0:ref type='bibr' target='#b69'>(Stefanowski & Wilk, 2008)</ns0:ref>: This technique categorizes each sample as either safe or noise based on the nearest neighbor rule, with distances measured using a heterogeneous value distance metric. If an instance is accurately classified by its k nearest neighbors, it is considered safe; otherwise it is considered noise and discarded.</ns0:p><ns0:p>Random OverSampling (ROS) <ns0:ref type='bibr' target='#b8'>(Batista, Prati & Monard, 2004)</ns0:ref>: ROS is a very simple oversampling technique in which minority class instances are replicated at random with the sole aim of creating a balance between majority and minority class instances.</ns0:p><ns0:p>Agglomerative Hierarchical Clustering (AHC) <ns0:ref type='bibr' target='#b17'>(Cohen et al., 2006)</ns0:ref>: In AHC, each class is decomposed into sub-clusters and synthetic samples are generated corresponding to the cluster prototypes. Since the artificial samples are created as centroids of sub-clusters of classes, they extract the characteristics of that class and represent better samples than randomly generated ones.</ns0:p><ns0:p>Undersampling methods:</ns0:p><ns0:p>Random UnderSampling (RUS) <ns0:ref type='bibr' target='#b8'>(Batista, Prati & Monard, 2004)</ns0:ref>: RUS, like ROS, is a non-heuristic technique, but instead of replicating minority class instances, majority class instances are removed with the aim of creating a balance between the majority and minority class instances. The problem with this technique is that some important or useful data may be rejected, as it is based on random selection.</ns0:p><ns0:p>Condensed Nearest Neighbor + Tomek's modification of Condensed Nearest Neighbor (CNNTL) <ns0:ref type='bibr' target='#b17'>(Cohen et al., 2006)</ns0:ref>: First, CNN is applied to find a consistent subset of samples, which helps to eliminate majority class samples that are far from the decision border. Then Tomek links [46] are made between samples; if a link exists between two samples, either both are borderline samples or one of them is noise. Samples with Tomek links that belong to the majority classes are removed.</ns0:p><ns0:p>Neighborhood Cleaning Rule (NCL) <ns0:ref type='bibr' target='#b44'>(Laurikkala, 2001)</ns0:ref>: NCL uses the edited nearest-neighbor (ENN) rule to remove majority class samples. For each training sample Si, it first finds the three nearest neighbors. If Si is a majority class sample, it is discarded if more than two of its nearest neighbors classify it incorrectly; if Si is a minority class sample, the nearest neighbors are discarded if they classify it incorrectly.</ns0:p><ns0:p>One Sided Selection (OSS) <ns0:ref type='bibr' target='#b41'>(Kubat & Matwin, 1997)</ns0:ref>: OSS and CNNTL work similarly; the difference lies in the order of applying CNN and determining the Tomek links. OSS identifies unsafe samples using Tomek links and then applies the condensed nearest neighbor (CNN) rule. Noisy and borderline samples are considered unsafe, since even small noise may flip the decision border of borderline samples; CNN then eliminates the majority samples that are far away from the decision boundaries.</ns0:p></ns0:div>
<ns0:note type='other'>Figure 1: Experimental framework for SDP in imbalanced data</ns0:note>
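For completeness, the interpolation rule behind SMT's synthetic sample generation (Chawla et al., 2002) can be written as

```latex
x_{\mathrm{new}} = x_i + \lambda\,(\hat{x}_i - x_i), \qquad \lambda \sim U(0, 1)
```

where \(x_i\) is a minority instance and \(\hat{x}_i\) is one of its k nearest minority-class neighbors. With k = 5, as in this study, each minority instance contributes five synthetic samples, i.e., 500% oversampling of the minority class.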
<ns0:figure xml:id='fig_2'><ns0:head>IBk</ns0:head><ns0:label /><ns0:figDesc>IBk is an instance-based K-nearest neighbor learner. It calculates the Euclidean distance between the test sample and all training samples to find its 'k' nearest neighbors. It then assigns a class label to the testing instance based on the majority classification of the nearest neighbors. With k=1, only one nearest neighbor is determined and its class label is assigned to the testing instance.</ns0:figDesc></ns0:figure>
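As a sketch of the IBk prediction rule just described (illustrative only, not WEKA's implementation):

```python
import numpy as np

def ibk_predict(X_train, y_train, x, k=1):
    """Majority vote among the k nearest training samples under
    Euclidean distance; with k=1 the single nearest neighbour's
    label is returned."""
    nearest = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[counts.argmax()]
```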
<ns0:figure xml:id='fig_3'><ns0:head>AdaBoostM1 (ABM1)</ns0:head><ns0:label /><ns0:figDesc>AdaBoostM1 is an ensemble technique in which a number of weak classifiers are used iteratively to improve the overall performance. It augments the performance of weak learners by adjusting the weak hypothesis returned by the weak learner. The base decision tree used in ABM1 is J48. J48 learns about misclassified instances from the previous trees and calculates the weighted average. NumIterations in the parameters represents the number of classifiers involved in this ensemble. This technique helps in reducing the bias of the model.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Bagging (BAG)</ns0:head><ns0:label /><ns0:figDesc>Bagging, or bootstrap aggregation, is also an ensemble technique that improves the predictive capability of base classifiers by making bags of training data. Models work in parallel and their results are averaged.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Random Tree (RT)</ns0:head><ns0:label /><ns0:figDesc>Random Tree is an ensemble-based supervised learner where different trees are constructed from the same population. Random samples of the population are generated to form different trees with a random selection of features. After the bags are constructed, models are developed and majority voting is performed to determine the class.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2 presents the Boxplot diagrams to visually depict the defect predictive capability of ML models on imbalanced data in terms of AUC, Balance, GMean, and sensitivity. AUC Analysis: The AUC of ML models on imbalanced data varies from 0.52 to 0.86 with a median value of 0.76. Only 55.6% of models have AUC greater than 0.75. AUC is less than 0.85 in 97.8% of cases. Log4j1.1 depicted the best AUC value of 0.86 with NB. Analysis of the experimental results reveals that the statistical techniques NB and SL performed fairly well in terms of AUC, depicting the highest AUC values for Log4j1.1, Ant1.7, Jedit4.2, Log4j1.0, and Tomcat6.0. Similarly, LMT models were able to predict the highest AUC values for five datasets. Overall, 25% of ML models have AUC less than 0.7. Balance Analysis: The range of Balance in the No Resampling (NS) case is from 29.20 to 76.55. In Tomcat6.0, a maximum value of only 59.01 was achieved, by NB. Balance values achieved by RSS, PART, and LMT are comparatively lower than those of other ML techniques. The median value of Balance for all models in the NS case is 56.35. Ivy2.0, Synapse1.0, and Tomcat6.0 attained the maximum Balance value with NB. IBk also resulted in maximum Balance values for Jedit4.0, Synapse1.1, and Xerces1.3. 68.3% of datasets have a Balance value of less than 65. Only 3.3% of datasets have a Balance value greater than 75. GMean Analysis: When no resampling method is used, GMean ranges from 0 to 0.79. The median value of GMean for all datasets is observed as 0.6. NB achieved the highest GMean values for 58.33% of datasets. Considering the models for all datasets with 5 different ML techniques, only 14.4% of models achieved GMean greater than 0.7. 30.6% of models have a GMean value less than 0.5. Sensitivity Analysis: Sensitivity is less than 60% in 93.9% of cases. The median value of sensitivity is only 0.39. This supports the low predictive capability of the developed models when CIP is not handled. Sensitivity values lie between 0 and 0.63. Only 0.6% of cases have a sensitivity greater than 70%, which is not an acceptable achievement for any prediction model. The maximum sensitivity value obtained in the NS case is 0.7, by IBk, the nearest neighbor technique, in the Log4j1.1 dataset. Thus, the overall performance of SDP models developed using machine learning algorithms on imbalanced data is not satisfactory for high-quality predictions. 4.3 RQ3a: What is the comparative performance of various SDP models developed using resampling methods? RQ3b: Is there any improvement in the performance of SDP models developed using ML techniques on the application of resampling methods? To answer these questions, we exploited the performance evaluators Sensitivity, GMean, Balance, and AUC, calculated with the help of confusion matrices obtained from ten-fold cross-validation of models developed using resampling methods. Boxplot diagrams are generated and presented in Figure 3 to visually depict the defect predictive capability of ML models in terms of AUC, Balance, GMean, and sensitivity on resampled data. Median values of the best resampling method and the NS scenario for AUC, Balance, GMean, and sensitivity are recorded in Figure 4, Figure 5, Figure 6, and Figure 7 in the form of bar graphs. We have analyzed the</ns0:figDesc></ns0:figure>
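The performance evaluators discussed above can all be derived from the confusion matrix. A minimal sketch, using the definitions of GMean and Balance that are standard in SDP studies (shown for illustration; this is not the evaluation code of the study):

```python
import math

def sdp_metrics(tp, fn, fp, tn):
    """Sensitivity (pd), GMean and Balance from a binary confusion
    matrix; Balance is the normalised distance from the ideal point
    (pf = 0, pd = 1), as commonly defined in SDP work."""
    pd = tp / (tp + fn)   # sensitivity: recall on the defective class
    pf = fp / (fp + tn)   # false positive rate
    gmean = math.sqrt(pd * (1 - pf))
    balance = 1 - math.sqrt(pf ** 2 + (1 - pd) ** 2) / math.sqrt(2)
    return pd, gmean, balance
```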
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='51,42.52,178.87,525.00,264.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='52,42.52,178.87,525.00,264.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='53,42.52,178.87,525.00,264.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='54,42.52,178.87,525.00,264.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Friedman Test Mean Ranks of Resampling Methods and ML Techniques</ns0:head><ns0:label /><ns0:figDesc>Mean ranks (higher is better) for AUC, Balance, GMean, and Sensitivity.</ns0:figDesc><ns0:table>
Resampling methods, mean rank per performance measure:
Balance: ROS (10.08), AHC (9.01), SPD (7.93), SMT (7.56), NCL (6.6), ADSYN (6.23), SLSMT (5.09), RUS (4.42), CNNTL (3.85), OSS (3.41), NS (1.77)
GMean: ROS (10.15), AHC (9.01), SPD (7.91), SMT (7.46), NCL (6.95), ADSYN (6.11), SLSMT (4.88), RUS (4.36), CNNTL (3.66), OSS (3.3), NS (2.16)
Sensitivity: ROS (9.89), AHC (8.83), SMT (7.06), SPD (7.06), ADSYN (7.01), CNNTL (6.48), NCL (5.29), SLSMT (5.2), RUS (4.01), OSS (3.9), NS (1.22)

ML techniques, mean rank (AUC | Balance | GMean | Sensitivity):
Rank 1: Kstar (14.41) | ABM1 (13.41) | ABM1 (13.5) | RT (12.62)
Rank 2: ABM1 (13.75) | IBk (12.66) | IBk (12.91) | IBk (12.54)
Rank 3: BAG (13.37) | RT (12.16) | RT (12.58) | Kstar (12.33)
Rank 4: RSS (12.29) | BAG (12.12) | BAG (11.75) | ABM1 (12.2)
Rank 5: IBk (9.83) | Kstar (11.08) | Kstar (11.25) | LMT (10.95)
Rank 6: RT (9.2) | RSS (10.33) | RSS (9.58) | PART (9.95)
Rank 7: LMT (8.58) | PART (9.04) | PART (9.04) | J48 (9.83)
Rank 8: J48 (7.62) | LMT (9) | LMT (9) | BAG (9.54)
Rank 9: PART (7.58) | J48 (8.58) | J48 (8.7) | RSS (8.75)
Rank 10: LB (6.08) | MLP (5.45) | MLP (5.75) | LB (5.25)
Rank 11: ICO (5.54) | ICO (4.87) | ICO (4.87) | MLP (4.91)
Rank 12: MLP (4.75) | LB (4.83) | LB (4.7) | ICO (4.83)
Rank 13: LR (2.79) | SL (2.75) | SL (2.66) | SL (2.66)
Rank 14: SL (2.5) | LR (2.41) | LR (2.33) | LR (2.41)
Rank 15: NB (1.66) | NB (1.25) | NB (1.33) | NB (1.16)
p-value: 0.000 | 0.000 | 0.000 | 0.000
Kendall: 0.876 | 0.835 | 0.844 | 0.838
</ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 11</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Pairwise Comparison of ML Techniques using the Nemenyi Test for AUC, Balance, GMean, and Sensitivity (NS: difference not statistically significant; S+: statistically significant difference in favour of ABM1)</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Pair</ns0:cell><ns0:cell>AUC</ns0:cell><ns0:cell>Balance</ns0:cell><ns0:cell>GMean</ns0:cell><ns0:cell>Sensitivity</ns0:cell></ns0:row><ns0:row><ns0:cell>ABM1-IBk</ns0:cell><ns0:cell>NS</ns0:cell><ns0:cell>NS</ns0:cell><ns0:cell>NS</ns0:cell><ns0:cell>NS</ns0:cell></ns0:row><ns0:row><ns0:cell>ABM1-RT</ns0:cell><ns0:cell>NS</ns0:cell><ns0:cell>NS</ns0:cell><ns0:cell>NS</ns0:cell><ns0:cell>NS</ns0:cell></ns0:row><ns0:row><ns0:cell>ABM1-BAG</ns0:cell><ns0:cell>NS</ns0:cell><ns0:cell>NS</ns0:cell><ns0:cell>NS</ns0:cell><ns0:cell>NS</ns0:cell></ns0:row><ns0:row><ns0:cell>ABM1-Kstar</ns0:cell><ns0:cell>NS</ns0:cell><ns0:cell>NS</ns0:cell><ns0:cell>NS</ns0:cell><ns0:cell>NS</ns0:cell></ns0:row><ns0:row><ns0:cell>ABM1-RSS</ns0:cell><ns0:cell>NS</ns0:cell><ns0:cell>NS</ns0:cell><ns0:cell>NS</ns0:cell><ns0:cell>NS</ns0:cell></ns0:row><ns0:row><ns0:cell>ABM1-PART</ns0:cell><ns0:cell>NS</ns0:cell><ns0:cell>NS</ns0:cell><ns0:cell>NS</ns0:cell><ns0:cell>NS</ns0:cell></ns0:row><ns0:row><ns0:cell>ABM1-LMT</ns0:cell><ns0:cell>NS</ns0:cell><ns0:cell>NS</ns0:cell><ns0:cell>NS</ns0:cell><ns0:cell>NS</ns0:cell></ns0:row><ns0:row><ns0:cell>ABM1-J48</ns0:cell><ns0:cell>NS</ns0:cell><ns0:cell>NS</ns0:cell><ns0:cell>NS</ns0:cell><ns0:cell>NS</ns0:cell></ns0:row><ns0:row><ns0:cell>ABM1-MLP</ns0:cell><ns0:cell>S+</ns0:cell><ns0:cell>S+</ns0:cell><ns0:cell>S+</ns0:cell><ns0:cell>S+</ns0:cell></ns0:row><ns0:row><ns0:cell>ABM1-ICO</ns0:cell><ns0:cell>S+</ns0:cell><ns0:cell>S+</ns0:cell><ns0:cell>S+</ns0:cell><ns0:cell>S+</ns0:cell></ns0:row><ns0:row><ns0:cell>ABM1-LB</ns0:cell><ns0:cell>S+</ns0:cell><ns0:cell>S+</ns0:cell><ns0:cell>S+</ns0:cell><ns0:cell>S+</ns0:cell></ns0:row><ns0:row><ns0:cell>ABM1-SL</ns0:cell><ns0:cell>S+</ns0:cell><ns0:cell>S+</ns0:cell><ns0:cell>S+</ns0:cell><ns0:cell>S+</ns0:cell></ns0:row><ns0:row><ns0:cell>ABM1-LR</ns0:cell><ns0:cell>S+</ns0:cell><ns0:cell>S+</ns0:cell><ns0:cell>S+</ns0:cell><ns0:cell>S+</ns0:cell></ns0:row><ns0:row><ns0:cell>ABM1-NB</ns0:cell><ns0:cell>S+</ns0:cell><ns0:cell>S+</ns0:cell><ns0:cell>S+</ns0:cell><ns0:cell>S+</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Dear Editors and Reviewers of PeerJ - Computer Science,
Please find enclosed the revision of the paper: Predicting defects in imbalanced data using resampling methods: An empirical investigation
We have modified the manuscript following the major reviews that were received.
In summary, we have performed the following modifications to the manuscript:
• After the Friedman test, post-hoc analysis is conducted with the Nemenyi test, as suggested.
• The effect size is analyzed using Kendall’s coefficient of concordance.
• Theoretical details of statistical tests are included in the subsection of “Materials and Methods” for a better understanding of readers.
• Brief descriptions of Research Questions are added in the “Introduction” section.
• Section “Materials and Methods” is modified to incorporate a detailed experimental process.
• Tables and Figures are revised according to the reviewers’ recommendations. Detailed tables are replaced by Boxplot diagrams and Bar graphs.
• A new section “Discussions” is added to the manuscript.
• The comparison with related studies is added to the manuscript in the “Discussions” section.
• The manuscript has been thoroughly revised to eliminate various grammatical, linguistics, and spelling mistakes.
We hope that we have addressed the issues raised by the reviewers in the best way
possible. We thank the reviewers for their constructive and detailed insight that helped us enrich our work significantly.
Below we report the comments from the review, with our point-by-point responses in blue.
We believe that the manuscript is now suitable for publication in PeerJ- Computer Science.
Sincerely,
Juhi Jain
On behalf of both authors
Comments of Reviewer #1:
1. The paper compares an impressive array of class rebalancing methods and defection prediction classifiers.
2. The paper comprehensively tries to estimate the impact of the class rebalancing method by experimenting with multiple configurations.
Comment 1: The structure or the organization of the paper is not very clear to me. I understand that the authors of the paper are trying to investigate the impact of class rebalancers on defect prediction models. However, I am not sure why the authors investigate the features that are selected consistently by a CFS method. They do not use this finding anywhere else. Furthermore, they assert that using all the features might lead to overfitting of the models. However, the studied defect datasets do not have more than 30 features! So I would recommend that the authors explain why they study a feature selection method along with class rebalancing if there is no connection between the findings.
Reply to Comment 1:
The reviewer is correct in observing that the datasets do not have more than 30 features. However, if we identify only the relevant features instead of using all of them and construct models based on those, this will certainly give us better predictions in less computation time. CFS uses greedy forward selection and helps us recognize the subset of features that are highly correlated with the class 'defect' and less correlated with each other. Even if there are fewer than 30 features (the reviewer's concern), the results can be biased if features are correlated with each other. Resampling methods are used to deal with the imbalanced data problem in predictive modelling, whereas CFS is incorporated to minimize the multicollinearity effect.
We have added reasoning about the inclusion of feature selection method along with class rebalancing and the rationale for using CFS in section Related Work and Material & Methods.
Line 149-151
Ghotra, McIntosh & Hassan (2017) explored 30 feature selection techniques and concluded CFS as the best feature predictor. They used NASA datasets and PROMISE datasets with 21 ML techniques.
Line 154-159
Recent studies (Lingden et al. 2019, Arar and Ayan, 2017) have emphasized the importance of feature selection and the impact of CFS in building efficient models with reduced complexity and computation time. Balogun et al. (2020) empirically investigated the effect of 46 FS methods over 25 datasets from different sources using Naïve Bayes and decision trees. Based on accuracy and AUC performance, they concluded CFS was the best performer in the FSS category. Therefore, in this study, CFS is used to reveal the most relevant features.
Line 252-260
Not all of these metrics may be important for predicting defects in the early stages of project development. This study employs CFS for the identification of significant metrics. A review by Malhotra [56] has revealed that CFS is the most commonly used feature selection technique. CFS is used in this study for selecting features because it is the most preferred FS technique in the SDP literature (Ghotra, McIntosh & Hassan, 2017; Arar & Ayan, 2017; Lingden et al. 2019; Balogun et al., 2020). CFS performs ranking of features based on information gain and identifies the optimal subset of features. The features in the subset are highly correlated with the class label 'defect' and are uncorrelated or less correlated with each other.
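For context, the subset merit that CFS maximizes can be written down compactly. The following sketch uses the standard formulation from Hall (1999) and is included for illustration only; it is not code from the revised manuscript:

```python
import math

def cfs_merit(k, r_cf, r_ff):
    """CFS merit of a subset of k features with mean feature-class
    correlation r_cf and mean feature-feature correlation r_ff;
    higher merit favours relevant, non-redundant subsets."""
    return k * r_cf / math.sqrt(k + k * (k - 1) * r_ff)
```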
Line 733-735
CFS is incorporated to minimize the multicollinearity effect, whereas the resampling methods reduce the model bias and ensure that predictions are not dominated by the majority class label.
Comment 2: I recommend the authors to replace table 6, 7, 8, 9,10, 11 with side-by-side boxplots for better readability
Reply to Comment 2:
As per the reviewer’s recommendations, the changes have been made in the revised manuscript.
Figure 2 and Figure 3 represent the boxplot diagrams for No sampling (NS) and resampling (RS) scenarios respectively. These figures contain boxplots for AUC, Balance, GMean, and sensitivity.
The rest of the tables are replaced by Figure 4 – Figure 7. These four figures are bar diagrams showing the comparison of median performance values of NS and RS scenarios.
Table 7 is simplified to include the percentage improvement in performance measures in terms of their maximum and median values.
These changes enhance understandability for readers and improve readability.
Comment 3: several prior studies (e.g., https://arxiv.org/abs/1609.01759) find that tuning the machine learning classifiers is extremely important for the defect prediction models to perform well. In this case, it could particularly be an important confounder that affects the results. I would recommend the authors to tune the models that they use in their study
Reply to Comment 3:
We agree with the reviewer that tuning machine learning classifiers is important for generating better SDP models. A good fit is achieved at the trade-off point between overfitting and underfitting. Table 4 has been added to the manuscript, listing the parameter values of the ML techniques used in the study. These parameter settings resulted in low error rates.
Section Material & Methods
Table 4. Parameter settings of the ML techniques (category, technique: settings):

Statistical ML techniques
• NB: useKernelEstimator = false, displayModelInOldFormat = false, useSupervisedDiscretization = false
• LR: Ridge = 1.0E-8, useConjugateGradientDescent = false, maxIts = -1
• SL: Heuristic Stop = 50, Max Boosting Iterations = 500, useCrossValidation = True, weightTrimBeta = 0.0
• LB: Zmax = 3.0, likelihoodThreshold = -1.7976931348623157E308, numIterations = 10, numThreads = 1, poolSize = 1, seed = 1, shrinkage = 1.0, useResampling = False, weightThreshold = 100

Neural Networks
• MLP: Hidden layer = a, Learning rate = 0.3, Momentum = 0.2, Training time = 500, Validation threshold = 20

Nearest Neighbour Methods
• IBk: KNN = 1, nearestNeighbourSearchAlgorithm = LinearNNSearch
• K*: globalBlend = 20, entropicAutoBlend = False

Ensemble Methods
• ABM1: numIterations = 10, weightThreshold = 100, seed = 1, classifier = J48: confidenceFactor = 0.25, minNumObj = 2, numFolds = 3, seed = 1, subtreeRaising = True, useMDLcorrection = True
• Bag: numIterations = 10, numExecutionSlots = 1, seed = 1, bagSizePercent = 100, classifier = J48: confidenceFactor = 0.25, minNumObj = 2, numFolds = 3, seed = 1, subtreeRaising = True, useMDLcorrection = True
• ICO: evaluationMetric = RMSE, lookAheadIterations = 50, numFolds = 10, numRuns = 1, numThreads = 1, poolsize = 1, seed = 1, stepSize = 1, iterativeClassifier = LogitBoost
• LMT: errorOnProbabilities = False, fastRegression = True, minNumInstances = 15, numBoostingIterations = -1, weightTrimBeta = 0.0
• RT: KValue = 0, breakTiesRandomly = False, maxDepth = 0, minNum = 1, minVarianceProp = 0.001, numFolds = 0, seed = 1
• RSS: numExecutionSlots = 1, numIterations = 10, seed = 1, subSpaceSize = 0.5, classifier = Reptree: initialCount = 0.0, maxDepth = -1, minNum = 2.0, minVarianceProp = 0.001, numFolds = 3, seed = 1

Decision Tree
• PART: confidenceFactor = 0.25, minNumObj = 2, numFolds = 3, reducedErrorPruning = False, seed = 1, useMDLcorrection = True
• J48: confidenceFactor = 0.25, minNumObj = 2, numFolds = 3, seed = 1, subtreeRaising = True, useMDLcorrection = True
Comment 4: I am not clear about how the statistical tests are used. For instance, the authors say they used a Friedman test followed by a Wilcoxon test to rank the performance of the classifiers. However, a friedman test to my knowledge doesn’t generate ranking and a Wilcoxon test is only a parwise comparison. So I am not sure how the authors arrive at the ranking. I would urge the authors to explain the statistical tests that they used to arrive at their results a little clearly. Similarly, to rank the results of Friedman test, a post-hoc nemenyi test is the typical suggestion, rather than the Wilcoxon. Hence I would suggest the authors to use that (e.g., you can follow this paper: https://arxiv.org/abs/1707.09281, https://joss.theoj.org/papers/10.21105/joss.02173). If Wilcoxon is used to compare multiple pairs, you need to use a p-value correction like a Bonferroni correction for the results to be reliable.
Reply to Comment 4:
The reviewer is correct in saying that "a Friedman test to my knowledge doesn't generate ranking", but the reviewer must agree that the Friedman test does compare the mean ranks between the related groups. It can tell whether there is an overall statistical difference between the mean ranks but cannot identify the particular groups that are statistically different from each other. We arrived at the ranking with the help of the mean ranks that are computed using the Friedman test. We assigned Rank 1 to the method with the highest mean rank, Rank 2 to the method with the second-highest mean rank, and so on.
As per the suggestion of the reviewer, we implemented the post-hoc Nemenyi test. The Wilcoxon signed-rank test results have been replaced by the Nemenyi test results. The explanation of the statistical tests used is included in the manuscript.
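For clarity, the ranking procedure can be reproduced with a short script along the following lines (the performance matrix here is a random placeholder, not our results; post-hoc Nemenyi comparisons are available in third-party libraries such as scikit-posthocs):

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# perf[i, j]: performance (e.g., AUC) of method j on dataset i
rng = np.random.default_rng(0)
perf = rng.random((12, 10))           # placeholder data

stat, p = friedmanchisquare(*perf.T)  # overall Friedman test

# Rank methods within each dataset (higher value = higher rank) and
# average; sorting the mean ranks gives Rank 1, Rank 2, and so on.
mean_ranks = np.apply_along_axis(rankdata, 1, perf).mean(axis=0)
```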
Refer to Table 9 and Table 11.
Refer to Line 350-365, Line 582-594, and Line 631-635 for the description.
Comment 5: Major concern: Some of the key related works are missing. For instance, a recent TSE (https://arxiv.org/abs/1801.10269) and an ICSE (https://arxiv.org/pdf/1705.03697.pdf) study through a comprehensive study finds that SMOTE is indeed better than other class rebalancing methods. These studies are not discussed. I would recommend the authors to consider these studies and position their findings in the context of these studies. More along my previous point, the aforementioned papers assert that tuning the SMOTE is important for the benefits of SMOTE to shine through. Tantithamthavorn et. al. (https://arxiv.org/abs/1801.10269) in particular find that tuned SMOTE helps defect prediction models perform better than unbalanced classifier. They also compare several class rebalancing methods (though not as comprehensive). So I would like the authors to position their findings in the context of this paper. Therefore, I would suggest the authors to include the datasets used by Tantithamthavorn et. al. also in their studies to see if their findings agree with Tantithamthavorn et. al. or if they could refute them.
Reply to Comment 5:
The studies suggested by the reviewer are included in the manuscript.
Tantithamthavorn et al. (https://arxiv.org/abs/1801.10269) evaluated the impact of resampling methods on the performance of SDP models. We agree with the reviewer that "They also compare several class rebalancing methods (though not as comprehensive)." They fine-tuned SMOTE, improved its performance, and proved that tuned SMOTE performs better than an unbalanced classifier. We would like to highlight their finding that the performance of tuned SMOTE and RUS is comparable for predicting defects. Figure 6 on page 10 of Tantithamthavorn et al. (https://arxiv.org/abs/1801.10269) shows that RUS performed better than SMOTE. After proposing the optimized SMOTE, they found its results comparable to RUS. They advocated in their findings that the predictive ability to classify defects is certainly improved by optimized SMOTE and RUS.
According to the reviewer, a recent TSE (https://arxiv.org/abs/1801.10269) and an ICSE (https://arxiv.org/pdf/1705.03697.pdf) study find, through a comprehensive study, that SMOTE is indeed better than other class rebalancing methods. We respectfully disagree, as the ICSE (https://arxiv.org/pdf/1705.03697.pdf) study does not compare any resampling methods and worked only on SMOTE, whereas the TSE (https://arxiv.org/abs/1801.10269) study, through its comprehensive study, found optimized SMOTE and RUS better than other rebalancing methods. They also emphasized that RUS should be used if the prime objective is to classify the defective classes correctly. Bennin et al. (2016) concluded that ROS outperformed the SMT approach when defective classes constitute less than 20% of the software. This study also empirically proved that ROS is statistically better than SMT, and hence supports the conclusions of Bennin et al. (2016). Bennin et al. (2018) empirically concluded that AUC performance is not improved by SMOTE.
Therefore, we can conclude that SMOTE is not always the best technique in the resampling field, and there is enough space to explore more resampling methods and perform more empirical studies to establish the supremacy of one method over another.
Our study is different from Tantithamthavorn et al. (https://arxiv.org/abs/1801.10269). They included a large number of datasets but provided a comparative study with only four resampling methods. In this study, we covered 10 resampling methods. Models are generated with 15 ML techniques, whereas Tantithamthavorn et al. (https://arxiv.org/abs/1801.10269) used only 7 ML techniques.
We thank the reviewer for providing us with the future direction of an extensive comparison of our study with Tantithamthavorn et al. (https://arxiv.org/abs/1801.10269). In addition, we are also motivated to design optimized versions of RUS and ROS, as Tantithamthavorn et al. (https://arxiv.org/abs/1801.10269) designed for SMOTE. We will certainly conceptualize this in our future work.
Refer to Line 668-688.
Comment 6: Because the statistical issues outlined earlier I am not sure if the results are reliable.
Reply to Comment 6:
We have incorporated details of the statistical tests and implemented the post-hoc Nemenyi test after the Friedman test, keeping the reviewer's suggestions in consideration. Our reply to Comment 4 also addressed this issue earlier. We hope this removes the reviewer's apprehensions; the statistical validation of the results adds to their reliability.
Refer to Table 9 and Table 11.
Refer to Line 350-365, Line 582-594, and Line 631-635 for the description.
Comments of Reviewer #2:
1. The paper compares an impressive array of class rebalancing methods and defection prediction classifiers.
2. The paper comprehensively tries to estimate the impact of the class rebalancing method by experimenting with multiple configurations.
Comment 1: In general, the manuscript is clear and well-written, however it lacks a Discussion section which would be useful to present a general overview of the findings and discuss the results on a higher level.
Reply to Comment 1:
The discussion section has been added as per the reviewer's suggestion after the Results. The results are discussed at a comprehensive level, providing a general overview of the findings.
Line 639-688
Discussions
Feature Selection is an important activity in software quality predictive modeling. The most widely accepted feature selection technique in the literature is CFS. Through this study, we provided the important subset of features to software practitioners. We identified that coupling and cohesion are the most important internal quality attributes that developers should focus on while building software to avoid defects. Defects in software were found to be highly correlated with LCOM, RFC, Ca, and Ce.
Median values of sensitivity have improved by 45.7%, 341.9%, 250%, 72.2%, 154.8%, 66.7%, 18.9%, 330%, 46.5%, 32.8%,293.3%, and 88.8% for Ant1.7, Camel1.6, Ivy2.0, Jedit4.0, Jedit4.2, Log4j1.0, Log4j1.1, Synapse1.0, Synapse1.1, Synapse1.2, Tomcat6.0, and Xerces1.3 respectively after applying resampling methods.
Median values of Balance have improved by 16.4%, 61.3%, 74.7%, 23.6%, 49.5%, 26%, 2.2%, 86.8%, 13.1%, 7.2%,77.7%, and 31.9% for Ant1.7, Camel1.6, Ivy2.0, Jedit4.0, Jedit4.2, Log4j1.0, Log4j1.1, Synapse1.0, Synapse1.1, Synapse1.2, Tomcat6.0, and Xerces1.3 respectively after applying resampling methods.
Median values of GMean have improved by 13.1%, 73.7%, 73.5%, 16.9%, 44.7%, 20.1%, 0.4%, 88.3%, 9%, 5.5%, 83.5%, and 24.2% for Ant1.7, Camel1.6, Ivy2.0, Jedit4.0, Jedit4.2, Log4j1.0, Log4j1.1, Synapse1.0, Synapse1.1, Synapse1.2, Tomcat6.0, and Xerces1.3 respectively after applying resampling methods.
Median values of ROC-AUC have improved by 1.9%, 3.8%, 14.3.7%, 1.9%, 1.3%, 8.7%, 0.6%, 24.3%, 0.7%, 3.3%, 4.7%, and 5.8% for Ant1.7, Camel1.6, Ivy2.0, Jedit4.0, Jedit4.2, Log4j1.0, Log4j1.1, Synapse1.0, Synapse1.1, Synapse1.2, Tomcat6.0, and Xerces1.3 respectively after applying resampling methods.
For the datasets considered, ROS and AHC were significantly better than other resampling methods. Better model predictions can be achieved by incorporating oversampling methods than the undersampling methods.
Resampling methods did not improve the defect predictive capability of statistical techniques. Developers and researchers should prefer ensemble methods for software quality predictive modeling.
Our findings are in agreement with the conclusions of Wang & Yao (2013). They proved that resampling methods improve defect prediction, and in their settings the AdaBoost ensemble gave the best performance. However, they were not able to conclude which resampling method should be selected by software practitioners. We have provided a detailed statistical analysis and experimentally proved ROS and AHC to be the better options for researchers and other practitioners.
Bennin et al. (2016) concluded that ROS outperformed the SMT approach when defective classes are less than 20% in the software. This study also empirically proved that ROS is statistically better than SMT, hence supports the conclusions of Bennin et al. (2016).
Tantithamthavorn et al. (2018) included a large number of datasets but provided a comparative study with only four resampling methods. They also optimized SMT and found its performance comparable with RUS. We created 1980 models with 12 datasets, as compared to their 4242 models with 101 datasets. In this study, we covered 10 resampling methods and provided breadth across resampling methods. We compared models with 15 ML techniques, whereas Tantithamthavorn et al. (2018) used only 7 ML techniques. The results of our study are different from those of Tantithamthavorn et al. (2018). The main reasons are that model evaluation is highly dependent on the nature of the data and the behavior of the ML techniques. This problem is amplified by adding the third dimension of resampling methods. Furthermore, Bennin et al. (2018) empirically concluded that AUC performance is not improved by SMOTE, which contradicts the results of Tantithamthavorn et al. (2018). Therefore, repetitive studies need to be performed in the future for a fair comparison.
Comment 2: The study can also benefit from a small description of the research questions under study to make it more clear to the reader.
Reply to Comment 2:
As pointed out by the reviewer, a small description of the research questions has been added in the Introduction section to make them clearer to the reader.
Line 81-119
RQ1: Which features are repeatedly selected by CFS in software engineering datasets?
RQ1 finds the most important internal metrics that impact the possibility of defect(s) in the software. It recognizes the software metrics that designers and developers should focus on while building the software. We found that defects in software engineering datasets are greatly impacted by LCOM, Ca, Ce, and RFC. LCOM is a cohesion metric, whereas Ca, Ce, and RFC are coupling metrics. In any software, high cohesion and low coupling are desired. Therefore, as LCOM, Ca, Ce, and RFC play a crucial role in SDP, their values should be monitored while designing the software. This would result in software with fewer defects.
RQ2: What is the performance of ML techniques on imbalanced data while building SDP models?
RQ2 summarizes the performance of 12 imbalanced datasets with 15 ML techniques in terms of sensitivity, GMean, Balance, and AUC.
RQ3a: What is the comparative performance of various SDP models developed using resampling methods?
RQ3b: Is there any improvement in the performance of SDP models developed using resampling methods?
RQ3 presents the performance of SDP models that are built on balanced datasets. The ratio of defective and non-defective classes is equalized with the help of different resampling methods, and their performance is compared with their original versions, i.e., when no resampling of classes is done. We empirically proved that all resampling methods improve the performance of SDP models as compared to models trained on the original data. The values of AUC, Balance, GMean, and Sensitivity increased for models trained on resampled data, and hence these models depict better defect prediction capabilities.
RQ4: Which resampling method outperforms the addressed undersampling and oversampling methods for building an efficient SDP model?
In RQ3 it is proved that resampling methods produce better SDP models, but out of the wide array of resampling methods, which one should the developer or researcher choose? RQ4 is formulated to find the resampling method(s) that outperform the others. The results of the Friedman test followed by the post-hoc Nemenyi test, obtained on AUC, Balance, GMean, and Sensitivity values, show that ML models based on ROS and AHC statistically outperform other models.
RQ5: Which ML technique performs the best for SDP in imbalanced data?
This study uses 15 ML techniques from different categories. RQ5 explores whether there is a statistical difference in the performances of different ML techniques. We restricted the comparison to the models developed using ROS, as it was statistically better than the other resampling methods except for AHC. Performance values of ROS-based models were comparable to, but greater than, those of AHC-based models. We concluded that, statistically, ensemble methods and nearest neighbor methods performed better than neural networks and statistical techniques.
Comment 3: While Figure 1 might be comprehensive, it would be better to include a small description which might make it easier for the reader to understand the full experimental process of the study.
Reply to Comment 3:
The description of the experimental process of the study is included in the manuscript for better understanding. The section "Materials and Methods" has been revised to incorporate the steps of the experimental setup.
Kindly refer to Line 206-396.
Comment 4: The paper contains grammatical/spelling errors that can be fixed with a detailed proofread, some of them are listed below:
Line 25: “…softwares…” --> “…software…”
Line 37: “…with uncovering the probable…” --> “…with uncovering probable…”
Line 44: “…becomes a difficult task.” --> “…become difficult tasks.”
Line 52: “…trained on the similar…” --> “…trained on similar…”
Line 65: “…it is widely…” --> “…it is a widely…”
Line 66: “…and emerged…” --> “…and it has emerged…”
Line 114: “…for existing ones.” --> “…from existing ones.”
Line 303: “…by Wilcoxon signed-rank test…” --> “…by the Wilcoxon signed-rank test…”
Line 341: “…IQA is scribed…” --> “…IQA is described…”
Line 350: “…66.67%%...” --> “…66.67%...”
Reply to Comment 4: The English of the manuscript has been improved. Grammatical, linguistic, and spelling errors have been carefully checked and rectified.
Comment 5: The paper includes a good experimental design of a large empirical study by including a set of widely used ML techniques and datasets, a set of non-biased evaluation measures and statistical tests. However, statistical significance cannot be assessed alone without analysing the effect size (Arcuri and Briand, 2004 https://dl.acm.org/doi/10.1002/stvr.1486). The effect size is a quantitative measure of the magnitude of the experimental effect and it would give the reader a better understanding of the impact of the difference in results.
Reply to Comment 5:
We agree that statistical significance cannot be assessed alone without analyzing the effect size. Therefore, we have included Kendall's coefficient of concordance to measure the effect size of the resampling methods and ML techniques that are used to assess model performance based on different performance measures. It reports the strength of agreement amongst the techniques.
Line 576-581
We have used Kendall's coefficient of concordance to assess the effect size. It is a quantitative measure of the magnitude of the experimental effect. The Kendall row in Table 8 shows the value of Kendall's coefficient of concordance. Its value ranges from 0 to 1 and reflects the degree of agreement. The Kendall value is 0.656, 0.589, 0.587, and 0.538 for AUC, Balance, GMean, and sensitivity respectively. As the values are greater than 0.5 but less than 0.7, the effect is moderate.
Line 605-610
The Kendall value for AUC, Balance, GMean, and sensitivity for ML techniques with ROS-based models is 0.876, 0.835, 0.844, and 0.838 respectively. All four performance evaluators have a Kendall value greater than or equal to 0.835. This signifies that the rankings for different datasets are approximately 83.5% similar, which increases the reliability and credibility of the Friedman statistical results. The impact of the differences in results is high.
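A sketch of how Kendall's W can be computed from the same rank matrix (simple formulation without tie correction, shown for illustration only):

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(perf):
    """Kendall's coefficient of concordance for perf[i, j] with
    i = datasets (raters) and j = techniques (subjects)."""
    ranks = np.apply_along_axis(rankdata, 1, perf)
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))
```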
Comment 6: Some of the datasets used in this study describe systems that consist of multiple versions, for example Log4j1.0 and Log4j1.1. Is there any reason 10-fold cross validation was favoured and applied over the more realistic cross-version defect prediction scenario whereby the model is trained on an older version and tested on the most recent one?
Reply to Comment 6:
We have used 8 software systems, and different versions are included for only three of them. The conclusions of our study rely on within-project defect prediction models. Ten-fold cross-validation is a reliable method, used by most researchers, to remove validation bias. In the future, the cross-version defect prediction scenario can be investigated by taking 2-3 versions of all considered software systems and compared with this study.
Comment 7: The study includes a well-developed related work section but fails to position the work with respect to existing ones, for example, the work of Wang and Yao, 2013. This undermines the contribution and novelty of the study presented which should be better highlighted in the manuscript with respect to existing studies tackling the data imbalance problem and comparing over- and under-sampling techniques.
Reply to Comment 7:
We have revised the manuscript following the reviewer's comment. The comparison has been added in the "Discussions" section. It is included in the answer to Comment 1 in this rebuttal.
Kindly refer to Line 668-688.
" | Here is a paper. Please give your review comments after reading it. |
128 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>In reinforcement learning (RL), dealing with non-stationarity is a challenging issue.</ns0:p><ns0:p>However, some domains such as traffic optimization are inherently non-stationary. Causes for and effects of this are manifold. In particular, when dealing with traffic signal controls, addressing non-stationarity is key since traffic conditions change over time and as a function of traffic control decisions taken in other parts of a network. In this paper we analyze the effects that different sources of non-stationarity have in a network of traffic signals, in which each signal is modeled as a learning agent. More precisely, we study both the effects of changing the context in which an agent learns (e.g., a change in flow rates experienced by it), as well as the effects of reducing agent observability of the true environment state. Partial observability may cause distinct states (in which distinct actions are optimal) to be seen as the same by the traffic signal agents. This, in turn, may lead to sub-optimal performance. We show that the lack of suitable sensors to provide a representative observation of the real state seems to affect the performance more drastically than the changes to the underlying traffic patterns.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Controlling traffic signals is one way of dealing with the increasing volume of vehicles that use the existing urban network infrastructure. Reinforcement learning (RL) adds up to this effort by allowing decentralization (traffic signals-modeled as agents-can independently learn the best actions to take in each current state) as well as on-the-fly adaptation to traffic flow changes. It is noteworthy that this can be done in a model-free way (with no prior domain information) via RL techniques. RL is based on an agent computing a policy mapping states to actions without requiring an explicit environment model. This is important in traffic domains because such a model may be very complex, as it involves modeling traffic state transitions determined not only by the actions of multiple agents, but also by changes inherent to the environment-such as time-dependent changes to the flow of vehicles.</ns0:p><ns0:p>One of the major difficulties in applying reinforcement learning (RL) in traffic control problems is the fact that the environments may change in unpredictable ways. The agents may have to operate in different contexts-which we define here as the true underlying traffic patterns affecting an agent; importantly, the agents do not know the true context of their environment, e.g., since they do not have full observability of the traffic network. Examples of partially observable variables that result in different contexts include different traffic patterns during the hours of the day, traffic accidents, road maintenance, weather, and other hazards. We refer to changes in the environment's dynamics as non-stationarity.</ns0:p><ns0:p>In terms of contributions, we introduce a way to model different contexts that arise in urban traffic due to time-varying characteristics. We then analyze different sources of non-stationarity-when applying RL to traffic signal control-and quantify the impact that each one has on the learning process. More precisely, we study the impact in learning performance resulting from (1) explicit changes in traffic patterns introduced by different vehicle flow rates; and (2) reduced state observability resulting from imprecision or unavailability of readings from sensors at traffic intersections. The latter problem may cause distinct states (in which distinct actions are optimal) to be seen as the same by the traffic signal agents.</ns0:p><ns0:p>This not only leads to sub-optimal performance but may introduce drastic drops in performance when the environment's context changes. We evaluate the performance of deploying RL in a non-stationary multiagent scenario, where each traffic signal uses Q-learning-a model-free RL algorithm-to learn efficient control policies. The traffic environment is simulated using the open-source microscopic traffic simulator SUMO (Simulation of Urban MObility) <ns0:ref type='bibr' target='#b17'>(Lopez et al., 2018)</ns0:ref> and models the dynamics of a 4 × 4 grid traffic network with 16 traffic signal agents, where each agent has access only to local observations of its controlled intersection. We empirically demonstrate that the aforementioned causes of non-stationarity can negatively affect the performance of the learning agents.
We also demonstrate that the lack of suitable sensors to provide a representative observation of the true underlying traffic state seems to affect learning performance more drastically than changes to the underlying traffic patterns.</ns0:p><ns0:p>The rest of this paper is organized as follows. The next section briefly introduces relevant RL concepts.</ns0:p><ns0:p>Then, our model is introduced in Section 3, and the corresponding experiments in Section 4. Finally, we discuss related work in Section 5 and then present concluding remarks.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>BACKGROUND</ns0:head></ns0:div>
<ns0:div><ns0:head n='2.1'>Reinforcement Learning</ns0:head><ns0:p>In reinforcement learning <ns0:ref type='bibr' target='#b22'>(Sutton and Barto, 1998)</ns0:ref>, an agent learns how to behave by interacting with an environment, from which it receives a reward signal after each action. The agent uses this feedback to iteratively learn an optimal control policy π*, a function that specifies the most appropriate action to take in each state. We can model RL problems as Markov decision processes (MDPs). These are described by a set of states S, a set of actions A, a reward function R(s, a, s′) → ℝ and a probabilistic state transition function T(s, a, s′) → [0, 1]. An experience tuple ⟨s, a, s′, r⟩ denotes the fact that the agent was in state s, performed action a and ended up in s′ with reward r. Let t denote the t-th step of the policy π. In an infinite horizon MDP, the cumulative reward in the future under policy π is defined by the Q-function Q^π(s, a), as in Eq. 1, where γ ∈ [0, 1] is the discount factor for future rewards.</ns0:p><ns0:formula xml:id='formula_0'>Q^{\pi}(s, a) = \mathbb{E}\left[\sum_{\tau=0}^{\infty} \gamma^{\tau} r_{t+\tau} \mid s_t = s, a_t = a, \pi\right]<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>If the agent knows the optimal Q-values Q*(s, a) for all state-action pairs, then the optimal control policy π* can be easily obtained; since the agent's objective is to maximize the cumulative reward, the optimal control policy is:</ns0:p><ns0:formula xml:id='formula_1'>\pi^*(s) = \operatorname{argmax}_{a} Q^*(s, a) \quad \forall s \in S, a \in A<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>Reinforcement learning methods can be divided into two categories: model-free and model-based. Model-based methods assume that the transition function T and the reward function R are available, or instead try to learn them. Model-free methods, on the other hand, do not require that the agent have access to information about how the environment works.</ns0:p><ns0:p>The RL algorithm used in this paper is Q-learning (QL), a model-free off-policy algorithm that estimates the Q-values in the form of a Q-table. After an experience ⟨s, a, s′, r⟩, the corresponding Q(s, a) value is updated through Eq. 3, where α ∈ [0, 1] is the learning rate.</ns0:p><ns0:formula xml:id='formula_2'>Q(s, a) := Q(s, a) + \alpha \left(r + \gamma \max_{a'} Q(s', a') - Q(s, a)\right)<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>In order to balance exploitation and exploration when agents select actions, we use in this paper the ε-greedy mechanism. This way, agents randomly explore with probability ε and choose the action with the best expected reward so far with probability 1 − ε.</ns0:p></ns0:div>
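A minimal sketch of the tabular Q-learning update (Eq. 3) and the ε-greedy rule described above; the data structures and function names are illustrative, not the implementation evaluated in this paper:

```python
import random
from collections import defaultdict

Q = defaultdict(float)  # tabular Q-function, unseen entries default to 0

def q_update(s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    """One Q-learning update following Eq. 3."""
    best_next = max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

def epsilon_greedy(s, actions, epsilon=0.05):
    """Explore uniformly with probability epsilon, act greedily otherwise."""
    if random.random() < epsilon:
        return random.choice(list(actions))
    return max(actions, key=lambda a: Q[(s, a)])
```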
<ns0:div><ns0:head n='2.2'>Non-stationarity in RL</ns0:head><ns0:p>In RL, dealing with non-stationarity is a challenging issue <ns0:ref type='bibr' target='#b13'>(Hernandez-Leal et al., 2017)</ns0:ref>. Among the main causes of non-stationarity are changes in the state transition function T(s, a, s′) or in the reward function R(s, a, s′).</ns0:p><ns0:p>In an MDP, the probabilistic state transition function T is assumed not to change. However, this is not realistic in many real world problems. In non-stationary environments, the state transition function T and/or the reward function R can change at arbitrary time steps. In traffic domains, for instance, an action in a given state may have different results depending on the current context-i.e., on the way the network state changes in reaction to the actions of the agents. If agents do not explicitly deal with context changes, they may have to readapt their policies. Hence, they may undergo a constant process of forgetting and relearning control strategies. Though this readaptation is possible, it might cause the agent to operate in a sub-optimal manner for extended periods of time.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3'>Partial Observability</ns0:head><ns0:p>Traffic control problems might be modeled as Dec-POMDPs <ns0:ref type='bibr' target='#b5'>(Bernstein et al., 2000)</ns0:ref>-a particular type of decentralized multiagent MDP where agents have only partial observability of their true states. A Dec-POMDP introduces to an MDP a set of agents I, for each agent i ∈ I a set of actions A i , with A = i A i the set of joint actions, a set of observations Ω i , with Ω = i Ω i the set of joint observations, and observation probabilities O(o|s, a), the probability of agents seeing observations o, given the state is s and agents take actions a. As specific methods to solve Dec-POMDPs do not scale with the number of agents <ns0:ref type='bibr' target='#b4'>(Bernstein et al., 2002)</ns0:ref>, it is usual to tackle them using techniques conceived to deal with the fully-observable case. Though this allows for better scalability, it introduces non-stationarity as the agents cannot completely observe their environment nor the actions of other agents.</ns0:p><ns0:p>In traffic signal control, partial observability can appear due to lack of suitable sensors to provide a representative observation of the traffic intersection. Additionally, even when multiple sensors are available, partial observability may occur due to inaccurate (with low resolution) measures.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>METHODS</ns0:head><ns0:p>As mentioned earlier, the main goal of this paper is to investigate the different causes of non-stationarity that might affect performance in a scenario where traffic signal agents learn how to improve traffic flow under various forms of non-stationarity. To study this problem, we introduce a framework for modeling urban traffic under time-varying dynamics. In particular, we first introduce a baseline urban traffic model based on MDPs. This is done by formalizing-following similar existing works-the relevant elements of the MDP: its state space, action set, and reward function.</ns0:p><ns0:p>Then, we show how to extend this baseline model to allow for dynamic changes to its transition function so as to encode the existence of different contexts. Here, contexts correspond to different traffic patterns that may change over time according to causes that might not be directly observable by the agent.</ns0:p><ns0:p>We also discuss different design decisions regarding the possible ways in which the states of the traffic system are defined; many of these are aligned with the modeling choices typically done in the literature, as for instance <ns0:ref type='bibr' target='#b18'>(Mannion et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b11'>Genders and Razavi, 2018)</ns0:ref>. Discussing the different possible definitions of states is relevant since these are typically specified in a way that directly incorporates sensor information. Given the amount and quality of sensor information, however, different state definitions arise that-depending on sensor resolution and partial observability of the environment and/or of other agents-result in different amounts of non-stationarity.</ns0:p><ns0:p>Furthermore, in what follows we describe the multiagent training scheme used (in Section 3.4) by each traffic signal agent in order to optimize its policy under non-stationary settings. We also describe how traffic patterns-the contexts in which our agents may need to operate-are modeled mathematically (Section 3.5). We discuss the methodology that is used to analyze and quantify the effects of non-stationarity in the traffic problem in Section 4.</ns0:p><ns0:p>Finally, we emphasize here that the proposed methods and analyses that will be conducted in this paper-aimed at evaluating the impact of different sources of non-stationarity-are the main contributions of our work. Most existing works (e.g., those discussed in Section 5) do not address or directly investigate at length the implications of varying traffic flow rates as sources of non-stationarity in RL.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>State Formulation</ns0:head><ns0:p>In the problems or scenarios we deal with, the definition of state space strongly influences the agents' behavior and performance. Each traffic signal agent controls one intersection, and at each time step t it observes a vector s_t that partially represents the true state of the controlled intersection.</ns0:p><ns0:p>A state, in our problem, could be defined as a vector s ∈ ℝ^{2+2|P|}, as in Eq. 4, where P is the set of all green traffic phases 1 , ρ ∈ P denotes the current green phase, δ ∈ [0, maxGreenTime] is the elapsed time of the current phase, density_i ∈ [0, 1] is defined as the number of vehicles divided by the vehicle capacity of the incoming movements of phase i, and queue_i ∈ [0, 1] is defined as the number of queued vehicles (we consider as queued a vehicle with speed under 0.1 m/s) divided by the vehicle capacity of the incoming movements of phase i.</ns0:p><ns0:formula xml:id='formula_3'>s = [\rho, \delta, \mathit{density}_1, \mathit{queue}_1, \ldots, \mathit{density}_{|P|}, \mathit{queue}_{|P|}]<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>Note that this state definition might not be feasibly implementable in real-life settings due to cost issues arising from the fact that many physical sensors would have to be paid for and deployed. We introduce, for this reason, an alternative definition of state which has reduced scope of observation. More precisely, this alternative state definition removes density attributes from Eq. 4, resulting in the partially-observable state vector s ∈ ℝ^{2+|P|} in Eq. 5. The absence of these state attributes is analogous to the lack of availability of real-life traffic sensors capable of detecting approaching vehicles along the extension of a given street (i.e., the density of vehicles along that street). This implies that, without the density attributes, the observed state cannot inform the agent whether (or how fast) the links are being filled with new incoming vehicles, which may lead to a situation with large queue lengths in the next time steps.</ns0:p><ns0:formula xml:id='formula_4'>s = [\rho, \delta, \mathit{queue}_1, \ldots, \mathit{queue}_{|P|}]<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>Note also that the above definition results in continuous states. Q-learning, however, traditionally works with discrete state spaces. Therefore, states need to be discretized after being computed. Both density and queue attributes are discretized in ten levels/bins equally distributed. We point out that a low level of discretization is also a form of partial-observability, as it may cause distinct states to be perceived as the same state. Furthermore, in this paper we assume-as commonly done in the literature-that one simulation time step corresponds to five seconds of real-life traffic dynamics. This helps encode the fact that traffic signals typically do not change actions every second; this modeling decision implies that actions (in particular, changes to the current phase of a traffic light) are taken in intervals of five seconds.</ns0:p></ns0:div>
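The discretization of the observation vector can be sketched as follows (an illustrative simplification, not the paper's implementation; the elapsed-time attribute is passed through unchanged here):

```python
def discretize(value, bins=10):
    """Map a continuous attribute in [0, 1] (density or queue) to one
    of `bins` equally spaced levels; coarse binning is itself a form
    of partial observability, as distinct true states may collide."""
    return min(int(value * bins), bins - 1)

def observed_state(phase, elapsed, densities, queues):
    """Discrete observation following Eq. 4; dropping `densities`
    yields the reduced observation of Eq. 5."""
    feats = []
    for d, q in zip(densities, queues):
        feats += [discretize(d), discretize(q)]
    return (phase, elapsed, *feats)
```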
<ns0:div><ns0:head n='3.2'>Actions</ns0:head><ns0:p>In an MDP, at each time step t each agent chooses an action a t ∈ A. The number of actions, in our setting, is equal to the number of phases, where a phase assigns a green signal to a specific traffic direction; thus, |A| = |P|. In the case where the traffic network is a grid (typically encountered in the literature <ns0:ref type='bibr' target='#b10'>(El-Tantawy et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b18'>Mannion et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b8'>Chu et al., 2019</ns0:ref>)), we consider two actions: an agent can either keep green time in the current phase or allow green time to another phase; we call these actions keep and change, respectively. There are two restrictions in the action selection: an agent can take the action change only if δ ≥ 10s (minGreenT ime) and the action keep only if δ < 50s (maxGreenT ime). Additionally, change actions impose a yellow phase with a fixed duration of 2 seconds. These restrictions are in place to, e.g., model the fact that in real life, a traffic controller needs to commit to a decision for a minimum amount of time to allow stopped cars to accelerate and move to their intended destinations.</ns0:p></ns0:div>
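These restrictions amount to a small action-masking rule. The sketch below is our own rendering of it; the constant and function names are hypothetical, not taken from the paper's code:

```python
MIN_GREEN, MAX_GREEN = 10, 50   # seconds, as stated in the text
KEEP, CHANGE = 0, 1

def valid_actions(elapsed):
    # `keep` is forbidden once maxGreenTime is reached;
    # `change` is forbidden before minGreenTime has elapsed.
    actions = []
    if elapsed < MAX_GREEN:
        actions.append(KEEP)
    if elapsed >= MIN_GREEN:
        actions.append(CHANGE)
    return actions

assert valid_actions(5) == [KEEP]     # must keep the phase during the first 10 s
assert valid_actions(50) == [CHANGE]  # must change once 50 s of green are reached
```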
<ns0:div><ns0:head n='3.3'>Reward Function</ns0:head><ns0:p>The rewards assigned to traffic signal agents in our model are defined as the change in cumulative vehicle waiting time between successive actions. After the execution of an action a t , the agent receives a reward r t ∈ R as given by Eq. 6:</ns0:p><ns0:formula xml:id='formula_5'>r t = W t − W t+1 (6)</ns0:formula><ns0:p>where W t and W t+1 represent the cumulative waiting time at the intersection before and after executing the action a t , following Eq. 7:</ns0:p><ns0:formula xml:id='formula_6'>W t = v∈V t w v,t<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>1 A traffic phase assigns green, yellow or red light to each traffic movement. A green traffic phase is a phase which assigns green to at least one traffic movement.</ns0:p></ns0:div>
<ns0:div><ns0:p>where V t is the set of vehicles on roads arriving at an intersection at time step t, and w v,t is the total waiting time of vehicle v since it entered one of the roads arriving at the intersection until time step t. A vehicle is considered to be waiting if its speed is below 0.1 m/s. Note that, according to this definition, the larger the decrease in cumulative waiting time, the larger the reward. Consequently, by maximizing rewards, agents reduce the waiting time at the intersections, thereby improving the local traffic flow.</ns0:p></ns0:div>
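The reward of Eqs. 6 and 7 can be sketched in a few lines of Python; the Vehicle type and the function names below are illustrative placeholders of ours, not the paper's implementation:

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    speed: float               # current speed in m/s
    waiting_time: float = 0.0  # accumulated seconds with speed below 0.1 m/s

def accumulate_waiting(vehicles, dt=5.0):
    # Advance each vehicle's accumulated waiting time by one 5-second step.
    for v in vehicles:
        if v.speed < 0.1:
            v.waiting_time += dt

def cumulative_waiting_time(vehicles):
    # W_t of Eq. 7: total waiting time of vehicles arriving at the intersection.
    return sum(v.waiting_time for v in vehicles)

def reward(w_before, w_after):
    # Eq. 6: the reward is positive when the cumulative waiting time decreases.
    return w_before - w_after
```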
<ns0:div><ns0:head n='3.4'>Multiagent Independent Q-learning</ns0:head><ns0:p>We tackle the non-stationarity in our scenario by using Q-learning in a multiagent independent training scheme <ns0:ref type='bibr' target='#b23'>(Tan, 1993)</ns0:ref>, where each traffic signal is a QL agent with its own Q-table, local observations, actions and rewards. This approach allows each agent to learn an individual policy, applicable given the local observations that it makes; policies may vary between agents as each one updates its Q-table using only its own experience tuples. Besides allowing for different behaviors between agents, this approach also avoids the curse of dimensionality that a centralized training scheme would introduce. However, there is one main drawback of an independent training scheme: as agents are learning and adjusting their policies, changes to their policies cause the environment dynamics to change, thereby resulting in non-stationarity. This means that the original convergence properties for single-agent algorithms no longer hold, due to the fact that the best policy for an agent changes as other agents' policies change <ns0:ref type='bibr' target='#b6'>(Buşoniu et al., 2008)</ns0:ref>.</ns0:p></ns0:div>
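A minimal sketch of one such independent learner follows; under this scheme, each of the 16 intersections would hold its own instance and call act and learn only with its local observation and reward. The class name and default hyperparameter values are ours, chosen for illustration, not the ones used in the experiments:

```python
import random
from collections import defaultdict

class QLAgent:
    """One traffic signal: its own Q-table, local observations and rewards."""
    def __init__(self, actions=(0, 1), alpha=0.1, gamma=0.99, epsilon=0.05):
        self.q = defaultdict(float)   # Q(s, a), initialized to zero
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, s):
        if random.random() < self.epsilon:   # epsilon-greedy exploration
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(s, a)])

    def learn(self, s, a, r, s2):
        best_next = max(self.q[(s2, a2)] for a2 in self.actions)
        # Update rule of Eq. 3:
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])
```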
<ns0:div><ns0:head n='3.5'>Contexts</ns0:head><ns0:p>In order to model one of the causes of non-stationarity in the environment, we use the concept of traffic contexts, similarly to da Silva et al. <ns0:ref type='bibr' target='#b21'>(Silva et al., 2006)</ns0:ref>. We define contexts as traffic patterns composed of different vehicle flow distributions over the Origin-Destination (OD) pairs of the network. The origin node of an OD pair indicates where a vehicle is inserted in the simulation. The destination node is the node in which the vehicle ends its trip, and hence is removed from the simulation upon its arrival. A context, then, is defined by associating with each OD pair a number of vehicles that are inserted (per second) in its origin node. Non-stationarity then emerges since the current context changes during the simulation in the form of recurrent events on the traffic environment. Importantly, although each context corresponds to a stationary traffic pattern, the environment becomes non-stationary w.r.t. the agents because the underlying context changes unpredictably, and the agents cannot perceive an indicator of the current context.</ns0:p><ns0:p>Changing the context during a simulation causes the sensor measurements to vary differently over time. Events such as traffic accidents and rush hours, for example, cause the flow of vehicles to increase in a particular direction, thus making the queues on the lanes in that direction increase faster. In the usual case, where agents do not have access to all information about the environment state, this can directly affect the state transition function T and the reward function R of the MDP. Consequently, when the state transition probabilities and the rewards agents are observing change, the Q-values of the state-action pairs also change. Therefore, traffic signal agents will most likely need to undergo a readaptation phase to correctly update their policies, resulting in periods of catastrophic drops in performance.</ns0:p></ns0:div>
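As an illustration, a context and its switching schedule can be represented as plain mappings. The sketch below is ours: the OD identifiers follow the grid network described in Section 4.1 and the switch times mirror those shown in the figures, but the insertion rates are invented for illustration and are not the values used in the paper:

```python
# Each context maps OD pairs to insertion rates (vehicles per second).
CONTEXT_1 = {'A2F2': 0.05, 'A3F3': 0.05, 'B1B6': 0.10, 'C1C6': 0.10}  # N-S heavier
CONTEXT_2 = {'A2F2': 0.10, 'A3F3': 0.10, 'B1B6': 0.05, 'C1C6': 0.05}  # W-E heavier

# The active context switches at fixed simulation times; the agents never
# observe which context is active, which is what makes the environment
# non-stationary from their point of view.
SCHEDULE = [(0, CONTEXT_1), (20000, CONTEXT_2), (40000, CONTEXT_1)]

def active_context(t):
    current = SCHEDULE[0][1]
    for switch_time, context in SCHEDULE:
        if t >= switch_time:
            current = context
    return current
```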
<ns0:div><ns0:head n='4'>EXPERIMENTS AND RESULTS</ns0:head><ns0:p>Our main goal with the following experiments is to quantify the impact of different causes of non-stationarity on the learning process of an RL agent in traffic signal control. Explicit changes in context (e.g., vehicle flow rate changes in one or more directions) are one of these causes and are present in all of the following experiments. This section first describes details of the scenario being simulated as well as the traffic contexts, followed by a definition of the performance metrics used as well as the different experiments that were performed.</ns0:p><ns0:p>We first conduct an experiment where traffic signals use a fixed control policy-a common strategy in case the infrastructure lacks sensors and/or actuators. The results of this experiment are discussed in Section 4.3 and are used to emphasize the problem of lacking a policy that can adapt to different contexts; it also serves as a baseline for later comparisons. Afterwards, in Section 4.4 we explore the setting where agents employ a given policy in a context/traffic pattern that has not yet been observed during the training phase. In Section 4.5 we analyze (1) the impact of context changes when agents continue to explore and update their Q-tables throughout the simulation; and (2) the impact of having non-stationarity introduced both by context changes and by the use of the two different state definitions presented in Section 3.1. Then, in Section 4.6 we address the relation between non-stationarity and partial observations resulting from the use of imprecise sensors, simulated by poor discretization of the observation space. Lastly, in Section 4.7 we discuss the main findings and implications of the results observed.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Scenario</ns0:head><ns0:p>We used the open-source microscopic traffic simulator SUMO to model and simulate the traffic scenario and its dynamics, and SUMO-RL <ns0:ref type='bibr' target='#b0'>(Alegre, 2019)</ns0:ref> to instantiate the simulation as a reinforcement learning environment with all the components of an MDP. The traffic network is a 4 × 4 grid network with traffic signals present in all 16 intersections (Fig. 1). All links are 150 meters long, have two lanes, and are one-way. Vertical links follow N-S traffic directions and horizontal links follow W-E directions. There are 8 OD pairs: 4 in the W-E traffic direction (A2F2, A3F3, A4F4, and A5F5), and 4 in the N-S direction (B1B6, C1C6, D1D6, E1E6).</ns0:p></ns0:div>
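As a rough sketch, such an environment can be instantiated with the SUMO-RL package along the following lines. The constructor arguments shown follow SUMO-RL's public SumoEnvironment interface as we understand it and may differ between versions; the file names are placeholders, not the paper's actual files:

```python
from sumo_rl import SumoEnvironment

env = SumoEnvironment(
    net_file='grid4x4.net.xml',             # placeholder: 4x4 grid network
    route_file='grid4x4_context1.rou.xml',  # placeholder: OD flows of one context
    use_gui=False,
    num_seconds=80000,   # total simulated time
    delta_time=5,        # one agent decision every 5 simulated seconds
    min_green=10,
    max_green=50,
)
observations = env.reset()  # one local observation per traffic signal
```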
<ns0:div><ns0:head n='4.2'>Metrics</ns0:head><ns0:p>To measure the performance of traffic signal agents, we used as a metric the summation of the cumulative vehicle waiting time over all intersections, as in Eq. 7. Intuitively, this quantifies for how long vehicles are delayed by having to reduce their velocity below 0.1 m/s due to long waiting queues and to the inadequate use of red signal phases. This metric is also a good indication of the agents' performance, since it is strongly related to the rewards assigned to each agent, defined in Eq. 6. Therefore, as the agents improve their local policies to minimize the change in cumulative vehicle waiting time, it is expected that the global waiting time of the traffic environment also decreases.</ns0:p><ns0:p>At the time steps in which phase changes occur, natural oscillations in the queue sizes occur, since many vehicles are stopping and many are accelerating. Therefore, all plots shown here depict moving averages of the previously-discussed metric within a time window of 15 seconds. The plots related to Q-learning are averaged over 30 runs, where the shadowed area shows the standard deviation. Additionally, we omit the time steps at the beginning of the simulation (since the network is not yet fully populated with vehicles) as well as the last time steps (since vehicles are then no longer being inserted).</ns0:p></ns0:div>
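A short sketch of the smoothing step may help; the function below is our own illustration (the paper does not publish this code), and equating a 3-step window with the 15-second horizon follows from the 5-second decision interval stated in Section 3.1:

```python
import numpy as np

def moving_average(waiting_times, window=3):
    """Smooth a per-step series of total waiting times with a sliding window.

    With one decision every 5 simulated seconds, a window of 3 steps
    corresponds to the 15-second averaging window used for the plots."""
    kernel = np.ones(window) / window
    return np.convolve(waiting_times, kernel, mode='valid')
```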
<ns0:div><ns0:head n='4.3'>Traffic Signal Control under Fixed Policies</ns0:head><ns0:p>We first demonstrate the performance of a fixed policy designed by following the Highway Capacity Manual <ns0:ref type='bibr'>(National Research Council, 2000)</ns0:ref>, which is widely used for this task. The fixed policy assigns to each phase a green time of 35 seconds and a yellow time of 2 seconds. As mentioned, our goal in defining this policy is to construct a baseline used to quantify the impact of a context change on the performance of traffic signals in two situations: one where traffic signals follow a fixed policy and one where traffic signals adapt and learn a new policy using the QL algorithm. This section analyzes the former case. Figure 2 shows that the fixed policy, as expected, loses performance when the context is changed. When the traffic flow is set to Context 2 at time step 20000, a larger number of vehicles drive in the W-E direction, thus producing larger waiting queues. In order to obtain a good performance using fixed policies, it would be necessary to define a policy for each context and to know in advance the exact moment when context changes will occur. Moreover, there may be an arbitrarily large number of such contexts, and the agent, in general, has no way of knowing in advance how many exist. Prior knowledge of these quantities is not typically available since non-recurring events that may affect the environment dynamics, such as traffic accidents, cannot be predicted. Hence, traffic signal control by fixed policies is inadequate in scenarios where traffic flow dynamics may change (slowly or abruptly) over time.</ns0:p></ns0:div>
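This baseline is simple enough to express in a couple of lines. The sketch below is our own illustration of the 35-second rule, with hypothetical names; the 2-second yellow between phases is assumed to be inserted by the environment:

```python
KEEP, CHANGE = 0, 1

def fixed_policy(elapsed, green_time=35):
    # Fixed-time control: hold each phase for green_time seconds, then switch.
    return CHANGE if elapsed >= green_time else KEEP
```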
<ns0:div><ns0:head n='4.4'>Effects of Disabling Learning and Exploration</ns0:head><ns0:p>We now describe the case in which agents stop learning from their actions at some point in time and simply follow the policy learned before a given context change. The objective here is to simulate a situation where a traffic signal agent employs a previously-learned policy in a context/traffic pattern that has not yet been observed in its training phase. We achieve this by setting both α (learning rate) and ε (exploration rate) to 0 when there is a change in context. The results of this experiment are shown in Fig. 3. By observing Eq. 3, we see that the Q-values are no longer updated when α = 0. By setting ε = 0, we also ensure that the agents will not explore and that they will only choose the actions with the highest estimated Q-value given the dynamics of the last observed context. By analyzing performance in this setting, we can quantify the negative effect of agents that act solely by following the policy learned from the previous contexts. Note, however, that some actions (e.g., changing the phase when there is congestion in one of the directions) are still capable of improving performance, since they are reasonable decisions under both contexts. This explains why performance drops considerably when the context changes and why the waiting time keeps oscillating afterwards.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.5'>Effects of Reduced State Observability</ns0:head><ns0:p>In this experiment, we compare the effects of context changes under the two different state definitions presented in Section 3.1. The state definition in Eq. 4 represents a less realistic scenario in which expensive real-world traffic sensors are available at the intersections. In contrast, in the partial state definition in Eq. 5 each traffic signal has information only about how many vehicles are stopped at its corresponding intersection (queue), but cannot relate this information to the number of vehicles currently approaching its waiting queue, since moving vehicles are captured only by the density attributes.</ns0:p><ns0:p>Differently from the previous experiment, agents now continue to explore and update their Q-tables throughout the simulation. The ε parameter is set to a fixed value of 0.05; this way, the agents mostly exploit but still have a small chance of exploring other actions in order to adapt to changes in the environment. By not changing ε we ensure that performance variations are not caused by changes in the exploration strategy. The values of the QL parameters (α and γ) are kept as in the previous experiment.</ns0:p><ns0:p>The results of this experiment are shown in Fig. <ns0:ref type='figure' target='#fig_7'>4</ns0:ref>. By analyzing the initial steps in the simulation, we note that agents using the reduced state definition learn significantly faster than those with the state definition that incorporates both queue and density attributes. This is because there are fewer states to explore, and so it takes fewer steps for the policy to converge. However, given this limited observation capability, agents converge to a policy resulting in higher waiting times when compared to that resulting from agents with more extensive state observability. This shows that the density attributes are fundamental to better characterize the true state of a traffic intersection. Also note that around time step 10000, the performance of both state definitions (around 500 seconds of total waiting time) is better than that achieved under the fixed policy program (around 2200 seconds of total waiting time), depicted in Fig. 2. After each context change, however, agents under the reduced state definition suffer new performance drops, as their Q-values must be overwritten to track the changing past dynamics. In contrast, the dynamics of both contexts are well-captured under the original state definition, as the combination of the density and queue attributes provides enough information about the dynamics of traffic arrivals at the intersection. This observation emphasizes the importance of more extensive state observability to avoid the negative impacts of non-stationarity in RL agents.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.6'>Effects of Different Levels of State Discretization</ns0:head><ns0:p>Besides the unavailability of appropriate sensors (which results in an incomplete description of states), another possible cause of non-stationarity is poor precision and low resolution of observations. As an example, consider imprecision in the measurement of the number of vehicles waiting at an intersection; this may cause distinct states-in which distinct actions are optimal-to be perceived as the same state. This not only leads to sub-optimal performance, but also introduces drastic performance drops when the context changes. We simulate this effect by lowering the number of discretization levels of the attribute queue in cases where the density attribute is not available.</ns0:p><ns0:p>In Fig. 5 we can see that the lower the number of discretization bins, the more severe the performance drops after context changes: agents with imprecise observations require a larger number of actions to readapt, thereby dramatically increasing queues.</ns0:p></ns0:div>
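The effect of coarser discretization can be seen in a small sketch; this is our own illustration, with example readings chosen by us:

```python
def discretize(x, n_bins):
    # Map a normalized queue reading x in [0, 1] to one of n_bins levels.
    return min(int(x * n_bins), n_bins - 1)

# With 10 bins the two readings land in different states (1 vs. 2);
# with 3 bins both collapse into state 0 and become indistinguishable.
for n_bins in (10, 3):
    print(n_bins, discretize(0.12, n_bins), discretize(0.28, n_bins))
```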
<ns0:div><ns0:head n='4.7'>Discussion</ns0:head><ns0:p>Many RL algorithms have been proposed to tackle non-stationary problems <ns0:ref type='bibr' target='#b7'>(Choi et al., 2000;</ns0:ref><ns0:ref type='bibr' target='#b9'>Doya et al., 2002;</ns0:ref><ns0:ref type='bibr' target='#b21'>Silva et al., 2006)</ns0:ref>. Specifically, these works assume that the environment is non-stationary (without studying or analyzing the specific causes of non-stationarity) and then propose computational mechanisms to efficiently learn under that setting. In this paper, we deal with a complementary problem, which is to quantify the effects of different causes of non-stationarity on the learning performance. We also assume that non-stationarity exists, but we explicitly model many of the possible underlying reasons why its effects may take place. We study this complementary problem because it is our understanding that, by explicitly quantifying the different causes of non-stationarity and their effects, it may be possible to make better-informed decisions about which specific algorithm to use, or to decide, for instance, whether efforts should be better spent on designing a more complete set of features instead of on designing more sophisticated learning algorithms.</ns0:p><ns0:p>In this paper, we studied these possible causes specifically when they affect urban traffic environments. We observed that the non-stationarity introduced by the actions of other concurrently-learning agents in a competitive environment seemed to be a minor obstacle to acquiring effective traffic signal policies. However, a traffic signal agent that selfishly learns to reduce its own queue size may introduce a higher flow of vehicles arriving at neighboring intersections, thereby affecting the rewards of other agents and producing non-stationarity. We believe that in more complex scenarios this effect would be more clearly visible.</ns0:p><ns0:p>Furthermore, we found that traditional tabular Independent Q-learning presented good performance in our scenario when the impacts of non-stationarity are not taken into account. Therefore, in this particular simulation it was not necessary to use more sophisticated methods such as algorithms based on value-function approximation, for instance, deep neural networks. These methods could help in dealing with larger-scale simulations that could require dealing with higher-dimensional states. However, we emphasize that even though they could help with higher-dimensional states, they would also be affected by the presence of non-stationarity, just like standard tabular methods are. This happens because, just like standard tabular Q-learning, deep RL methods do not explicitly model the possible sources of non-stationarity, and therefore would suffer in terms of learning performance whenever changes in the state transition function occur.</ns0:p><ns0:p>The results of our experiments indicate that non-stationarity in the form of changes to vehicle flow rates significantly impacts both traffic signal controllers following fixed policies and policies learned with standard RL methods that do not model different contexts. However, this impact (which results in rapid changes in the total number of vehicles waiting at the intersections) varies with the level of observability available to the agents. While agents with the original state definition (queue and density attributes) only present performance drops the first time they operate in a new context, agents with reduced observation (only queue attributes) may have to relearn their Q-values after every context change. The original state definition, however, is not very realistic in the real world, as sensors capable of providing both attributes for large traffic roads are very expensive. Finally, in cases where agents observe only the queue attributes, we demonstrated that imprecise measurements (e.g., a low number of discretization bins) intensify the impact of context changes. Hence, in order to design a robust RL traffic signal controller, it is critical to take into account which are the most adequate sensors and how they contribute to providing a more extensive observation of the true environment state.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>RELATED WORK</ns0:head><ns0:p>Reinforcement learning has been previously used with success to provide solutions to traffic signal control. Surveys on the area <ns0:ref type='bibr' target='#b3'>(Bazzan, 2009;</ns0:ref><ns0:ref type='bibr' target='#b26'>Yau et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b25'>Wei et al., 2019)</ns0:ref> have discussed fundamental aspects of reinforcement learning for traffic signal control, such as state definitions, reward functions and algorithm classifications. Many works have addressed multiagent RL <ns0:ref type='bibr' target='#b1'>(Arguello Calvo and Dusparic, 2018;</ns0:ref><ns0:ref type='bibr' target='#b18'>Mannion et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b10'>El-Tantawy et al., 2013)</ns0:ref> and deep RL <ns0:ref type='bibr'>(van der Pol, 2016;</ns0:ref><ns0:ref type='bibr' target='#b15'>Liang et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b16'>Liu et al., 2017)</ns0:ref> methods in this context. In spite of non-stationarity being frequently mentioned as a complex challenge in traffic domains, we evidenced a lack of works quantifying its impact and relating it to its many causes and effects.</ns0:p><ns0:p>In Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref> we compare relevant related works that have addressed non-stationarity in the form of partial observability, change in vehicle flow distribution and/or multiagent scenarios. In <ns0:ref type='bibr' target='#b21'>(Silva et al., 2006)</ns0:ref>, da Silva et al. explored non-stationarity in traffic signal control under different traffic patterns. They proposed the RL-CD method to create partial models of the environment-each one responsible for dealing with one kind of context. However, they used a simple model of the states and actions available to each traffic signal agent: state was defined as the occupation of each incoming link and discretized into 3 bins; actions consisted of selecting one of three fixed and previously-designed signal plans. In <ns0:ref type='bibr' target='#b19'>(Oliveira et al., 2006)</ns0:ref>, Oliveira et al. extend the work in <ns0:ref type='bibr' target='#b21'>(Silva et al., 2006)</ns0:ref> to address the non-stationarity caused by the random behavior of drivers in what regards the operational task of driving (e.g., deceleration probability), but the aforementioned simple model of the states and actions was not altered. In <ns0:ref type='bibr' target='#b2'>(Balaji et al., 2010)</ns0:ref>, <ns0:ref type='bibr'>Balaji et al.</ns0:ref> analyze the performance of tabular Q-learning in a large multiagent scenario. Their state space, however, was coarsely discretized, comprising only 9 possible states. In <ns0:ref type='bibr' target='#b16'>(Liu et al., 2017)</ns0:ref>, Liu et al. proposed a variant of independent deep Q-learning to coordinate four traffic signals. However, no information about vehicle distribution or insertion rates was mentioned or analyzed. A comparison between different state representations using the A3C algorithm was made in <ns0:ref type='bibr' target='#b11'>(Genders and Razavi, 2018)</ns0:ref>; however, that paper did not study the capability of agents to adapt to different traffic flow distributions.</ns0:p></ns0:div>
In <ns0:ref type='bibr' target='#b27'>(Zhang et al., 2018)</ns0:ref>, state observability was analyzed in a vehicle-to-infrastructure (V2I) scenario, where the traffic signal agent detects approaching vehicles with Dedicated Short Range Communications (DSRC) technology under different rates. In <ns0:ref type='bibr'>(Horsuwan and Aswakul, 2019)</ns0:ref>, a scenario with a partially observable state (only occupancy sensors available) was studied; however, no comparisons with different state definitions or sensors were made. In <ns0:ref type='bibr' target='#b8'>(Chu et al., 2019)</ns0:ref>, Chu et al. introduced Multiagent A2C in scenarios where different vehicle flows distributed in the network changed their insertion rates independently. On the other hand, they only used a state definition which gives sufficient information about the traffic intersection. Finally, in (Padakandla et al., 2019), Padakandla et al. introduce Context-QL, a method similar to RL-CD that uses a change-point detection metric to capture context changes. They also explored non-stationarity caused by different traffic flows, but they did not consider the impact of the state definition used (with low discretization and only one sensor) on their results. To the best of our knowledge, this is the first work to analyze how different levels of partial observability affect traffic signal agents under non-stationary environments where traffic flows change not only in vehicle insertion rate, but also in vehicle insertion distribution between phases.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>CONCLUSION</ns0:head><ns0:p>Non-stationarity is an important challenge when applying RL to real-world problems in general, and to traffic signal control in particular. In this paper, we studied and quantified the impact of different causes of non-stationarity in a learning agent's performance. Specifically, we studied the problem of non-stationarity in multiagent traffic signal control, where non-stationarity resulted from explicit changes in traffic patterns and from reduced state observability. This type of analysis complements those made in existing works related to non-stationarity in RL; these typically propose computational mechanisms to learn under changing environments, but usually do not systematically study the specific causes and impacts that the different sources of non-stationarity may have on learning performance.</ns0:p><ns0:p>We have shown that independent Q-Learning agents can re-adapt their policies to traffic pattern context changes. Furthermore, we have shown that the agents' state definition and their scope of observations strongly influence the agents' re-adaptation capabilities. While agents with more extensive state observability do not undergo performance drops when dynamics change to previously-experienced contexts, agents operating under a partially observable version of the state often have to relearn policies. Hence, we have evidenced how a better understanding of the reasons and effects of non-stationarity may aid in the development of RL agents. In particular, our results empirically suggest that effort in designing better sensors and state features may have a greater impact on learning performance than efforts in designing more sophisticated learning algorithms.</ns0:p><ns0:p>For future work, traffic scenarios that include other causes for non-stationarity can be explored. For instance, unexpected events such as traffic accidents may cause drastic changes to the dynamics of an intersection, as they introduce local queues. In addition, we propose studying how well our findings generalize to settings involving arterial roads (which have greater volume of vehicles) and intersections with different numbers of traffic phases.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. 4 × 4 grid network. (A) Network topology. (B) Network in SUMO.</ns0:figDesc><ns0:graphic coords='7,172.75,205.86,351.53,181.62' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Total waiting time of vehicles in the simulation: fixed policy traffic signals, context change at time step 20000.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Total waiting time of vehicles. Q-learning traffic signals. Context change, α, and ε set to 0 at timestep 20000.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Total waiting time of vehicles. Q-learning traffic signals under the full (Eq. 4) and reduced (Eq. 5) state definitions.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Total waiting time of vehicles. Q-learning traffic signals with different levels of discretization for the attribute queue. Context changes at time steps 20000, 40000 and 60000.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Related Work</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Study</ns0:cell><ns0:cell>Scenario</ns0:cell><ns0:cell>Method</ns0:cell><ns0:cell>State Observability</ns0:cell><ns0:cell>Flow non-stationarity</ns0:cell></ns0:row><ns0:row><ns0:cell>(Silva et al., 2006) and (Oliveira et al., 2006)</ns0:cell><ns0:cell>3x3 grid network</ns0:cell><ns0:cell>RL-CD (model-based)</ns0:cell><ns0:cell>Occupation discretized in 3 bins (no comparison made)</ns0:cell><ns0:cell>Two unbalanced flows</ns0:cell></ns0:row><ns0:row><ns0:cell>(Balaji et al., 2010)</ns0:cell><ns0:cell>Central Business District area in Singapore</ns0:cell><ns0:cell>Q-learning (model-free)</ns0:cell><ns0:cell>Queue and flow (9 possible states only)</ns0:cell><ns0:cell>Morning and afternoon peaks</ns0:cell></ns0:row><ns0:row><ns0:cell>(Liu et al., 2017)</ns0:cell><ns0:cell>2x2 grid network</ns0:cell><ns0:cell>CDRL (model-free)</ns0:cell><ns0:cell>Position, speed and neighbour intersection (no comparison made)</ns0:cell><ns0:cell>Not mentioned</ns0:cell></ns0:row><ns0:row><ns0:cell>(Genders and Razavi, 2018)</ns0:cell><ns0:cell>Isolated intersection</ns0:cell><ns0:cell>A3C (model-free)</ns0:cell><ns0:cell>Three different definitions (different resolutions compared)</ns0:cell><ns0:cell>Variable flow rate equally distributed between phases</ns0:cell></ns0:row><ns0:row><ns0:cell>(Zhang et al., 2018)</ns0:cell><ns0:cell>Multiple network topologies</ns0:cell><ns0:cell>DQN (model-free)</ns0:cell><ns0:cell>Different car detection rates compared</ns0:cell><ns0:cell>Variable flow rate equally distributed between phases</ns0:cell></ns0:row><ns0:row><ns0:cell>(Horsuwan and Aswakul, 2019)</ns0:cell><ns0:cell>Isolated intersection on Sathorn Road</ns0:cell><ns0:cell>Ape-X (model-free)</ns0:cell><ns0:cell>Mean occupancy (no comparison made)</ns0:cell><ns0:cell>Fixed flow</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "CS-2020:10:54754:0:0:REVIEW – Answers to
Reviewers
Lucas N. Alegre
February 20, 2021
italics: reviewers’ comments
bold: our answers and/or comments
Editor
We thank the editor for handling our paper entitled “Quantifying
the impact of non-stationarity in reinforcement learning-based traffic signal control”. We have edited the manuscript to address the
reviewers’ concerns and suggestions.
Reviewer 1
The manuscript is written and organised well. Apart from a few typos and
grammar issues, readability is very good. Please correct situations such as: ”We
them analyze...” −→ ”We then analyze...”?; ”the impact that each one has in
the learning process...” −→ ”on the learning process”?; ”policy π ∗ The rewards”
−→ a full stop and space are perhaps missing.
We thank the reviewer for the feedback and suggestions. We fixed
these minor issues in the new version of the manuscript.
In this perspective, some concepts might be better discussed, though. Whereas
traffic flow in urban settings is potentially nonstationary, it is also true that
some patterns (although stochastic) are easily observed, such as the daily 24hour flow profile, in which off-peak and peak hours are identified. Stochasticity
not necessarily implies nonstationarity, as some systems may well work under
dynamic stability. Certainly, such dynamic stability may well be disturbed by unexpected events, such as accidents, or even by priority vehicles like ambulances
demanding other vehicles to give the right of way. From what this reviewer
could understand, it seems that the ”nonstationary traffic flow” is rather yielded
by a random OD demand imposed by authors in their different simulation scenarios. On the other hand, limitations in agents’ observability capacity do not
seem to cause itself nonstationary regimes upon the environment as might do
agents’ actions. Rather than a cause, it seems to be related to how sometimes
such limitation might well influence rationality even in (dynamic) stationary
environments making them seem rather nonstationary to the sensor-deficient
agents, affecting this way the soundness of their decision making.
We agree that stochastic flow patterns do not necessarily imply
non-stationarity. As the reviewer correctly observed, the environment may be perceived as non-stationary when the agent cannot
fully observe all the necessary information, in order to differentiate
between different traffic situations. In this work, we focused on recurrent traffic events (modeled with different OD demands), and we
show that this source of non-stationarity affects the agents to different degrees, depending on their level of (partial) observability (sensors' availability and resolution). We believe that the
impact of other sources of non-stationarity, such as traffic accidents,
is also interesting, and we mentioned them as future work in the
conclusion.
Whether being able to observe (sense) ”densities”, perhaps this might also
be influenced by how often vehicles’ speed drops below 0.1 m/s on a link and by
the distance to be traversed. The best-case scenario is when a vehicle traverses a
link on free-flow speed and does not need to stop at the junction; performance degrades when flow approaches saturation, affecting densities. Isn’t this behaviour
of the link performance implicitly embedded in the queue signal perceived by the
agents?
Our results show that the density attributes are fundamental for
the agents to be able to deal with different traffic demands
without the need to readapt their decision-making policies every time
the context changes. The queue attributes only embed information
regarding the vehicles with speed below 0.1 m/s, and therefore cannot
inform the agent whether the link is full of vehicles (which may
have to stop in a queue in the future) or not. We have emphasized this
in the manuscript.
Also, authors have selected Total Waiting Time as the main metric to analyse. Perhaps considering total/average travel time for the whole network would
allow for evaluating the system’s performance as a whole (collectively). That
could be an interesting analysis too.
Thanks for this suggestion. We thought about using total travel
time computed over the demand (vehicles) but since the demand is
managed by SUMO in a way that is not straightforward to control, we
opted to use a metric that is computed by us and refers to the waiting times at the intersections. Also, we believe that the total waiting
time should be more informative of the traffic signal agents’ performance, as the reward function of our MDP formulation is defined
as the change in cumulative vehicle waiting time between successive
actions.
Generally, the work is quite relevant and makes a good appreciation of related efforts to emphasise the authors’ contributions. Some aspects, especially
conceptual ones, could improve benefiting from a better discussion and deeper
reflection.
Thanks for this remark. We have improved the discussion regarding the different sources of non-stationarity and the results, to justify why
we are using such metrics. Also, we have emphasized the question of
recurrent events.
Reviewer 2
Authors declare “This means that original convergence properties for singleagent algorithms no longer hold due to the fact that the best policy for an agent
changes as other agents’ policies change” (line 166). What is a performance
guarantee mechanism instead of convergence during the non-stationarity iterative process? And how to provide?
We thank the reviewer for this remark. Line 166, as mentioned,
indeed implies that the theoretical convergence guarantees are lost.
The current state of knowledge is such that some standalone RL algorithms are only guaranteed to converge as long as the environment
the agent is experiencing is Markovian and the agent is allowed to
try out sufficient actions. Convergence is guaranteed only in specific cases such as Nash-Q, Friend or Foe, zero-sum games (Tuyls and
Weiss, AI Magazine 2011). Hu and Wellman (ICML 2000) discuss issues related to non-convergence in the general case (e.g., general-sum
stochastic games). This said, we have modeled the non-stationary
traffic environment with a set of different contexts/stationary MDPs
(each represented with a different traffic demand). Thus, on each
context/stationary MDP, the performance should converge if given
sufficient time for the agent to adapt its decision policy.
I notice the experiment takes use of times steps as end condition. Obviously
it is not universal.
The reviewer is right that the contexts could span over different
numbers of time steps. However, we assumed, for simplicity, that
these time windows are the same. This can be changed in case the
user has the actual/real data, i.e., how the demand in fact changes.
This should not change the main findings we have obtained, provided,
as said, agents are given sufficient time to learn.
" | Here is a paper. Please give your review comments after reading it. |
129 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>In reinforcement learning (RL), dealing with non-stationarity is a challenging issue.</ns0:p><ns0:p>However, some domains such as traffic optimization are inherently non-stationary. Causes for and effects of this are manifold. In particular, when dealing with traffic signal controls, addressing non-stationarity is key since traffic conditions change over time and as a function of traffic control decisions taken in other parts of a network. In this paper we analyze the effects that different sources of non-stationarity have in a network of traffic signals, in which each signal is modeled as a learning agent. More precisely, we study both the effects of changing the context in which an agent learns (e.g., a change in flow rates experienced by it), as well as the effects of reducing agent observability of the true environment state. Partial observability may cause distinct states (in which distinct actions are optimal) to be seen as the same by the traffic signal agents. This, in turn, may lead to sub-optimal performance. We show that the lack of suitable sensors to provide a representative observation of the real state seems to affect the performance more drastically than the changes to the underlying traffic patterns.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Controlling traffic signals is one way of dealing with the increasing volume of vehicles that use the existing urban network infrastructure. Reinforcement learning (RL) adds up to this effort by allowing decentralization (traffic signals-modeled as agents-can independently learn the best actions to take in each current state) as well as on-the-fly adaptation to traffic flow changes. It is noteworthy that this can be done in a model-free way (with no prior domain information) via RL techniques. RL is based on an agent computing a policy mapping states to actions without requiring an explicit environment model. This is important in traffic domains because such a model may be very complex, as it involves modeling traffic state transitions determined not only by the actions of multiple agents, but also by changes inherent to the environment-such as time-dependent changes to the flow of vehicles.</ns0:p><ns0:p>One of the major difficulties in applying reinforcement learning (RL) in traffic control problems is the fact that the environments may change in unpredictable ways. The agents may have to operate in different contexts-which we define here as the true underlying traffic patterns affecting an agent; importantly, the agents do not know the true context of their environment, e.g., since they do not have full observability of the traffic network. Examples of partially observable variables that result in different contexts include different traffic patterns during the hours of the day, traffic accidents, road maintenance, weather, and other hazards. We refer to changes in the environment's dynamics as non-stationarity.</ns0:p><ns0:p>In terms of contributions, we introduce a way to model different contexts that arise in urban traffic due to time-varying characteristics. We then analyze different sources of non-stationarity-when applying RL to traffic signal control-and quantify the impact that each one has on the learning process. More precisely, we study the impact on learning performance resulting from (1) explicit changes in traffic patterns introduced by different vehicle flow rates; and (2) reduced state observability resulting from imprecision or unavailability of readings from sensors at traffic intersections. The latter problem may cause distinct states (in which distinct actions are optimal) to be seen as the same by the traffic signal agents. This not only leads to sub-optimal performance but may introduce drastic drops in performance when the environment's context changes. We evaluate the performance of deploying RL in a non-stationary multiagent scenario, where each traffic signal uses Q-learning-a model-free RL algorithm-to learn efficient control policies. The traffic environment is simulated using the open-source microscopic traffic simulator SUMO (Simulation of Urban MObility) <ns0:ref type='bibr' target='#b18'>(Lopez et al., 2018)</ns0:ref> and models the dynamics of a 4 × 4 grid traffic network with 16 traffic signal agents, where each agent has access only to local observations of its controlled intersection. We empirically demonstrate that the aforementioned causes of non-stationarity can negatively affect the performance of the learning agents.
We also demonstrate that the lack of suitable sensors to provide a representative observation of the true underlying traffic state seems to affect learning performance more drastically than changes to the underlying traffic patterns.</ns0:p><ns0:p>The rest of this paper is organized as follows. The next section briefly introduces relevant RL concepts.</ns0:p><ns0:p>Then, our model is introduced in Section 3, and the corresponding experiments in Section 4. Finally, we discuss related work in Section 5 and then present concluding remarks.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>BACKGROUND</ns0:head></ns0:div>
<ns0:div><ns0:head n='2.1'>Reinforcement Learning</ns0:head><ns0:p>In reinforcement learning <ns0:ref type='bibr' target='#b24'>(Sutton and Barto, 1998)</ns0:ref>, an agent learns how to behave by interacting with an environment, from which it receives a reward signal after each action. The agent uses this feedback to iteratively learn an optimal control policy π * -a function that specifies the most appropriate action to take in each state. We can model RL problems as Markov decision processes (MDPs). These are described by a set of states S , a set of actions A, a reward function R(s, a, s ′ ) → R and a probabilistic state transition function T (s, a, s ′ ) → [0, 1]. An experience tuple s, a, s ′ , r denotes the fact that the agent was in state s, performed action a and ended up in s ′ with reward r. Let t denote the t-th step under policy π. In an infinite horizon MDP, the cumulative reward in the future under policy π is defined by the action-value function (or Q-function) Q π (s, a), as in Eq. 1, where γ ∈ [0, 1] is the discount factor for future rewards.</ns0:p><ns0:formula xml:id='formula_0'>Q π (s, a) = E ∞ τ=0 γ τ r t+τ |s t = s, a t = a, π<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>If the agent knows the optimal Q-values Q * (s, a) for all state-action pairs, then the optimal control policy π * can be easily obtained; since the agent's objective is to maximize the cumulative reward, the optimal control policy is:</ns0:p><ns0:formula xml:id='formula_1'>π * (s) = argmax a Q * (s, a) ∀s ∈ S , a ∈ A (2)</ns0:formula><ns0:p>Reinforcement learning methods can be divided into two categories: model-free and model-based. Model-based methods assume that the transition function T and the reward function R are available, or instead try to learn them. Model-free methods, on the other hand, do not require that the agent have access to information about how the environment works. Instead, they learn an action-value function based only on samples obtained by interacting with the environment.</ns0:p><ns0:p>The RL algorithm used in this paper is Q-learning (QL), a model-free off-policy algorithm that estimates the Q-values in the form of a Q-table. After an experience s, a, s ′ , r , the corresponding Q(s, a) value is updated through Eq. 3, where α ∈ [0, 1] is the learning rate.</ns0:p><ns0:formula xml:id='formula_2'>Q(s, a) := Q(s, a) + α(r + γ max a Q(s ′ , a) − Q(s, a)) (3)</ns0:formula><ns0:p>Importantly, in the tabular case with online learning, which we tackle in our work, Q-learning is known to converge to optimal policies given mild assumptions about exploration whenever deployed on stationary MDPs <ns0:ref type='bibr' target='#b28'>(Watkins, 1989;</ns0:ref><ns0:ref type='bibr' target='#b26'>Tsitsiklis, 1994)</ns0:ref>.</ns0:p><ns0:p>In order to balance exploitation and exploration when agents select actions, we use in this paper the ε-greedy mechanism. This way, agents randomly explore with probability ε and choose the action with the best expected reward so far with probability 1 − ε.</ns0:p></ns0:div>
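As an illustration, the ε-greedy rule and the update of Eq. 3 can be written as two short functions over a dictionary-based Q-table; these are our own sketches, not code from any particular library:

```python
import random

def epsilon_greedy(q, s, actions, epsilon):
    # Explore with probability epsilon; otherwise act greedily w.r.t. Q.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q.get((s, a), 0.0))

def q_learning_update(q, s, a, r, s2, actions, alpha, gamma):
    # Tabular Q-learning update of Eq. 3.
    best_next = max(q.get((s2, a2), 0.0) for a2 in actions)
    old = q.get((s, a), 0.0)
    q[(s, a)] = old + alpha * (r + gamma * best_next - old)
```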
<ns0:div><ns0:head n='2.2'>Non-stationarity in RL</ns0:head><ns0:p>In RL, dealing with non-stationarity is a challenging issue <ns0:ref type='bibr' target='#b14'>(Hernandez-Leal et al., 2017)</ns0:ref>. Among the main causes of non-stationarity are changes in the state transition function T (s, a, s ′ ) or in the reward function R(s, a, s ′ ), partial observability of the true environment state (discussed in Section 2.3) and non-observability of the actions taken by other agents.</ns0:p><ns0:p>In an MDP, the probabilistic state transition function T is assumed not to change. However, this is not realistic in many real world problems. In non-stationary environments, the state transition function T and/or the reward function R can change at arbitrary time steps. In traffic domains, for instance, an action in a given state may have different results depending on the current context-i.e., on the way the network state changes in reaction to the actions of the agents. If agents do not explicitly deal with context changes, they may have to readapt their policies. Hence, they may undergo a constant process of forgetting and relearning control strategies. Though this readaptation is possible, it might cause the agent to operate in a sub-optimal manner for extended periods of time.</ns0:p><ns0:p>Importantly, no convergence guarantees exist in the non-stationary case, and so one needs to design ways to keep the agents from being heavily affected by changes to the environment's dynamics <ns0:ref type='bibr' target='#b21'>(Padakandla, 2020)</ns0:ref>. Motivated by this challenge, one of the goals of our work is to quantify the impact that different sources of non-stationarity have on the agents' learning process. Ideally, one should aim to shape the learning problem into one that is as stationary as possible, so that convergence guarantees may be given. Recent work in the RL literature has investigated methods for dealing with non-stationary environments by explicitly modeling a set of contexts and their associated local policies <ns0:ref type='bibr' target='#b1'>(Alegre et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b21'>Padakandla, 2020)</ns0:ref>. These methods are orthogonal to the idea studied in our paper: by augmenting state definitions we can reduce partial observability and thus minimize the effect of non-stationarity on the learning process and on convergence.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3'>Partial Observability</ns0:head><ns0:p>Traffic control problems might be modeled as Dec-POMDPs <ns0:ref type='bibr' target='#b6'>(Bernstein et al., 2000)</ns0:ref>-a particular type of decentralized multiagent MDP where agents have only partial observability of their true states. A Dec-POMDP introduces to an MDP a set of agents I, for each agent i ∈ I a set of actions A i , with A = i A i the set of joint actions, a set of observations Ω i , with Ω = i Ω i the set of joint observations, and observation probabilities O(o|s, a), the probability of agents seeing observations o, given the state is s and agents take actions a. As specific methods to solve Dec-POMDPs do not scale with the number of agents <ns0:ref type='bibr' target='#b5'>(Bernstein et al., 2002)</ns0:ref>, it is usual to tackle them using techniques conceived to deal with the fully-observable case. Though this allows for better scalability, it introduces non-stationarity as the agents cannot completely observe their environment nor the actions of other agents.</ns0:p><ns0:p>In traffic signal control, partial observability can appear due to the lack of suitable sensors to provide a representative observation of the traffic intersection. Additionally, even when multiple sensors are available, partial observability may occur due to inaccurate (low-resolution) measurements.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>METHODS</ns0:head><ns0:p>As mentioned earlier, the main goal of this paper is to investigate the different causes of non-stationarity that might affect performance in a scenario where traffic signal agents learn how to improve traffic flow under various forms of non-stationarity. To study this problem, we introduce a framework for modeling urban traffic under time-varying dynamics. In particular, we first introduce a baseline urban traffic model based on MDPs. This is done by formalizing-following similar existing works-the relevant elements of the MDP: its state space, action set, and reward function.</ns0:p><ns0:p>Then, we show how to extend this baseline model to allow for dynamic changes to its transition function so as to encode the existence of different contexts. Here, contexts correspond to different traffic patterns that may change over time according to causes that might not be directly observable by the agent.</ns0:p><ns0:p>We also discuss different design decisions regarding the possible ways in which the states of the traffic system are defined; many of these are aligned with the modeling choices typically made in the literature, as for instance <ns0:ref type='bibr' target='#b19'>(Mannion et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b13'>Genders and Razavi, 2018)</ns0:ref>. Discussing the different possible definitions of states is relevant since these are typically specified in a way that directly incorporates sensor information. Given the amount and quality of sensor information, however, different state definitions arise that-depending on sensor resolution and partial observability of the environment and/or of other agents-result in different amounts of non-stationarity.</ns0:p></ns0:div>
<ns0:div><ns0:p>Furthermore, in what follows we describe the multiagent training scheme used (in Section 3.4) by each traffic signal agent in order to optimize its policy under non-stationary settings. We also describe how traffic patterns-the contexts in which our agents may need to operate-are modeled mathematically (Section 3.5). We discuss the methodology that is used to analyze and quantify the effects of non-stationarity in the traffic problem in Section 4.</ns0:p><ns0:p>Finally, we emphasize here that the proposed methods and analyses that will be conducted in this paper-aimed at evaluating the impact of different sources of non-stationarity-are the main contributions of our work. Most existing works (e.g., those discussed in Section 5) do not address or directly investigate at length the implications of varying traffic flow rates as sources of non-stationarity in RL.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>State Formulation</ns0:head><ns0:p>In the scenarios we deal with, the definition of the state space strongly influences the agents' behavior and performance. Each traffic signal agent controls one intersection, and at each time step t it observes a vector s_t that partially represents the true state of the controlled intersection.</ns0:p><ns0:p>A state, in our problem, could be defined as a vector s ∈ R^(2+2|P|), as in Eq. 4, where P is the set of all green traffic phases 1 , ρ ∈ P denotes the current green phase, δ ∈ [0, maxGreenTime] is the elapsed time of the current phase, density_i ∈ [0, 1] is defined as the number of vehicles divided by the vehicle capacity of the incoming movements of phase i, and queue_i ∈ [0, 1] is defined as the number of queued vehicles (we consider as queued a vehicle with speed under 0.1 m/s) divided by the vehicle capacity of the incoming movements of phase i.</ns0:p><ns0:formula xml:id='formula_3'>s = [ρ, δ, density_1, queue_1, ..., density_|P|, queue_|P|]<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>Note that this state definition might not be feasibly implementable in real-life settings due to the cost of the many physical sensors that would have to be purchased and deployed. We therefore introduce an alternative definition of state with a reduced scope of observation. More precisely, this alternative state definition removes the density attributes from Eq. 4, resulting in the partially-observable state vector s ∈ R^(2+|P|) in Eq. 5. The absence of these state attributes is analogous to the lack of real-life traffic sensors capable of detecting approaching vehicles along the extension of a given street (i.e., the density of vehicles along that street). This implies that, without the density attributes, the observed state cannot inform the agent whether (or how fast) the links are being filled with new incoming vehicles, which may lead to large queue lengths in the next time steps.</ns0:p><ns0:formula xml:id='formula_4'>s = [ρ, δ, queue_1, ..., queue_|P|]<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>Note also that the above definition results in continuous states. Q-learning, however, traditionally works with discrete state spaces. Therefore, states need to be discretized after being computed. Both the density and queue attributes are discretized into ten equally spaced levels/bins. We point out that a low level of discretization is also a form of partial observability, as it may cause distinct states to be perceived as the same state. Furthermore, in this paper we assume, as commonly done in the literature, that one simulation time step corresponds to five seconds of real-life traffic dynamics. This helps encode the fact that traffic signals typically do not change actions every second; this modeling decision implies that actions (in particular, changes to the current phase of a traffic light) are taken at intervals of five seconds.</ns0:p></ns0:div>
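<ns0:p>As an illustration of the two state definitions and the ten-level discretization described above, the following Python sketch (hypothetical helper names; a sketch, not the paper's code) assembles the state vectors of Eq. 4 and Eq. 5.</ns0:p>

```python
import numpy as np

def build_state(phase_id, elapsed, densities, queues, use_density=True, n_bins=10):
    """Assemble the state vector of Eq. 4 (or Eq. 5 when use_density=False).

    densities, queues: arrays in [0, 1], one entry per green phase.
    Continuous attributes are discretized into n_bins equally spaced levels.
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)[1:-1]          # interior bin edges
    q = np.digitize(queues, edges)                           # discretized queues
    if use_density:
        d = np.digitize(densities, edges)                    # discretized densities
        features = np.column_stack([d, q]).ravel()           # density_1, queue_1, ...
    else:
        features = q                                         # queue-only variant (Eq. 5)
    return np.concatenate(([phase_id, elapsed], features))

# Example: 2 green phases, currently in phase 0 for 15 s.
s = build_state(0, 15, densities=np.array([0.42, 0.10]),
                queues=np.array([0.35, 0.05]))
```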
<ns0:div><ns0:head n='3.2'>Actions</ns0:head><ns0:p>In an MDP, at each time step t each agent chooses an action a_t ∈ A. The number of actions, in our setting, is equal to the number of phases, where a phase gives a green signal to a specific traffic direction; thus, |A| = |P|. In the case where the traffic network is a grid (typically encountered in the literature <ns0:ref type='bibr' target='#b12'>(El-Tantawy et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b19'>Mannion et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b10'>Chu et al., 2019</ns0:ref>)), we consider two actions: an agent can either keep green time on the current phase or give green time to another phase; we call these actions keep and change, respectively. There are two restrictions on action selection: an agent can take the action change only if δ ≥ 10s (minGreenTime) and the action keep only if δ < 50s (maxGreenTime). Additionally, change actions impose a yellow phase with a fixed duration of 2 seconds. These restrictions are in place to model, e.g., the fact that in real life a traffic controller needs to commit to a decision for a minimum amount of time to allow stopped cars to accelerate and move to their intended destinations.</ns0:p><ns0:p>1 A traffic phase assigns a green, yellow or red light to each traffic movement. A green traffic phase is a phase which assigns green to at least one traffic movement.</ns0:p></ns0:div>
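<ns0:p>A minimal sketch of this action rule, using the minGreenTime = 10 s, maxGreenTime = 50 s and 2 s yellow values stated above (the helper names are our own), could look as follows.</ns0:p>

```python
MIN_GREEN = 10   # seconds: `change` allowed only once elapsed >= MIN_GREEN
MAX_GREEN = 50   # seconds: `keep` allowed only while elapsed < MAX_GREEN
YELLOW = 2       # seconds of fixed yellow imposed by a `change`

KEEP, CHANGE = 0, 1

def valid_actions(elapsed):
    """Return the subset of {keep, change} permitted at the current elapsed green time."""
    actions = []
    if elapsed < MAX_GREEN:
        actions.append(KEEP)
    if elapsed >= MIN_GREEN:
        actions.append(CHANGE)
    return actions

def apply_action(phase, elapsed, action, n_phases):
    """Advance the signal state; a change cycles to the next phase through
    a YELLOW-second yellow interlude (handled by the simulator)."""
    if action == CHANGE:
        return (phase + 1) % n_phases, 0   # new phase, elapsed green resets
    return phase, elapsed
```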
<ns0:div><ns0:head n='3.3'>Reward Function</ns0:head><ns0:p>The rewards assigned to traffic signal agents in our model are defined as the change in cumulative vehicle waiting time between successive actions. After the execution of an action a_t, the agent receives a reward r_t ∈ R as given by Eq. 6:</ns0:p><ns0:formula xml:id='formula_5'>r_t = W_t − W_{t+1}<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>where W_t and W_{t+1} represent the cumulative waiting time at the intersection before and after executing the action a_t, following Eq. 7:</ns0:p><ns0:formula xml:id='formula_6'>W_t = Σ_{v ∈ V_t} w_{v,t}<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>where V_t is the set of vehicles on roads arriving at an intersection at time step t, and w_{v,t} is the total waiting time of vehicle v since it entered one of the roads arriving at the intersection until time step t. A vehicle is considered to be waiting if its speed is below 0.1 m/s. Note that, according to this definition, the larger the decrease in cumulative waiting time, the larger the reward. Consequently, by maximizing rewards, agents reduce the waiting time at the intersections, thereby improving the local traffic flow.</ns0:p></ns0:div>
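<ns0:p>Eqs. 6 and 7 translate directly into code. The sketch below (hypothetical function names) assumes that per-vehicle waiting times are accumulated elsewhere while vehicle speed is below 0.1 m/s.</ns0:p>

```python
def cumulative_waiting_time(wait_times):
    """Eq. 7: total waiting time W_t over vehicles on the incoming roads.

    wait_times: dict mapping vehicle id -> accumulated waiting time (s),
    counting a vehicle as waiting while its speed is below 0.1 m/s.
    """
    return sum(wait_times.values())

def reward(W_before, W_after):
    """Eq. 6: r_t = W_t - W_{t+1}; positive when the waiting time decreased."""
    return W_before - W_after
```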
<ns0:div><ns0:head n='3.4'>Multiagent Independent Q-learning</ns0:head><ns0:p>We tackle the non-stationarity in our scenario by using Q-learning in a multiagent independent training scheme <ns0:ref type='bibr' target='#b25'>(Tan, 1993)</ns0:ref>, where each traffic signal is a QL agent with its own Q-table, local observations, actions and rewards. This approach allows each agent to learn an individual policy, applicable given the local observations that it makes; policies may vary between agents as each one updates its Q-table using only its own experience tuples. Besides allowing for different behaviors between agents, this approach also avoids the curse of dimensionality that a centralized training scheme would introduce. However, an independent training scheme has one main drawback: as agents learn and adjust their policies, the changes to their policies cause the environment dynamics to change, thereby resulting in non-stationarity. This means that the original convergence properties of single-agent algorithms no longer hold, since the best policy for an agent changes as other agents' policies change <ns0:ref type='bibr' target='#b7'>(Buşoniu et al., 2008)</ns0:ref>.</ns0:p></ns0:div>
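<ns0:p>A minimal sketch of one such independent agent is shown below (an illustration of the scheme, not the paper's implementation); the α = 0.1 and γ = 0.99 values match those reported in Section 4.4.</ns0:p>

```python
import random
from collections import defaultdict

class IndependentQLAgent:
    """One tabular Q-learning agent per traffic signal, trained independently:
    own Q-table, own local observations, actions and rewards (Tan, 1993)."""

    def __init__(self, n_actions, alpha=0.1, gamma=0.99, epsilon=0.05):
        # States must be hashable, e.g., tuples of discretized attributes.
        self.Q = defaultdict(lambda: [0.0] * n_actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.n_actions = n_actions

    def act(self, s):
        if random.random() < self.epsilon:                 # explore
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.Q[s][a])

    def update(self, s, a, r, s_next):
        # Standard Q-learning update using only this agent's own experience tuple.
        td_target = r + self.gamma * max(self.Q[s_next])
        self.Q[s][a] += self.alpha * (td_target - self.Q[s][a])

# One agent per intersection; each learns only from its local experience.
agents = {ts: IndependentQLAgent(n_actions=2) for ts in ("ts_1", "ts_2")}
```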
<ns0:div><ns0:head n='3.5'>Contexts</ns0:head><ns0:p>In order to model one of the causes of non-stationarity in the environment, we use the concept of traffic contexts, similarly to da Silva et al. <ns0:ref type='bibr' target='#b23'>(Silva et al., 2006)</ns0:ref>. We define contexts as traffic patterns composed of different vehicle flow distributions over the Origin-Destination (OD) pairs of the network. The origin node of an OD pair indicates where a vehicle is inserted in the simulation. The destination node is the node at which the vehicle ends its trip, and hence is removed from the simulation upon its arrival. A context, then, is defined by associating with each OD pair a number of vehicles that are inserted (per second) at its origin node. Non-stationarity then emerges because the current context changes during the simulation in the form of recurrent events in the traffic environment. Importantly, although each context corresponds to a stationary traffic pattern, the environment becomes non-stationary w.r.t. the agents because the underlying context changes unpredictably, and the agents cannot perceive an indicator of the current context.</ns0:p><ns0:p>Changing the context during a simulation causes the sensor measurements to vary differently over time. Events such as traffic accidents and rush hours, for example, cause the flow of vehicles to increase in a particular direction, thus making the queues on the lanes in that direction grow faster. In the usual case, where agents do not have access to all information about the environment state, this can directly affect the state transition function T and the reward function R of the MDP. Consequently, when the state transition probabilities and the rewards agents observe change, the Q-values of the state-action pairs also change. Therefore, traffic signal agents will most likely need to undergo a readaptation phase to correctly update their policies, resulting in periods of catastrophic drops in performance.</ns0:p></ns0:div>
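<ns0:p>As an illustration, a context can be encoded as a table of insertion rates per OD pair, as in the sketch below. The OD pair labels follow the grid network of Section 4.1, but the rate values here are illustrative assumptions, not the ones used in the experiments.</ns0:p>

```python
# Hypothetical encoding of two traffic contexts as vehicle insertion rates
# (vehicles per second) per Origin-Destination pair.
CONTEXTS = {
    "context_1": {"A2F2": 0.10, "B1B6": 0.10},   # balanced W-E / N-S flows
    "context_2": {"A2F2": 0.20, "B1B6": 0.05},   # heavier W-E flow
}

def spawn_rate(context, od_pair):
    """Vehicles inserted per second at the origin node of `od_pair`."""
    return CONTEXTS[context].get(od_pair, 0.0)

# The underlying context changes during the simulation, but the agents never
# observe which context is active; this is what makes the environment
# non-stationary from their perspective.
active = "context_1"
```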
<ns0:div><ns0:head n='4'>EXPERIMENTS AND RESULTS</ns0:head><ns0:p>Our main goal with the following experiments is to quantify the impact of different causes of non-stationarity in the learning process of an RL agent in traffic signal control. Explicit changes in context (e.g., vehicle flow rate changes in one or more directions) are one of these causes and are present in all of the following experiments. This section first describes details of the scenario being simulated as well as the traffic contexts, followed by a definition of the performance metrics used and of the different experiments that were performed. We first conduct an experiment where traffic signals use a fixed control policy, a common strategy in case the infrastructure lacks sensors and/or actuators. The results of this experiment are discussed in Section 4.3 and are used to emphasize the problem of lacking a policy that can adapt to different contexts; it also serves as a baseline for later comparisons. Afterwards, in Section 4.4 we explore the setting where agents employ a given policy in a context/traffic pattern that has not yet been observed during the training phase. In Section 4.5 we analyze (1) the impact of context changes when agents continue to explore and update their Q-tables throughout the simulation; and (2) the impact of having non-stationarity introduced both by context changes and by the use of the two different state definitions presented in Section 3.1. Then, in Section 4.6 we address the relation between non-stationarity and partial observations resulting from the use of imprecise sensors, simulated by poor discretization of the observation space. Lastly, in Section 4.7 we discuss the main findings and implications of the observed results.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Scenario</ns0:head><ns0:p>We used the open-source microscopic traffic simulator SUMO to model and simulate the traffic scenario and its dynamics, and SUMO-RL <ns0:ref type='bibr' target='#b0'>(Alegre, 2019)</ns0:ref> to instantiate the simulation as a reinforcement learning environment with all the components of an MDP. The traffic network is a 4 × 4 grid network with traffic signals at all 16 intersections (Fig. 1). All links are 150 meters long, have two lanes and are one-way. Vertical links follow the N-S traffic direction and horizontal links follow the W-E direction. There are 8 OD pairs: 4 in the W-E traffic direction (A2F2, A3F3, A4F4, and A5F5), and 4 in the N-S direction (B1B6, C1C6, D1D6, E1E6).</ns0:p></ns0:div>
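<ns0:p>A sketch of how the environment could be instantiated and stepped is shown below. It assumes the SumoEnvironment interface of the SUMO-RL library; the file names are placeholders, and the exact parameter names should be checked against the library version used.</ns0:p>

```python
from sumo_rl import SumoEnvironment  # assumed SUMO-RL (Alegre, 2019) interface

env = SumoEnvironment(
    net_file='grid4x4.net.xml',      # placeholder network file
    route_file='grid4x4.rou.xml',    # placeholder route/flow definitions
    use_gui=False,
    num_seconds=80000,               # simulation horizon
    delta_time=5,                    # one decision every 5 s (Section 3.1)
    yellow_time=2, min_green=10, max_green=50,
)

obs = env.reset()                    # assumed dict: traffic signal id -> observation
done = {'__all__': False}
while not done['__all__']:
    # `agents` is the independent Q-learning dict from the Section 3.4 sketch;
    # observations are made hashable before indexing the Q-tables.
    actions = {ts: agents[ts].act(tuple(obs[ts])) for ts in obs}
    obs, rewards, done, _ = env.step(actions)
```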
<ns0:div><ns0:head n='4.2'>Metrics</ns0:head><ns0:p>To measure the performance of traffic signal agents, we used as metric the sum of the cumulative vehicle waiting times over all intersections, as in Eq. 7. Intuitively, this quantifies for how long vehicles are delayed by having to reduce their velocity below 0.1 m/s due to long waiting queues and to the inadequate use of red signal phases. This metric is also a good indicator of the agents' performance, since it is strongly related to the rewards assigned to each agent, defined in Eq. 6. Therefore, as the agents improve their local policies to minimize the change in cumulative vehicle waiting time, it is expected that the global waiting time of the traffic environment also decreases.</ns0:p><ns0:p>At the time steps in which phase changes occur, the queue sizes naturally oscillate, since many vehicles are stopping and many are accelerating. Therefore, all plots shown here depict moving averages of the previously-discussed metric within a time window of 15 seconds. The plots related to Q-learning are averaged over 30 runs, where the shadowed area shows the standard deviation. Additionally, we omit the time steps at the beginning of the simulation (since the network is not yet fully populated with vehicles) as well as the last time steps (since vehicles are then no longer being inserted).</ns0:p></ns0:div>
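<ns0:p>The smoothing and aggregation used in the plots can be reproduced with a few lines of NumPy; the sketch below assumes 5 s simulation steps, so a 15 s window corresponds to 3 samples (this correspondence is our inference, not stated in the text).</ns0:p>

```python
import numpy as np

def smooth(series, window=3):
    """Moving average of the waiting-time metric over a 15 s window
    (3 samples at 5 s per simulation step)."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode='valid')

def aggregate_runs(runs):
    """Mean and standard deviation across independent runs (30 in the paper)."""
    runs = np.asarray(runs)          # shape: (n_runs, n_steps)
    return runs.mean(axis=0), runs.std(axis=0)
```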
<ns0:div><ns0:head n='4.3'>Traffic Signal Control under Fixed Policies</ns0:head><ns0:p>We first demonstrate the performance of a fixed policy designed by following the Highway Capacity Manual <ns0:ref type='bibr'>(National Research Council, 2000)</ns0:ref>, which is widely used for this task. The fixed policy assigns to each phase a green time of 35 seconds and a yellow time of 2 seconds. As mentioned, our goal in defining this policy is to construct a baseline used to quantify the impact of a context change on the performance of traffic signals in two situations: one where traffic signals follow a fixed policy and one where traffic signals adapt and learn a new policy using the QL algorithm. This section analyzes the former case. Fig. 2 shows that the fixed policy, as expected, loses performance when the context is changed. When the traffic flow is set to Context 2 at time step 20000, a larger number of vehicles drive in the W-E direction, producing longer waiting queues. In order to obtain good performance using fixed policies, it would be necessary to define a policy for each context and to know in advance the exact moment when context changes will occur. Moreover, there may be an arbitrarily large number of such contexts, and the agent, in general, has no way of knowing in advance how many exist. Prior knowledge of these quantities is not typically available, since non-recurring events that may affect the environment dynamics, such as traffic accidents, cannot be predicted. Hence, traffic signal control by fixed policies is inadequate in scenarios where traffic flow dynamics may change (slowly or abruptly) over time.</ns0:p></ns0:div>
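<ns0:p>The fixed policy amounts to a clock. The sketch below (hypothetical helper) rotates phases on a 35 s green / 2 s yellow cycle regardless of traffic conditions, which is precisely why it cannot adapt to context changes.</ns0:p>

```python
GREEN = 35   # seconds of green per phase (Highway Capacity Manual baseline)
YELLOW = 2   # seconds of yellow between phases

def fixed_policy_phase(t, n_phases):
    """Return the phase index active at simulation time t (seconds).

    Phases rotate on a fixed cycle, independently of queues or densities."""
    cycle = n_phases * (GREEN + YELLOW)
    return (t % cycle) // (GREEN + YELLOW)
```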
<ns0:div><ns0:head n='4.4'>Effects of Disabling Learning and Exploration</ns0:head><ns0:p>We now describe the case in which agents stop learning from their actions at some point in time and simply follow the policy learned before a given context change. The objective here is to simulate a situation where a traffic signal agent applies a previously-learned policy to a context/traffic pattern that has not yet been observed in its training phase. We achieve this by setting both α (learning rate) and ε (exploration rate) to 0 when there is a change in context. From Eq. 3, we see that the Q-values no longer change if α = 0. By setting ε = 0, we also ensure that the agents will not explore and that they will only choose the actions with the highest estimated Q-value given the dynamics of the last observed context. By analyzing performance in this setting, we can quantify the negative effect on agents that act solely by following the policy learned from the previous contexts.</ns0:p><ns0:p>During the training phase (until time step 20000), we use a learning rate of α = 0.1 and discount factor γ = 0.99. The exploration rate starts at ε = 1 and decays by a factor of 0.9985 every time the agent chooses an action. These definitions ensure that the agents are mostly exploring at the beginning, while by time step 10000 ε is below 0.05. After the context change at time step 20000, with α and ε set to 0, the agents continue to purely exploit the previously-learned policy; i.e., they do not adapt to the context change. Note, however, that some actions (e.g., changing the phase when there is congestion in one of the directions) are still capable of improving performance, since they are reasonable decisions under both contexts. This explains why performance drops considerably when the context changes (Fig. 3) and why the waiting time keeps oscillating afterwards.</ns0:p></ns0:div>
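<ns0:p>The training schedule just described can be summarized in a few lines (a sketch with a hypothetical loop structure; the environment and agent interaction is omitted).</ns0:p>

```python
ALPHA, GAMMA = 0.1, 0.99
EPS_DECAY = 0.9985
CONTEXT_CHANGE_STEP = 20000

epsilon, alpha = 1.0, ALPHA
for step in range(40000):
    if step == CONTEXT_CHANGE_STEP:
        alpha, epsilon = 0.0, 0.0   # freeze learning and exploration
    # ... agent selects an epsilon-greedy action, environment steps,
    # and the Q-update uses the current `alpha` ...
    if epsilon > 0:
        epsilon *= EPS_DECAY        # decay after every action selection
```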
<ns0:div><ns0:head n='4.5'>Effects of Reduced State Observability</ns0:head><ns0:p>In this experiment, we compare the effects of context changes under the two different state definitions presented in Section 3.1. The state definition in Eq. 4 represents a less realistic scenario in which expensive real-traffic sensors are available at the intersections. In contrast, with the partial state definition in Eq. 5, each traffic signal has information only about how many vehicles are stopped at its corresponding intersection (queue), but cannot relate this information to the number of vehicles currently approaching its waiting queue, since moving vehicles are captured only by the density attributes.</ns0:p><ns0:p>Differently from the previous experiment, agents now continue to explore and update their Q-tables throughout the simulation. The ε parameter is set to a fixed value of 0.05; this way, the agents mostly exploit their learned policies while retaining a small amount of exploration.</ns0:p><ns0:p>The results of this experiment are shown in Fig. <ns0:ref type='figure' target='#fig_7'>4</ns0:ref>. By analyzing the initial steps of the simulation, we note that agents using the reduced state definition learn significantly faster than those with the state definition that incorporates both queue and density attributes. This is because there are fewer states to explore, and so it takes fewer steps for the policy to converge. However, given this limited observation capability, agents converge to a policy resulting in higher waiting times when compared to that resulting from agents with more extensive state observability. This shows that the density attributes are fundamental to better characterize the true state of a traffic intersection. Also note that around time step 10000, the performance under both state definitions (around 500 seconds of total waiting time) is better than that achieved under the fixed policy program (around 2200 seconds of total waiting time), depicted in Fig. 2.</ns0:p><ns0:p>In the first context change, at time step 20000, the total waiting time under both state definitions increases considerably. This is expected, as it is the first time agents have to operate in Context 2. Agents operating under the original state definition recovered from this context change rapidly and achieved the same performance obtained in Context 1. However, with the partial state definition (i.e., only queue attributes), it is more challenging for agents to behave properly when operating under Context 2, which depicts an unbalanced traffic flow arriving at the intersection. Finally, we can observe how (at time step 60000) the non-stationarity introduced by context changes relates to the limited partial state definition. While traffic signal agents observing both queue and density do not show any oscillations in the waiting time of their controlled intersections, agents observing only queue have a significant performance drop. Despite having already experienced Context 2, they had to relearn their policies, since the past Q-values were overwritten by the learning mechanism while adapting to past changes in the dynamics. The dynamics of both contexts are, however, well captured by the original state definition, as the combination of the density and queue attributes provides enough information about the dynamics of traffic arrivals at the intersection. This observation emphasizes the importance of more extensive state observability to avoid the negative impacts of non-stationarity on RL agents.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.6'>Effects of Different Levels of State Discretization</ns0:head><ns0:p>Besides the unavailability of appropriate sensors (which results in an incomplete description of states), another possible cause of non-stationarity is poor precision and low range of observations. As an example, consider imprecision in the measurement of the number of vehicles waiting at an intersection; this may cause distinct states, in which distinct actions are optimal, to be perceived as the same state. This not only leads to sub-optimal performance, but also introduces drastic performance drops when the context changes. We simulate this effect by lowering the number of discretization levels of the attribute queue in cases where the density attribute is not available.</ns0:p><ns0:p>The results are shown in Fig. 5. At the moments of context change (time steps 20000, 40000 and 60000) we can observe how the use of reduced discretization levels causes a significant drop in performance. At time step 40000, for instance, the total waiting time increases up to 3 times when operating under the lower discretization level.</ns0:p><ns0:p>Intuitively, an agent with an imprecise observation of its true state has a reduced capability to perceive changes in the transition function. Consequently, when traffic flow rates change at an intersection, agents with imprecise observations require a larger number of actions to readapt, thereby dramatically increasing queues.</ns0:p></ns0:div>
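<ns0:p>The aliasing effect of coarse discretization is easy to reproduce: in the sketch below (hypothetical helper), two clearly different queue occupancies map to distinct states with ten bins but collapse into the same perceived state with two bins.</ns0:p>

```python
import numpy as np

def discretize(queue, n_bins):
    """Map a queue occupancy in [0, 1] to one of n_bins levels."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)[1:-1]
    return int(np.digitize(queue, edges))

# With 10 bins these two readings are distinct states...
print(discretize(0.12, 10), discretize(0.48, 10))   # -> 1 4
# ...but with 2 bins they collapse into the same perceived state,
# even though different actions may be optimal in each.
print(discretize(0.12, 2), discretize(0.48, 2))     # -> 0 0
```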
<ns0:div><ns0:head n='4.7'>Discussion</ns0:head><ns0:p>Many RL algorithms have been proposed to tackle non-stationary problems <ns0:ref type='bibr' target='#b9'>(Choi et al., 2000;</ns0:ref><ns0:ref type='bibr' target='#b11'>Doya et al., 2002;</ns0:ref><ns0:ref type='bibr' target='#b23'>Silva et al., 2006)</ns0:ref>. Specifically, these works assume that the environment is non-stationary (without studying or analyzing the specific causes of non-stationarity) and then propose computational mechanisms to efficiently learn under that setting. In this paper, we deal with a complementary problem, which is to quantify the effects of different causes of non-stationarity on learning performance. We also assume that non-stationarity exists, but we explicitly model many of the possible underlying reasons why its effects may take place. We study this complementary problem because it is our understanding that by explicitly quantifying the different causes of non-stationarity effects, it may be possible to make better-informed decisions about which specific algorithm to use, or to decide, for instance, whether efforts would be better spent designing a more complete set of features instead of designing more sophisticated learning algorithms.</ns0:p><ns0:p>In this paper, we studied these possible causes specifically as they affect urban traffic environments. The results of our experiments indicate that non-stationarity in the form of changes to vehicle flow rates significantly impacts both traffic signal controllers following fixed policies and policies learned by standard RL methods that do not model different contexts. However, this impact (which results in rapid changes in the total number of vehicles waiting at the intersections) has different levels of severity for agents depending on the levels of observability available to those agents. While agents with the original state definition (queue and density attributes) only present performance drops the first time they operate in a new context, agents with reduced observation (only queue attributes) may always have to relearn the readapted Q-values. The original state definition, however, is not very realistic in the real world, as sensors capable of providing both attributes for large traffic roads are very expensive. Finally, in cases where agents observe only the queue attributes, we demonstrated that imprecise measurements (e.g., a low number of discretization bins) amplify the impact of context changes. Hence, in order to design a robust RL traffic signal controller, it is critical to take into account which sensors are most adequate and how they contribute to providing a more extensive observation of the true environment state.</ns0:p><ns0:p>We observed that the non-stationarity introduced by the actions of other concurrently-learning agents in a competitive environment seemed to be a minor obstacle to acquiring effective traffic signal policies. However, a traffic signal agent that selfishly learns to reduce its own queue size may introduce a higher flow of vehicles arriving at neighboring intersections, thereby affecting the rewards of other agents and producing non-stationarity. We believe that in more complex scenarios this effect would be more clearly visible.</ns0:p><ns0:p>Furthermore, we found that traditional tabular Independent Q-learning presented good performance in our scenario if we do not take into account the non-stationarity impacts. Therefore, in this particular simulation it was not necessary to use more sophisticated methods such as algorithms based on value-function approximation, for instance, deep neural networks. These methods could help in dealing with larger-scale simulations that might require handling higher-dimensional states. However, we emphasize that even though they could help with higher-dimensional states, they would also be affected by the presence of non-stationarity, just like standard tabular methods are.
This happens because, just like standard tabular Q-learning, deep RL methods do not explicitly model the possible sources of non-stationarity, and would therefore suffer in terms of learning performance whenever changes in the state transition function occur.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>RELATED WORK</ns0:head><ns0:p>Reinforcement learning has been previously used with success to provide solutions to traffic signal control. Surveys on the area <ns0:ref type='bibr' target='#b4'>(Bazzan, 2009;</ns0:ref><ns0:ref type='bibr' target='#b30'>Yau et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b29'>Wei et al., 2019)</ns0:ref> have discussed fundamental aspects of reinforcement learning for traffic signal control, such as state definitions, reward functions and algorithm classifications. Many works have addressed multiagent RL (Arguello <ns0:ref type='bibr' target='#b2'>Calvo and Dusparic, 2018;</ns0:ref><ns0:ref type='bibr' target='#b19'>Mannion et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b12'>El-Tantawy et al., 2013)</ns0:ref> and deep RL <ns0:ref type='bibr'>(van der Pol, 2016;</ns0:ref><ns0:ref type='bibr' target='#b16'>Liang et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b17'>Liu et al., 2017)</ns0:ref> methods in this context. Although non-stationarity is frequently mentioned as a complex challenge in traffic domains, we found a lack of works quantifying its impact and relating it to its many causes and effects.</ns0:p><ns0:p>In Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref> we compare relevant related works that have addressed non-stationarity in the form of partial observability, changes in vehicle flow distribution and/or multiagent scenarios. In <ns0:ref type='bibr' target='#b23'>(Silva et al., 2006)</ns0:ref>, da Silva et al. explored non-stationarity in traffic signal control under different traffic patterns. They proposed the RL-CD method to create partial models of the environment, each one responsible for dealing with one kind of context. However, they used a simple model of the states and actions available to each traffic signal agent: the state was defined as the occupation of each incoming link, discretized into 3 bins, and the actions consisted of selecting one of three fixed and previously-designed signal plans. In <ns0:ref type='bibr' target='#b20'>(Oliveira et al., 2006)</ns0:ref>, Oliveira et al. extended the work in <ns0:ref type='bibr' target='#b23'>(Silva et al., 2006)</ns0:ref> to address the non-stationarity caused by the random behavior of drivers with regard to the operational task of driving (e.g., deceleration probability), but the aforementioned simple model of the states and actions was not altered. In <ns0:ref type='bibr' target='#b3'>(Balaji et al., 2010)</ns0:ref>, Balaji et al. analyzed the performance of tabular Q-learning in a large multiagent scenario. Their state space, however, was heavily discretized, consisting of only 9 possible states. In <ns0:ref type='bibr' target='#b17'>(Liu et al., 2017)</ns0:ref>, Liu et al. proposed a variant of independent deep Q-learning to coordinate four traffic signals; however, no information about vehicle distribution or insertion rates was mentioned or analyzed. A comparison between different state representations using the A3C algorithm was made in <ns0:ref type='bibr' target='#b13'>(Genders and Razavi, 2018)</ns0:ref>; however, that paper did not study the capability of agents to adapt to different traffic flow distributions. In <ns0:ref type='bibr' target='#b31'>(Zhang et al., 2018)</ns0:ref>, state observability was analyzed in a vehicle-to-infrastructure (V2I) scenario, where the traffic signal agent detects approaching vehicles with Dedicated Short Range Communications (DSRC) technology under different rates. In (Horsuwan and Aswakul, 2019), a scenario with a partially observable state (only occupancy sensors available) was studied; however, no comparisons with different state definitions or sensors were made. In <ns0:ref type='bibr' target='#b10'>(Chu et al., 2019)</ns0:ref>, Chu et al. introduced Multiagent A2C in scenarios where different vehicle flows distributed over the network changed their insertion rates independently. On the other hand, they only used a state definition that gives sufficient information about the traffic intersection. Finally, in (Padakandla et al., 2019), Padakandla et al. introduced Context-QL, a method similar to RL-CD that uses a change-point detection metric to capture context changes. They also explored non-stationarity caused by different traffic flows, but did not consider the impact of the state definition used (with low discretization and only one sensor) on their results. To the best of our knowledge, this is the first work to analyze how different levels of partial observability affect traffic signal agents in non-stationary environments where traffic flows change not only in vehicle insertion rate, but also in vehicle insertion distribution between phases.</ns0:p></ns0:div>
In <ns0:ref type='bibr' target='#b10'>(Chu et al., 2019)</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>CONCLUSION</ns0:head><ns0:p>Non-stationarity is an important challenge when applying RL to real-world problems in general, and to traffic signal control in particular. In this paper, we studied and quantified the impact of different causes of non-stationarity on a learning agent's performance. Specifically, we studied the problem of non-stationarity in multiagent traffic signal control, where non-stationarity resulted from explicit changes in traffic patterns and from reduced state observability. This type of analysis complements those made in existing works related to non-stationarity in RL; these typically propose computational mechanisms to learn under changing environments, but usually do not systematically study the specific causes and impacts that the different sources of non-stationarity may have on learning performance.</ns0:p><ns0:p>We have shown that independent Q-learning agents can re-adapt their policies to traffic pattern context changes. Furthermore, we have shown that the agents' state definition and their scope of observations strongly influence the agents' re-adaptation capabilities. While agents with more extensive state observability do not undergo performance drops when the dynamics change to previously-experienced contexts, agents operating under a partially observable version of the state often have to relearn their policies. Hence, we have shown how a better understanding of the causes and effects of non-stationarity may aid in the development of RL agents. In particular, our results empirically suggest that effort in designing better sensors and state features may have a greater impact on learning performance than effort in designing more sophisticated learning algorithms.</ns0:p><ns0:p>For future work, traffic scenarios that include other causes of non-stationarity can be explored. For instance, unexpected events such as traffic accidents may cause drastic changes to the dynamics of an intersection, as they introduce local queues. In addition, we propose studying how well our findings generalize to settings involving arterial roads (which have a greater volume of vehicles) and intersections with different numbers of traffic phases.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. 4 × 4 grid network. (A) Network topology. (B) Network in SUMO.</ns0:figDesc><ns0:graphic coords='7,172.75,353.30,351.53,181.62' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Total waiting time of vehicles in the simulation: fixed policy traffic signals, context change at time step 20000.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 3.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Total waiting time of vehicles. Q-learning traffic signals. Context change, α, and ε set to 0 at time step 20000.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Total waiting time of vehicles. Q-learning agents with two state representations: queue and queue + density. Context changes at times 20000, 40000 and 60000.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Total waiting time of vehicles. Q-learning traffic signals with different levels of discretization for the attribute queue. Context changes at time steps 20000, 40000 and 60000.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Comparison of relevant related works that address non-stationarity in the form of partial observability, changes in vehicle flow distribution, and/or multiagent scenarios.</ns0:figDesc></ns0:figure>
</ns0:body>
" | "CS-2020:10:54754:0:0:REVIEW
Answers to Reviewers
Lucas N. Alegre
Ana L. C. Bazzan
Bruno C. da Silva
April 19, 2021
Notation
• italics: reviewers’ comments
• bold: our answers and/or comments
To the Editor and Reviewers:
We thank the editor and the reviewers for their careful and thoughtful analyses of our work. We have edited the manuscript to address
the remaining concerns, in particular those brought up by reviewer
2.
Reviewer 2:
Reviewer #2, comment #1: I am sorry for my previous unclear statement.
The RL is used to find an optimal action that matches the current states and the
stationarity or non-stationarity environment in a long run. The RL is trained by
episodes to pursue rewards (actions value), which is classified as an online mode
and an offline mode. For online the reward of each episode can be obtained from
the Q table (descrete system) or the critic network (continuous system) based on
observations after taking an action. But no one knows the effect of this action
in training process, which will lead to a fatal fault (traffic confusion). A feasible
way is offline training. But it needs a model of actions on the environment to get
the action value in all kinds of traffic cases. This model (missing in this paper)
is indispensable for adjusting the Q table or the weights of critic network.
Thank you for clarifying your concerns. We agree that they are
indeed relevant, and we have improved the discussion in our paper. In
particular, we have improved our paper with the following discussion
on convergence, in order to address your comments:
• If policy learning is performed offline, then either one needs to
deploy an off-policy algorithm (such as Q-Learning), or one needs
to assume access to a model of the environment, from which the
agent can sample experiences.
• Importantly, though, in this paper we focus on the former setting
and use a model-free RL algorithm based on temporal-difference
learning. RL techniques of this type are known to converge to
optimal policies under mild exploration assumptions, and without requiring access to a model of the environment.
• We also clarify that the specific algorithm we use (Q-Learning)
does not require a model to learn an action-value function. It
is, by contrast, capable of doing so based only on samples obtained by interacting with the environment. In the tabular case,
which we tackle in our work, Q-learning has convergence guarantees whenever deployed on stationary MDPs (see references
(Watkins, 1989; Tsitsiklis, 1994)).
Reviewer #2, comment #2: Both training processes will suffer from an early
fluctuation to the end stability (this is called to be convergent). For a stationarity, it will spend a very long time to get the optimal action. For a nonstationarity, maybe it will keep changing and cannot get optimal action. So I
ask for how to provide a performance guarantee mechanism. By the way the
agent will get the optimization (be convergent) if given sufficient time seems not
to meet the reality.
The reviewer is correct that in non-stationary environments, standard RL algorithms may oscillate and keep updating their policies
continuously, in an attempt to track the newest experienced context/dynamics
observed by the agent. This observation, in fact, is a principal motivation underlying our work: the desire to keep such oscillations
from happening by reducing the impact of possible sources of non-stationarity. In particular, we show and argue that one may design states that more completely describe the underlying dynamics
of the MDP, thereby reducing the effects of partial observability. This
turns the problem into one that is (at least approximately) less non-stationary. If, by contrast, one does not analyze and take into account
the sources of non-stationarity, and instead directly deploys vanilla
RL algorithms, then all of the reviewer’s observations are correct and
relevant. In fact, in extreme/adversarial cases, oscillations may occur
indefinitely.
The reviewer also correctly points out that convergence may not
be guaranteed in the more general non-stationary case. This is, in
fact, one of the main motivations underlying our work: no convergence guarantees exist (to the best of our knowledge) in the non-stationary case, and so one needs to design ways to keep the agent
from being heavily affected by changes to the environment’s dynamics. Motivated by this challenge, one of the goals of our work is to
quantify the impact that different sources of non-stationarity have
on the agents’ learning process. Ideally, one should aim to shape the
learning problem into one that is as stationary as possible, so that
convergence guarantees may be given. Our work analyzes different
ways in which this can be done—in particular, in terms of ways to
augment state definitions in order to reduce partial observability.
Regarding the reviewer’s comments on non-stationarity caused by
different traffic demands, these demands are known to induce a set
of stationary MDPs often called contexts. If an agent is given sufficient time interacting with the dynamics associated with one specific
context/stationary MDP, then its policy will converge to an optimal
policy for that corresponding context while it does not change. Recent work in the RL literature has investigated methods for dealing
with non-stationary environments by explicitly modeling a set of contexts and their associated local policies. These methods are orthogonal to the main idea studied in our paper; that is, the idea that by
augmenting state definitions we can reduce partial observability and
thus minimize the effect of non-stationarity on the learning process
and on convergence.
Finally, still regarding the reviewer’s comments on convergence
and oscillations, we emphasize that one of the main results of our
work is the observation that if sufficient information is given to agents
(which results in less partial observability), the impact of non-stationary
dynamics on learning performance may become negligible. See, for
example, Figure 4, where we show that the performance of agents
with access both to density and queue state features is no longer affected after context changes; e.g., learning performance is no longer
affected after the environment dynamics is actively modified at time
step 60000.
" | Here is a paper. Please give your review comments after reading it. |
130 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Character recognition is an important research field of interest for many applications. In recent years, deep learning has made breakthroughs in image classification, especially for character recognition. However, convolutional neural networks (CNN) still deliver state-ofthe-art results in this area. Motivated by the success of CNNs, this paper proposes a simple novel full depth stacked CNN architecture for Latin and Arabic handwritten alphanumeric characters that is also utilized for license plate (LP) characters recognition. The proposed architecture is constructed by four convolutional layers, two max-pooling layers, and one fully connected layer. This architecture is low-complex, fast, reliable and achieves very promising classification accuracy that may move the field forward in terms of low complexity, high accuracy and full feature extraction. The proposed approach is tested on four benchmarks for handwritten character datasets, Fashion-MNIST dataset, public LP character datasets and a newly introduced real LP isolated character dataset. The proposed approach tests report an error of only 0.28% for MNIST, 0.34% for MAHDB, 1.45% for AHCD, 3.81% for AIA9K, 5.00% for Fashion-MNIST, 0.26% for Saudi license plate character and 0.97% for Latin license plate characters datasets. The license plate characters include license plates from Turkey (TR), Europe (EU), USA, United Arab Emirates (UAE) and Kingdom of Saudi Arabia (KSA).</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Character recognition (CR) plays a key role in many applications and motivates R&D in the field for accurate and fast classification solutions. CR has been widely investigated in many languages using different proposed methods. In the last years, researchers widely used CNN as deep learning classifiers and achieved good results on handwritten Alphanumeric in many languages <ns0:ref type='bibr' target='#b40'>(Lecun et al., 1998;</ns0:ref><ns0:ref type='bibr' target='#b0'>Abdleazeem and El-Sherif, 2008;</ns0:ref><ns0:ref type='bibr'>El-Sawy et al., 2017)</ns0:ref>, character recognition in real-world images <ns0:ref type='bibr' target='#b47'>(Netzer et al., 2011)</ns0:ref>, document scanning, optical character recognition (OCR) and automatic license plate character recognition (ALPR) <ns0:ref type='bibr' target='#b13'>(Comelli et al., 1995)</ns0:ref>. Searching for text information in images is a time-consuming process that largely benefits of CR. Particularly, in Arabic language the connectivity of letters make a challenge for classification <ns0:ref type='bibr' target='#b21'>(Eltay et al., 2020)</ns0:ref>. Therefore, isolated character datasets get more interest in research.</ns0:p><ns0:p>MNIST is a handwritten digits dataset introduced by <ns0:ref type='bibr' target='#b40'>Lecun et al. (1998)</ns0:ref> and used to test supervised machine learning algorithms . The best accuracy obtained by stacked CNN architectures, until before two years, is a test error rate of 0.35% in <ns0:ref type='bibr' target='#b10'>(Cires ¸an et al., 2010)</ns0:ref>, where large deep CNN of nine layers with an elastic distortion applied to the input images. Narrowing the gap to human performance, a new architecture of five committees of seven deep CNNs with six width normalization and elastic distortion was trained and tested in <ns0:ref type='bibr' target='#b12'>(Ciresan et al., 2011)</ns0:ref> and reported an error rate of 0.27%, where the main CNN is seven stacked layers. In <ns0:ref type='bibr' target='#b11'>(Ciregan et al., 2012)</ns0:ref>, a near-human performance error rate of 0.23% was achieved, where several techniques were combined in a novel way to build a multi-column deep neural network (MCDNN) inspired by micro-columns of neurons in cerebral cortex compared to the number of layers found between retina and visual cortex of macaque monkeys.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53905:1:1:NEW 19 Jan 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Recently, <ns0:ref type='bibr' target='#b44'>Moradi et al. (2019)</ns0:ref> developed a new CNN architecture with orthogonal feature maps based on Residual modules of ResNet <ns0:ref type='bibr' target='#b26'>(He et al., 2016)</ns0:ref> and Inception modules of GoogleNet <ns0:ref type='bibr' target='#b57'>(Szegedy et al., 2015)</ns0:ref>, with 534474 learnable parameters which are equal to SqeezeNet <ns0:ref type='bibr' target='#b31'>(Iandola et al., 2016)</ns0:ref> learnable parameters, and thus the model reported an error of 0.28%. However, a CNN architecture for small size input images of 20×20 pixels was proposed in <ns0:ref type='bibr' target='#b39'>(Le and Nguyen, 2019)</ns0:ref>. In addition, a multimodal deep learning architecture was proposed in <ns0:ref type='bibr' target='#b34'>(Kowsari et al., 2018)</ns0:ref>, where deep neural networks (DNN), CNN and recurrent neural networks (RNN) were used in one architecture design achieving an error of 0.18%. A plain CNN with stochastic optimization method was proposed in <ns0:ref type='bibr' target='#b3'>(Assiri, 2019)</ns0:ref>, this method applied regular Dropout layers after each pooling and fully connected (FC) layers, this 15 stacked layers approach obtained an error of 0.17% by 13.21M parameters. <ns0:ref type='bibr' target='#b28'>Hirata and Takahashi (2020)</ns0:ref> proposed an architecture with one base CNN and multiple FC sub-networks, this 28 spars layers architecture with 28.67M parameters obtained an error of 0.16%. <ns0:ref type='bibr' target='#b8'>Byerly et al. (2020)</ns0:ref> presented a CNN design with additional branches after certain convolutions, and from each branch, they transformed each of the final filters into a pair of homogeneous vector capsules, this 21 spars layers obtained an error of 0.16%.</ns0:p><ns0:p>While MNIST was well studied in the literature, there were only a few works on Arabic handwritten character recognition <ns0:ref type='bibr' target='#b0'>(Abdleazeem and El-Sherif, 2008)</ns0:ref>. The large Arabic Handwritten Digits (AHDBase) has been introduced in (El- <ns0:ref type='bibr' target='#b20'>Sherif and Abdelazeem, 2007)</ns0:ref>. <ns0:ref type='bibr'>Abdleazeem and El-Sherif (2008) modified</ns0:ref> AHDBase to be MADBase and evaluated 54 different classifier/features combinations and reported a classification error of 0.52% utilizing radial basis function (RBF) and support vector machine (SVM). Also, they discussed the problem of Arabic zero, which is just a dot and smaller than other digits.They solved the problem by introducing a size-sensitive feature which is the ratio of the digit bounding box area to the average bounding box area of all digits in AHDBase's training set. In the same context, Mudhsh and Almodfer (2017) obtained a validation error of 0.34% on the MADBase dataset by using an Alphanumeric VGG network inspired by the VGGNet <ns0:ref type='bibr' target='#b53'>(Simonyan and Zisserman, 2015)</ns0:ref> with dropout regularization and data augmentation but the error performance does not hold on the test set. <ns0:ref type='bibr' target='#b58'>Torki et al. (2014)</ns0:ref> introduced AIA9K dataset and reported a classification error of 5.72% on the test set by using window-based descriptors with some common classifiers such as logistic regression, linear SVM , nonlinear SVM and artificial neural networks (ANN) classifiers. 
<ns0:ref type='bibr' target='#b63'>Younis (2017)</ns0:ref> tested a CNN architecture and obtained an error of 5.2%, he proposed a stacked CNN of three convolution layers followed by batch normalization, rectified linear units (ReLU) activation, dropout and two FC layers.</ns0:p><ns0:p>The AHCD dataset was introduced by El-Sawy et al. <ns0:ref type='bibr'>(2017)</ns0:ref>, they reported a classification error of 5.1% using a stacked CNN of two convolution layers, two pooling layers and two FC layers. <ns0:ref type='bibr' target='#b46'>Najadat et al. (2019)</ns0:ref> obtained a classification error of 2.8% by using a series CNN of four convolution layers activated by ReLU, two pooling layers and three FC layers. The state-of-the-art result for this dataset is a classification error of 1.58% obtained by <ns0:ref type='bibr' target='#b54'>Sousa (2018)</ns0:ref>, it was achieved by ensemble averaging of four CNNs, two inspired by VGG16 and two written from scratch, with batch normalization and dropout regularization, to form 12 layers architecture called VGG12.</ns0:p><ns0:p>For benchmarking machine learning algorithms on tiny grayscale images other than Alphanumeric characters, <ns0:ref type='bibr' target='#b59'>Xiao et al. (2017)</ns0:ref> introduced Fashion-MNIST dataset to serve as a direct replacement for the original MNIST dataset and reported a classification test error of 10.3% using SVM. This dataset gained the attention of many researchers to test their approaches and better error of 3.65% was achieved by <ns0:ref type='bibr' target='#b67'>Zhong et al. (2017)</ns0:ref> in which a random erasing augmentation was used with wide residual networks (WRN) <ns0:ref type='bibr' target='#b65'>(Zagoruyko and Komodakis, 2016)</ns0:ref>. The state-of-the-art performance for Fashion-MNIST is an error of 2.34% reported in <ns0:ref type='bibr' target='#b66'>(Zeng et al., 2018)</ns0:ref> using a deep collaborative weight-based classification method based on VGG16. Recently, a modelling and optimization based method was used <ns0:ref type='bibr' target='#b9'>(Chou et al., 2019)</ns0:ref> to optimize the parameters for a multi-layer (16 layer) CNN reporting an error of 8.32% and 0.57% for Fashion-MNIST and MNIST respectively.</ns0:p><ns0:p>ALPR is a group of techniques that use CR modules to recognize vehicle's LP number. Sometimes, it is also referred to as license plate detection and recognition (LPDR). ALPR is used in many real-life applications <ns0:ref type='bibr' target='#b17'>(Du et al., 2013)</ns0:ref> like electronic toll collection, traffic control, security, etc. The main challenges of detection and recognition of license plates are the variations in the plate types, environments, languages and fonts. Both CNN and traditional approaches are used to solve vehicle license plates recognition problems. Traditional approaches involve computer vision, image processing and pattern recognition algorithms for features such as color, edge and morphology <ns0:ref type='bibr' target='#b60'>(Xie et al., 2018)</ns0:ref>. A typical ALPR system consists of three modules, plate detection, character segmentation and CR modules (Shyang-Lih Manuscript to be reviewed Computer Science <ns0:ref type='bibr' target='#b52'>Chang et al., 2004)</ns0:ref>. This research focuses on CR techniques and compared them with the proposed CR approach. 
CR modules need an off-line training phase to train a classifier on each isolated character using a set of manually cropped character images <ns0:ref type='bibr' target='#b7'>(Bulan et al., 2017)</ns0:ref>. Excessive operational time, cost and efforts must be considered when manual cropping of character images are needed to be collected and labeled for training and testing, and to overcome this, artificially generated synthetic license plates were proposed <ns0:ref type='bibr' target='#b6'>(Bulan et al., 2015)</ns0:ref>.</ns0:p><ns0:p>Additionally, very little research was done on multi-language LP character recognition, the reason is mostly due to the lack of multi-language LP datasets. Some recent researches were interested in introducing a global ALPR system. <ns0:ref type='bibr' target='#b1'>Asif et al. (2017)</ns0:ref> studied only LP detection module using a histogrambased approach, and a private dataset was used, which comprised of LPs from Hungary, America, Serbia, Pakistan, Italy, China, and UAE <ns0:ref type='bibr' target='#b1'>(Asif et al., 2017)</ns0:ref>. VGG and LSTM were proposed for CR module in <ns0:ref type='bibr' target='#b16'>(Dorbe et al., 2018)</ns0:ref> and the measured CR module accuracy was 96.7% where the test was done on LPs from Russia, Poland, Latvia, Belarus, Estonia, Germany, Lithuania, Finland and Sweden. Also, tiny YOLOv3 was used as a unified CR module for LPs from Greece, USA, Croatia, Taiwan, and South Korea <ns0:ref type='bibr' target='#b27'>(Henry et al., 2020)</ns0:ref>. Furthermore, several proposed methods interested in multi-language LPCR testing CR modules on each LP country's dataset separately, without accumulating the characters into one dataset <ns0:ref type='bibr' target='#b41'>(Li et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b64'>Yépez et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b2'>Asif et al., 2019)</ns0:ref>. In addition, <ns0:ref type='bibr' target='#b51'>Selmi et al. (2020)</ns0:ref> proposed a mask R-CNN detector for character segmentation and recognition concerning Arabic and English LP characters from Tunisia and USA. <ns0:ref type='bibr' target='#b49'>Park et al. (2019)</ns0:ref> concerned USA and Korean LPs describing the problem as multi-style detection. CNN shrinkage-based architecture was studied in <ns0:ref type='bibr' target='#b50'>(Salemdeeb and Erturk, 2020)</ns0:ref>, utilizing the maximum number of convolutional layers that can be added. <ns0:ref type='bibr' target='#b50'>Salemdeeb and Erturk (2020)</ns0:ref> studied the LP detection and country classification problem for multinational and multi-language LPs from Turkey, Europe, USA, UAE and KSA, without studying CR problem. These researches studied LPs from 23 different countries where most of them use Latin characters to write the LP number, and totally five languages were concerned (English, Taiwanese, Korean, Chinese and Arabic). In Taiwan, Korea, China, UAE, Tunisia and KSA, the LP number is written using Latin characters, but the city information is coded using characters from that the country's language.</ns0:p><ns0:p>In this paper, Arabic and Latin isolated characters are targeted to be recognized using a proposed full depth CNN (FDCNN) architecture in which the regions of interest are USA, EU and Middle East. To verify the performance of the proposed FDCNN, some isolated handwritten Arabic and Latin characters benchmarks such as MNIST, MADbase, AHCD, AIA9K datasets are also tested. 
Also, a new dataset named LP Arabic and Latin isolated characters (LPALIC) is introduced and tested. In addition, the recent FashionMNIST dataset is also tested to generalize the full depth feature extraction approach performance on tiny grayscale images. The proposed FDCNN approach closes the gap between software and hardware implementation since it provides low complexity and high performance. All the trained models and the LPALIC dataset 1 are made publicly available online for the research community and future tests.</ns0:p><ns0:p>The rest of this paper is organized as follows: section 2 introduces the structure of the datasets used in this paper, including the new LPALIC dataset. In section 3, the proposed approach is described in detail.</ns0:p><ns0:p>Section 4 presents a series of experimental results and discussions. Finally, section 5 summarizes the main points of the entire work as a conclusion.</ns0:p></ns0:div>
<ns0:div><ns0:head>DATASETS Datasets Available in the Literature</ns0:head><ns0:p>MNIST is a low-complexity data collection of handwritten digits for testing supervised machine learning algorithms, introduced by <ns0:ref type='bibr' target='#b40'>Lecun et al. (1998)</ns0:ref>. It has grayscale images of size 28×28 pixels, with 60000 training digits and 10000 test digits written by different persons. The digits are white on a black background, normalized to 20×20 pixels preserving the aspect ratio, and then centered at the center of mass of the 28×28-pixel grayscale images. The official site for the dataset and results is available from LeCun 2 .</ns0:p><ns0:p>In MADbase, 700 native Arabic writers wrote ten digits ten times, and the images were collected as 70000 binary images: 60000 for training and 10000 for testing, so that the writers of the training set and test set are exclusive. This dataset 3 has the same format as MNIST to enable fair comparisons between 1 https://www.kaggle.com/dataset/b4697afbddab933081344d1bed3f7907f0b2b2522f637adf15a5fcea67af2145 2 http://yann.lecun.com/exdb/mnist/ 3 http://datacenter.aucegypt.edu/shazeem</ns0:p></ns0:div>
<ns0:div><ns0:p>digit (used in the Arabic and English languages) recognition approaches. Table <ns0:ref type='table'>1</ns0:ref> shows example digits of printed Latin, Arabic and handwritten Arabic characters used for numbers, as declared in ISO/IEC 8859-6:1999.</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref>. Printed and handwritten digits (characters 0 1 2 3 4 5 6 7 8; the digit glyphs are images in the original).</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_0'>2</ns0:ref> gives a brief review of some publicly available LP datasets related to the LPDR problem. The Zemris dataset is also called English LP in some references <ns0:ref type='bibr' target='#b48'>(Panahi and Gholampour, 2017)</ns0:ref>. Furthermore, some characters were collected from public LP datasets, LP websites and our own camera pictures taken in Turkey under different weather conditions, places, blurring, distances, tilts and illuminations. These characters are real, manually cropped LP characters without any filtering. For uniformity, a size of 28×28-pixel grayscale images was utilized.</ns0:p></ns0:div>
<ns0:div><ns0:head>Printed Latin</ns0:head><ns0:p>The manually cropped characters were fed into the following conversion pipeline, inspired from</ns0:p><ns0:p>FashionMNIST <ns0:ref type='bibr' target='#b59'>(Xiao et al., 2017)</ns0:ref>, which is similar to MNIST <ns0:ref type='bibr' target='#b40'>(Lecun et al., 1998)</ns0:ref>:</ns0:p><ns0:p>1. Resizing the longest edge of the image to 24 to preserve the aspect ratio.</ns0:p><ns0:p>2. Converting the image to 8-bit grayscale pixels.</ns0:p><ns0:p>3. Negating the intensities of the image to get a white character on a black background.</ns0:p><ns0:p>4. Computing the center of mass of the pixels.</ns0:p><ns0:p>5. Translating the image to put the center of mass at the center of the 28×28 grayscale image.</ns0:p><ns0:p>A code sketch of this pipeline is given below. Some samples of the LPALIC dataset are visualized in Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref> for Latin characters and in Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref> for Arabic characters. Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref> illustrates the total number of Arabic and Latin characters included in the LPALIC dataset. </ns0:p></ns0:div>
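<ns0:div><ns0:p>For illustration, the following is a minimal Python sketch of the five-step conversion pipeline above, assuming PIL and NumPy; it is an editorial reconstruction, not the authors' original code, and the function name is hypothetical.</ns0:p><ns0:p>
import numpy as np
from PIL import Image

def to_lpalic_format(img):
    # 1. Resize the longest edge to 24, preserving the aspect ratio.
    w, h = img.size
    scale = 24.0 / max(w, h)
    img = img.resize((max(1, round(w * scale)), max(1, round(h * scale))))
    # 2. Convert to 8-bit grayscale.
    a = np.asarray(img.convert('L'), dtype=np.float32)
    # 3. Negate intensities: white character on a black background.
    a = 255.0 - a
    # 4. Compute the center of mass of the pixel intensities.
    ys, xs = np.indices(a.shape)
    total = a.sum() + 1e-9
    cy, cx = (ys * a).sum() / total, (xs * a).sum() / total
    # 5. Translate into a 28x28 canvas so the center of mass lands at (14, 14).
    out = np.zeros((28, 28), dtype=np.float32)
    oy, ox = int(round(14 - cy)), int(round(14 - cx))
    for y in range(a.shape[0]):
        for x in range(a.shape[1]):
            ty, tx = y + oy, x + ox
            if 0 <= ty < 28 and 0 <= tx < 28:
                out[ty, tx] = a[y, x]
    return out.astype(np.uint8)
</ns0:p></ns0:div>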
<ns0:div><ns0:head>PROPOSED APPROACH</ns0:head><ns0:p>A stacked CNN architecture is simple, in that each layer has a single input and a single output. For small-size images, the key efficient simple deep learning architecture was LeNet-5 <ns0:ref type='bibr' target='#b40'>(Lecun et al., 1998)</ns0:ref>; it consists of</ns0:p></ns0:div>
<ns0:div><ns0:head>Proposed Architecture</ns0:head><ns0:p>The core of the proposed model is the convolution block, which is a convolutional layer followed by a batch normalization (BN) layer <ns0:ref type='bibr' target='#b32'>(Ioffe and Szegedy, 2015)</ns0:ref> and a non-linear activation ReLU layer <ns0:ref type='bibr' target='#b36'>(Krizhevsky et al., 2012)</ns0:ref>. This block is called a standard convolutional layer in <ns0:ref type='bibr' target='#b29'>(Howard et al., 2017)</ns0:ref>. The proposed convolutional layers have kernels of size 5 × 5 with a stride of one. This kernel size showed good feature extraction capability in LeNet-5 <ns0:ref type='bibr' target='#b40'>(Lecun et al., 1998)</ns0:ref> for small images, as it covers 3.2% of the input image in every stride. However, the recent trend is to replace a 5 × 5 kernel with 2 layers of 3 × 3 kernels, as in InceptionV3 <ns0:ref type='bibr' target='#b56'>(Szegedy et al., 2016)</ns0:ref>. Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref> shows the architecture design of the proposed model. For a mini-batch B = {x 1 , x 2 , ..., x m } of size m, the mean µ B and variance σ 2 B of B are computed, and each input image in the mini-batch is normalized according to Equation (1).</ns0:p><ns0:formula xml:id='formula_0'>x̂ i = (x i − µ B ) / √(σ 2 B + ε) (1)</ns0:formula><ns0:p>where ε is a constant and x̂ i is the i th normalized image; it is scaled by the learnable scale parameter γ and shifted by the learnable shift parameter β, producing the i th normalized output image y i <ns0:ref type='bibr' target='#b32'>(Ioffe and Szegedy, 2015)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_1'>y i = BN γ,β (x i ) = γ x̂ i + β (2)</ns0:formula><ns0:p>Motivated by the LeNet-5 5 × 5 convolution kernel, the BN used in InceptionV3 and the ReLU in AlexNet, the proposed model convolution block is built as in Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>. The output feature map (FM) of each convolution block is smaller than the input feature map if no additional padding is applied. Equation (3) describes the relation between the input and output FM sizes <ns0:ref type='bibr' target='#b25'>(Goodfellow et al., 2016)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_2'>W y = (W x − W k + 2P) / W s + 1 (3)</ns0:formula><ns0:p>where W y is the width of the output, W x is the width of the input, W k is the width of the kernel, W s is the width of the stride and P is the number of padding pixels. For the height H, Equation (3) can be used by replacing W with H. This reduction is called the shrinkage of convolution, and it limits the number of convolutional layers that the network can include <ns0:ref type='bibr' target='#b25'>(Goodfellow et al., 2016)</ns0:ref>. The feature map shrinks from the borders to the center as convolutional layers are added. Eventually, the feature maps drop to 1 × 1 × channels (a single neuron per channel), at which point no more convolutional layers can be added. This is the concept of full depth used for designing the proposed architecture; Figure <ns0:ref type='figure' target='#fig_7'>5</ns0:ref> describes the full depth idea in FDCNN, where width and height shrink by 4 according to Equation (3). In Figure <ns0:ref type='figure' target='#fig_7'>5</ns0:ref>, each feature map is shrunk to a single value; this means the features are convolved into a single value, resulting in a low number of parameters and high accuracy.</ns0:p><ns0:p>The proposed FDCNN model is basically composed of two stacked convolutional stages and one FC layer for 28 × 28 input images. 
Every stage has two convolution blocks and one max-pooling layer. It has a single input and a single output in all of its layers. Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref> shows the FDCNN architecture.</ns0:p></ns0:div>
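<ns0:div><ns0:p>To make the layer-by-layer shrinkage concrete, the following is a minimal PyTorch sketch of the 28 × 28 FDCNN as read from Figure 3 and Tables 4 and 6; it is an editorial illustration, not the authors' MATLAB implementation.</ns0:p><ns0:p>
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Standard convolution block: 5x5 convolution (stride 1, no padding) + BN + ReLU.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=5, stride=1, padding=0),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

fdcnn = nn.Sequential(
    conv_block(1, 64),     # 28x28 -> 24x24
    conv_block(64, 128),   # 24x24 -> 20x20
    nn.MaxPool2d(2),       # 20x20 -> 10x10
    conv_block(128, 176),  # 10x10 -> 6x6
    conv_block(176, 208),  # 6x6  -> 2x2
    nn.MaxPool2d(2),       # 2x2  -> 1x1: full depth, one neuron per channel
    nn.Flatten(),
    nn.Linear(208, 10),    # the single FC layer (10 classes, e.g. digits)
)

x = torch.randn(1, 1, 28, 28)
assert fdcnn(x).shape == (1, 10)
</ns0:p><ns0:p>The final Linear layer here has 208 × 10 + 10 = 2090 learnable parameters, matching Table 6.</ns0:p></ns0:div>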
<ns0:div><ns0:head>Parameter Selection</ns0:head><ns0:p>In the proposed architecture, some parameters have to be selected: the kernel sizes of the convolution layers, the kernel sizes of the pooling layers, the number of filters (channels) in the convolution layers, and the strides. The kernel sizes are selected to be 5 × 5 for the convolutional layers and 2 × 2 for the pooling layers, as described in the previous Proposed Architecture section.</ns0:p><ns0:p>In the literature, the trend for selecting the number of filters is to increase the number of filters the deeper the network goes <ns0:ref type='bibr' target='#b36'>(Krizhevsky et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b57'>Szegedy et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b53'>Simonyan and Zisserman, 2015;</ns0:ref><ns0:ref type='bibr' target='#b26'>He et al., 2016)</ns0:ref>. Generally, the first convolutional layers learn simple features while deeper layers learn more abstract features. Selecting optimal parameters is based on heuristics or grid searches <ns0:ref type='bibr' target='#b5'>(Bengio, 2012)</ns0:ref>.</ns0:p><ns0:p>The rule of thumb for designing a network from scratch is to start with 8-64 filters per layer and double the number of filters after each pooling layer <ns0:ref type='bibr' target='#b53'>(Simonyan and Zisserman, 2015)</ns0:ref> or after each convolutional layer <ns0:ref type='bibr' target='#b26'>(He et al., 2016)</ns0:ref>. Recently, a new method was proposed to select the number of filters <ns0:ref type='bibr' target='#b23'>(Garg et al., 2018)</ns0:ref>: the network structure was optimized in terms of both the number of layers and the number of filters per layer, using principal component analysis on a trained network, with a single shot of doubling the number of filters also applied.</ns0:p><ns0:p>One of the contributions of this research is to select the number of channels that achieves full depth.</ns0:p><ns0:p>The number of filters may also be called the number of kernels, the number of layer channels or the layer width.</ns0:p><ns0:p>The number of filters is selected to be the same as the number of shrinking pixels in each layer, from bottom to top. Table <ns0:ref type='table' target='#tab_3'>4</ns0:ref> shows the shrinkage of the proposed model. From the fact that the network goes deeper, the following selection is made (a worked sketch of this rule is given after the list):</ns0:p><ns0:p>• The width of the 4 th convolutional layer is 208 (the 1 st layer shrinkage).</ns0:p><ns0:p>• The width of the 3 rd is 176 (the 2 nd layer shrinkage).</ns0:p><ns0:p>• Max-pooling halves the FM dimensions, so the shrinking pixels of the following layers are doubled.</ns0:p><ns0:p>• The width of the 2 nd is 128 (double the 3 rd layer shrinkage).</ns0:p></ns0:div>
<ns0:div><ns0:p>• The width of the 1 st is 64 (double the 4 th layer shrinkage).</ns0:p><ns0:p>The same parameter selection method can be applied to the 32 × 32 input architecture, as described in Table <ns0:ref type='table' target='#tab_4'>5</ns0:ref>, to reach full depth features (a single-value feature) as shown in Figure <ns0:ref type='figure' target='#fig_7'>5b</ns0:ref>. In general, DNNs give weights to all input features (neurons) to produce the output neurons, but this needs a huge number of parameters. Instead, CNNs convolve the adjacent neurons within the convolution kernel size to produce the output neurons. In the literature, the state-of-the-art architectures had a high number of learnable parameters in the last FC layers. For example, VGG16 has 136M parameters in total, and after the last pooling layer its first FC layer has 102M parameters, which is more than 75% of the architecture's parameters (in just one layer). AlexNet has 62M parameters in total, and its first FC layer has 37.75M parameters, which is more than 60% of the architecture's parameters. In <ns0:ref type='bibr' target='#b28'>(Hirata and Takahashi, 2020)</ns0:ref>, the proposed architecture has 28.68M parameters, and the first FC layer has 3.68M parameters, after majority voting from ten divisions. But, by using the full depth concept to reduce the FM to a 1x1 size after the last pooling layer, FDCNN has just 2090 parameters there, out of 1.69M parameters in total, as seen in Table <ns0:ref type='table' target='#tab_5'>6</ns0:ref>. The full depth concept of reducing the feature map size to one neuron has decreased the total number of learnable parameters, which makes FDCNN simple and fast.</ns0:p></ns0:div>
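<ns0:div><ns0:p>As a worked sketch of the width-selection rule above (an editorial illustration in Python, assuming the 5 × 5 stride-1 unpadded convolutions and 2 × 2 pooling of the proposed design):</ns0:p><ns0:p>
def shrink(size_in, k=5):
    # Eq. (3) with stride 1 and no padding, plus the count of shrinking pixels.
    size_out = size_in - k + 1
    return size_in**2 - size_out**2, size_out

s1, n = shrink(28)  # s1 = 208, n = 24
s2, n = shrink(n)   # s2 = 176, n = 20
n //= 2             # max-pooling: 20 -> 10
s3, n = shrink(n)   # s3 = 64,  n = 6
s4, n = shrink(n)   # s4 = 32,  n = 2
# Widths as in Table 4: conv4 = s1 = 208, conv3 = s2 = 176,
# conv2 = 2 * s3 = 128, conv1 = 2 * s4 = 64.
</ns0:p></ns0:div>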
<ns0:div><ns0:head>Training Process</ns0:head><ns0:p>Deep learning training algorithms are well explained in <ns0:ref type='bibr' target='#b25'>(Goodfellow et al., 2016)</ns0:ref>. The proposed model is trained using stochastic gradient descent with momentum (SGDM), with custom parameters chosen after many trials: an initial learning rate (LR) of 0.025; a mini-batch size equal to the number of training instances divided by the number of batches needed to complete one epoch; the LR dropped by half every 2 epochs; 10 epochs; a momentum of 0.95; and the training set shuffled every epoch. However, these training parameters are not used for all datasets, since the number of images is not constant across them.</ns0:p><ns0:p>After getting the first results, the model parameters are tuned by training again with ADAM, using a larger mini-batch size and a very small LR starting at 1 × 10 −5 , then multiplying the batch size by 2 and the LR by 1/2 every 10 epochs as long as the test error improves.</ns0:p></ns0:div>
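<ns0:div><ns0:p>For illustration, a PyTorch analogue of this schedule might look as follows (the original work used MATLAB's SGDM; the model and the shuffled data loader here are assumed to exist):</ns0:p><ns0:p>
import torch

# model: e.g. the FDCNN module sketched earlier; loader: a shuffled training DataLoader.
opt = torch.optim.SGD(model.parameters(), lr=0.025, momentum=0.95)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=2, gamma=0.5)  # halve LR every 2 epochs
loss_fn = torch.nn.CrossEntropyLoss()

for epoch in range(10):
    for xb, yb in loader:
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
    sched.step()
# A second ADAM pass (LR starting at 1e-5, batch size doubled and LR halved
# every 10 epochs while the test error improves) would follow the same pattern.
</ns0:p></ns0:div>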
<ns0:div><ns0:head>EXPERIMENTAL RESULTS AND DISCUSSION</ns0:head><ns0:p>All training and testing are performed on the MATLAB 2018 platform with a GeForce 1060 GPU (6GB shared memory). The main goal of this research is to design a CNN to recognize multi-language license plate characters, but to generalize and verify the designed architecture, several tests on handwritten character recognition benchmarks are done (the verification process). The proposed approach showed very promising results. Table <ns0:ref type='table' target='#tab_7'>7</ns0:ref> summarizes the results obtained on the MNIST dataset. Among stacked CNNs, the 0.35% error in the literature for MNIST had not been outperformed until the approach used in (Assiri, 2019) obtained 0.17%. The proposed FDCNN performance came close to the performance of the five-committee CNN of <ns0:ref type='bibr' target='#b12'>(Ciresan et al., 2011)</ns0:ref>. FDCNN achieves the same performance as <ns0:ref type='bibr' target='#b44'>(Moradi et al., 2019)</ns0:ref>, which is a sparse design that uses residual blocks and inception blocks, as described in the literature. However, the architecture in <ns0:ref type='bibr' target='#b3'>(Assiri, 2019)</ns0:ref> has 15 layers with 13.12M parameters, and its results were obtained utilizing data augmentation, different training processes and dropout layers before and after each pooling layer with different settings. FDCNN has fewer parameters and layers and showed good results on MNIST.</ns0:p><ns0:p>On the other hand, the proposed approach is tested on the MADbase, AHCD and AI9IK Arabic character recognition benchmarks to verify FDCNN and to generalize its use to Arabic ALPR systems. Table <ns0:ref type='table' target='#tab_8'>8</ns0:ref> describes the classification error with respect to the state of the art on these datasets.</ns0:p><ns0:p>As seen in Table <ns0:ref type='table' target='#tab_8'>8</ns0:ref>, for the MADbase dataset, most of the tested approaches were based on the VGG architecture. Alphanumeric VGG <ns0:ref type='bibr' target='#b45'>(Mudhsh and Almodfer, 2017)</ns0:ref> reported a validation error of 0.34% that did not hold on the test set, while FDCNN obtained a 0.15% validation error and a 0.34% test error. The proposed approach outperformed the state of the art on the Arabic character recognition benchmarks, for both the digits and the letters used in the Arabic language, with a smaller number of layers and learnable parameters. It succeeded in this verification process on these datasets too.</ns0:p><ns0:p>In Table <ns0:ref type='table' target='#tab_8'>8</ns0:ref>, the input layer is included when counting the number of layers <ns0:ref type='bibr' target='#b40'>(Lecun et al., 1998)</ns0:ref> for all architectures; the ReLU layer is not considered a layer, but BN is. <ns0:ref type='bibr' target='#b54'>Sousa (2018)</ns0:ref> counted convolution, pooling and FC layers when declaring the number of layers, but four trained CNNs were used with softmax averaging, which is why the numbers of layers and learnable parameters are high. <ns0:ref type='bibr' target='#b46'>Najadat et al. (2019)</ns0:ref> did not declare most of the network parameters, such as the kernel size of every convolution layer, and they changed many parameters to enhance the model. In <ns0:ref type='bibr' target='#b63'>(Younis, 2017)</ns0:ref>, 28 × 28 input images were used and no pooling layers were included.</ns0:p></ns0:div>
<ns0:div><ns0:p>On the other hand, and in the same verification process, the proposed approach is also tested on the FashionMNIST benchmark, to generalize its use over tiny grayscale images. As shown in Table <ns0:ref type='table' target='#tab_9'>9</ns0:ref>, the proposed approach outperformed the stacked CNN architectures and came close to the DENSER network in <ns0:ref type='bibr' target='#b4'>(Assunção et al., 2018)</ns0:ref> and EnsNet in <ns0:ref type='bibr' target='#b28'>(Hirata and Takahashi, 2020)</ns0:ref> with fewer layers and parameters but good performance. It can be said that FDCNN has a very good verification performance on the FashionMNIST dataset. FDCNN outperformed the results of <ns0:ref type='bibr' target='#b8'>(Byerly et al., 2020)</ns0:ref> on the Fashion-MNIST benchmark, while <ns0:ref type='bibr' target='#b8'>(Byerly et al., 2020)</ns0:ref> outperformed FDCNN on MNIST.</ns0:p><ns0:p>Furthermore, FDCNN is also tested on Arabic LP characters from KSA. It could classify the test set with an error of 0.46%, outperforming the recognition error of 1.78% in <ns0:ref type='bibr' target='#b33'>(Khaled et al., 2010)</ns0:ref>. FDCNN has been successfully verified on the KSA Arabic LP characters dataset.</ns0:p><ns0:p>In this research, and for more verification, FDCNN performance is also tested on both common publicly available LP benchmark character sets and the new LPALIC dataset. To make robust tests, the characters were split manually and randomly, as seen in Table <ns0:ref type='table' target='#tab_12'>11</ns0:ref>. In the manual split, the most difficult characters were put in the test set and the others in the training set, while in the random split 80% were used for training and the rest for testing. As described in Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>, the number of characters per country is not equal, which resulted in various recognition accuracies in Table <ns0:ref type='table' target='#tab_12'>11</ns0:ref>. Since the number of UAE characters is not large enough to train FDCNN, Latin characters from other countries were used for training, but the test was done only on the UAE test set. FDCNN could learn features that give a good average accuracy. In fact, the Latin characters in LPALIC have various background and foreground colors, which makes their classification more challenging than the Arabic character set, but FDCNN shows promising recognition results on both, and on handwritten characters as well. Implementation of the FDCNN approach is simple, and it can be used in real-time applications running on small devices like mobiles, tablets and some embedded systems. Very promising results were achieved on common benchmarks like MNIST, FashionMNIST, MADbase, AIA9K, AHCD, Zemris, ReId, UFPR and the newly introduced LPALIC dataset. FDCNN performance is verified and compared to the state-of-the-art results in the literature. A new dataset of real cropped LP characters is also introduced; it is the largest dataset for LP characters in Turkey and KSA. As future work, more tests can be done on FDCNN to serve as the core of a CNN processor. Also, more experiments can be conducted to hybridize FDCNN with common blocks like residual and inception blocks. Additionally, the proposed full depth approach may be applied to other stacked CNNs like AlexNet and VGG networks.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>of 13440 training images and 3360 test images for 28 Arabic handwritten letters (classes) as 32×32-pixel grayscale images. For the AI9IK 5 dataset, 62 female and 45 male native Arabic writers aged between 18 and 25 years at the Faculty of Engineering at Alexandria University, Egypt, were invited to write all the Arabic letters 3 times to gather 8988 letters, of which 8737 32×32 grayscale letter images were accepted after a verification process that eliminated cropping errors, writer mistakes and unclear letters. The FashionMNIST dataset 6 has images of 70000 unique products taken by professional photographers. The thumbnails (51×73) were converted to 28×28 grayscale images by the conversion pipeline declared in <ns0:ref type='bibr' target='#b59'>(Xiao et al., 2017)</ns0:ref>. It is composed of 60000 training images and 10000 test images with 10 class labels.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Samples of Latin characters in the LPALIC dataset.</ns0:figDesc><ns0:graphic coords='6,192.82,158.33,311.40,299.40' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Samples of Arabic characters in the LPALIC dataset.</ns0:figDesc><ns0:graphic coords='7,180.58,63.78,335.89,259.84' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Proposed FDCNN model architecture.</ns0:figDesc><ns0:graphic coords='7,141.73,540.01,413.59,119.12' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Proposed model convolution blocks.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Full Depth concept of FDCNN.</ns0:figDesc><ns0:graphic coords='9,264.56,103.93,82.70,228.09' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>on the deep learning technique of CNNs to recognize multi-language LP characters for both the Latin and Arabic characters used in vehicle LPs. A new approach is proposed, analyzed and tested on Latin and Arabic CR benchmarks for both LP and handwritten character recognition. The proposed approach consists of the FDCNN architecture, FDCNN parameter selection and the training process. The proposed full depth and width selection ideas are very efficient in extracting features from tiny grayscale images. The complexity of FDCNN is also analyzed in terms of the number of learnable parameters and feature-map memory usage. The full depth concept of reducing the feature map size to one neuron has decreased the total number of learnable parameters while achieving very good results.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>A review of publicly available ALPR datasets.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Approach</ns0:cell><ns0:cell>Number of Images</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Classifier</ns0:cell><ns0:cell>Character Set</ns0:cell><ns0:cell>Purpose</ns0:cell></ns0:row><ns0:row><ns0:cell>Zemris</ns0:cell><ns0:cell>Kraupner (2003)</ns0:cell><ns0:cell>510</ns0:cell><ns0:cell>86.2%</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>LPDR</ns0:cell></ns0:row><ns0:row><ns0:cell>UCSD</ns0:cell><ns0:cell>Dlagnekov (2005)</ns0:cell><ns0:cell>405</ns0:cell><ns0:cell>89.5%</ns0:cell><ns0:cell>OCR</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>LPDR</ns0:cell></ns0:row><ns0:row><ns0:cell>Snapshots</ns0:cell><ns0:cell>Martinsky (2007)</ns0:cell><ns0:cell>97</ns0:cell><ns0:cell>85%</ns0:cell><ns0:cell>MLP</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>LPDR</ns0:cell></ns0:row><ns0:row><ns0:cell>ARG</ns0:cell><ns0:cell>Fernández et al. (2011)</ns0:cell><ns0:cell>730</ns0:cell><ns0:cell>95.8%</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>LPDR</ns0:cell></ns0:row><ns0:row><ns0:cell>SSIG</ns0:cell><ns0:cell>Gonçalves et al. (2016)</ns0:cell><ns0:cell>2000</ns0:cell><ns0:cell>95.8%</ns0:cell><ns0:cell>SVM-OCR</ns0:cell><ns0:cell>Yes</ns0:cell><ns0:cell>LPDR</ns0:cell></ns0:row><ns0:row><ns0:cell>ReId</ns0:cell><ns0:cell>Špaňhel et al. (2017)</ns0:cell><ns0:cell>77k</ns0:cell><ns0:cell>96.5%</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>LPR</ns0:cell></ns0:row><ns0:row><ns0:cell>UFPR</ns0:cell><ns0:cell>Laroca et al. (2018)</ns0:cell><ns0:cell>4500</ns0:cell><ns0:cell>78.33%</ns0:cell><ns0:cell>CR-NET</ns0:cell><ns0:cell>Yes</ns0:cell><ns0:cell>LPDR</ns0:cell></ns0:row><ns0:row><ns0:cell>CCPD</ns0:cell><ns0:cell>Xu et al. (2018)</ns0:cell><ns0:cell>250k</ns0:cell><ns0:cell>95.2%</ns0:cell><ns0:cell>RPnet</ns0:cell><ns0:cell>Yes</ns0:cell><ns0:cell>LPDR</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Novel License Plate Characters Dataset This</ns0:head><ns0:label /><ns0:figDesc /><ns0:table /><ns0:note>research introduces a new multi-language LP characters dataset, involving both Latin and Arabic characters from LP images used in Turkey, USA, UAE, KSA and the EU (Croatia, Greece, Czechia, France, Germany, Serbia, Netherlands and Belgium). It is called the LPALIC dataset. In addition, some characters cropped from Brazil, India and other countries were added for training only, to give feature diversity.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>LPALIC dataset number of cropped characters per country.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Country</ns0:cell><ns0:cell>TR</ns0:cell><ns0:cell>EU</ns0:cell><ns0:cell>USA</ns0:cell><ns0:cell>UAE</ns0:cell><ns0:cell cols='2'>Others KSA</ns0:cell></ns0:row><ns0:row><ns0:cell>Used Characters</ns0:cell><ns0:cell>Latin</ns0:cell><ns0:cell>Latin</ns0:cell><ns0:cell>Latin</ns0:cell><ns0:cell>Latin & Arabic</ns0:cell><ns0:cell>Latin</ns0:cell><ns0:cell>Arabic</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of Characters</ns0:cell><ns0:cell cols='3'>60000 32776 7384</ns0:cell><ns0:cell>3003</ns0:cell><ns0:cell>17613</ns0:cell><ns0:cell>50000</ns0:cell></ns0:row><ns0:row><ns0:cell>Total Characters</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>120776</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>50000</ns0:cell></ns0:row></ns0:table><ns0:note>The Latin characters were collected from 11 countries (whose LPs have different background and font colors) while the Arabic characters were collected only from KSA (whose LPs have a white background and black characters). Choosing those countries is related to the availability of those LPs for public use.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Shrinkage process in the 28×28 architecture.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Layer</ns0:cell><ns0:cell cols='2'>Shrinking Pixels Width</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv1</ns0:cell><ns0:cell>28 2 − 24 2 = 208</ns0:cell><ns0:cell>64</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv2</ns0:cell><ns0:cell>24 2 − 20 2 = 176</ns0:cell><ns0:cell>128</ns0:cell></ns0:row><ns0:row><ns0:cell>Max-Pooling 1</ns0:cell><ns0:cell>--</ns0:cell><ns0:cell>128</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv3</ns0:cell><ns0:cell>10 2 − 6 2 = 64</ns0:cell><ns0:cell>176</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv4</ns0:cell><ns0:cell>6 2 − 2 2 = 32</ns0:cell><ns0:cell>208</ns0:cell></ns0:row><ns0:row><ns0:cell>Max-Pooling 2</ns0:cell><ns0:cell>--</ns0:cell><ns0:cell>208</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Shrinkage process in 32×32 architecture.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Layer</ns0:cell><ns0:cell cols='2'>Shrinking Pixels Width</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv1</ns0:cell><ns0:cell>32 2 − 28 2 = 240</ns0:cell><ns0:cell>64</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv2</ns0:cell><ns0:cell>28 2 − 24 2 = 208</ns0:cell><ns0:cell>128</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv3</ns0:cell><ns0:cell>24 2 − 20 2 = 176</ns0:cell><ns0:cell>176</ns0:cell></ns0:row><ns0:row><ns0:cell>Max-Pooling 1</ns0:cell><ns0:cell>--</ns0:cell><ns0:cell>176</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv4</ns0:cell><ns0:cell>10 2 − 6 2 = 64</ns0:cell><ns0:cell>208</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv5</ns0:cell><ns0:cell>6 2 − 2 2 = 32</ns0:cell><ns0:cell>240</ns0:cell></ns0:row><ns0:row><ns0:cell>Max-Pooling 2</ns0:cell><ns0:cell>--</ns0:cell><ns0:cell>240</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>shows the number of learnable parameters and feature memory usage for the proposed model. Memory usage is multiplied by 4, as each value is stored as a 4-byte single-precision float. For 32 × 32 input images, another convolutional block can simply be added before the first convolution block in FDCNN, and the width of the last convolutional layer becomes 32 2 − 28 2 = 240 to get the full depth of shrinkage. This layer of course affects the total number of model parameters and FM memory usage, which become 2.94M and 2.51MB respectively.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Proposed model's memory usage and learnable parameters.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Layer</ns0:cell><ns0:cell cols='2'>Features Memory</ns0:cell><ns0:cell cols='2'>Learnable Parameters</ns0:cell></ns0:row><ns0:row><ns0:cell>Input</ns0:cell><ns0:cell>28 × 28 × 1</ns0:cell><ns0:cell>3136</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv1</ns0:cell><ns0:cell>24 × 24 × 64</ns0:cell><ns0:cell>147456</ns0:cell><ns0:cell>(5 × 5) × 64 + 64 =</ns0:cell><ns0:cell>1664</ns0:cell></ns0:row><ns0:row><ns0:cell>BN+ReLU</ns0:cell><ns0:cell>24 × 24 × 64 × 2</ns0:cell><ns0:cell>294912</ns0:cell><ns0:cell>4 × 64 =</ns0:cell><ns0:cell>256</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv2</ns0:cell><ns0:cell>20 × 20 × 128</ns0:cell><ns0:cell>204800</ns0:cell><ns0:cell>5 × 5 × 64 × 128 + 128 =</ns0:cell><ns0:cell>204928</ns0:cell></ns0:row><ns0:row><ns0:cell>BN+ReLU</ns0:cell><ns0:cell cols='2'>20 × 20 × 128 × 2 409600</ns0:cell><ns0:cell>4 × 128 =</ns0:cell><ns0:cell>512</ns0:cell></ns0:row><ns0:row><ns0:cell>Max-pooling1</ns0:cell><ns0:cell>10 × 10 × 128</ns0:cell><ns0:cell>51200</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv3</ns0:cell><ns0:cell>6 × 6 × 176</ns0:cell><ns0:cell>25344</ns0:cell><ns0:cell>5 × 5 × 128 × 176 + 176 =</ns0:cell><ns0:cell>563376</ns0:cell></ns0:row><ns0:row><ns0:cell>BN+ReLU</ns0:cell><ns0:cell>6 × 6 × 176 × 2</ns0:cell><ns0:cell>50688</ns0:cell><ns0:cell>4 × 176 =</ns0:cell><ns0:cell>704</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv4</ns0:cell><ns0:cell>2 × 2 × 208</ns0:cell><ns0:cell>3328</ns0:cell><ns0:cell>5 × 5 × 176 × 208 + 208 =</ns0:cell><ns0:cell>915408</ns0:cell></ns0:row><ns0:row><ns0:cell>BN+ReLU</ns0:cell><ns0:cell>2 × 2 × 208 × 2</ns0:cell><ns0:cell>6656</ns0:cell><ns0:cell>4 × 208 =</ns0:cell><ns0:cell>832</ns0:cell></ns0:row><ns0:row><ns0:cell>Max-pooling2</ns0:cell><ns0:cell>1 × 1 × 208</ns0:cell><ns0:cell>832</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>FC</ns0:cell><ns0:cell>1 × 1 × 10</ns0:cell><ns0:cell>40</ns0:cell><ns0:cell>208 × 10 + 10 =</ns0:cell><ns0:cell>2090</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Total Memory</ns0:cell><ns0:cell>1197992 Bytes</ns0:cell><ns0:cell>Total Parameters</ns0:cell><ns0:cell>1689770</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Test results of FDCNN on MNIST.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Architecture</ns0:cell><ns0:cell>Type</ns0:cell><ns0:cell cols='2'>Number of Layers Error</ns0:cell></ns0:row><ns0:row><ns0:cell>Cireşan et al. (2010)</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>0.35%</ns0:cell></ns0:row><ns0:row><ns0:cell>Ciresan et al. (2011)</ns0:cell><ns0:cell>sparse</ns0:cell><ns0:cell>35</ns0:cell><ns0:cell>0.27%</ns0:cell></ns0:row><ns0:row><ns0:cell>Ciregan et al. (2012)</ns0:cell><ns0:cell>sparse</ns0:cell><ns0:cell>245</ns0:cell><ns0:cell>0.23%</ns0:cell></ns0:row><ns0:row><ns0:cell>Moradi et al. (2019)</ns0:cell><ns0:cell>sparse</ns0:cell><ns0:cell>70</ns0:cell><ns0:cell>0.28%</ns0:cell></ns0:row><ns0:row><ns0:cell>Kowsari et al. (2018)</ns0:cell><ns0:cell>sparse</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>0.18%</ns0:cell></ns0:row><ns0:row><ns0:cell>Assiri (2019)</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>0.17%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Hirata and Takahashi (2020) sparse</ns0:cell><ns0:cell>28</ns0:cell><ns0:cell>0.16%</ns0:cell></ns0:row><ns0:row><ns0:cell>Byerly et al. (2020)</ns0:cell><ns0:cell>sparse</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>0.16%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>0.28%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Arabic character recognition benchmarks state-of-the-art and proposed approach test errors.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Architecture</ns0:cell><ns0:cell>Type</ns0:cell><ns0:cell cols='2'>Layers Parameters</ns0:cell><ns0:cell>Error</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>RBF SVM Abdleazeem and El-Sherif (2008)</ns0:cell><ns0:cell>linear</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>0.52 %</ns0:cell></ns0:row><ns0:row><ns0:cell>MADbase 28 × 28</ns0:cell><ns0:cell>LeNet5 El-Sawy et al. (2017)</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>51K</ns0:cell><ns0:cell>12%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Alphanumeric VGG Mudhsh and Almodfer (2017)</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>17</ns0:cell><ns0:cell>2.1M</ns0:cell><ns0:cell>0.34% validation</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>VGG12 REGU Sousa (2018)</ns0:cell><ns0:cell>Average of 4 stacked CNN</ns0:cell><ns0:cell>66</ns0:cell><ns0:cell>18.56M</ns0:cell><ns0:cell>0.48%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>proposed</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>0.34%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CNN El-Sawy et al. (2017)</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>1.8M</ns0:cell><ns0:cell>5.1%</ns0:cell></ns0:row><ns0:row><ns0:cell>AHCD 32 × 32</ns0:cell><ns0:cell>CNN Younis (2017)</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>200K</ns0:cell><ns0:cell>2.4%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>VGG12 REGU Sousa (2018)</ns0:cell><ns0:cell>Average of 4 stacked CNN</ns0:cell><ns0:cell>66</ns0:cell><ns0:cell>18.56M</ns0:cell><ns0:cell>1.58%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CNN Najadat et al. (2019)</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>Not mentioned</ns0:cell><ns0:cell>2.8%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>proposed</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>2.94M</ns0:cell><ns0:cell>1.39%</ns0:cell></ns0:row><ns0:row><ns0:cell>AI9IK</ns0:cell><ns0:cell>RBF SVM Torki et al. (2014)</ns0:cell><ns0:cell>linear</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>5.72%</ns0:cell></ns0:row><ns0:row><ns0:cell>32 × 32</ns0:cell><ns0:cell>CNN Younis (2017)</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>200K</ns0:cell><ns0:cell>5.2%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>proposed</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>2.94M</ns0:cell><ns0:cell>3.27%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Test results of FDCNN on FashionMNIST.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Architecture</ns0:cell><ns0:cell>Type</ns0:cell><ns0:cell cols='2'>Layers Parameters</ns0:cell><ns0:cell>Error</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM (Xiao et al., 2017)</ns0:cell><ns0:cell>linear</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>10.3%</ns0:cell></ns0:row><ns0:row><ns0:cell>DENSER (Assunção et al., 2018)</ns0:cell><ns0:cell>sparse</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>4.7%</ns0:cell></ns0:row><ns0:row><ns0:cell>WRN (Zhong et al., 2017)</ns0:cell><ns0:cell>sparse</ns0:cell><ns0:cell>28</ns0:cell><ns0:cell>36.5M</ns0:cell><ns0:cell>3.65%</ns0:cell></ns0:row><ns0:row><ns0:cell>VGG16 (Zeng et al., 2018)</ns0:cell><ns0:cell>sparse</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>138M</ns0:cell><ns0:cell>2.34%</ns0:cell></ns0:row><ns0:row><ns0:cell>CNN (Chou et al., 2019)</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>0.44M</ns0:cell><ns0:cell>8.32%</ns0:cell></ns0:row><ns0:row><ns0:cell>BRCNN (Byerly et al., 2020)</ns0:cell><ns0:cell>sparse</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>1.51M</ns0:cell><ns0:cell>6.34%</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>EnsNet (Hirata and Takahashi, 2020) sparse</ns0:cell><ns0:cell>28</ns0:cell><ns0:cell>28.67M</ns0:cell><ns0:cell>4.7%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>5.00%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Recognition error of the proposed architecture on LP benchmark datasets. For more analysis, another test is made on the introduced LPALIC dataset to analyze the recognition error per country's characters. Table 11 describes the results. As seen in Table 11, the highest error is in classifying USA LP characters, because they have more colors, drawings and shapes other than characters, and also because there is a small number of instances in the character dataset. However, very high recognition accuracy is achieved on Turkey and the EU, since they have the same standard and style for LPs. In Turkey, 10 digits and 23 letters are used, since letters like Q, W and X are not valid in the Turkish language. Additionally, FDCNN could classify Arabic LP characters with very low error. The UAE character set has a small number of cropped characters, which is why it is tested only with FDCNN trained on other countries' character sets.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Architecture</ns0:cell><ns0:cell>Dataset</ns0:cell><ns0:cell cols='2'>Layers Parameters</ns0:cell><ns0:cell>Error</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM (Panahi and Gholampour, 2017)</ns0:cell><ns0:cell /><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>3%</ns0:cell></ns0:row><ns0:row><ns0:cell>LCR-Alexnet (Meng et al., 2018)</ns0:cell><ns0:cell>Zemris</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>>2.33M</ns0:cell><ns0:cell>2.7%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed</ns0:cell><ns0:cell /><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>0.979%</ns0:cell></ns0:row><ns0:row><ns0:cell>OCR (Dlagnekov, 2005) Proposed</ns0:cell><ns0:cell>UCSD</ns0:cell><ns0:cell>-12</ns0:cell><ns0:cell>-1.69M</ns0:cell><ns0:cell>10.5% 1.51%</ns0:cell></ns0:row><ns0:row><ns0:cell>MLP (Martinsky, 2007) Proposed</ns0:cell><ns0:cell>Snapshots</ns0:cell><ns0:cell>-12</ns0:cell><ns0:cell>-1.69M</ns0:cell><ns0:cell>15% 0.42%</ns0:cell></ns0:row><ns0:row><ns0:cell>CNN (Špaňhel et al., 2017)</ns0:cell><ns0:cell /><ns0:cell>12</ns0:cell><ns0:cell>17M</ns0:cell><ns0:cell>3.5%</ns0:cell></ns0:row><ns0:row><ns0:cell>DenseNet169 (Zhu et al., 2019)</ns0:cell><ns0:cell>ReID</ns0:cell><ns0:cell>169</ns0:cell><ns0:cell>>15.3M</ns0:cell><ns0:cell>6.35%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed</ns0:cell><ns0:cell /><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>1.09%</ns0:cell></ns0:row><ns0:row><ns0:cell>CNN (Laroca et al., 2018)</ns0:cell><ns0:cell /><ns0:cell>26</ns0:cell><ns0:cell>43.1M</ns0:cell><ns0:cell>35.1%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed trained just on UFPR</ns0:cell><ns0:cell>UFPR</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>4.29%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed trained on LPALIC</ns0:cell><ns0:cell /><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>2.03%</ns0:cell></ns0:row><ns0:row><ns0:cell>Line Processing Algorithm (Khaled et al., 2010)</ns0:cell><ns0:cell>KSA</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>1.78%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed</ns0:cell><ns0:cell /><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>0.46%</ns0:cell></ns0:row><ns0:row><ns0:cell>FDCNN</ns0:cell><ns0:cell>LPALIC</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>0.97%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Test recognition error per country characters with different training instances.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Characters Set</ns0:cell><ns0:cell>Number of Instances Train / Test</ns0:cell><ns0:cell>Manual Split</ns0:cell><ns0:cell>Trained on Other Countries</ns0:cell><ns0:cell>Random 80/20% Split Average Error</ns0:cell></ns0:row><ns0:row><ns0:cell>TR</ns0:cell><ns0:cell>48748 / 11755</ns0:cell><ns0:cell>2.67%</ns0:cell><ns0:cell>1.82%</ns0:cell><ns0:cell>0.97%</ns0:cell></ns0:row><ns0:row><ns0:cell>EU</ns0:cell><ns0:cell>23299 / 9477</ns0:cell><ns0:cell>2.30%</ns0:cell><ns0:cell>1.07%</ns0:cell><ns0:cell>1.03%</ns0:cell></ns0:row><ns0:row><ns0:cell>USA</ns0:cell><ns0:cell>5960 / 1424</ns0:cell><ns0:cell>10.88%</ns0:cell><ns0:cell>3.51%</ns0:cell><ns0:cell>1.96%</ns0:cell></ns0:row><ns0:row><ns0:cell>UAE</ns0:cell><ns0:cell>1279 / 1724</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>1.51%</ns0:cell><ns0:cell>0.9%</ns0:cell></ns0:row><ns0:row><ns0:cell>All Latin Characters</ns0:cell><ns0:cell>96899 / 24380</ns0:cell><ns0:cell>2.08%</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>0.97%</ns0:cell></ns0:row><ns0:row><ns0:cell>KSA</ns0:cell><ns0:cell>46981 / 3018</ns0:cell><ns0:cell>0.43%</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>0.26%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='4'>https://www.kaggle.com/mloey1/ahcd1 5 www.eng.alexu.edu.eg/%7emehussein/AIA9k/index.html 6 github.com/zalandoresearch/fashion-mnist</ns0:note>
</ns0:body>
" | "Department of Electronics and Communication Engineering
Kocaeli University - Turkey
January 12, 2021
Dear Editors,
We thank the editors and reviewers for their efforts, time and useful comments.
We have edited the manuscript to address your valuable comments. We believe these revisions
have made the manuscript scientifically suitable for the next stage.
Mohammed Salemdeeb
PhD student at the Electronics and Communication Department.
On behalf of all authors.
Reviewer 1
Basic reporting
The paper presents a new approach to detect LPs by stacking two CNN networks. It manages to
obtain state-of-the-art results while using a small network in terms of the number of parameters.
Experimental design
The experimental section is clear and well-designed.
Validity of the findings
The novelty of the approach is questionable, but the finding is interesting.
The paper has been fully revised and some clarifications have been added.
Comments for the author
The paper presents another approach for LP recognition. The paper is clear, but there are several
points that need further clarification.
1. VGG seems to obtain the same performance; you should explain why you need another
network.
That hits the spot: in the literature, most studies used VGG's first layers as a core CNN for the CR
problem. But the proposed approach in Table 8 outperformed the previous VGG-based works in
terms of the number of layers, parameters and test accuracy. For example, Alphanumeric VGG
(Mudhsh and Almodfer (2017)) reported only a validation error that did not hold on the test set;
this is written at line 276. Sousa (2018) published better results in “PeerJ Comp. Sci.” (0.16%
validation and 0.48% test errors) on the same dataset (MADbase), pointing out that Mudhsh and
Almodfer (2017) had reported a “validation set” error, not a “test set” error; finally, FDCNN
outperformed both.
We have added these sentences at line 271: “As seen in Table 8, for the MADbase dataset, most of
the tested approaches were based on the VGG architecture. Alphanumeric VGG (Mudhsh and
Almodfer (2017)) reported a validation error of 0.34% that did not hold on the test set, while
FDCNN obtained a 0.15% validation error and a 0.34% test error.”
In Table 9, Zeng et al. (2018) used the standard (224x224 input size) VGG16 for Fashion-MNIST:
they resized the input images to 224x224 and used a deep collaborative weight-based
classification method, as mentioned at line 83, whereas FDCNN was trained and tested on the
original image size (28x28). This means that, from this small size, the simple stacked FDCNN
could learn features good enough to classify the images with 95% accuracy.
2. You need to explain why you have different results for the various datasets on average (table
11)
This is an interesting comment. The LPALIC dataset structure is introduced in the subsection
“Novel License Plate Characters Dataset” at line 156.
The explanation is added at line 303: “To make robust tests, the characters were split manually
and randomly, as seen in Table 11. In the manual split, the most difficult characters were put in
the test set and the others in the training set, while in the random split 80% were used for training
and the rest for testing. As described in Table 3, the number of characters per country is not
equal, which resulted in various recognition accuracies in Table 11.”
3. What is the point of including fashionMNIST
We used the FashionMNIST dataset to generalize the performance of FDCNN as a powerful
feature extractor for small-sized image classification problems other than the CR problem. This
dataset is tested in many references, and we tried to follow the same test methodology for a fair
comparison. Byerly et al. (2020) and Hirata and Takahashi (2020) had used both MNIST and
FashionMNIST.
The full depth approach shows good results compared to stacked architectures, while using the
original 28x28 input image size. This test opens an opportunity for future work to apply the full
depth approach to classify larger images. The tests are being done and we will write another
paper about that, but it is not ready yet and needs many comparisons to prove it.
We have updated Table 9 by adding 2 new reference rows.
4. There is no Latin digits, the one you call Latin are Arabic and the one you call Arabic are
Hindi
Agreed, but we mean the characters used in printing LPs in those countries. We replaced
“Latin digits” and “Arabic numbers” by “digits used in the English and Arabic languages”. Table 1
is also updated.
The references used those terms. For example, in PeerJ, https://peerj.com/articles/cs-167/, the title
uses “Arabic characters and digits”; we used the same expressions there. Also, according to
the standard, “ISO/IEC 8859-6:1999, Information technology — 8-bit single-byte coded graphic
character sets — Part 6: Latin/Arabic alphabet” is part of the ISO/IEC 8859 series of ASCII-based
standard character encodings, first edition published in 1987. It is informally referred to as
Latin/Arabic.
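As a side note illustrating this naming point (an editorial addition, using the Unicode character
names via Python's standard library):

    import unicodedata
    print(unicodedata.name('0'))       # DIGIT ZERO (the ASCII digit used on "Latin" plates)
    print(unicodedata.name('\u0660'))  # ARABIC-INDIC DIGIT ZERO (used on Arabic-script plates)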
Reviewer 2
Basic reporting
Some revisions are necessary on the presentation of the text itself. In my general comments to
the authors, I’ve pointed out some grammar errors, typos and unclear passages.
This seems to warrant a full revision of the text to ensure the promising results of the paper are
not obfuscated to readers.
Thank you, a full revision is done taking into account your valuable reviews and comments.
Experimental design
The manuscript describes a classification architecture for license plates from multiple countries,
which is also tested in common character and digit recognition datasets. Additionally it is also
tested in FashionMNIST, a popular small grayscale image dataset. To my understanding, the
strongest contributions of the paper are the proposed Full Depth CNN (FDCNN) model, and the
proposal of a new license plate dataset called LPALIC. They seem to successfully address the
knowledge gaps identified by the authors.
Regarding the FDCNN model, promising results are reported for all the studied datasets,
including state of the art results for MNIST when only stacked CNNs are considered, according
to the authors. I believe that the methodology description and results must be entirely
reproducible, especially since state of the art results in widely used benchmarks are involved.
Because of this, I believe the paper needs revisions to allow for easier reproduction and
verification of results, and also needs more justifications on certain choices in the methodology,
as explained in my comments further below.
In this current form of the manuscript, readers might not understand why many aspects of this
FDCNN architecture were chosen, or how they were tested and validated. The lack of the use of
a validation set, for instance, should be clarified, as directly optimizing a model on the test set
could potentially introduce biases. This is true even if the justifications are of an empirical
nature. I believe the authors should be able to revise the manuscript to make all of these points
clearer and further highlight and solidify their already strong results, before publication.
Agreed, a total revision were done taking into account your valuable comments
Validity of the findings
All results are well described and data and codes are provided, but the points mentioned about
experimental design and basic reporting must be addressed to ensure the validity of the results.
I believe once those matters are addressed and clarified the validity of the findings will be more
easily asserted.
Comments for the author
Line 36 - It would be interesting to additionally address or discuss these recent works, which also
reportedly provide state of the art accuracies for the same task of classifying MNIST:
Hirata and Takahashi; Ensemble Learning In CNN Augmented with Fully Connected
Subnetworks
Byerly et al.; A Branching and Merging Convolutional Network with Homogeneous Filter
Capsules
Assiri; Stochastic Optimization of Plain Convolutional Neural Networks with Simple methods
Kowsari et al.; RMDL: Random Multimodel Deep Learning for Classification
Thank you for this informative comment; a discussion is added at line 50.
We have updated Tables 7 and 9, and we have added some clarifying sentences regarding the
comparisons at lines 266 and 286.
Line 84 – method was used
Line 100 – very little research was done
Line 205 – shrink
The edits have been made.
Line 100 – This paragraph starts with the idea that there are very few multi-language license
plate datasets, but then cites works about license plate classification from what seems to be a
considerable variety of countries and alphabet types. Please consider clarifying this paragraph by
stating more explicitly how many datasets exist among the cited works, and all the different
languages and character types used. This can create a clearer picture for the reader, and help to
further highlight the new contributions of the paper.
Agreed; this can create a clearer picture for the reader. A paragraph is added at line 116
stating the used datasets more explicitly.
Lines 101 to 104 - The grammar in this sentence is unclear. Please restructure.
The lines were restructured, making the sentence clearer.
Line 117 – Please revise the usage of “concerned” in this sentence.
The sentence is revised.
Line 117 – Since the main contribution of the paper revolves around FDCNNs, it is important
that the reader gets a clear understanding of other types of CNNs, what makes them different,
and why the changes included in FDCNN are necessary. I feel that this information is currently
lacking in the paper, and it’s not entirely clear how the authors justified pursuing this particular
approach or why it’s necessary, or important, or better, to reduce featuremaps to 1x1 size before
classification. Please include a more detailed discussion/justification.
Thank you for this strong comment. A more detailed discussion/justification is added at lines 247
and 311.
But here is the general answer to your comment, and thanks again.
The main idea is to reduce the number of learnable parameters, since the state-of-the-art
approaches had a high number of parameters in the last FC layers. For example, VGG16 has
136M parameters in total; after the last pooling layer, the first FC layer has 102M parameters,
which is more than 75% of the architecture's parameters (in just one layer). Another example:
AlexNet has 62M parameters, and the first FC layer has 37.75M parameters, which is more than
60% of the architecture's parameters. But with a 1x1 feature-map size after the last pooling layer,
FDCNN has just 2090 parameters there, out of 1.6M parameters in total, as seen in Table 6.
In general, DNNs give weights to all input features (neurons) to produce the output neurons, but
this needs a huge number of parameters. Instead, in convolutions, the adjacent neurons are
convolved within the kernel size to produce the output neurons; FDCNN convolves each feature
map down to one neuron, resulting in a low number of parameters and high accuracy.
Line 119 – Please restructure the sentence so FDCNN is not repeated twice in a row.
The sentence is restructured.
Line 158 – It would greatly help readers to have a table or something similar where all countries
studied are listed, and what language/character types are used in license plates in that particular
country.
For the languages and character types, a new row is added to Table 3. For the countries list, a new
sentence is added at line 158 for the LPs collected from the EU. However, EU countries use the
same LP standard and style.
Figures 1 and 2 – These figures show some interesting differences between datasets. Namely,
Arabic alphabet LPs seem to vary much less in color. Could this possibly affect classification
accuracies? There seems to be a marked imbalance in the number of Latin and Arabic images in
the dataset. Would this warrant using additional metrics, besides accuracy, that are more
sensitive to these imbalances? Please discuss this in the manuscript.
Yes, there are some differences between datasets, because the Latin characters were collected
from 11 countries (which use different background and font colors) while the Arabic characters
were collected only from KSA (whose LPs have a white background and black characters). The reason
is related to the availability of those LPs for public use. As reported in the literature, most
of the research was done on private datasets, particularly in Arabic countries.
This study draws attention to the possibility of using CNNs for both Arabic and Latin
characters. For future work, more characters and languages will be included in our new paper to
test an end-to-end multinational and multilingual ALPR system beyond the CR stage only.
The classification of KSA characters was more accurate than that of Latin characters,
specifically for this dataset; this is why we verified FDCNN on handwritten characters and also
on FashionMNIST. Overall, we think the results show added value. Most studies use the accuracy
metric for the CR problem. We added a discussion at line 175 and at line 303.
Line 187 – Please avoid qualitative/subjective qualifiers such as “modest”, used here. There are
other examples of such adjectives in the manuscript. If necessary, compare directly to other
values used in the literature, by choosing appropriate metrics for the comparison.
Line 250 – See comment about line 187. The same applies to the usage of “modest” in this line.
At line 187, we mean that the learning rate is neither too small nor too large. The sentence has
been rewritten to avoid the qualitative qualifier.
At line 250, the word "modest" has been deleted and the sentence rewritten.
Line 251 – Please clarify the meaning of “needed iterations”. How is this defined?
It is the number of batches needed to complete one training epoch.
A clarification has been added at line 251.
Line 251 – Please specify batch size and momentum used. The supplemental files seem to show
the minibatch used had a size of 120. They also mention a LearnRateDropFactor of 0.9, whereas
the paper seems to mention one of 0.5 . It also seems that the standard momentum for Matlab’s
‘sgdm’ function is used, but this value is not stated explicitly in the paper. It would be interesting
to add it.
Since one of the results of the paper is a demonstrable improvement upon the state of the art of
stacked CNNs, it would be interesting, for reproducibility's sake, to include these hyperparameters in the methodology description.
Yes, the datasets do not all have the same number of instances. For most of them we used BZ=120,
but for others we adapted it; for example, AHCD has 13440 images, so we selected BZ=192 because
13440/192 = 70 training iterations per epoch.
The LearnRateDropFactor of 0.9 is not used in the main training file; it is used in the
fine-tuning files to run different trials and gain further improvements. The momentum used in
the paper is 0.95. We have added clarifying sentences at line 254.
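To make the batch-size choice concrete, here is a small illustrative Python sketch (training itself ran in MATLAB):

```python
# One epoch takes num_train / batch_size iterations; the batch size is chosen
# per dataset so that this is an integer. SGDM momentum was 0.95.
def iterations_per_epoch(num_train, batch_size):
    return num_train / batch_size

print(iterations_per_epoch(13440, 192))  # AHCD: 70 iterations per epoch
# For most other datasets the default batch size was 120.
```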
Line 251- Additionally, it would be interesting to discuss how these parameters were chosen. If
there were preliminary tests, or heuristics, how did other attempts affect the results, and by how
much?
We have rewritten the paragraph from line 249 to 254.
We ran many tests with different parameters; optimizing the SGDM involves many details that are
outside the main scope of this paper, but we mention at line 249 that these details are well
described in the (Goodfellow et al., 2016) book. These attempts did not affect the average
results much, and we feel that including more tables for all datasets would make the paper much
longer without adding anything valuable to the FDCNN design.
It would seem that no validation set was used, so how were metrics chosen? Was the model
adjusted to get the best result directly on the test set, without using a separate validation set?
Couldn’t this bias the models to adjust particularly well only to the test set?
Agreed, but most of the approaches tested on more than one dataset use the accuracy metric as
the main performance measurement for classification problems. Moreover, to make a fair
comparison, we used almost the same common comparison methodology for both the LP CR and
handwritten problems. The average accuracy performance shows good results across all the
different datasets, so we can say the model is not biased toward or adjusted to any particular
test set. For the references where a validation set was mentioned, we have added some sentences
at line 271.
Table 11 – The UAE test set is larger than the training set. I feel this decision should be justified
in the paragraph above the table. I could not see a justification for it as is.
We collected the UAE characters from two sources and decided to use one source as the test set,
as it seemed more difficult to classify. In the table, we stated that Latin characters from
other countries were used for training, but the test was done only on the test set; the average
accuracy was very good, since FDCNN was trained on a small set of UAE characters together with
other characters yet could learn features that yielded good accuracy. We have added a
justification at line 303.
" | Here is a paper. Please give your review comments after reading it. |
131 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Character recognition is an important research field of interest for many applications. In recent years, deep learning has made breakthroughs in image classification, especially for character recognition. However, convolutional neural networks (CNN) still deliver state-of-the-art results in this area. Motivated by the success of CNNs, this paper proposes a simple novel full depth stacked CNN architecture for Latin and Arabic handwritten alphanumeric characters that is also utilized for license plate (LP) character recognition. The proposed architecture is constructed from four convolutional layers, two max-pooling layers, and one fully connected layer. This architecture has low complexity, is fast and reliable, and achieves very promising classification accuracy that may move the field forward in terms of low complexity, high accuracy and full feature extraction. The proposed approach is tested on four benchmark handwritten character datasets, the Fashion-MNIST dataset, public LP character datasets and a newly introduced real LP isolated-character dataset. The proposed approach reports a test error of only 0.28% for MNIST, 0.34% for MADbase, 1.45% for AHCD, 3.81% for AIA9K, 5.00% for Fashion-MNIST, 0.26% for Saudi license plate characters and 0.97% for Latin license plate characters. The license plate characters include license plates from Turkey (TR), Europe (EU), USA, United Arab Emirates (UAE) and the Kingdom of Saudi Arabia (KSA).</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Character recognition (CR) plays a key role in many applications and motivates R&D in the field toward accurate and fast classification solutions. CR has been widely investigated in many languages using different proposed methods. In recent years, researchers have widely used CNNs as deep learning classifiers and achieved good results on handwritten alphanumeric characters in many languages <ns0:ref type='bibr' target='#b41'>(Lecun et al., 1998;</ns0:ref><ns0:ref type='bibr' target='#b0'>Abdleazeem and El-Sherif, 2008;</ns0:ref><ns0:ref type='bibr'>El-Sawy et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b26'>Guha et al., 2020)</ns0:ref>, character recognition in real-world images <ns0:ref type='bibr' target='#b48'>(Netzer et al., 2011)</ns0:ref>, document scanning, optical character recognition (OCR) and automatic license plate character recognition (ALPR) <ns0:ref type='bibr' target='#b14'>(Comelli et al., 1995)</ns0:ref>. Searching for text information in images is a time-consuming process that largely benefits from CR. In particular, in the Arabic language the connectivity of letters poses a challenge for classification <ns0:ref type='bibr' target='#b21'>(Eltay et al., 2020)</ns0:ref>. Therefore, isolated character datasets receive more interest in research.</ns0:p><ns0:p>MNIST is a handwritten digits dataset introduced by <ns0:ref type='bibr' target='#b41'>Lecun et al. (1998)</ns0:ref> and used to test supervised machine learning algorithms. The best accuracy obtained by stacked CNN architectures, until two years ago, was a test error rate of 0.35% in <ns0:ref type='bibr' target='#b11'>(Cireşan et al., 2010)</ns0:ref>, where a large, nine-layer deep CNN was used with elastic distortion applied to the input images. Narrowing the gap to human performance, a new architecture of five committees of seven deep CNNs, with six width normalizations and elastic distortion, was trained and tested in <ns0:ref type='bibr' target='#b13'>(Ciresan et al., 2011)</ns0:ref> and reported an error rate of 0.27%, where the main CNN had seven stacked layers. In <ns0:ref type='bibr' target='#b12'>(Ciregan et al., 2012)</ns0:ref>, a near-human error rate of 0.23% was achieved, where several techniques were combined in a novel way to build a multi-column deep neural network (MCDNN), inspired by the micro-columns of neurons in the cerebral cortex, with a number of layers comparable to that found between the retina and the visual cortex of macaque monkeys.</ns0:p><ns0:p>Recently, <ns0:ref type='bibr' target='#b45'>Moradi et al. (2019)</ns0:ref> developed a new CNN architecture with orthogonal feature maps based</ns0:p></ns0:div>
<ns0:div><ns0:p>on Residual modules of ResNet <ns0:ref type='bibr' target='#b27'>(He et al., 2016)</ns0:ref> and Inception modules of GoogleNet <ns0:ref type='bibr' target='#b59'>(Szegedy et al., 2015)</ns0:ref>, with 534474 learnable parameters, which equals the number of learnable parameters of SqueezeNet <ns0:ref type='bibr' target='#b33'>(Iandola et al., 2016)</ns0:ref>; this model reported an error of 0.28%. However, a CNN architecture for small input images of 20×20 pixels was proposed in <ns0:ref type='bibr' target='#b40'>(Le and Nguyen, 2019)</ns0:ref>. In addition, a multimodal deep learning architecture was proposed in <ns0:ref type='bibr' target='#b36'>(Kowsari et al., 2018)</ns0:ref>, where deep neural networks (DNN), CNN and recurrent neural networks (RNN) were used in one architecture design, achieving an error of 0.18%. A plain CNN with a stochastic optimization method was proposed in <ns0:ref type='bibr' target='#b3'>(Assiri, 2019)</ns0:ref>; this method applied regular Dropout layers after each pooling and fully connected (FC) layer, and this 15-layer stacked approach obtained an error of 0.17% with 13.21M parameters. <ns0:ref type='bibr' target='#b30'>Hirata and Takahashi (2020)</ns0:ref> proposed an architecture with one base CNN and multiple FC sub-networks; this 28-layer sparse architecture with 28.67M parameters obtained an error of 0.16%. <ns0:ref type='bibr' target='#b9'>Byerly et al. (2020)</ns0:ref> presented a CNN design with additional branches after certain convolutions, and from each branch they transformed each of the final filters into a pair of homogeneous vector capsules; this 21-layer sparse design obtained an error of 0.16%.</ns0:p><ns0:p>While MNIST has been well studied in the literature, there are only a few works on Arabic handwritten character recognition <ns0:ref type='bibr' target='#b0'>(Abdleazeem and El-Sherif, 2008)</ns0:ref>. The large Arabic Handwritten Digits database (AHDBase) was introduced in (El-<ns0:ref type='bibr' target='#b20'>Sherif and Abdelazeem, 2007)</ns0:ref>. <ns0:ref type='bibr' target='#b0'>Abdleazeem and El-Sherif (2008)</ns0:ref> modified AHDBase into MADBase and evaluated 54 different classifier/feature combinations, reporting a classification error of 0.52% utilizing a radial basis function (RBF) support vector machine (SVM). They also discussed the problem of the Arabic zero, which is just a dot and smaller than the other digits. They solved the problem by introducing a size-sensitive feature, namely the ratio of the digit bounding box area to the average bounding box area of all digits in AHDBase's training set. In the same context, Mudhsh and Almodfer (2017) obtained a validation error of 0.34% on the MADBase dataset by using an Alphanumeric VGG network inspired by VGGNet <ns0:ref type='bibr' target='#b55'>(Simonyan and Zisserman, 2015)</ns0:ref> with dropout regularization and data augmentation, but the error performance did not hold on the test set. <ns0:ref type='bibr' target='#b60'>Torki et al. (2014)</ns0:ref> introduced the AIA9K dataset and reported a classification error of 5.72% on the test set by using window-based descriptors with some common classifiers such as logistic regression, linear SVM, nonlinear SVM and artificial neural network (ANN) classifiers. 
<ns0:ref type='bibr' target='#b64'>Younis (2017)</ns0:ref> tested a CNN architecture and obtained an error of 5.2%; he proposed a stacked CNN of three convolution layers followed by batch normalization, rectified linear unit (ReLU) activation, dropout and two FC layers.</ns0:p><ns0:p>The AHCD dataset was introduced by El-Sawy et al. <ns0:ref type='bibr'>(2017)</ns0:ref>; they reported a classification error of 5.1% using a stacked CNN of two convolution layers, two pooling layers and two FC layers. <ns0:ref type='bibr' target='#b47'>Najadat et al. (2019)</ns0:ref> obtained a classification error of 2.8% by using a series CNN of four convolution layers activated by ReLU, two pooling layers and three FC layers. The state-of-the-art result for this dataset is a classification error of 1.58%, obtained by <ns0:ref type='bibr' target='#b56'>Sousa (2018)</ns0:ref> through ensemble averaging of four CNNs, two inspired by VGG16 and two written from scratch, with batch normalization and dropout regularization, forming a 12-layer architecture called VGG12.</ns0:p><ns0:p>For benchmarking machine learning algorithms on tiny grayscale images other than alphanumeric characters, <ns0:ref type='bibr' target='#b61'>Xiao et al. (2017)</ns0:ref> introduced the Fashion-MNIST dataset to serve as a direct replacement for the original MNIST dataset and reported a classification test error of 10.3% using SVM. This dataset gained the attention of many researchers testing their approaches, and a better error of 3.65% was achieved by <ns0:ref type='bibr' target='#b68'>Zhong et al. (2017)</ns0:ref>, in which random erasing augmentation was used with wide residual networks (WRN) <ns0:ref type='bibr' target='#b66'>(Zagoruyko and Komodakis, 2016)</ns0:ref>. The state-of-the-art performance for Fashion-MNIST is an error of 2.34%, reported in <ns0:ref type='bibr' target='#b67'>(Zeng et al., 2018)</ns0:ref> using a deep collaborative weight-based classification method based on VGG16. Recently, a modelling and optimization based method was used <ns0:ref type='bibr' target='#b10'>(Chou et al., 2019)</ns0:ref> to optimize the parameters of a multi-layer (16-layer) CNN, reporting errors of 8.32% and 0.57% for Fashion-MNIST and MNIST respectively.</ns0:p><ns0:p>ALPR is a group of techniques that use CR modules to recognize a vehicle's LP number. Sometimes it is also referred to as license plate detection and recognition (LPDR). ALPR is used in many real-life applications <ns0:ref type='bibr' target='#b17'>(Du et al., 2013)</ns0:ref> such as electronic toll collection, traffic control and security. The main challenges in the detection and recognition of license plates are the variations in plate types, environments, languages and fonts. Both CNN and traditional approaches are used to solve vehicle license plate recognition problems. Traditional approaches involve computer vision, image processing and pattern recognition algorithms based on features such as color, edge and morphology <ns0:ref type='bibr' target='#b62'>(Xie et al., 2018)</ns0:ref>. A typical ALPR system consists of three modules: plate detection, character segmentation and CR modules (Shyang-Lih <ns0:ref type='bibr' target='#b54'>Chang et al., 2004)</ns0:ref>. This research focuses on CR techniques and compares them with the proposed CR approach. 
CR modules need an off-line training phase to train a classifier on each isolated character using a set of manually cropped character images <ns0:ref type='bibr' target='#b8'>(Bulan et al., 2017)</ns0:ref>. Excessive operational time, cost and effort must be considered when character images need to be manually cropped, collected and labeled for training and testing; to overcome this, artificially generated synthetic license plates have been proposed <ns0:ref type='bibr' target='#b7'>(Bulan et al., 2015)</ns0:ref>.</ns0:p><ns0:p>Additionally, very little research has been done on multi-language LP character recognition, mostly due to the lack of multi-language LP datasets. Some recent studies have been interested in introducing a global ALPR system. <ns0:ref type='bibr' target='#b1'>Asif et al. (2017)</ns0:ref> studied only the LP detection module using a histogram-based approach, and a private dataset was used, which comprised LPs from Hungary, America, Serbia, Pakistan, Italy, China and UAE <ns0:ref type='bibr' target='#b1'>(Asif et al., 2017)</ns0:ref>. VGG and LSTM were proposed for the CR module in <ns0:ref type='bibr' target='#b16'>(Dorbe et al., 2018)</ns0:ref>, and the measured CR module accuracy was 96.7%, where the test was done on LPs from Russia, Poland, Latvia, Belarus, Estonia, Germany, Lithuania, Finland and Sweden. Also, tiny YOLOv3 was used as a unified CR module for LPs from Greece, USA, Croatia, Taiwan and South Korea <ns0:ref type='bibr' target='#b29'>(Henry et al., 2020)</ns0:ref>. Furthermore, several proposed methods addressed multi-language LP CR by testing CR modules on each country's dataset separately, without accumulating the characters into one dataset <ns0:ref type='bibr' target='#b42'>(Li et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b65'>Yépez et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b2'>Asif et al., 2019)</ns0:ref>. In addition, <ns0:ref type='bibr' target='#b53'>Selmi et al. (2020)</ns0:ref> proposed a mask R-CNN detector for character segmentation and recognition of Arabic and English LP characters from Tunisia and USA. <ns0:ref type='bibr' target='#b50'>Park et al. (2019)</ns0:ref> addressed USA and Korean LPs, describing the problem as multi-style detection. A CNN shrinkage-based architecture was studied in <ns0:ref type='bibr' target='#b51'>(Salemdeeb and Erturk, 2020)</ns0:ref>, utilizing the maximum number of convolutional layers that can be added. <ns0:ref type='bibr' target='#b51'>Salemdeeb and Erturk (2020)</ns0:ref> studied the LP detection and country classification problem for multinational and multi-language LPs from Turkey, Europe, USA, UAE and KSA, without studying the CR problem. These studies covered LPs from 23 different countries, most of which use Latin characters to write the LP number, and a total of five languages were concerned (English, Taiwanese, Korean, Chinese and Arabic). In Taiwan, Korea, China, UAE, Tunisia and KSA, the LP number is written using Latin characters, but the city information is coded using characters from that country's language.</ns0:p><ns0:p>In this paper, Arabic and Latin isolated characters are targeted for recognition using a proposed full depth CNN (FDCNN) architecture, with the USA, the EU and the Middle East as the regions of interest. To verify the performance of the proposed FDCNN, isolated handwritten Arabic and Latin character benchmarks such as the MNIST, MADbase, AHCD and AIA9K datasets are also tested. 
Also, a new dataset named LP Arabic and Latin Isolated Characters (LPALIC) is introduced and tested. In addition, the recent FashionMNIST dataset is also tested to generalize the full depth feature extraction approach to tiny grayscale images. The proposed FDCNN approach closes the gap between software and hardware implementation, since it provides low complexity and high performance. All the trained models and the LPALIC dataset 1 are made publicly available online for the research community and future tests.</ns0:p><ns0:p>The rest of this paper is organized as follows: section 2 introduces the structure of the datasets used in this paper, including the new LPALIC dataset. In section 3, the proposed approach is described in detail.</ns0:p><ns0:p>Section 4 presents a series of experimental results and discussions. Finally, section 5 summarizes the main points of the entire work as a conclusion.</ns0:p></ns0:div>
<ns0:div><ns0:head>DATASETS Datasets Available in the Literature</ns0:head><ns0:p>MNIST is a low-complexity data collection of handwritten digits used to test supervised machine learning algorithms, introduced by <ns0:ref type='bibr' target='#b41'>Lecun et al. (1998)</ns0:ref>. It has grayscale images of size 28×28 pixels, with 60000 training digits and 10000 test digits written by different persons. The digits are white on a black background, normalized to 20×20 pixels preserving the aspect ratio, and then centered at the center of mass of the 28×28 pixel grayscale images. The official site for the dataset and results is available from LeCun 2 .</ns0:p><ns0:p>In MADbase, 700 Arabic native writers wrote ten digits ten times, and the images were collected as 70000 binary images: 60000 for training and 10000 for testing, such that the writers of the training set and the test set are exclusive. This dataset 3 has the same format as MNIST to ensure the validity of comparisons between digit (used in Arabic and English languages) recognition approaches. 1 https://www.kaggle.com/dataset/b4697afbddab933081344d1bed3f7907f0b2b2522f637adf15a5fcea67af2145 2 http://yann.lecun.com/exdb/mnist/ 3 http://datacenter.aucegypt.edu/shazeem</ns0:p></ns0:div>
<ns0:div><ns0:p>Table <ns0:ref type='table'>1</ns0:ref> shows example digits of printed Latin, Arabic and handwritten Arabic characters used for numbers, as declared in ISO/IEC 8859-6:1999.</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref>. Printed and handwritten digits. <ns0:ref type='table' target='#tab_12'>Characters 0 1 2 3 4 5 6 7 8</ns0:ref></ns0:p><ns0:p>AHCD is composed of 13440 training images and 3360 test images for 28 Arabic handwritten letters (classes) of size 32×32 pixel grayscale images. In the AIA9K dataset, 62 female and 45 male Arabic native writers, aged between 18 and 25 years old, at the Faculty of Engineering at Alexandria University, Egypt, were invited to write all the Arabic letters 3 times to gather 8988 letters, of which 8737 32×32 grayscale letter images were accepted after a verification process eliminating cropping errors, writer mistakes and unclear letters. The FashionMNIST dataset has images of 70000 unique products taken by professional photographers. The thumbnails (51×73) were converted to 28×28 grayscale images by the conversion pipeline declared in <ns0:ref type='bibr' target='#b61'>(Xiao et al., 2017)</ns0:ref>. It is composed of 60000 training images and 10000 test images with 10 class labels.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_0'>2</ns0:ref> gives a brief review of some publicly available LP datasets related to the LPDR problem. The Zemris dataset is also called English LP in some references <ns0:ref type='bibr' target='#b49'>(Panahi and Gholampour, 2017)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Novel License Plate Characters Dataset</ns0:head><ns0:p>This research introduces a new multi-language LP characters dataset, involving both Latin and Arabic characters from LP images used in Turkey, USA, UAE, KSA and the EU (Croatia, Greece, Czech Republic, France, Germany, Serbia, Netherlands and Belgium). It is called the LPALIC dataset. In addition, some characters cropped from Brazil, India and other countries were added, for training only, to give feature diversity. Furthermore, some characters were collected from public LP datasets, LP websites and our own camera pictures taken in Turkey under different weather conditions, places, blurring, distances, tilts and illuminations. These characters are real, manually cropped LP characters without any filtering. For uniformity, a grayscale image size of 28×28 pixels was utilized.</ns0:p><ns0:p>The manually cropped characters were fed into the following conversion pipeline, inspired by FashionMNIST <ns0:ref type='bibr' target='#b61'>(Xiao et al., 2017)</ns0:ref>, which is similar to MNIST <ns0:ref type='bibr' target='#b41'>(Lecun et al., 1998)</ns0:ref>; a short code sketch of this pipeline is given after this subsection:</ns0:p><ns0:p>1. Resizing the longest edge of the image to 24 pixels, to preserve the aspect ratio.</ns0:p><ns0:p>2. Converting the image to an 8-bit grayscale image.</ns0:p><ns0:p>3. Negating the intensities of the image to get a white character on a black background.</ns0:p><ns0:p>4. Computing the center of mass of the pixels.</ns0:p><ns0:p>5. Translating the image to put the center of mass at the center of the 28×28 grayscale image.</ns0:p><ns0:p>Some samples of the LPALIC dataset are visualized in Figure <ns0:ref type='figure' target='#fig_3'>1</ns0:ref> for Latin characters and in Figure <ns0:ref type='figure' target='#fig_4'>2</ns0:ref> for Arabic characters. Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref> illustrates the total number of Arabic and Latin characters included in the LPALIC dataset.</ns0:p></ns0:div>
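<ns0:p>A minimal sketch of this pipeline is given below, using Pillow and NumPy; the tooling is an assumption for illustration, as the original implementation is not specified.</ns0:p>

```python
import numpy as np
from PIL import Image

def normalize_character(img, canvas=28, longest=24):
    # 1. Resize so the longest edge is 24 pixels, preserving the aspect ratio.
    scale = longest / max(img.size)
    img = img.resize((max(1, round(img.width * scale)),
                      max(1, round(img.height * scale))))
    # 2. Convert to 8-bit grayscale.
    gray = np.asarray(img.convert("L"), dtype=np.float64)
    # 3. Negate intensities: white character on a black background.
    gray = 255.0 - gray
    # 4. Center of mass of the pixel intensities.
    total = gray.sum() or 1.0
    ys, xs = np.indices(gray.shape)
    cy = (ys * gray).sum() / total
    cx = (xs * gray).sum() / total
    # 5. Paste into a 28x28 canvas so the center of mass lands at its center.
    out = np.zeros((canvas, canvas))
    top, left = int(round(canvas / 2 - cy)), int(round(canvas / 2 - cx))
    for y in range(gray.shape[0]):
        for x in range(gray.shape[1]):
            ty, tx = y + top, x + left
            if 0 <= ty < canvas and 0 <= tx < canvas:
                out[ty, tx] = gray[y, x]
    return out.astype(np.uint8)
```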
<ns0:div><ns0:head>PROPOSED APPROACH</ns0:head><ns0:p>Stacked CNN architecture is simple, where each layer has a single input and a single output. For small images, the key efficient simple deep learning architecture was LeNet-5 <ns0:ref type='bibr' target='#b41'>(Lecun et al., 1998)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Proposed Architecture</ns0:head><ns0:p>The core of the proposed model is the convolution block, which is a convolutional layer followed by a batch normalization (BN) layer <ns0:ref type='bibr' target='#b34'>(Ioffe and Szegedy, 2015)</ns0:ref> and a non-linear ReLU activation layer <ns0:ref type='bibr' target='#b38'>(Krizhevsky et al., 2012)</ns0:ref>. This block is called a standard convolutional layer in <ns0:ref type='bibr' target='#b31'>(Howard et al., 2017)</ns0:ref>. The proposed convolutional layers have kernels of size 5 × 5 with a single stride. This kernel size showed good feature extraction capability in LeNet-5 <ns0:ref type='bibr' target='#b41'>(Lecun et al., 1998)</ns0:ref> for small images, as it covers 3.2% of the input image in every stride. However, the recent trend is to replace a 5 × 5 kernel with 2 layers of 3 × 3 kernels, as in InceptionV3 <ns0:ref type='bibr' target='#b58'>(Szegedy et al., 2016)</ns0:ref>. Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref> shows the architecture design of the proposed model. For a mini-batch B = {x_1, x_2, ..., x_m} of size m, the mean µ_B and variance σ²_B of B are computed and each input image in the mini-batch is normalized according to Equation (1).</ns0:p><ns0:formula xml:id='formula_0'>\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \varepsilon}} \quad (1)</ns0:formula><ns0:p>where ε is a constant and \hat{x}_i is the i-th normalized image, scaled by the learnable scale parameter γ and shifted by the learnable shift parameter β, producing the i-th normalized output image y_i <ns0:ref type='bibr' target='#b34'>(Ioffe and Szegedy, 2015)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_1'>y_i = \mathrm{BN}_{\gamma,\beta}(x_i) = \gamma \hat{x}_i + \beta \quad (2)</ns0:formula><ns0:p>Motivated by the LeNet-5 5 × 5 convolution kernel, the BN used in InceptionV3 and the ReLU used in AlexNet, the proposed model's convolution block is built as in Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>. The output feature map (FM) of each convolution block is smaller than the input feature map if no additional padding is applied. Equation (3) describes the relation between input and output FM sizes <ns0:ref type='bibr' target='#b25'>(Goodfellow et al., 2016)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_2'>W_y = \frac{W_x - W_k + 2P}{W_s} + 1 \quad (3)</ns0:formula><ns0:p>where W_y is the width of the output, W_x is the width of the input, W_k is the width of the kernel, W_s is the width of the stride and P is the number of padding pixels. For the height H, Equation (3) can be used by replacing W with H. This reduction is called the shrinkage of convolution, and it limits the number of convolutional layers that the network can include <ns0:ref type='bibr' target='#b25'>(Goodfellow et al., 2016)</ns0:ref>. The feature map shrinks from the borders to the center as convolutional layers are added. Eventually, the feature maps drop to 1 × 1 × channels (a single neuron per channel), at which point no more convolutional layers can be added. This is the concept of full depth used for designing the proposed architecture; Figure <ns0:ref type='figure' target='#fig_8'>5</ns0:ref> describes the full depth idea in FDCNN, where width and height shrink by 4 per convolution according to Equation (3). In Figure <ns0:ref type='figure' target='#fig_8'>5</ns0:ref>, each feature map is shrunk to a single value, which means the features are convolved into a single value, resulting in a low number of parameters and high accuracy.</ns0:p><ns0:p>The proposed FDCNN model is composed basically of two stacked convolutional stages and one FC layer for 28 × 28 input images. 
Every stage has two convolution blocks and one max-pooling layer. It has a single input and a single output in all of its layers. Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref> shows the FDCNN architecture.</ns0:p></ns0:div>
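<ns0:p>For clarity, the following is a minimal NumPy sketch of the batch normalization transform of Equations (1) and (2); this illustration is ours, as the experiments used MATLAB's built-in layers.</ns0:p>

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """x: mini-batch of feature maps for one channel, shape (m, H, W)."""
    mu = x.mean()                          # mini-batch mean, Eq. (1)
    var = x.var()                          # mini-batch variance, Eq. (1)
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalized input, Eq. (1)
    return gamma * x_hat + beta            # learnable scale and shift, Eq. (2)
```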
<ns0:div><ns0:head>Parameter Selection</ns0:head><ns0:p>In the proposed architecture, some parameters have to be selected: the kernel sizes of the convolutional layers, the kernel sizes of the pooling layers, the number of filters (channels) in the convolutional layers, and the strides. The kernel sizes are selected to be 5 × 5 for convolutional layers and 2 × 2 for pooling layers, as described in the previous Proposed Architecture section.</ns0:p><ns0:p>In the literature, the trend in selecting the number of filters is to increase the number of filters the deeper the network goes <ns0:ref type='bibr' target='#b38'>(Krizhevsky et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b59'>Szegedy et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b55'>Simonyan and Zisserman, 2015;</ns0:ref><ns0:ref type='bibr' target='#b27'>He et al., 2016)</ns0:ref>. Generally, the first convolutional layers learn simple features while deeper layers learn more abstract features. Selecting optimal parameters is based on heuristics or grid searches <ns0:ref type='bibr' target='#b6'>(Bengio, 2012)</ns0:ref>. The rule of thumb when designing a network from scratch is to start with 8-64 filters per layer and double the number of filters after each pooling layer <ns0:ref type='bibr' target='#b55'>(Simonyan and Zisserman, 2015)</ns0:ref> or after each convolutional layer <ns0:ref type='bibr' target='#b27'>(He et al., 2016)</ns0:ref>. Recently, a new method was proposed to select the number of filters <ns0:ref type='bibr' target='#b23'>(Garg et al., 2018)</ns0:ref>: an optimization of the network structure, in terms of both the number of layers and the number of filters per layer, was done using principal component analysis on the trained network, with a single shot of doubling the number of filters also applied.</ns0:p><ns0:p>One of the contributions of this research is to select the number of channels that achieves full depth. The number of filters may also be called the number of kernels, the number of layer channels or the layer width. The number of filters is selected to be the same as the number of shrinking pixels in each layer, from bottom to top. Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref> shows the shrinkage of the proposed model. Given that the network goes deeper, the following selection is made:</ns0:p><ns0:p>• The width of the 4th convolutional layer is 208 (the 1st layer's shrinkage).</ns0:p><ns0:p>• The width of the 3rd is 176 (the 2nd layer's shrinkage).</ns0:p><ns0:p>• The max-pooling causes a loss of half of the FM dimensions, so the shrinkage pixels of the subsequent layers are doubled.</ns0:p><ns0:p>• The width of the 2nd is 128 (double the 3rd layer's shrinkage).</ns0:p></ns0:div>
<ns0:div><ns0:p>• The width of the 1st is 64 (double the 4th layer's shrinkage), as reproduced in the sketch below.</ns0:p></ns0:div>
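<ns0:p>A short sketch reproducing the shrinkage values of Table 4 from Equation (3) is given below (illustrative Python; function names are ours).</ns0:p>

```python
# Each 5x5 convolution (stride 1, no padding) maps a WxW feature map to
# (W-4)x(W-4); the number of shrinking pixels sets the layer widths.
def out_size(w_in, k=5, p=0, s=1):
    return (w_in - k + 2 * p) // s + 1     # Equation (3)

w = 28
for name in ["Conv1", "Conv2", "Conv3", "Conv4"]:
    w_out = out_size(w)
    print(name, "shrinking pixels:", w * w - w_out * w_out)
    w = w_out
    if name in ("Conv2", "Conv4"):         # 2x2 max-pooling halves the map
        w //= 2
# Prints 208, 176, 64, 32; reading the widths bottom-up (doubling below each
# pooling layer) gives 64, 128, 176, 208, matching Table 4.
```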
<ns0:div><ns0:head>Training Process</ns0:head><ns0:p>Deep learning training algorithms are well explained in <ns0:ref type='bibr' target='#b25'>(Goodfellow et al., 2016)</ns0:ref>. The proposed model is trained using stochastic gradient descent with momentum (SGDM), with custom parameters chosen after many trials: an initial learning rate (LR) of 0.025, a mini-batch size equal to the number of training instances divided by the number of batches needed to complete one epoch, an LR drop factor of one half every 2 epochs, 10 epochs, a momentum of 0.95, and shuffling of the training set every epoch. However, these training parameters are not used for all datasets, since the number of images is not the same in all of them. After obtaining the first results, the model parameters are tuned by training again with ADAM, with a larger mini-batch size and a very small LR starting at 1 × 10^-5, then multiplying the batch size by 2 and the LR by 1/2 every 10 epochs as long as the test error keeps improving.</ns0:p></ns0:div>
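<ns0:p>For illustration, a minimal sketch of the learning-rate schedule described above is given below; the actual training used MATLAB's SGDM and ADAM solvers, so this Python sketch only reproduces the schedule arithmetic.</ns0:p>

```python
initial_lr, epochs, momentum = 0.025, 10, 0.95

def lr_at_epoch(epoch):
    # The learning rate is halved every 2 epochs.
    return initial_lr * (0.5 ** (epoch // 2))

for e in range(epochs):
    print(f"epoch {e}: lr = {lr_at_epoch(e):.6f}")

# Fine-tuning pass with ADAM: start at lr = 1e-5, then double the mini-batch
# size and halve the lr every 10 epochs, as long as the test error improves.
```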
<ns0:div><ns0:head>EXPERIMENTAL RESULTS AND DISCUSSION</ns0:head><ns0:p>All training and testing were done on the MATLAB 2018 platform with a GeForce 1060 GPU (6GB shared memory). The main goal of this research is to design a CNN to recognize multi-language license plate characters; however, to generalize and verify the designed architecture, several tests on handwritten character recognition benchmarks were also performed (the verification process). The proposed approach showed very promising results. Table <ns0:ref type='table' target='#tab_8'>7</ns0:ref> summarizes the results obtained on the MNIST dataset. It is clear that stacked CNNs have not outperformed the error of 0.35% in the literature for MNIST, though the approach used in <ns0:ref type='bibr' target='#b3'>(Assiri, 2019)</ns0:ref> obtained 0.17%. The proposed FDCNN approximately reached the performance of the five-committee CNN of <ns0:ref type='bibr' target='#b13'>(Ciresan et al., 2011)</ns0:ref>. FDCNN achieves the same performance as <ns0:ref type='bibr' target='#b45'>(Moradi et al., 2019)</ns0:ref>, which is a sparse design that uses Residual blocks and Inception blocks, as described in the literature. However, the architecture in <ns0:ref type='bibr' target='#b3'>(Assiri, 2019)</ns0:ref> has 15 layers with 13.12M parameters while FDCNN has 12 layers with 1.69M parameters, which means that FDCNN is simpler and 7 times faster (in terms of the number of parameters, 13.12/1.69). The results in <ns0:ref type='bibr' target='#b3'>(Assiri, 2019)</ns0:ref> were obtained utilizing data augmentation (not used in FDCNN training), different training processes (the FDCNN training process is simpler, as described in the previous section) and Dropout layers before and after each pooling layer with different settings; FDCNN has no Dropout layer and still showed good results on MNIST.</ns0:p><ns0:p>On the other hand, the proposed approach is tested on the MADbase, AHCD and AIA9K datasets as Arabic character recognition benchmarks, to verify FDCNN and to generalize its use to Arabic ALPR systems. Table <ns0:ref type='table' target='#tab_9'>8</ns0:ref> reports the classification errors against the state of the art on these datasets. The same logic of the size-sensitive feature proposed in <ns0:ref type='bibr' target='#b0'>(Abdleazeem and El-Sherif, 2008)</ns0:ref> is used to solve the problem of the Arabic zero character, by half-size reduction of the Arabic zero character images (in the MADbase dataset), since it has a smaller size than the other characters.</ns0:p><ns0:p>As seen in Table <ns0:ref type='table' target='#tab_9'>8</ns0:ref>, for the MADbase dataset, most of the tested approaches were based on the VGG architecture. Alphanumeric VGG <ns0:ref type='bibr' target='#b46'>(Mudhsh and Almodfer, 2017)</ns0:ref> reported a validation error of 0.34% that did not hold on the test set, while FDCNN obtained a 0.15% validation error and a 0.34% test error. The proposed approach outperformed the state of the art on the Arabic character recognition benchmarks, for both the digits and the letters used in the Arabic language, with fewer layers and learnable parameters. It has succeeded in this verification process on these datasets too.</ns0:p></ns0:div>
<ns0:div><ns0:p>In Table 8, the input layer is included in the count of layers <ns0:ref type='bibr' target='#b41'>(Lecun et al., 1998)</ns0:ref> for all architectures; the ReLU layer is not considered a layer, but BN is. <ns0:ref type='bibr' target='#b56'>Sousa (2018)</ns0:ref> considered convolution, pooling and FC layers when the number of layers was declared, but four trained CNNs were used with softmax averaging, which is why the numbers of layers and learnable parameters are high. <ns0:ref type='bibr' target='#b47'>Najadat et al. (2019)</ns0:ref> did not declare most of the network parameters, such as the kernel size of every convolution layer, and they changed many parameters to enhance the model. In <ns0:ref type='bibr' target='#b64'>(Younis, 2017)</ns0:ref>, 28 × 28 input images were used and no pooling layers were included.</ns0:p><ns0:p>On the other hand, in the same verification process, the proposed approach is also tested on the FashionMNIST benchmark to generalize its use to tiny grayscale images. As shown in Table <ns0:ref type='table'>9</ns0:ref>, the proposed approach outperformed the stacked CNN architectures and came close to the DENSER network in <ns0:ref type='bibr' target='#b5'>(Assunção et al., 2018)</ns0:ref> and EnsNet in <ns0:ref type='bibr' target='#b30'>(Hirata and Takahashi, 2020)</ns0:ref>, with fewer layers and parameters but good performance. It can be said that FDCNN has a very good verification performance on the FashionMNIST dataset. FDCNN outperformed the results of <ns0:ref type='bibr' target='#b9'>(Byerly et al., 2020)</ns0:ref> on the Fashion-MNIST benchmark, while <ns0:ref type='bibr' target='#b9'>(Byerly et al., 2020)</ns0:ref> outperformed FDCNN on MNIST.</ns0:p><ns0:p>Table <ns0:ref type='table'>9</ns0:ref>. Test results of FDCNN on FashionMNIST (Architecture: Type, Layers, Parameters, Error).</ns0:p><ns0:p>SVM <ns0:ref type='bibr' target='#b61'>(Xiao et al., 2017)</ns0:ref>: linear, -, -, 10.3%</ns0:p><ns0:p>DENSER <ns0:ref type='bibr' target='#b5'>(Assunção et al., 2018)</ns0:ref>: sparse, -, -, 4.7%</ns0:p><ns0:p>WRN <ns0:ref type='bibr' target='#b68'>(Zhong et al., 2017)</ns0:ref>: sparse, 28, 36.5M, 3.65%</ns0:p><ns0:p>VGG16 <ns0:ref type='bibr' target='#b67'>(Zeng et al., 2018)</ns0:ref>: sparse, 16, 138M, 2.34%</ns0:p><ns0:p>CNN <ns0:ref type='bibr' target='#b10'>(Chou et al., 2019)</ns0:ref>: stacked, 16, 0.44M, 8.32%</ns0:p><ns0:p>BRCNN <ns0:ref type='bibr' target='#b9'>(Byerly et al., 2020)</ns0:ref>: sparse, 16, 1.51M, 6.34%</ns0:p><ns0:p>EnsNet <ns0:ref type='bibr' target='#b30'>(Hirata and Takahashi, 2020)</ns0:ref>: sparse, 28, 28.67M</ns0:p><ns0:p>Proposed: stacked, 12, 1.69M, 5.00%</ns0:p></ns0:div>
<ns0:div><ns0:p>All of the previous datasets were divided into training and test sets by their authors, where the instances in the test set were collected from a different source than those of the training set (different writers for CR and different photographers for fashion). The performance evaluation is based on the CNN type (stacked is simpler than sparse), the number of layers, the number of learnable parameters and the recognition error.</ns0:p><ns0:p>Furthermore, FDCNN is also tested on Arabic LP characters from KSA. Although <ns0:ref type='bibr' target='#b35'>Khaled et al. (2010)</ns0:ref> used their dataset for both training and testing, FDCNN could classify the whole dataset (as a test set) of <ns0:ref type='bibr' target='#b35'>(Khaled et al., 2010)</ns0:ref> with an error of 0.46%, whereas its training was done on characters collected and cropped manually from public KSA LP images. It outperformed the recognition error of 1.78% in <ns0:ref type='bibr' target='#b35'>(Khaled et al., 2010)</ns0:ref>. FDCNN has thus been successfully verified on the KSA Arabic LP characters dataset.</ns0:p><ns0:p>In this research, for more verification, FDCNN performance is also tested on both common publicly available LP benchmark characters and the new LPALIC dataset. Table 10 shows the promising results on LP benchmarks: FDCNN outperformed the state-of-the-art results on common LP datasets for the isolated character recognition problem. The Zemris, UCSD, Snapshots and ReId datasets were not used in the training process, but the proposed FDCNN was tested on each of them as a test set, to ensure that the model was fitted to character features and not to a dataset itself. For the UFPR dataset, FDCNN was tested twice on the UFPR test set: trained only on the UFPR training set, and trained on both UFPR and LPALIC characters. It is clear that FDCNN is efficiently verified on common LP benchmarks.</ns0:p><ns0:p>For more analysis, another test is made on the introduced LPALIC dataset to analyze the recognition error per country; Table 11 describes the results. As seen in Table <ns0:ref type='table' target='#tab_13'>11</ns0:ref>, the highest error is in classifying USA LP characters, because USA plates have more colors, drawings and shapes other than characters, and also there is a small number of instances in the characters dataset. However, a very high recognition accuracy is achieved on Turkey and the EU, since they use the same standard and style for LPs. In Turkey, 10 digits and 23 letters are used, since letters like Q, W and X are not valid in the Turkish language. Additionally, FDCNN could classify Arabic LP characters with a very low error. The UAE character set has a small number of cropped characters, which is why it is tested only with FDCNN trained on the character sets of other countries.</ns0:p><ns0:p>To make the tests robust, the characters were split both manually and randomly, as seen in Table <ns0:ref type='table' target='#tab_13'>11</ns0:ref>. In the manual split, the most difficult characters (those that were difficult to label manually during the dataset preparation stage) were put in the test set and the others in the training set, while in the random split 80% were used for training and the rest for testing.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>This research focused on the deep learning technique of CNNs to recognize multi-language LP characters, covering both the Latin and Arabic characters used in vehicle LPs. A new approach was proposed, analyzed and tested on Latin and Arabic CR benchmarks for both LP and handwritten character recognition. The proposed approach consists of the FDCNN architecture, FDCNN parameter selection and the training process. The proposed full depth and width selection ideas are very efficient in extracting features from tiny grayscale images. The complexity of FDCNN is also analyzed in terms of the number of learnable parameters and feature map memory usage. The full depth concept of reducing the feature map size to one neuron has decreased the total number of learnable parameters while achieving very good results.</ns0:p><ns0:p>The implementation of the FDCNN approach is simple, and it can be used in real-time applications running on small devices like mobiles, tablets and some embedded systems. Very promising results were achieved on some common benchmarks like MNIST, FashionMNIST, MADbase, AIA9K, AHCD, Zemris, ReId, UFPR and the newly introduced LPALIC dataset. FDCNN performance is verified and compared to the state-of-the-art results in the literature. A new dataset of real cropped LP characters is also introduced; it is the largest dataset of LP characters for Turkey and KSA. For future work, more tests can be done on FDCNN as the core of a CNN processor. Also, more experiments can be conducted to hybridize FDCNN with some common blocks like residual and inception blocks. Additionally, the proposed full depth approach may be applied to other stacked CNNs like AlexNet and VGG networks.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Samples of Latin characters in the LPALIC dataset.</ns0:figDesc><ns0:graphic coords='6,192.82,158.33,311.40,299.40' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Samples of Arabic characters in the LPALIC dataset.</ns0:figDesc><ns0:graphic coords='7,180.58,63.78,335.89,259.84' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Proposed FDCNN model architecture.</ns0:figDesc><ns0:graphic coords='7,141.73,540.01,413.59,119.12' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Proposed model convolution blocks.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Full Depth concept of FDCNN.</ns0:figDesc><ns0:graphic coords='9,264.56,103.93,82.70,228.09' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>A review of publicly available ALPR datasets.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Approach</ns0:cell><ns0:cell>Number of Images</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Classifier</ns0:cell><ns0:cell>Character Set</ns0:cell><ns0:cell>Purpose</ns0:cell></ns0:row><ns0:row><ns0:cell>Zemris</ns0:cell><ns0:cell>Kraupner (2003)</ns0:cell><ns0:cell>510</ns0:cell><ns0:cell>86.2%</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>LPDR</ns0:cell></ns0:row><ns0:row><ns0:cell>UCSD</ns0:cell><ns0:cell>Dlagnekov (2005)</ns0:cell><ns0:cell>405</ns0:cell><ns0:cell>89.5%</ns0:cell><ns0:cell>OCR</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>LPDR</ns0:cell></ns0:row><ns0:row><ns0:cell>Snapshots</ns0:cell><ns0:cell>Martinsky (2007)</ns0:cell><ns0:cell>97</ns0:cell><ns0:cell>85%</ns0:cell><ns0:cell>MLP</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>LPDR</ns0:cell></ns0:row><ns0:row><ns0:cell>ARG</ns0:cell><ns0:cell>Fernández et al. (2011)</ns0:cell><ns0:cell>730</ns0:cell><ns0:cell>95.8%</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>LPDR</ns0:cell></ns0:row><ns0:row><ns0:cell>SSIG</ns0:cell><ns0:cell>Gonc ¸alves et al. (2016)</ns0:cell><ns0:cell>2000</ns0:cell><ns0:cell>95.8%</ns0:cell><ns0:cell>SVM-OCR</ns0:cell><ns0:cell>Yes</ns0:cell><ns0:cell>LPDR</ns0:cell></ns0:row><ns0:row><ns0:cell>ReId</ns0:cell><ns0:cell>Špaňhel et al. (2017)</ns0:cell><ns0:cell>77k</ns0:cell><ns0:cell>96.5%</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>LPR</ns0:cell></ns0:row><ns0:row><ns0:cell>UFPR</ns0:cell><ns0:cell>Laroca et al. (2018)</ns0:cell><ns0:cell>4500</ns0:cell><ns0:cell>78.33%</ns0:cell><ns0:cell>CR-NET</ns0:cell><ns0:cell>Yes</ns0:cell><ns0:cell>LPDR</ns0:cell></ns0:row><ns0:row><ns0:cell>CCPD</ns0:cell><ns0:cell>Xu et al. (2018)</ns0:cell><ns0:cell>250k</ns0:cell><ns0:cell>95.2%</ns0:cell><ns0:cell>RPnet</ns0:cell><ns0:cell>Yes</ns0:cell><ns0:cell>LPDR</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>LPALIC dataset number of cropped characters per country.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Country</ns0:cell><ns0:cell>TR</ns0:cell><ns0:cell>EU</ns0:cell><ns0:cell>USA</ns0:cell><ns0:cell>UAE</ns0:cell><ns0:cell>Others</ns0:cell><ns0:cell>KSA</ns0:cell></ns0:row><ns0:row><ns0:cell>Used Characters</ns0:cell><ns0:cell>Latin</ns0:cell><ns0:cell>Latin</ns0:cell><ns0:cell>Latin</ns0:cell><ns0:cell>Latin & Arabic</ns0:cell><ns0:cell>Latin</ns0:cell><ns0:cell>Arabic</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of Characters</ns0:cell><ns0:cell>60000</ns0:cell><ns0:cell>32776</ns0:cell><ns0:cell>7384</ns0:cell><ns0:cell>3003</ns0:cell><ns0:cell>17613</ns0:cell><ns0:cell>50000</ns0:cell></ns0:row><ns0:row><ns0:cell>Total Characters</ns0:cell><ns0:cell cols='5'>120776</ns0:cell><ns0:cell>50000</ns0:cell></ns0:row></ns0:table><ns0:note>The Latin characters were collected from 11 countries (LPs have different background and font colors) while the Arabic characters were collected only from KSA (LPs have a white background and black characters). Choosing those countries is related to the availability of those LPs for public use.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Shrinkage process in 28×28 architecture.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Layer</ns0:cell><ns0:cell>Shrinking Pixels</ns0:cell><ns0:cell>Width</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv1</ns0:cell><ns0:cell>28 2 − 24 2 = 208</ns0:cell><ns0:cell>64</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv2</ns0:cell><ns0:cell>24 2 − 20 2 = 176</ns0:cell><ns0:cell>128</ns0:cell></ns0:row><ns0:row><ns0:cell>Max-Pooling 1</ns0:cell><ns0:cell>--</ns0:cell><ns0:cell>128</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv3</ns0:cell><ns0:cell>10 2 − 6 2 = 64</ns0:cell><ns0:cell>176</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv4</ns0:cell><ns0:cell>6 2 − 2 2 = 32</ns0:cell><ns0:cell>208</ns0:cell></ns0:row><ns0:row><ns0:cell>Max-Pooling 2</ns0:cell><ns0:cell>--</ns0:cell><ns0:cell>208</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Shrinkage process in 32×32 architecture.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Layer</ns0:cell><ns0:cell cols='2'>Shrinking Pixels Width</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv1</ns0:cell><ns0:cell>32 2 − 28 2 = 240</ns0:cell><ns0:cell>64</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv2</ns0:cell><ns0:cell>28 2 − 24 2 = 208</ns0:cell><ns0:cell>128</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv3</ns0:cell><ns0:cell>24 2 − 20 2 = 176</ns0:cell><ns0:cell>176</ns0:cell></ns0:row><ns0:row><ns0:cell>Max-Pooling 1</ns0:cell><ns0:cell>--</ns0:cell><ns0:cell>176</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv4</ns0:cell><ns0:cell>10 2 − 6 2 = 64</ns0:cell><ns0:cell>208</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv5</ns0:cell><ns0:cell>6 2 − 2 2 = 32</ns0:cell><ns0:cell>240</ns0:cell></ns0:row><ns0:row><ns0:cell>Max-Pooling 2</ns0:cell><ns0:cell>--</ns0:cell><ns0:cell>240</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>shows the number of learnable parameters and feature memory usage for the proposed model. Memory usage is multiplied by 4 as each pixel is stored as 4-byte single float number. For 32 × 32 input images just another convolutional block can be added before the first convolution block in FDCNN and the width of last convolutional layer will be 32 2 − 28 2 = 240 to get full depth of shrinkage. This layer of course affects the total number of model parameters and FM memory usage to be 2.94M and 2.51MB respectively.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Proposed model's memory usage and learnable parameters.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Layer</ns0:cell><ns0:cell cols='2'>Features Memory</ns0:cell><ns0:cell cols='2'>Learnable Parameters</ns0:cell></ns0:row><ns0:row><ns0:cell>Input</ns0:cell><ns0:cell>28 × 28 × 1</ns0:cell><ns0:cell>3136</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv1</ns0:cell><ns0:cell>24 × 24 × 64</ns0:cell><ns0:cell>147456</ns0:cell><ns0:cell>(5 × 5) × 64 + 64 =</ns0:cell><ns0:cell>1664</ns0:cell></ns0:row><ns0:row><ns0:cell>BN+ReLU</ns0:cell><ns0:cell>24 × 24 × 64 × 2</ns0:cell><ns0:cell>294912</ns0:cell><ns0:cell>4 × 64 =</ns0:cell><ns0:cell>256</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv2</ns0:cell><ns0:cell>20 × 20 × 128</ns0:cell><ns0:cell>204800</ns0:cell><ns0:cell>5 × 5 × 64 × 128 + 128 =</ns0:cell><ns0:cell>204928</ns0:cell></ns0:row><ns0:row><ns0:cell>BN+ReLU</ns0:cell><ns0:cell cols='2'>20 × 20 × 128 × 2 409600</ns0:cell><ns0:cell>4 × 128 =</ns0:cell><ns0:cell>512</ns0:cell></ns0:row><ns0:row><ns0:cell>Max-pooling1</ns0:cell><ns0:cell>10 × 10 × 128</ns0:cell><ns0:cell>51200</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv3</ns0:cell><ns0:cell>6 × 6 × 176</ns0:cell><ns0:cell>25344</ns0:cell><ns0:cell>5 × 5 × 128 × 176 + 176 =</ns0:cell><ns0:cell>563376</ns0:cell></ns0:row><ns0:row><ns0:cell>BN+ReLU</ns0:cell><ns0:cell>6 × 6 × 176 × 2</ns0:cell><ns0:cell>50688</ns0:cell><ns0:cell>4 × 176 =</ns0:cell><ns0:cell>704</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv4</ns0:cell><ns0:cell>2 × 2 × 208</ns0:cell><ns0:cell>3328</ns0:cell><ns0:cell>5 × 5 × 176 × 208 + 208 =</ns0:cell><ns0:cell>915408</ns0:cell></ns0:row><ns0:row><ns0:cell>BN+ReLU</ns0:cell><ns0:cell>2 × 2 × 208 × 2</ns0:cell><ns0:cell>6656</ns0:cell><ns0:cell>4 × 208 =</ns0:cell><ns0:cell>832</ns0:cell></ns0:row><ns0:row><ns0:cell>Max-pooling2</ns0:cell><ns0:cell>1 × 1 × 208</ns0:cell><ns0:cell>832</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>FC</ns0:cell><ns0:cell>1 × 1 × 10</ns0:cell><ns0:cell>40</ns0:cell><ns0:cell>208 × 10 + 10 =</ns0:cell><ns0:cell>2090</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Total Memory</ns0:cell><ns0:cell>1197992 Bytes</ns0:cell><ns0:cell>Total Parameters</ns0:cell><ns0:cell>1689770</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Test results of FDCNN on MNIST.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Architecture</ns0:cell><ns0:cell>Type</ns0:cell><ns0:cell>Number of Layers</ns0:cell><ns0:cell>Error</ns0:cell></ns0:row><ns0:row><ns0:cell>Cireşan et al. (2010)</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>0.35%</ns0:cell></ns0:row><ns0:row><ns0:cell>Ciresan et al. (2011)</ns0:cell><ns0:cell>sparse</ns0:cell><ns0:cell>35</ns0:cell><ns0:cell>0.27%</ns0:cell></ns0:row><ns0:row><ns0:cell>Ciregan et al. (2012)</ns0:cell><ns0:cell>sparse</ns0:cell><ns0:cell>245</ns0:cell><ns0:cell>0.23%</ns0:cell></ns0:row><ns0:row><ns0:cell>Moradi et al. (2019)</ns0:cell><ns0:cell>sparse</ns0:cell><ns0:cell>70</ns0:cell><ns0:cell>0.28%</ns0:cell></ns0:row><ns0:row><ns0:cell>Kowsari et al. (2018)</ns0:cell><ns0:cell>sparse</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>0.18%</ns0:cell></ns0:row><ns0:row><ns0:cell>Assiri (2019)</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>0.17%</ns0:cell></ns0:row><ns0:row><ns0:cell>Hirata and Takahashi (2020)</ns0:cell><ns0:cell>sparse</ns0:cell><ns0:cell>28</ns0:cell><ns0:cell>0.16%</ns0:cell></ns0:row><ns0:row><ns0:cell>Byerly et al. (2020)</ns0:cell><ns0:cell>sparse</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>0.16%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>0.28%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 8.</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Arabic character recognition benchmarks: state-of-the-art and proposed approach test errors.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Architecture</ns0:cell><ns0:cell>Type</ns0:cell><ns0:cell>Layers</ns0:cell><ns0:cell>Parameters</ns0:cell><ns0:cell>Error</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>RBF SVM Abdleazeem and El-Sherif (2008)</ns0:cell><ns0:cell>linear</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>0.52%</ns0:cell></ns0:row><ns0:row><ns0:cell>MADbase 28 × 28</ns0:cell><ns0:cell>LeNet5 El-Sawy et al. (2017)</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>51K</ns0:cell><ns0:cell>12%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Alphanumeric VGG Mudhsh and Almodfer (2017)</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>17</ns0:cell><ns0:cell>2.1M</ns0:cell><ns0:cell>0.34% validation</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>VGG12 REGU Sousa (2018)</ns0:cell><ns0:cell>Average of 4 stacked CNN</ns0:cell><ns0:cell>66</ns0:cell><ns0:cell>18.56M</ns0:cell><ns0:cell>0.48%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>proposed</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>0.34%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CNN El-Sawy et al. (2017)</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>1.8M</ns0:cell><ns0:cell>5.1%</ns0:cell></ns0:row><ns0:row><ns0:cell>AHCD 32 × 32</ns0:cell><ns0:cell>CNN Younis (2017)</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>200K</ns0:cell><ns0:cell>2.4%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>VGG12 REGU Sousa (2018)</ns0:cell><ns0:cell>Average of 4 stacked CNN</ns0:cell><ns0:cell>66</ns0:cell><ns0:cell>18.56M</ns0:cell><ns0:cell>1.58%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CNN Najadat et al. (2019)</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>Not mentioned</ns0:cell><ns0:cell>2.8%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>proposed</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>2.94M</ns0:cell><ns0:cell>1.39%</ns0:cell></ns0:row><ns0:row><ns0:cell>AIA9K</ns0:cell><ns0:cell>RBF SVM Torki et al. (2014)</ns0:cell><ns0:cell>linear</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>5.72%</ns0:cell></ns0:row><ns0:row><ns0:cell>32 × 32</ns0:cell><ns0:cell>CNN Younis (2017)</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>200K</ns0:cell><ns0:cell>5.2%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>proposed</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>2.94M</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 10.</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Recognition error of the proposed architecture on LP benchmark datasets. Table 10 shows the promising results on LP benchmarks: FDCNN outperformed the state-of-the-art results on common LP datasets for the isolated character recognition problem. The Zemris, UCSD, Snapshots and ReId datasets were not used in the training process, but the proposed FDCNN was tested on each of them as a test set to ensure that the model was fitted to character features, not to a dataset itself. For the UFPR dataset, FDCNN was tested twice on the UFPR test set: training on only the training set of UFPR, and training on both UFPR and LPALIC characters. It is clear that FDCNN has been efficiently verified on common LP benchmarks. For more analysis, another test was made on the introduced LPALIC dataset to analyze the recognition error on characters per country. Table 11 describes the results. As seen in Table 11, the highest error is in classifying USA LP characters, because they have more colors, drawings and shapes besides the characters.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Architecture</ns0:cell><ns0:cell>Dataset</ns0:cell><ns0:cell>Layers</ns0:cell><ns0:cell>Parameters</ns0:cell><ns0:cell>Error</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM (Panahi and Gholampour, 2017)</ns0:cell><ns0:cell /><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>3%</ns0:cell></ns0:row><ns0:row><ns0:cell>LCR-Alexnet (Meng et al., 2018)</ns0:cell><ns0:cell>Zemris</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>>2.33M</ns0:cell><ns0:cell>2.7%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed</ns0:cell><ns0:cell /><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>0.979%</ns0:cell></ns0:row><ns0:row><ns0:cell>OCR (Dlagnekov, 2005)</ns0:cell><ns0:cell>UCSD</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>10.5%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed</ns0:cell><ns0:cell /><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>1.51%</ns0:cell></ns0:row><ns0:row><ns0:cell>MLP (Martinsky, 2007)</ns0:cell><ns0:cell>Snapshots</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>15%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed</ns0:cell><ns0:cell /><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>0.42%</ns0:cell></ns0:row><ns0:row><ns0:cell>CNN (Špaňhel et al., 2017)</ns0:cell><ns0:cell /><ns0:cell>12</ns0:cell><ns0:cell>17M</ns0:cell><ns0:cell>3.5%</ns0:cell></ns0:row><ns0:row><ns0:cell>DenseNet169 (Zhu et al., 2019)</ns0:cell><ns0:cell>ReID</ns0:cell><ns0:cell>169</ns0:cell><ns0:cell>>15.3M</ns0:cell><ns0:cell>6.35%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed</ns0:cell><ns0:cell /><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>1.09%</ns0:cell></ns0:row><ns0:row><ns0:cell>CNN (Laroca et al., 2018)</ns0:cell><ns0:cell /><ns0:cell>26</ns0:cell><ns0:cell>43.1M</ns0:cell><ns0:cell>35.1%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed trained just on UFPR</ns0:cell><ns0:cell>UFPR</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>4.29%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed trained on LPALIC</ns0:cell><ns0:cell /><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>2.03%</ns0:cell></ns0:row><ns0:row><ns0:cell>Line Processing Algorithm (Khaled et al., 2010)</ns0:cell><ns0:cell>KSA</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>1.78%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed</ns0:cell><ns0:cell /><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>0.46%</ns0:cell></ns0:row><ns0:row><ns0:cell>FDCNN</ns0:cell><ns0:cell>LPALIC</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>0.97%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table /><ns0:note>As described in Table 3, the number of characters per country is not equal, which resulted in various recognition accuracies in Table 11. Since the number of UAE characters is not large enough to train FDCNN, Latin characters from other countries were used for training, but the test was done only on the UAE test set. FDCNN could learn features that give a good average accuracy.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 11.</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Test recognition error per country's characters with different training instances. In the manual split in Table 11, the country's characters training and testing sets were used to train and test FDCNN. In ‘trained on other countries’, the FDCNN was trained on both the country's characters training set and other countries' characters but tested only on that country's test set. In the random 80/20 split, the country's characters were split randomly into training and testing sets, and FDCNN was trained on both the split country's training set and other countries' characters but tested only on that split country's test set; many random split tests were done and the average errors are reported in the table. These different test analyses were done to validate and evaluate the results and reduce the overfitting problem. In fact, the Latin characters in LPALIC have various background and foreground colors, which makes the classification more challenging than for the Arabic character set, but FDCNN shows promising recognition results on both, and on handwritten characters as well.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Characters Set</ns0:cell><ns0:cell>Number of Instances Train / Test</ns0:cell><ns0:cell>Manual Split</ns0:cell><ns0:cell>Trained on Other Countries</ns0:cell><ns0:cell>Random 80/20% Split Average Error</ns0:cell></ns0:row><ns0:row><ns0:cell>TR</ns0:cell><ns0:cell>48748 / 11755</ns0:cell><ns0:cell>2.67%</ns0:cell><ns0:cell>1.82%</ns0:cell><ns0:cell>0.97%</ns0:cell></ns0:row><ns0:row><ns0:cell>EU</ns0:cell><ns0:cell>23299 / 9477</ns0:cell><ns0:cell>2.30%</ns0:cell><ns0:cell>1.07%</ns0:cell><ns0:cell>1.03%</ns0:cell></ns0:row><ns0:row><ns0:cell>USA</ns0:cell><ns0:cell>5960 / 1424</ns0:cell><ns0:cell>10.88%</ns0:cell><ns0:cell>3.51%</ns0:cell><ns0:cell>1.96%</ns0:cell></ns0:row><ns0:row><ns0:cell>UAE</ns0:cell><ns0:cell>1279 / 1724</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>1.51%</ns0:cell><ns0:cell>0.9%</ns0:cell></ns0:row><ns0:row><ns0:cell>All Latin Characters</ns0:cell><ns0:cell>96899 / 24380</ns0:cell><ns0:cell>2.08%</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>0.97%</ns0:cell></ns0:row><ns0:row><ns0:cell>KSA</ns0:cell><ns0:cell>46981 / 3018</ns0:cell><ns0:cell>0.43%</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>0.26%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='4'>https://www.kaggle.com/mloey1/ahcd1 5 www.eng.alexu.edu.eg/%7emehussein/AIA9k/index.html 6 github.com/zalandoresearch/fashion-mnist</ns0:note>
</ns0:body>
" | "Electrical-Electronics Engineering
Bartin University – Turkey
March 4, 2021
Dear Editors,
We thank the editors and reviewers for their efforts, time and useful comments.
We have revised the manuscript to address your valuable comments, and we believe that these
revisions have made the manuscript scientifically suitable for publication.
Mohammed Salemdeeb
PhD, Electronics and Communication Engineering.
Bartin University, Turkey,
Faculty of Engineering,
Electrical-Electronics Engineering department.
On behalf of all authors.
Reviewer 3 (Anonymous)
Basic reporting
A deeper error analysis is required.
Additional error-analysis sentences were added at lines 323, 324, 330 and 347.
Update at line 323:
“All of the previous datasets were divided into training and test sets by their authors, where the
instances in the test set were collected from a different source (different writers for CR and
different photographers for fashion) than the training set's source. The performance evaluation
is done based on CNN type (stacked is simpler than sparse), number of layers, number of
learnable parameters and recognition error.”
Other updates are written below.
Experimental design
The authors need to discuss the evaluation procedure.
All the reported results focus on the classification error. The datasets were divided into training
and test sets, and the evaluation is based on CNN type, classification error, number of layers
and number of learnable parameters.
More sentences discussing the evaluation procedure were added at lines 324, 330 and 347.
Validity of the findings
The authors can discuss the results for other performance metrics.
We discuss the results for the performance metrics of classification error, number of learnable
parameters, number of layers and memory usage of FDCNN. These are the most commonly used
metrics in the related work, so we follow them to compare our performance and the introduced
dataset. Some datasets were not included in the training process but were used only for testing:
the Zemris dataset was never seen during training, as the model was trained on other datasets
and tested only on Zemris, which supports the validity of the results. More sentences discussing
the validity of the results were added at lines 324, 330 and 347.
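To make this protocol concrete, a minimal sketch of the cross-dataset evaluation is shown below. This is an illustration only, written in PyTorch style; the loader and model names are ours and not from the manuscript (the actual experiments were run in MATLAB).

```python
# Minimal sketch of the cross-dataset evaluation protocol: the model is
# trained on other character sets and evaluated on a dataset (e.g. Zemris)
# that contributed no training samples. Names here are illustrative only.
import torch

def held_out_error(model, loader, device="cpu"):
    """Top-1 classification error on a dataset unseen during training."""
    model.eval()
    wrong, total = 0, 0
    with torch.no_grad():
        for images, labels in loader:
            preds = model(images.to(device)).argmax(dim=1)
            wrong += (preds != labels.to(device)).sum().item()
            total += labels.numel()
    return wrong / total

# e.g. zemris_loader is built from the Zemris characters only, which were
# excluded from training:
#   error = held_out_error(trained_fdcnn, zemris_loader)
```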
Comments for the Author
The literature review needs to be updated with some of the recent works. Discussing the
following works would make the manuscript richer for the readers:
DevNet: An Efficient CNN Architecture for Handwritten Devanagari Character Recognition. Int.
J. Pattern Recognit. Artif. Intell. 34(12): 2052009:1-2052009:20 (2020)
Character recognition based on non-linear multi-projection profiles measure. Frontiers Comput.
Sci. 9(5): 678-690 (2015)
Relative Positioning of Stroke-Based Clustering: a New Approach to Online Handwritten
Devanagari Character Recognition. Int. J. Image Graph. 12(2) (2012)
Artistic Multi-character Script Identification Using Iterative Isotropic Dilation Algorithm.
RTIP2R (3) 2018: 49-62
Character Recognition Based on DTW-Radon. ICDAR 2011: 264-268
Spatial Similarity Based Stroke Number and Order Free Clustering. ICFHR 2010: 652-657
Dtw-Radon-Based Shape Descriptor for Pattern Recognition. Int. J. Pattern Recognit. Artif.
Intell. 27(3) (2013)
Thank you for these suggestions. We believe that “DevNet: An Efficient CNN Architecture for
Handwritten Devanagari Character Recognition. Int. J. Pattern Recognit. Artif. Intell. 34(12):
2052009:1-2052009:20 (2020)” is recent and directly related to our research, with respect to
both the character recognition problem and CNNs. We have updated the literature review to cite
the recent “DevNet” reference.
Reviewer 4 (Kanika Thakur)
Basic reporting
A new approach to LP identification by stacking two CNN networks is discussed in the paper.
The core of the convolution block is the convolution layer, which is followed by batch
normalization and a non-linear activation layer.
Experimental design
There is a well-designed experimental section. The manuscript's strongest achievements are the
suggested Full Depth CNN (FDCNN) model and the recommendation for a new license plate
data set called LPALIC. The strength of the paper is the selection of parameters, filters, and the
process of training. The manuscript also gives an overview of FDCNN with respect to the use of
memory.
The drawback of the manuscript is that the results obtained during the testing process on the test
dataset are missing and no validation dataset has been used, so it is unclear how fine-tuning of
the model hyperparameters has been performed and how the problem of overfitting has been
addressed. I think the writers should update the manuscript before publishing to make all these
points clear and to further illustrate and solidify their already good findings.
Thank you. We performed a large number of experiments, and the reported results are the final
averaged results. Since the paper already contains 11 tables, we could not add more tables
without making the paper too long.
Most of the tests on the LPALIC and ALPR datasets were designed to overcome the overfitting
problem by training the model on one dataset and testing it on a different character dataset. For
example, to test on the UCSD dataset, we trained the model on our Turkish and EU LP character
sets. FDCNN was tested on many datasets, and the average performance on the test sets is very
good. Fine-tuning of the model hyperparameters was performed as described at line 284, by
retraining the model with another training algorithm and different training settings. When we
did so, the results improved, which suggests a low effect of the overfitting problem.
We updated our manuscript to make all these points clear and further illustrate and solidify our
findings at lines 324, 330 and 347.
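For concreteness, the two-stage schedule referred to at line 284 (SGDM first, then ADAM fine-tuning with the learning rate halved and the batch size doubled every 10 epochs) can be sketched as follows. This is our illustrative PyTorch rendering of the settings described in the Training Process section, not the MATLAB code actually used; the helper names are ours.

```python
# Illustrative sketch of the two-stage training described in the manuscript:
# Stage 1 uses SGDM, Stage 2 fine-tunes with ADAM. Hyperparameter values
# follow the Training Process section; the code itself is ours.
import torch

def run_epochs(model, loss_fn, loader, opt, epochs, sched=None):
    for _ in range(epochs):
        for x, y in loader:  # the loader reshuffles the training set each epoch
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        if sched is not None:
            sched.step()

def two_stage_training(model, loss_fn, make_loader, test_error, batch0=64):
    # Stage 1: SGDM, initial LR 0.025, momentum 0.95, LR halved every 2 epochs,
    # 10 epochs in total.
    opt = torch.optim.SGD(model.parameters(), lr=0.025, momentum=0.95)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=2, gamma=0.5)
    run_epochs(model, loss_fn, make_loader(batch0), opt, epochs=10, sched=sched)

    # Stage 2: ADAM fine-tuning from a very small LR (1e-5); every 10 epochs
    # the mini-batch size is doubled and the LR halved, for as long as the
    # test error keeps improving.
    lr, batch, best = 1e-5, batch0, test_error(model)
    while True:
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        run_epochs(model, loss_fn, make_loader(batch), opt, epochs=10)
        err = test_error(model)
        if err >= best:
            break
        best, lr, batch = err, lr / 2, batch * 2
```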
Update at Line 324: “However, Khaled et al. (2010) used their dataset for both training and testing.
FDCNN could classify the whole dataset (as a test set) of (Khaled et al., 2010) with an error of
0.46% whereas the training was done on characters collected and cropped manually from public
KSA LP images.”
Update at line 330 “Zemris, UCSD, Snapshots and ReId datasets were not used in the training
process but the proposed FDCNN was tested on each of them as a test set to ensure that the
model was fitted to character features, not to a dataset itself. For UFPR dataset, FDCNN was
tested two times on UFPR test set, training on only the training set of UFPR and training on both
UFPR and LPALIC characters.”
Update at line 347: “In the manual split in Table 11, the country's characters training and testing
sets were used to train and test FDCNN. In ‘trained on other countries’, the FDCNN was trained
on both the country's characters training set and other countries' characters but tested only on
that country's test set. In the random 80/20 split, the country's characters were split randomly
into training and testing sets, and FDCNN was trained on both the split country's training set
and other countries' characters but tested only on that split country's test set; many random split
tests were done and the average errors are reported in the table. These different test analyses
were done to validate and evaluate the results and reduce the overfitting problem.”
Validity of the findings
All outcomes are well defined and codes are given, but to ensure the validity of the data, the
points listed must be answered. I assume that the relevance of the results can be more readily
asserted if the issues are discussed and explained.
Comments for the Author
The paper is clear, but there are a few points that require clarification:
1. Explain the methodology used in order to correctly recognize Arabic zero digits and letters
written in continuous style.
Thank you for this comment. This paper focuses on isolated character recognition. Although
Arabic words are written in a continuous style, this study uses isolated Arabic character
datasets.
At line 66 we describe the solution proposed by Abdleazeem and El-Sherif (2008); in our paper
we use the same size-sensitive logic, reducing the zero character to half size since it is smaller
than the other characters.
A new explanatory sentence was added at line 303: “The same logic of size-sensitive feature
proposed in (Abdleazeem and El-Sherif, 2008) is used to solve the problem of the Arabic zero
character by half size reduction for the Arabic zero character images (in the MADbase dataset),
since it has a smaller size than the other characters.”
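As an illustration of this size-sensitive fix, the following is a minimal Python sketch (ours, with hypothetical helper names) of the half-size reduction applied to the Arabic zero images before classification:

```python
# Hedged sketch of the Arabic-zero fix quoted above: the zero image is
# reduced to half size and re-centered, so the size-sensitive distinction
# between the dot-shaped zero and the other digits is preserved.
import numpy as np
from PIL import Image

def halve_arabic_zero(img28: np.ndarray) -> np.ndarray:
    """Shrink a 28x28 uint8 character image to half size and re-center it."""
    small = np.asarray(Image.fromarray(img28).resize((14, 14)))
    out = np.zeros_like(img28)
    out[7:21, 7:21] = small  # paste the 14x14 digit at the canvas center
    return out
```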
2. Discuss the criteria based on which a character is labeled as difficult/easy, as specified at
line number 342.
Agreed. The criterion here is the difficulty of manually labelling the character images during
the dataset preparation stage (labelling by a human annotator).
A clarifying parenthetical was added at line 342: “(difficult at manual labelling of the character
images in the dataset preparation stage)”.
3. You should clarify why one should choose the proposed FDCNN model when the stacked
method of Assiri (2019) has outperformed it.
We wrote a brief clarification at line 297 and edited the paragraph there to read as follows:
“However, the architecture in (Assiri, 2019) has 15 layers with 13.12M parameters while
FDCNN has 12 layers with 1.69M parameters, which means that FDCNN is simpler and 7 times
faster (in terms of the number of parameters, 13.12/1.69). The results in (Assiri, 2019) were
obtained utilizing data augmentations (not used in FDCNN training), different training processes
(the FDCNN training process is simpler, as described in the previous section) and Dropout layers
before and after each pooling layer with different settings, but FDCNN has no Dropout layer
and showed good results on MNIST.”
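To make the complexity comparison concrete, the quoted layer stack can be written out directly. The following is our illustrative PyTorch rendering of the FDCNN architecture reported in the manuscript (Table 6), not the authors' MATLAB implementation; the parameter total reproduces the 1.69M figure quoted above.

```python
# Our PyTorch rendering of the FDCNN stack (Table 6 of the manuscript):
# 4 conv blocks (5x5 conv + BN + ReLU), 2 max-pooling layers, 1 FC layer.
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=5),  # no padding: FM shrinks by 4
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

fdcnn = nn.Sequential(
    conv_block(1, 64),    # 28x28 -> 24x24
    conv_block(64, 128),  # 24x24 -> 20x20
    nn.MaxPool2d(2),      # 20x20 -> 10x10
    conv_block(128, 176), # 10x10 -> 6x6
    conv_block(176, 208), # 6x6  -> 2x2
    nn.MaxPool2d(2),      # 2x2  -> 1x1: the "full depth" single neuron
    nn.Flatten(),
    nn.Linear(208, 10),   # 2090 parameters, vs ~102M in VGG16's first FC
)
# Parameter count: 1664 + 204928 + 563376 + 915408 (conv blocks)
# + 256 + 512 + 704 + 832 (BN, counting running statistics as Table 6 does)
# + 2090 (FC) = 1689770, i.e. the 1.69M total quoted in the response.
```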
" | Here is a paper. Please give your review comments after reading it. |
132 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Character recognition is an important research field of interest for many applications. In recent years, deep learning has made breakthroughs in image classification, especially for character recognition. However, convolutional neural networks (CNN) still deliver state-ofthe-art results in this area. Motivated by the success of CNNs, this paper proposes a simple novel full depth stacked CNN architecture for Latin and Arabic handwritten alphanumeric characters that is also utilized for license plate (LP) characters recognition. The proposed architecture is constructed by four convolutional layers, two max-pooling layers, and one fully connected layer. This architecture is low-complex, fast, reliable and achieves very promising classification accuracy that may move the field forward in terms of low complexity, high accuracy and full feature extraction. The proposed approach is tested on four benchmarks for handwritten character datasets, Fashion-MNIST dataset, public LP character datasets and a newly introduced real LP isolated character dataset. The proposed approach tests report an error of only 0.28% for MNIST, 0.34% for MAHDB, 1.45% for AHCD, 3.81% for AIA9K, 5.00% for Fashion-MNIST, 0.26% for Saudi license plate character and 0.97% for Latin license plate characters datasets. The license plate characters include license plates from Turkey (TR), Europe (EU), USA, United Arab Emirates (UAE) and Kingdom of Saudi Arabia (KSA).</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Character recognition (CR) plays a key role in many applications and motivates R&D in the field for accurate and fast classification solutions. CR has been widely investigated in many languages using different proposed methods. In recent years, researchers have widely used CNNs as deep learning classifiers and achieved good results on handwritten alphanumeric characters in many languages <ns0:ref type='bibr' target='#b41'>(Lecun et al., 1998;</ns0:ref><ns0:ref type='bibr' target='#b0'>Abdleazeem and El-Sherif, 2008;</ns0:ref><ns0:ref type='bibr'>El-Sawy et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b27'>Guha et al., 2020)</ns0:ref>, character recognition in real-world images <ns0:ref type='bibr' target='#b48'>(Netzer et al., 2011)</ns0:ref>, document scanning, optical character recognition (OCR) and automatic license plate character recognition (ALPR) <ns0:ref type='bibr' target='#b14'>(Comelli et al., 1995)</ns0:ref>. Searching for text information in images is a time-consuming process that largely benefits from CR. Particularly, in the Arabic language the connectivity of letters makes classification challenging <ns0:ref type='bibr' target='#b21'>(Eltay et al., 2020)</ns0:ref>. Therefore, isolated character datasets receive more interest in research.</ns0:p><ns0:p>MNIST is a handwritten digits dataset introduced by <ns0:ref type='bibr' target='#b41'>Lecun et al. (1998)</ns0:ref> and used to test supervised machine learning algorithms. The best accuracy obtained by stacked CNN architectures, until two years ago, was a test error rate of 0.35% in <ns0:ref type='bibr' target='#b11'>(Cireşan et al., 2010)</ns0:ref>, where a large deep CNN of nine layers was trained with elastic distortion applied to the input images. Narrowing the gap to human performance, a new architecture of five committees of seven deep CNNs with six width normalizations and elastic distortion was trained and tested in <ns0:ref type='bibr' target='#b13'>(Ciresan et al., 2011)</ns0:ref> and reported an error rate of 0.27%, where the main CNN is seven stacked layers. In <ns0:ref type='bibr' target='#b12'>(Ciregan et al., 2012)</ns0:ref>, a near-human performance error rate of 0.23% was achieved, where several techniques were combined in a novel way to build a multi-column deep neural network (MCDNN) inspired by the micro-columns of neurons in the cerebral cortex, compared to the number of layers found between the retina and visual cortex of macaque monkeys.</ns0:p><ns0:p>Recently, <ns0:ref type='bibr' target='#b45'>Moradi et al. (2019)</ns0:ref> developed a new CNN architecture with orthogonal feature maps based</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>on Residual modules of ResNet <ns0:ref type='bibr' target='#b28'>(He et al., 2016)</ns0:ref> and Inception modules of GoogleNet <ns0:ref type='bibr' target='#b59'>(Szegedy et al., 2015)</ns0:ref>, with 534474 learnable parameters which are equal to SqeezeNet <ns0:ref type='bibr' target='#b33'>(Iandola et al., 2016)</ns0:ref> learnable parameters, and thus the model reported an error of 0.28%. However, a CNN architecture for small size input images of 20×20 pixels was proposed in <ns0:ref type='bibr' target='#b40'>(Le and Nguyen, 2019)</ns0:ref>. In addition, a multimodal deep learning architecture was proposed in <ns0:ref type='bibr' target='#b36'>(Kowsari et al., 2018)</ns0:ref>, where deep neural networks (DNN), CNN and recurrent neural networks (RNN) were used in one architecture design achieving an error of 0.18%. A plain CNN with stochastic optimization method was proposed in <ns0:ref type='bibr' target='#b4'>(Assiri, 2019)</ns0:ref>, this method applied regular Dropout layers after each pooling and fully connected (FC) layers, this 15 stacked layers approach obtained an error of 0.17% by 13.21M parameters. <ns0:ref type='bibr' target='#b30'>Hirata and Takahashi (2020)</ns0:ref> proposed an architecture with one base CNN and multiple FC sub-networks, this 28 spars layers architecture with 28.67M parameters obtained an error of 0.16%. <ns0:ref type='bibr' target='#b9'>Byerly et al. (2020)</ns0:ref> presented a CNN design with additional branches after certain convolutions, and from each branch, they transformed each of the final filters into a pair of homogeneous vector capsules, this 21 spars layers obtained an error of 0.16%.</ns0:p><ns0:p>While MNIST was well studied in the literature, there were only a few works on Arabic handwritten character recognition <ns0:ref type='bibr' target='#b0'>(Abdleazeem and El-Sherif, 2008)</ns0:ref>. The large Arabic Handwritten Digits (AHDBase) has been introduced in (El- <ns0:ref type='bibr' target='#b20'>Sherif and Abdelazeem, 2007)</ns0:ref>. <ns0:ref type='bibr' target='#b0'>Abdleazeem and El-Sherif (2008)</ns0:ref> modified AHDBase to be MADBase and evaluated 54 different classifier/features combinations and reported a classification error of 0.52% utilizing radial basis function (RBF) and support vector machine (SVM). Also, they discussed the problem of Arabic zero, which is just a dot and smaller than other digits.They solved the problem by introducing a size-sensitive feature which is the ratio of the digit bounding box area to the average bounding box area of all digits in AHDBase's training set. In the same context, Mudhsh and Almodfer (2017) obtained a validation error of 0.34% on the MADBase dataset by using an Alphanumeric VGG network inspired by the VGGNet <ns0:ref type='bibr' target='#b55'>(Simonyan and Zisserman, 2015)</ns0:ref> with dropout regularization and data augmentation but the error performance does not hold on the test set. <ns0:ref type='bibr' target='#b60'>Torki et al. (2014)</ns0:ref> introduced AIA9K dataset and reported a classification error of 5.72% on the test set by using window-based descriptors with some common classifiers such as logistic regression, linear SVM , nonlinear SVM and artificial neural networks (ANN) classifiers. 
<ns0:ref type='bibr' target='#b64'>Younis (2017)</ns0:ref> tested a CNN architecture and obtained an error of 5.2%, he proposed a stacked CNN of three convolution layers followed by batch normalization, rectified linear units (ReLU) activation, dropout and two FC layers.</ns0:p><ns0:p>The AHCD dataset was introduced by El-Sawy et al. <ns0:ref type='bibr'>(2017)</ns0:ref>, they reported a classification error of 5.1% using a stacked CNN of two convolution layers, two pooling layers and two FC layers. <ns0:ref type='bibr' target='#b47'>Najadat et al. (2019)</ns0:ref> obtained a classification error of 2.8% by using a series CNN of four convolution layers activated by ReLU, two pooling layers and three FC layers. The state-of-the-art result for this dataset is a classification error of 1.58% obtained by <ns0:ref type='bibr' target='#b56'>Sousa (2018)</ns0:ref>, it was achieved by ensemble averaging of four CNNs, two inspired by VGG16 and two written from scratch, with batch normalization and dropout regularization, to form 12 layers architecture called VGG12.</ns0:p><ns0:p>For benchmarking machine learning algorithms on tiny grayscale images other than Alphanumeric characters, <ns0:ref type='bibr' target='#b61'>Xiao et al. (2017)</ns0:ref> introduced Fashion-MNIST dataset to serve as a direct replacement for the original MNIST dataset and reported a classification test error of 10.3% using SVM. This dataset gained the attention of many researchers to test their approaches and better error of 3.65% was achieved by <ns0:ref type='bibr' target='#b68'>Zhong et al. (2017)</ns0:ref> in which a random erasing augmentation was used with wide residual networks (WRN) <ns0:ref type='bibr' target='#b66'>(Zagoruyko and Komodakis, 2016)</ns0:ref>. The state-of-the-art performance for Fashion-MNIST is an error of 2.34% reported in <ns0:ref type='bibr' target='#b67'>(Zeng et al., 2018)</ns0:ref> using a deep collaborative weight-based classification method based on VGG16. Recently, a modelling and optimization based method was used <ns0:ref type='bibr' target='#b10'>(Chou et al., 2019)</ns0:ref> to optimize the parameters for a multi-layer (16 layer) CNN reporting an error of 8.32% and 0.57% for Fashion-MNIST and MNIST respectively.</ns0:p><ns0:p>ALPR is a group of techniques that use CR modules to recognize vehicle's LP number. Sometimes, it is also referred to as license plate detection and recognition (LPDR). ALPR is used in many real-life applications <ns0:ref type='bibr' target='#b17'>(Du et al., 2013)</ns0:ref> like electronic toll collection, traffic control, security, etc. The main challenges of detection and recognition of license plates are the variations in the plate types, environments, languages and fonts. Both CNN and traditional approaches are used to solve vehicle license plates recognition problems. Traditional approaches involve computer vision, image processing and pattern recognition algorithms for features such as color, edge and morphology <ns0:ref type='bibr' target='#b62'>(Xie et al., 2018)</ns0:ref>. A typical ALPR system consists of three modules, plate detection, character segmentation and CR modules (Shyang-Lih <ns0:ref type='bibr' target='#b54'>Chang et al., 2004)</ns0:ref>. This research focuses on CR techniques and compared them with the proposed CR Manuscript to be reviewed Computer Science approach. 
CR modules need an off-line training phase to train a classifier on each isolated character using a set of manually cropped character images <ns0:ref type='bibr' target='#b8'>(Bulan et al., 2017)</ns0:ref>. Excessive operational time, cost and efforts must be considered when manual cropping of character images are needed to be collected and labeled for training and testing, and to overcome this, artificially generated synthetic license plates were proposed <ns0:ref type='bibr' target='#b7'>(Bulan et al., 2015)</ns0:ref>.</ns0:p><ns0:p>Additionally, very little research was done on multi-language LP character recognition, the reason is mostly due to the lack of multi-language LP datasets. Some recent researches were interested in introducing a global ALPR system. <ns0:ref type='bibr' target='#b1'>Asif et al. (2017)</ns0:ref> studied only LP detection module using a histogrambased approach, and a private dataset was used, which comprised of LPs from Hungary, America, Serbia, Pakistan, Italy, China, and UAE <ns0:ref type='bibr' target='#b1'>(Asif et al., 2017)</ns0:ref>. VGG and LSTM were proposed for CR module in <ns0:ref type='bibr' target='#b16'>(Dorbe et al., 2018)</ns0:ref> and the measured CR module accuracy was 96.7% where the test was done on LPs from Russia, Poland, Latvia, Belarus, Estonia, Germany, Lithuania, Finland and Sweden. Also, tiny YOLOv3 was used as a unified CR module for LPs from Greece, USA, Croatia, Taiwan, and South Korea <ns0:ref type='bibr' target='#b29'>(Henry et al., 2020)</ns0:ref>. Furthermore, several proposed methods interested in multi-language LPCR testing CR modules on each LP country's dataset separately, without accumulating the characters into one dataset <ns0:ref type='bibr' target='#b42'>(Li et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b65'>Yépez et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b3'>Asif et al., 2019)</ns0:ref>. In addition, <ns0:ref type='bibr' target='#b53'>Selmi et al. (2020)</ns0:ref> proposed a mask R-CNN detector for character segmentation and recognition concerning Arabic and English LP characters from Tunisia and USA. <ns0:ref type='bibr' target='#b51'>Park et al. (2019)</ns0:ref> concerned USA and Korean LPs describing the problem as multi-style detection. CNN shrinkage-based architecture was studied in <ns0:ref type='bibr' target='#b52'>(Salemdeeb and Erturk, 2020)</ns0:ref>, utilizing the maximum number of convolutional layers that can be added. <ns0:ref type='bibr' target='#b52'>Salemdeeb and Erturk (2020)</ns0:ref> studied the LP detection and country classification problem for multinational and multi-language LPs from Turkey, Europe, USA, UAE and KSA, without studying CR problem. These researches studied LPs from 23 different countries where most of them use Latin characters to write the LP number, and totally five languages were concerned (English, Taiwanese, Korean, Chinese and Arabic). In Taiwan, Korea, China, UAE, Tunisia and KSA, the LP number is written using Latin characters, but the city information is coded using characters from that the country's language.</ns0:p><ns0:p>In this paper, Arabic and Latin isolated characters are targeted to be recognized using a proposed full depth CNN (FDCNN) architecture in which the regions of interest are USA, EU and Middle East. To verify the performance of the proposed FDCNN, some isolated handwritten Arabic and Latin characters benchmarks such as MNIST, MADbase, AHCD, AIA9K datasets are also tested. 
Also, a new dataset named LP Arabic and Latin isolated characters (LPALIC) is introduced and tested. In addition, the recent FashionMNIST dataset is also tested to generalize the full depth feature extraction approach performance on tiny grayscale images. The proposed FDCNN approach closes the gap between software and hardware implementation since it provides low complexity and high performance. All the trained models and the LPALIC dataset 1 are made publicly available online for research community and future tests.</ns0:p><ns0:p>The rest of this paper is organized as follows; section 2 introduces the structure of datasets used in this paper and also the new LPALIC dataset. In section 3, the proposed approach is described in details.</ns0:p><ns0:p>Section 4 presents a series of experimental results and discussions. Finally, section 5 summarizes the main points of the entire work as a conclusion.</ns0:p></ns0:div>
<ns0:div><ns0:head>DATASETS</ns0:head><ns0:head>Datasets Available in the Literature</ns0:head><ns0:p>MNIST is a low-complexity data collection of handwritten digits used to test supervised machine learning algorithms, introduced by <ns0:ref type='bibr' target='#b41'>Lecun et al. (1998)</ns0:ref>. It has grayscale images of size 28×28 pixels with 60000 training digits and 10000 test digits written by different persons. The digits are white with a black background, normalized to 20×20 pixels preserving the aspect ratio, and then centered at the center of mass of the 28×28-pixel grayscale images. The official site for the dataset and results is available from LeCun 2 .</ns0:p><ns0:p>In MADbase, 700 native Arabic writers wrote ten digits ten times, and the images were collected as 70000 binary images: 60000 for training and 10000 for testing, such that the writers of the training set and the test set are exclusive. This dataset 3 has the same format as MNIST to enable fair comparisons between 1 https://www.kaggle.com/dataset/b4697afbddab933081344d1bed3f7907f0b2b2522f637adf15a5fcea67af2145 2 http://yann.lecun.com/exdb/mnist/ 3 http://datacenter.aucegypt.edu/shazeem</ns0:p></ns0:div>
<ns0:div><ns0:p>digits (used in Arabic and English languages) recognition approaches. Table 1 shows example digits of printed Latin, Arabic and handwritten Arabic characters used for numbers, as declared in ISO/IEC 8859-6:1999.</ns0:p><ns0:p>Table 1. Printed and handwritten digits. Table 2 gives a brief review of some publicly available LP datasets related to the LPDR problem. The Zemris dataset is also called English LP in some references <ns0:ref type='bibr' target='#b50'>(Panahi and Gholampour, 2017)</ns0:ref>. Furthermore, some characters were collected from public LP datasets, LP websites and our own camera pictures taken in Turkey under different weather conditions, places, blurring, distances, tilts and illuminations. These characters are real, manually cropped LP characters without any filtering. For uniformity, a size of 28×28-pixel grayscale images was utilized.</ns0:p></ns0:div>
<ns0:div><ns0:head>Printed Latin</ns0:head><ns0:p>The manually cropped characters were fed into the following conversion pipeline, inspired by FashionMNIST <ns0:ref type='bibr' target='#b61'>(Xiao et al., 2017)</ns0:ref>, which is similar to MNIST <ns0:ref type='bibr' target='#b41'>(Lecun et al., 1998)</ns0:ref>:</ns0:p><ns0:p>1. Resizing the longest edge of the image to 24 to preserve the aspect ratio.</ns0:p><ns0:p>2. Converting the image to 8-bit grayscale pixels.</ns0:p><ns0:p>3. Negating the intensities of the image to get a white character with a black background.</ns0:p><ns0:p>4. Computing the center of mass of the pixels.</ns0:p><ns0:p>5. Translating the image to put the center of mass at the center of the 28×28 grayscale image.</ns0:p><ns0:p>Some samples of the LPALIC dataset are visualized in Figure 1 for Latin characters and in Figure 2 for Arabic characters. Table 3 illustrates the total number of Arabic and Latin characters included in the LPALIC dataset.</ns0:p></ns0:div>
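A minimal sketch of this five-step pipeline, written in Python with Pillow and NumPy purely for illustration (the function name and implementation details are ours, not the authors' code):

```python
# Sketch of the LPALIC conversion pipeline listed above: resize to a
# 24-pixel longest edge, grayscale, negate, then center-of-mass centering
# on a 28x28 canvas, mirroring the MNIST/FashionMNIST convention.
import numpy as np
from PIL import Image

def to_mnist_format(img, canvas=28, longest=24):
    # 1. Resize the longest edge to 24 pixels, preserving the aspect ratio.
    w, h = img.size
    s = longest / max(w, h)
    img = img.resize((max(1, round(w * s)), max(1, round(h * s))))
    # 2.-3. Convert to 8-bit grayscale and negate: white character on black.
    a = 255.0 - np.asarray(img.convert("L"), dtype=np.float64)
    # 4. Center of mass of the pixel intensities.
    ys, xs = np.indices(a.shape)
    m = a.sum() or 1.0
    cy, cx = (a * ys).sum() / m, (a * xs).sum() / m
    # 5. Translate so the center of mass sits at the canvas center.
    out = np.zeros((canvas, canvas))
    dy, dx = int(round(canvas / 2 - cy)), int(round(canvas / 2 - cx))
    for y in range(a.shape[0]):
        for x in range(a.shape[1]):
            ty, tx = y + dy, x + dx
            if 0 <= ty < canvas and 0 <= tx < canvas:
                out[ty, tx] = a[y, x]
    return out.astype(np.uint8)
```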
<ns0:div><ns0:head>PROPOSED APPROACH</ns0:head><ns0:p>Stacked CNN architecture is simple, where each layer has a single input and a single output. For small size images, the key efficient simple deep learning architecture was LeNet-5 <ns0:ref type='bibr' target='#b41'>(Lecun et al., 1998)</ns0:ref>, it consists of</ns0:p></ns0:div>
<ns0:div><ns0:head>Proposed Architecture</ns0:head><ns0:p>The core of the proposed model is the convolution block, which is a convolutional layer followed by a batch normalization (BN) layer <ns0:ref type='bibr' target='#b34'>(Ioffe and Szegedy, 2015)</ns0:ref> and a non-linear activation ReLU layer <ns0:ref type='bibr' target='#b38'>(Krizhevsky et al., 2012)</ns0:ref>. This block is called a standard convolutional layer in <ns0:ref type='bibr' target='#b31'>(Howard et al., 2017)</ns0:ref>. The proposed convolutional layers have kernels of size 5 × 5 with a single stride. This kernel size showed a good feature extraction capability in LeNet-5 <ns0:ref type='bibr' target='#b41'>(Lecun et al., 1998)</ns0:ref> for small images, as it covers 3.2% of the input image in every stride. However, the recent trend is to replace 5 × 5 with 2 layers of 3 × 3 kernels, as in InceptionV3 <ns0:ref type='bibr' target='#b58'>(Szegedy et al., 2016)</ns0:ref>. Figure 3 shows the architecture design of the proposed model. For a mini-batch B = {x_1, x_2, ..., x_m} of size m, the mean µ_B and variance σ²_B of B are computed, and each input image in the mini-batch is normalized according to Equation (1).</ns0:p><ns0:formula xml:id='formula_0'>x̂_i = (x_i − µ_B) / √(σ²_B + ε) (1)</ns0:formula><ns0:p>where ε is a constant and x̂_i is the i-th normalized image, scaled by the learnable scale parameter γ and shifted by the learnable shift parameter β to produce the i-th normalized output image y_i <ns0:ref type='bibr' target='#b34'>(Ioffe and Szegedy, 2015)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_1'>y_i = BN_{γ,β}(x_i) = γ x̂_i + β (2)</ns0:formula><ns0:p>Motivated by the LeNet-5 convolution kernel of 5 × 5, the BN used in InceptionV3 and the ReLU in AlexNet, the proposed model convolution block is built as in Figure 4. The output feature map (FM) of each convolution block is smaller than the input feature map if no additional padding is applied. Equation (3) describes the relation between input and output FM sizes <ns0:ref type='bibr' target='#b26'>(Goodfellow et al., 2016)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_2'>W_y = (W_x − W_k + 2P) / W_s + 1 (3)</ns0:formula><ns0:p>where W_y is the width of the output, W_x is the width of the input, W_k is the width of the kernel, W_s is the width of the stride and P is the number of padding pixels. For the height H, Equation (3) can be used by replacing W with H. This reduction is called the shrinkage of convolution, and it limits the number of convolutional layers that the network can include <ns0:ref type='bibr' target='#b26'>(Goodfellow et al., 2016)</ns0:ref>. The feature map shrinks from the borders to the center as convolutional layers are added. Eventually, feature maps drop to 1 × 1 × channels (a single neuron per channel), at which point no more convolutional layers can be added. This is the concept of full depth used for designing the proposed architecture. Figure 5 describes the full depth idea in FDCNN, where width and height shrink by 4 according to Equation (3). In Figure 5, each feature map is shrunk to a single value, which means that the features are convolved into a single value, resulting in a low number of parameters and high accuracy.</ns0:p><ns0:p>The proposed FDCNN model is composed basically of two stacked convolutional stages and one FC layer for 28 × 28 input images. 
Every stage has two convolution blocks and one max-pooling layer. It has a single input and a single output in all of its layers. Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref> shows the FDCNN architecture.</ns0:p></ns0:div>
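A quick numeric check of Equation (3) along the 28 × 28 FDCNN path (our sketch; the layer names follow Table 6) reproduces the shrinkage 28 → 24 → 20 → 10 → 6 → 2 → 1:

```python
def conv_out(w_in, kernel=5, pad=0, stride=1):
    # Equation (3): W_y = (W_x - W_k + 2P) / W_s + 1
    return (w_in - kernel + 2 * pad) // stride + 1

w = 28
w = conv_out(w)  # Conv1: 28 -> 24
w = conv_out(w)  # Conv2: 24 -> 20
w //= 2          # Max-pooling1: 20 -> 10
w = conv_out(w)  # Conv3: 10 -> 6
w = conv_out(w)  # Conv4: 6 -> 2
w //= 2          # Max-pooling2: 2 -> 1 (the full-depth single neuron)
assert w == 1
```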
<ns0:div><ns0:head>Parameter Selection</ns0:head><ns0:p>In the proposed architecture, there are some parameters that have to be selected: the kernel sizes of the convolutional layers, the kernel sizes of the pooling layers, the number of filters (channels) in the convolutional layers, and the strides. The kernel sizes are selected to be 5 × 5 for convolutional layers and 2 × 2 for pooling layers, as described in the previous Proposed Architecture section.</ns0:p><ns0:p>In the literature, the trend for selecting the number of filters is to increase the number of filters the deeper the network goes <ns0:ref type='bibr' target='#b38'>(Krizhevsky et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b59'>Szegedy et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b55'>Simonyan and Zisserman, 2015;</ns0:ref><ns0:ref type='bibr' target='#b28'>He et al., 2016)</ns0:ref>. Generally, the first convolutional layers learn simple features while deeper layers learn more abstract features. Selecting optimal parameters is based on heuristics or grid searches <ns0:ref type='bibr' target='#b6'>(Bengio, 2012)</ns0:ref>.</ns0:p><ns0:p>The rule of thumb for designing a network from scratch is to start with 8-64 filters per layer and double the number of filters after each pooling layer <ns0:ref type='bibr' target='#b55'>(Simonyan and Zisserman, 2015)</ns0:ref> or after each convolutional layer <ns0:ref type='bibr' target='#b28'>(He et al., 2016)</ns0:ref>. Recently, a new method was proposed to select the number of filters <ns0:ref type='bibr' target='#b23'>(Garg et al., 2018)</ns0:ref>: the network structure was optimized in terms of both the number of layers and the number of filters per layer using principal component analysis on the trained network, and a single shot of doubling the number of filters was also applied.</ns0:p><ns0:p>One of the contributions of this research is to select the number of channels that achieves full depth. The number of filters may also be called the number of kernels, the number of layer channels, or the layer width. The number of filters is selected to be the same as the number of shrinking pixels in each layer, from bottom to top. Table 4 shows the shrinkage of the proposed model. Following the shrinkage of the network:</ns0:p><ns0:p>• The width of the 4th convolutional layer is 208 (the 1st layer shrinkage).</ns0:p><ns0:p>• The width of the 3rd is 176 (the 2nd layer shrinkage).</ns0:p><ns0:p>• Max-pooling halves the FM dimensions, so the shrinkage pixels of the following layers are doubled.</ns0:p><ns0:p>• The width of the 2nd is 128 (double the 3rd layer shrinkage).</ns0:p></ns0:div>
<ns0:div><ns0:p>• The width of the 1st is 64 (double the 4th layer shrinkage).</ns0:p><ns0:p>The same parameter selection method can be applied to the 32 × 32 input architecture, as described in Table 5, to reach full depth features (a single-value feature) as shown in Figure 5b. In general, DNNs give weights to all input features (neurons) to produce the output neurons, but this needs a huge number of parameters. Instead, CNNs convolve the adjacent neurons within the convolution kernel size to produce the output neurons. In the literature, the state-of-the-art architectures have a high number of learnable parameters in the last FC layers. For example, VGG16 has 136M parameters in total, and after the last pooling layer the first FC layer has 102M parameters, which means more than 75% of the architecture parameters (in just one layer). AlexNet has 62M parameters in total, and the first FC layer has 37.75M parameters, which means more than 60% of the architecture parameters. In <ns0:ref type='bibr' target='#b30'>(Hirata and Takahashi, 2020)</ns0:ref>, the proposed architecture has 28.68M parameters, and the first FC layer has 3.68M parameters after majority voting from ten divisions. However, by using the full depth concept to reduce the FM to 1 × 1 size after the last pooling layer, the FC layer of FDCNN has just 2090 parameters out of a total of 1.69M parameters, as seen in Table 6. The full depth concept of reducing the feature map size to one neuron has decreased the total number of learnable parameters, which makes FDCNN simple and fast.</ns0:p></ns0:div>
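These totals can be checked directly from the layer list. The following small sketch (ours) recomputes the Table 6 parameter counts:

```python
# Recomputing the learnable-parameter counts of Table 6; the layer list
# mirrors the FDCNN stack described in the paper.
layers = [  # (kernel, in_channels, out_channels) for the four conv blocks
    (5, 1, 64), (5, 64, 128), (5, 128, 176), (5, 176, 208),
]
total = 0
for k, c_in, c_out in layers:
    conv = k * k * c_in * c_out + c_out  # weights + biases
    bn = 4 * c_out                       # scale, shift + running mean/variance
    total += conv + bn
total += 208 * 10 + 10                   # final FC layer: 2090 parameters
print(total)                             # -> 1689770, matching Table 6
```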
<ns0:div><ns0:head>Training Process</ns0:head><ns0:p>Deep learning training algorithms were well explained in <ns0:ref type='bibr' target='#b26'>(Goodfellow et al., 2016)</ns0:ref>. The proposed model is trained using stochastic gradient descent with momentum (SGDM) with custom parameters chosen after many trials: an initial learning rate (LR) of 0.025, a mini-batch size equal to the number of training instances divided by the number of batches needed to complete one epoch, an LR drop factor of one half every 2 epochs, 10 epochs, 0.95 momentum, and the training set shuffled every epoch. However, these training parameters are not used for all datasets, since the number of images is not constant across them. After obtaining the first results, the model parameters are tuned by training again with ADAM, using a larger mini-batch size and a very small LR starting at 1 × 10^-5, then multiplying the batch size by 2 and the LR by 1/2 every 10 epochs as long as the test error keeps improving.</ns0:p></ns0:div>
<ns0:div><ns0:head>EXPERIMENTAL RESULTS AND DISCUSSION</ns0:head><ns0:p>All training and testing were performed on the MATLAB 2018 platform with a GeForce 1060 (6GB shared memory GPU). The main goal of this research is to design a CNN to recognize multi-language license plate characters, but to generalize and verify the designed architecture, several tests on handwritten character recognition benchmarks were also performed (a verification process). The proposed approach showed very promising results. Table 7 summarizes the results obtained on the MNIST dataset. It is clear that stacked CNNs had not outperformed the 0.35% error in the literature for MNIST, but the approach used in <ns0:ref type='bibr' target='#b4'>(Assiri, 2019)</ns0:ref> obtained 0.17%. The proposed FDCNN approximately reached the performance of the five-committee CNN of <ns0:ref type='bibr' target='#b13'>(Ciresan et al., 2011)</ns0:ref>. FDCNN performs the same as <ns0:ref type='bibr' target='#b45'>(Moradi et al., 2019)</ns0:ref>, which is a sparse design that uses Residual blocks and Inception blocks, as described in the literature. However, the architecture in <ns0:ref type='bibr' target='#b4'>(Assiri, 2019)</ns0:ref> has 15 layers with 13.12M parameters while FDCNN has 12 layers with 1.69M parameters, which means that FDCNN is simpler and 7 times faster (in terms of the number of parameters, 13.12/1.69). The results in <ns0:ref type='bibr' target='#b4'>(Assiri, 2019)</ns0:ref> were obtained utilizing data augmentations (not used in FDCNN training), different training processes (the FDCNN training process is simpler, as described in the previous section) and Dropout layers before and after each pooling layer with different settings, but FDCNN has no Dropout layer and showed good results on MNIST.</ns0:p><ns0:p>On the other hand, the proposed approach is tested on the MADbase, AHCD and AIA9K datasets for Arabic character recognition benchmarks, to verify FDCNN and to generalize its use to Arabic ALPR systems. Table 8 describes the classification error with respect to the state-of-the-art on these datasets. The same logic of the size-sensitive feature proposed in <ns0:ref type='bibr' target='#b0'>(Abdleazeem and El-Sherif, 2008)</ns0:ref> is used to solve the problem of the Arabic zero character, by half size reduction for the Arabic zero character images (in the MADbase dataset), since it has a smaller size than the other characters.</ns0:p><ns0:p>As seen in Table 8, for the MADbase dataset, most of the tested approaches were based on the VGG architecture. Alphanumeric VGG <ns0:ref type='bibr' target='#b46'>(Mudhsh and Almodfer, 2017)</ns0:ref> reported a validation error of 0.34% that did not hold on the test set, while FDCNN obtained a 0.15% validation error and a 0.34% test error. The proposed approach outperformed the Arabic character recognition benchmark state-of-the-art for both digits and letters used in the Arabic language, with a smaller number of layers and learnable parameters. It has thus succeeded in this verification process on these datasets too.</ns0:p></ns0:div>
<ns0:div><ns0:head>3.27%</ns0:head><ns0:p>In Table <ns0:ref type='table' target='#tab_8'>8</ns0:ref>, input layer is included in the determination of the number layers <ns0:ref type='bibr' target='#b41'>(Lecun et al., 1998)</ns0:ref> for all architectures and ReLU layer is not considered as a layer but BN is considered as a layer. <ns0:ref type='bibr' target='#b56'>Sousa (2018)</ns0:ref> considered convolution, pooling and FC layers when the number of layers was declared but four trained CNNs were used with softmax averaging, this is why the number of layers and learnable parameters are high. <ns0:ref type='bibr' target='#b47'>Najadat et al. (2019)</ns0:ref> did not declare the most of network parameters like kernel size in every convolution layer and they changed many parameters to enhance the model. In <ns0:ref type='bibr' target='#b64'>(Younis, 2017)</ns0:ref>, 28 × 28 input images were used and no pooling layers were included.</ns0:p><ns0:p>On the other hand, and in the same verification process, the proposed approach is also tested on FashionMNIST benchmark to generalize using it over grayscale tiny images. As shown in Table <ns0:ref type='table'>9</ns0:ref>, the proposed approach outperformed the stacked CNN architectures and reached near DENSER network in <ns0:ref type='bibr' target='#b5'>(Assunc ¸ão et al., 2018)</ns0:ref> and EnsNet in <ns0:ref type='bibr' target='#b30'>(Hirata and Takahashi, 2020)</ns0:ref> with less layers and parameters but with a good performance. It can be said that FDCNN has a very good verification performance on FashionMNIST dataset. FDCNN outperformed <ns0:ref type='bibr' target='#b9'>(Byerly et al., 2020)</ns0:ref> results on Fashion-MNIST benchmark while <ns0:ref type='bibr' target='#b9'>(Byerly et al., 2020)</ns0:ref> outperformed FDCNN on MNIST.</ns0:p><ns0:p>Table <ns0:ref type='table'>9</ns0:ref>. Test results of FDCNN on FashionMNIST.</ns0:p><ns0:p>Architecture Type Layers Parameters Error SVM <ns0:ref type='bibr' target='#b61'>(Xiao et al., 2017)</ns0:ref> linear --10.3% DENSER <ns0:ref type='bibr' target='#b5'>(Assunc ¸ão et al., 2018)</ns0:ref> sparse --4.7% WRN <ns0:ref type='bibr' target='#b68'>(Zhong et al., 2017)</ns0:ref> sparse 28 36.5M 3.65% VGG16 <ns0:ref type='bibr' target='#b67'>(Zeng et al., 2018)</ns0:ref> sparse 16 138M 2.34% CNN <ns0:ref type='bibr' target='#b10'>(Chou et al., 2019)</ns0:ref> stacked 16 0.44M 8.32% BRCNN <ns0:ref type='bibr' target='#b9'>(Byerly et al., 2020)</ns0:ref> sparse 16 1.51M 6.34% EnsNet <ns0:ref type='bibr' target='#b30'>(Hirata and Takahashi, 2020)</ns0:ref> Furthermore, FDCNN is tested also on Arabic LP characters from KSA. However, <ns0:ref type='bibr' target='#b35'>Khaled et al. (2010)</ns0:ref> used his dataset for both training and testing, FDCNN could classify the whole dataset (as a test set)</ns0:p><ns0:p>of <ns0:ref type='bibr' target='#b35'>(Khaled et al., 2010)</ns0:ref> with error of 0.46% whereas the training was done on characters collected and cropped manually from public KSA LP images. It outperformed the recognition error results of 1.78% in <ns0:ref type='bibr' target='#b35'>(Khaled et al., 2010)</ns0:ref>. FDCNN has successfully verified on KSA Arabic LP characters dataset.</ns0:p><ns0:p>In this research and for more verification, FDCNN performance is also tested on both common publicly available LP benchmark characters and the new LPALIC dataset. and also there is a small number of instances in the characters dataset. 
However, a very high recognition accuracy is achieved on Turkey and EU since they have the same standard and style for LPs. In Turkey, 10 digits and 23 letters are used, since letters like Q, W and X are not valid in the Turkish language. Additionally, FDCNN could classify Arabic LP characters with a very low error. The UAE character set has a small number of cropped characters; that is why it is tested only with FDCNN trained on the other countries' character sets.</ns0:p><ns0:p>To make robust tests, the characters were split manually and randomly, as seen in Table 11. In the manual split, the most difficult characters (difficult at manual labelling of the character images in the dataset preparation stage) were put in the test set and the others in the training set, while in the random split 80% were used for training and the rest for testing. As described in Table 3, the number of characters per country is not equal.</ns0:p><ns0:p>Furthermore, in Table 11, validation sets were also used to guarantee in a sufficiently clear way that the results were not optimized specifically for those test sets. 70% of the dataset is randomly split for training, 10% for validation and 20% for testing. The training hyperparameters were optimized on the validation set, and the best parameters for the validation set were then used to calculate the error on the test set. These different test analyses were done to validate and evaluate the results and to reduce the overfitting problem. In fact, the Latin characters in LPALIC have various background and foreground colors, which makes the classification more challenging than for the Arabic character set, but FDCNN shows promising recognition results on both, and on handwritten characters as well.</ns0:p></ns0:div>
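The random 70/10/20 train/validation/test protocol described above can be sketched as follows (our illustration; the seed and helper names are not from the manuscript):

```python
# Sketch of the random 70/10/20 train/validation/test split used to tune
# hyperparameters on the validation set and report error on the test set.
import numpy as np

def split_indices(num_images, train=0.7, val=0.1, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_images)
    n_train, n_val = int(train * num_images), int(val * num_images)
    return (idx[:n_train],                 # 70%: training
            idx[n_train:n_train + n_val],  # 10%: validation (tuning)
            idx[n_train + n_val:])         # 20%: held-out test

train_idx, val_idx, test_idx = split_indices(num_images=1000)
```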
<ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>This research focused on the deep learning technique of CNNs to recognize multi-language LP characters, for both the Latin and Arabic characters used in vehicle LPs. A new approach is proposed, analyzed and tested on Latin and Arabic CR benchmarks for both LP and handwritten character recognition. The proposed approach consists of the FDCNN architecture, FDCNN parameter selection and the training process. The proposed full depth and width selection ideas are very efficient in extracting features from tiny grayscale images. The complexity of FDCNN is also analyzed in terms of the number of learnable parameters and feature map memory usage. The full depth concept of reducing the feature map size to one neuron has decreased the total number of learnable parameters while achieving very good results. Implementation of the FDCNN approach is simple and can be used in real-time applications running on small devices like mobiles, tablets and some embedded systems. Very promising results were achieved on some common benchmarks like MNIST, FashionMNIST, MADbase, AIA9K, AHCD, Zemris, ReId, UFPR and the newly introduced LPALIC dataset. FDCNN performance is verified and compared to the state-of-the-art results in the literature. A new dataset of real cropped LP characters is also introduced; it is the largest dataset for LP characters in Turkey and KSA. More tests can be done on FDCNN in future work for it to be the core of a CNN processor. Also, more experiments can be conducted to hybridize FDCNN with some common blocks like residual and inception blocks. Additionally, the proposed full depth approach may be applied to other stacked CNNs like AlexNet and VGG networks.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>The AHCD dataset consists of 13440 training images and 3360 test images for 28 Arabic handwritten letters (classes), as 32×32 pixel grayscale images. For the AIA9K 5 dataset, 62 female and 45 male native Arabic writers aged between 18 and 25 years old at the Faculty of Engineering at Alexandria University, Egypt, were invited to write all the Arabic letters 3 times to gather 8988 letters, of which 8737 32×32 grayscale letter images were accepted after a verification process that eliminated cropping errors, writer mistakes and unclear letters. The FashionMNIST dataset 6 has images of 70000 unique products taken by professional photographers. The thumbnails (51×73) were converted to 28×28 grayscale images by the conversion pipeline described in <ns0:ref type='bibr' target='#b61'>(Xiao et al., 2017)</ns0:ref>. It is composed of 60000 training images and 10000 test images with 10 class labels.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Samples of Latin characters in the LPALIC dataset.</ns0:figDesc><ns0:graphic coords='6,192.82,158.33,311.40,299.40' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Samples of Arabic characters in the LPALIC dataset.</ns0:figDesc><ns0:graphic coords='7,180.58,63.78,335.89,259.84' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Proposed FDCNN model architecture.</ns0:figDesc><ns0:graphic coords='7,141.73,540.01,413.59,119.12' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Proposed model convolution blocks.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Full Depth concept of FDCNN.</ns0:figDesc><ns0:graphic coords='9,264.56,103.93,82.70,228.09' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>FDCNN is trained using stochastic gradient descent with momentum (SGDM) with custom parameters chosen after many trials: an initial learning rate (LR) of 0.025, a mini-batch size equal to the number of training instances divided by the number of batches needed to complete one epoch, an LR drop factor of one half every 2 epochs, 10 epochs, a momentum of 0.95, and the training set shuffled every epoch. However, those training parameters are not used for all datasets, since the number of images is not constant across them. After obtaining the first results, the model parameters are tuned by training again with ADAM using a larger mini-batch size and a very small LR starting at 1 × 10⁻⁵, then multiplying the batch size by 2 and the LR by 1/2 every 10 epochs as long as the test error improves.</ns0:figDesc></ns0:figure>
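The two-stage schedule in the description above can be sketched as follows (PyTorch is assumed, as the paper does not state its framework; the batch sizes and the model, dataset, loss and error callables are placeholders):

import torch
from torch.utils.data import DataLoader

def train_sgdm_stage(model, train_set, loss_fn, batch_size, epochs=10):
    # Stage 1: SGDM with LR 0.025 and momentum 0.95; LR halved every 2 epochs
    opt = torch.optim.SGD(model.parameters(), lr=0.025, momentum=0.95)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=2, gamma=0.5)
    for _ in range(epochs):
        loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)  # reshuffled every epoch
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        sched.step()

def adam_finetune_stage(model, train_set, loss_fn, test_error, batch_size, lr=1e-5):
    # Stage 2: ADAM fine-tuning; every 10 epochs the batch size is doubled and the
    # LR halved, continuing only while the test error keeps improving
    best = test_error(model)
    while True:
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(10):
            loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
        err = test_error(model)
        if err >= best:
            break
        best, batch_size, lr = err, batch_size * 2, lr / 2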
<ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>The previous datasets were divided into training and test sets by their authors, where the instances in the test set were collected from a different source (different writers for CR and different photographers for fashion) than the training set's source. The performance evaluation is done based on the CNN type (stacked is simpler than sparse), the number of layers, the number of learnable parameters and the recognition error.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>A review of publicly available ALPR datasets.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Approach</ns0:cell><ns0:cell>Number of Images</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Classifier</ns0:cell><ns0:cell>Character Set</ns0:cell><ns0:cell>Purpose</ns0:cell></ns0:row><ns0:row><ns0:cell>Zemris</ns0:cell><ns0:cell>Kraupner (2003)</ns0:cell><ns0:cell>510</ns0:cell><ns0:cell>86.2%</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>LPDR</ns0:cell></ns0:row><ns0:row><ns0:cell>UCSD</ns0:cell><ns0:cell>Dlagnekov (2005)</ns0:cell><ns0:cell>405</ns0:cell><ns0:cell>89.5%</ns0:cell><ns0:cell>OCR</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>LPDR</ns0:cell></ns0:row><ns0:row><ns0:cell>Snapshots</ns0:cell><ns0:cell>Martinsky (2007)</ns0:cell><ns0:cell>97</ns0:cell><ns0:cell>85%</ns0:cell><ns0:cell>MLP</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>LPDR</ns0:cell></ns0:row><ns0:row><ns0:cell>ARG</ns0:cell><ns0:cell>Fernández et al. (2011)</ns0:cell><ns0:cell>730</ns0:cell><ns0:cell>95.8%</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>LPDR</ns0:cell></ns0:row><ns0:row><ns0:cell>SSIG</ns0:cell><ns0:cell>Gonçalves et al. (2016)</ns0:cell><ns0:cell>2000</ns0:cell><ns0:cell>95.8%</ns0:cell><ns0:cell>SVM-OCR</ns0:cell><ns0:cell>Yes</ns0:cell><ns0:cell>LPDR</ns0:cell></ns0:row><ns0:row><ns0:cell>ReId</ns0:cell><ns0:cell>Špaňhel et al. (2017)</ns0:cell><ns0:cell>77k</ns0:cell><ns0:cell>96.5%</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell>No</ns0:cell><ns0:cell>LPR</ns0:cell></ns0:row><ns0:row><ns0:cell>UFPR</ns0:cell><ns0:cell>Laroca et al. (2018)</ns0:cell><ns0:cell>4500</ns0:cell><ns0:cell>78.33%</ns0:cell><ns0:cell>CR-NET</ns0:cell><ns0:cell>Yes</ns0:cell><ns0:cell>LPDR</ns0:cell></ns0:row><ns0:row><ns0:cell>CCPD</ns0:cell><ns0:cell>Xu et al. (2018)</ns0:cell><ns0:cell>250k</ns0:cell><ns0:cell>95.2%</ns0:cell><ns0:cell>RPnet</ns0:cell><ns0:cell>Yes</ns0:cell><ns0:cell>LPDR</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Novel License Plate Characters Dataset</ns0:head><ns0:label /><ns0:figDesc /><ns0:table /><ns0:note>This research introduces a new multi-language LP characters dataset, involving both Latin and Arabic characters from LP images used in Turkey, the USA, the UAE, KSA and the EU (Croatia, Greece, Czechia, France, Germany, Serbia, the Netherlands and Belgium). It is called the LPALIC dataset. In addition, some characters cropped from Brazil, India and other countries were added just for training, to increase feature diversity.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>LPALIC dataset number of cropped characters per country.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Country</ns0:cell><ns0:cell>TR</ns0:cell><ns0:cell>EU</ns0:cell><ns0:cell>USA</ns0:cell><ns0:cell>UAE</ns0:cell><ns0:cell>Others</ns0:cell><ns0:cell>KSA</ns0:cell></ns0:row><ns0:row><ns0:cell>Used Characters</ns0:cell><ns0:cell>Latin</ns0:cell><ns0:cell>Latin</ns0:cell><ns0:cell>Latin</ns0:cell><ns0:cell>Latin & Arabic</ns0:cell><ns0:cell>Latin</ns0:cell><ns0:cell>Arabic</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of Characters</ns0:cell><ns0:cell>60000</ns0:cell><ns0:cell>32776</ns0:cell><ns0:cell>7384</ns0:cell><ns0:cell>3003</ns0:cell><ns0:cell>17613</ns0:cell><ns0:cell>50000</ns0:cell></ns0:row><ns0:row><ns0:cell>Total Characters</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>120776</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>50000</ns0:cell></ns0:row></ns0:table><ns0:note>The Latin characters were collected from 11 countries (LPs have different background and font colors), while the Arabic characters were collected only from KSA (LPs have a white background and black characters). The choice of those countries is related to the availability of those LPs for public use.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Shrinkage process in the 28×28 architecture.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Layer</ns0:cell><ns0:cell>Shrinking Pixels</ns0:cell><ns0:cell>Width</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv1</ns0:cell><ns0:cell>28² − 24² = 208</ns0:cell><ns0:cell>64</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv2</ns0:cell><ns0:cell>24² − 20² = 176</ns0:cell><ns0:cell>128</ns0:cell></ns0:row><ns0:row><ns0:cell>Max-Pooling 1</ns0:cell><ns0:cell>--</ns0:cell><ns0:cell>128</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv3</ns0:cell><ns0:cell>10² − 6² = 64</ns0:cell><ns0:cell>176</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv4</ns0:cell><ns0:cell>6² − 2² = 32</ns0:cell><ns0:cell>208</ns0:cell></ns0:row><ns0:row><ns0:cell>Max-Pooling 2</ns0:cell><ns0:cell>--</ns0:cell><ns0:cell>208</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>As the network goes deeper, the following selection is made:</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Shrinkage process in the 32×32 architecture.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Layer</ns0:cell><ns0:cell>Shrinking Pixels</ns0:cell><ns0:cell>Width</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv1</ns0:cell><ns0:cell>32² − 28² = 240</ns0:cell><ns0:cell>64</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv2</ns0:cell><ns0:cell>28² − 24² = 208</ns0:cell><ns0:cell>128</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv3</ns0:cell><ns0:cell>24² − 20² = 176</ns0:cell><ns0:cell>176</ns0:cell></ns0:row><ns0:row><ns0:cell>Max-Pooling 1</ns0:cell><ns0:cell>--</ns0:cell><ns0:cell>176</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv4</ns0:cell><ns0:cell>10² − 6² = 64</ns0:cell><ns0:cell>208</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv5</ns0:cell><ns0:cell>6² − 2² = 32</ns0:cell><ns0:cell>240</ns0:cell></ns0:row><ns0:row><ns0:cell>Max-Pooling 2</ns0:cell><ns0:cell>--</ns0:cell><ns0:cell>240</ns0:cell></ns0:row></ns0:table></ns0:figure>
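The shrinkage bookkeeping of Tables 4 and 5 reduces to a small calculation (an illustrative Python sketch): a 5×5 convolution without padding maps an n×n feature map to (n−4)×(n−4), removing n² − (n−4)² border pixels.

def shrinking_pixels(n, kernel=5):
    m = n - (kernel - 1)       # output side length of a 'valid' (unpadded) convolution
    return n * n - m * m       # pixels removed from the feature map

for side in (28, 24, 10, 6):   # convolution input sides of the 28×28 model
    print(side, shrinking_pixels(side))   # prints 208, 176, 64, 32 as in Table 4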
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>shows the number of learnable parameters and feature memory usage for the proposed model. Memory usage is multiplied by 4, as each pixel is stored as a 4-byte single-precision float. For 32×32 input images, another convolutional block can simply be added before the first convolution block in FDCNN, and the width of the last convolutional layer will be 32² − 28² = 240 to obtain full depth of shrinkage. This layer of course increases the total number of model parameters and FM memory usage, to 2.94M and 2.51MB respectively.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Proposed model's memory usage and learnable parameters.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Layer</ns0:cell><ns0:cell cols='2'>Features Memory</ns0:cell><ns0:cell cols='2'>Learnable Parameters</ns0:cell></ns0:row><ns0:row><ns0:cell>Input</ns0:cell><ns0:cell>28 × 28 × 1</ns0:cell><ns0:cell>3136</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv1</ns0:cell><ns0:cell>24 × 24 × 64</ns0:cell><ns0:cell>147456</ns0:cell><ns0:cell>(5 × 5) × 64 + 64 =</ns0:cell><ns0:cell>1664</ns0:cell></ns0:row><ns0:row><ns0:cell>BN+ReLU</ns0:cell><ns0:cell>24 × 24 × 64 × 2</ns0:cell><ns0:cell>294912</ns0:cell><ns0:cell>4 × 64 =</ns0:cell><ns0:cell>256</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv2</ns0:cell><ns0:cell>20 × 20 × 128</ns0:cell><ns0:cell>204800</ns0:cell><ns0:cell>5 × 5 × 64 × 128 + 128 =</ns0:cell><ns0:cell>204928</ns0:cell></ns0:row><ns0:row><ns0:cell>BN+ReLU</ns0:cell><ns0:cell cols='2'>20 × 20 × 128 × 2 409600</ns0:cell><ns0:cell>4 × 128 =</ns0:cell><ns0:cell>512</ns0:cell></ns0:row><ns0:row><ns0:cell>Max-pooling1</ns0:cell><ns0:cell>10 × 10 × 128</ns0:cell><ns0:cell>51200</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv3</ns0:cell><ns0:cell>6 × 6 × 176</ns0:cell><ns0:cell>25344</ns0:cell><ns0:cell>5 × 5 × 128 × 176 + 176 =</ns0:cell><ns0:cell>563376</ns0:cell></ns0:row><ns0:row><ns0:cell>BN+ReLU</ns0:cell><ns0:cell>6 × 6 × 176 × 2</ns0:cell><ns0:cell>50688</ns0:cell><ns0:cell>4 × 176 =</ns0:cell><ns0:cell>704</ns0:cell></ns0:row><ns0:row><ns0:cell>Conv4</ns0:cell><ns0:cell>2 × 2 × 208</ns0:cell><ns0:cell>3328</ns0:cell><ns0:cell>5 × 5 × 176 × 208 + 208 =</ns0:cell><ns0:cell>915408</ns0:cell></ns0:row><ns0:row><ns0:cell>BN+ReLU</ns0:cell><ns0:cell>2 × 2 × 208 × 2</ns0:cell><ns0:cell>6656</ns0:cell><ns0:cell>4 × 208 =</ns0:cell><ns0:cell>832</ns0:cell></ns0:row><ns0:row><ns0:cell>Max-pooling2</ns0:cell><ns0:cell>1 × 1 × 208</ns0:cell><ns0:cell>832</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>FC</ns0:cell><ns0:cell>1 × 1 × 10</ns0:cell><ns0:cell>40</ns0:cell><ns0:cell>208 × 10 + 10 =</ns0:cell><ns0:cell>2090</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Total Memory</ns0:cell><ns0:cell>1197992 Bytes</ns0:cell><ns0:cell>Total Parameters</ns0:cell><ns0:cell>1689770</ns0:cell></ns0:row></ns0:table></ns0:figure>
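As a minimal sketch of the 28×28 column of Table 6 (PyTorch is assumed here; the paper does not name its framework), the layer stack below reproduces the spatial sizes and parameter counts of the table, with the final max-pooling reducing every feature map to a single neuron, i.e. the full depth idea:

import torch.nn as nn

def conv_block(c_in, c_out):
    # 5×5 convolution without padding (shrinks each side by 4), then BN + ReLU
    return nn.Sequential(nn.Conv2d(c_in, c_out, kernel_size=5),
                         nn.BatchNorm2d(c_out), nn.ReLU())

fdcnn28 = nn.Sequential(
    conv_block(1, 64),     # 28×28 -> 24×24
    conv_block(64, 128),   # 24×24 -> 20×20
    nn.MaxPool2d(2),       # 20×20 -> 10×10
    conv_block(128, 176),  # 10×10 -> 6×6
    conv_block(176, 208),  # 6×6  -> 2×2
    nn.MaxPool2d(2),       # 2×2  -> 1×1: one neuron per feature map (full depth)
    nn.Flatten(),
    nn.Linear(208, 10),    # 208 × 10 + 10 = 2090 parameters, as in Table 6
)

# Table 6 counts 4 parameters per batch-norm channel (scale, shift and the two
# running statistics); PyTorch reports only the 2 learnable ones, so this prints
# 1688618 rather than the table's total of 1689770.
print(sum(p.numel() for p in fdcnn28.parameters()))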
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Test results of FDCNN on MNIST.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Architecture</ns0:cell><ns0:cell>Type</ns0:cell><ns0:cell>Number of Layers</ns0:cell><ns0:cell>Error</ns0:cell></ns0:row><ns0:row><ns0:cell>Cireşan et al. (2010)</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>0.35%</ns0:cell></ns0:row><ns0:row><ns0:cell>Ciresan et al. (2011)</ns0:cell><ns0:cell>sparse</ns0:cell><ns0:cell>35</ns0:cell><ns0:cell>0.27%</ns0:cell></ns0:row><ns0:row><ns0:cell>Ciregan et al. (2012)</ns0:cell><ns0:cell>sparse</ns0:cell><ns0:cell>245</ns0:cell><ns0:cell>0.23%</ns0:cell></ns0:row><ns0:row><ns0:cell>Moradi et al. (2019)</ns0:cell><ns0:cell>sparse</ns0:cell><ns0:cell>70</ns0:cell><ns0:cell>0.28%</ns0:cell></ns0:row><ns0:row><ns0:cell>Kowsari et al. (2018)</ns0:cell><ns0:cell>sparse</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>0.18%</ns0:cell></ns0:row><ns0:row><ns0:cell>Assiri (2019)</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>0.17%</ns0:cell></ns0:row><ns0:row><ns0:cell>Hirata and Takahashi (2020)</ns0:cell><ns0:cell>sparse</ns0:cell><ns0:cell>28</ns0:cell><ns0:cell>0.16%</ns0:cell></ns0:row><ns0:row><ns0:cell>Byerly et al. (2020)</ns0:cell><ns0:cell>sparse</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>0.16%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>0.28%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Arabic character recognition benchmarks: state-of-the-art and proposed approach test errors.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Architecture</ns0:cell><ns0:cell>Type</ns0:cell><ns0:cell>Layers</ns0:cell><ns0:cell>Parameters</ns0:cell><ns0:cell>Error</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>RBF SVM Abdelazeem and El-Sherif (2008)</ns0:cell><ns0:cell>linear</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>0.52%</ns0:cell></ns0:row><ns0:row><ns0:cell>MADbase 28 × 28</ns0:cell><ns0:cell>LeNet5 El-Sawy et al. (2017)</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>51K</ns0:cell><ns0:cell>12%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Alphanumeric VGG Mudhsh and Almodfer (2017)</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>17</ns0:cell><ns0:cell>2.1M</ns0:cell><ns0:cell>0.34% validation</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>VGG12 REGU Sousa (2018)</ns0:cell><ns0:cell>Average of 4 stacked CNN</ns0:cell><ns0:cell>66</ns0:cell><ns0:cell>18.56M</ns0:cell><ns0:cell>0.48%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>proposed</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>0.34%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CNN El-Sawy et al. (2017)</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>1.8M</ns0:cell><ns0:cell>5.1%</ns0:cell></ns0:row><ns0:row><ns0:cell>AHCD 32 × 32</ns0:cell><ns0:cell>CNN Younis (2017)</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>200K</ns0:cell><ns0:cell>2.4%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>VGG12 REGU Sousa (2018)</ns0:cell><ns0:cell>Average of 4 stacked CNN</ns0:cell><ns0:cell>66</ns0:cell><ns0:cell>18.56M</ns0:cell><ns0:cell>1.58%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CNN Najadat et al. (2019)</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>Not mentioned</ns0:cell><ns0:cell>2.8%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>proposed</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>2.94M</ns0:cell><ns0:cell>1.39%</ns0:cell></ns0:row><ns0:row><ns0:cell>AIA9K</ns0:cell><ns0:cell>RBF SVM Torki et al. (2014)</ns0:cell><ns0:cell>linear</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>5.72%</ns0:cell></ns0:row><ns0:row><ns0:cell>32 × 32</ns0:cell><ns0:cell>CNN Younis (2017)</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>200K</ns0:cell><ns0:cell>5.2%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>proposed</ns0:cell><ns0:cell>stacked</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>2.94M</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Table 10 shows the promising results on LP benchmarks. FDCNN outperformed the state-of-the-art results on common LP datasets for the isolated character recognition problem. The Zemris, UCSD, Snapshots and ReId datasets were not used in the training process, but the proposed FDCNN was tested on each of them as a test set to ensure that the model was fitted to character features, not to a dataset itself. For the UFPR dataset, FDCNN was tested twice on the UFPR test set: once trained only on the UFPR training set, and once trained on both UFPR and LPALIC characters. It is clear that FDCNN has been effectively verified on common LP benchmarks. Recognition error of the proposed architecture on LP benchmark datasets. For further analysis, another test was made on the introduced LPALIC dataset to analyze the recognition error per country's characters. Table 11 describes the results. As seen in Table 11, the highest error is in classifying USA LP characters, because USA plates have more colors, drawings and shapes other than the characters.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Architecture</ns0:cell><ns0:cell>Dataset</ns0:cell><ns0:cell>Layers</ns0:cell><ns0:cell>Parameters</ns0:cell><ns0:cell>Error</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM (Panahi and Gholampour, 2017)</ns0:cell><ns0:cell /><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>3%</ns0:cell></ns0:row><ns0:row><ns0:cell>LCR-Alexnet (Meng et al., 2018)</ns0:cell><ns0:cell>Zemris</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>>2.33M</ns0:cell><ns0:cell>2.7%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed</ns0:cell><ns0:cell /><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>0.979%</ns0:cell></ns0:row><ns0:row><ns0:cell>OCR (Dlagnekov, 2005)</ns0:cell><ns0:cell>UCSD</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>10.5%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed</ns0:cell><ns0:cell /><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>1.51%</ns0:cell></ns0:row><ns0:row><ns0:cell>MLP (Martinsky, 2007)</ns0:cell><ns0:cell>Snapshots</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>15%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed</ns0:cell><ns0:cell /><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>0.42%</ns0:cell></ns0:row><ns0:row><ns0:cell>CNN (Špaňhel et al., 2017)</ns0:cell><ns0:cell /><ns0:cell>12</ns0:cell><ns0:cell>17M</ns0:cell><ns0:cell>3.5%</ns0:cell></ns0:row><ns0:row><ns0:cell>DenseNet169 (Zhu et al., 2019)</ns0:cell><ns0:cell>ReID</ns0:cell><ns0:cell>169</ns0:cell><ns0:cell>>15.3M</ns0:cell><ns0:cell>6.35%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed</ns0:cell><ns0:cell /><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>1.09%</ns0:cell></ns0:row><ns0:row><ns0:cell>CNN (Laroca et al., 2018)</ns0:cell><ns0:cell /><ns0:cell>26</ns0:cell><ns0:cell>43.1M</ns0:cell><ns0:cell>35.1%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed trained just on UFPR</ns0:cell><ns0:cell>UFPR</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>4.29%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed trained on LPALIC</ns0:cell><ns0:cell /><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>2.03%</ns0:cell></ns0:row><ns0:row><ns0:cell>Line Processing Algorithm (Khaled et al., 2010)</ns0:cell><ns0:cell>KSA</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>1.78%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed</ns0:cell><ns0:cell /><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>0.46%</ns0:cell></ns0:row><ns0:row><ns0:cell>FDCNN</ns0:cell><ns0:cell>LPALIC</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>1.69M</ns0:cell><ns0:cell>0.97%</ns0:cell></ns0:row></ns0:table></ns0:figure>
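The cross-dataset protocol above amounts to scoring one trained model on benchmarks that contributed nothing to training; a minimal sketch (an illustration, not the authors' code):

def evaluate_external(model, test_error, external_sets):
    # external_sets maps a benchmark name (e.g. 'Zemris', 'UCSD', 'Snapshots',
    # 'ReId') to its (X_test, y_test) pair; none of these benchmarks contributed
    # any image to training, so the scores probe generalization to new datasets
    return {name: test_error(model, X, y) for name, (X, y) in external_sets.items()}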
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table /><ns0:note>As described in Table 3, the number of characters per country is not equal, which resulted in varying recognition accuracies in Table 11. Since the number of UAE characters is not large enough to train FDCNN, Latin characters from other countries were used for training, but the test was done only on the UAE test set. FDCNN could learn features that give good average accuracy.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Test recognition error per country's characters with different training instances. In the manual split in Table 11, the country's character training and testing sets were used to train and test FDCNN. In the 'trained on other countries' setting, FDCNN was trained on both the country's character training set and other countries' characters, but tested only on that country's test set. In the random 80/20 split, the country's characters were split randomly into training and testing sets, and FDCNN was trained on both the split country's character training set and other countries' characters, but tested only on that split country's test set; many random split tests were done and the average errors are reported in the table.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Characters Set</ns0:cell><ns0:cell>Number of Instances Train / Test</ns0:cell><ns0:cell>Manual Split</ns0:cell><ns0:cell>Trained on Other Countries</ns0:cell><ns0:cell>Random 80/20% Split Average Error</ns0:cell><ns0:cell>Random 70/10/20% Split Average Error</ns0:cell></ns0:row><ns0:row><ns0:cell>TR</ns0:cell><ns0:cell>48748 / 11755</ns0:cell><ns0:cell>2.67%</ns0:cell><ns0:cell>1.82%</ns0:cell><ns0:cell>0.97%</ns0:cell><ns0:cell>0.99%</ns0:cell></ns0:row><ns0:row><ns0:cell>EU</ns0:cell><ns0:cell>23299 / 9477</ns0:cell><ns0:cell>2.30%</ns0:cell><ns0:cell>1.07%</ns0:cell><ns0:cell>1.03%</ns0:cell><ns0:cell>0.80%</ns0:cell></ns0:row><ns0:row><ns0:cell>USA</ns0:cell><ns0:cell>5960 / 1424</ns0:cell><ns0:cell>10.88%</ns0:cell><ns0:cell>3.51%</ns0:cell><ns0:cell>1.96%</ns0:cell><ns0:cell>1.79%</ns0:cell></ns0:row><ns0:row><ns0:cell>UAE</ns0:cell><ns0:cell>1279 / 1724</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>1.51%</ns0:cell><ns0:cell>0.9%</ns0:cell><ns0:cell>1.08%</ns0:cell></ns0:row><ns0:row><ns0:cell>All Latin Characters</ns0:cell><ns0:cell>96899 / 24380</ns0:cell><ns0:cell>2.08%</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>0.97%</ns0:cell><ns0:cell>1.06%</ns0:cell></ns0:row><ns0:row><ns0:cell>KSA</ns0:cell><ns0:cell>46981 / 3018</ns0:cell><ns0:cell>0.43%</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>0.26%</ns0:cell><ns0:cell>0.30%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='4'>https://www.kaggle.com/mloey1/ahcd1 5 www.eng.alexu.edu.eg/%7emehussein/AIA9k/index.html 6 github.com/zalandoresearch/fashion-mnist</ns0:note>
</ns0:body>
" | "Electrical-Electronics Engineering
Bartin University – Turkey
May 1, 2021
Dear Editors,
We thank the editors and reviewers for their efforts, time and useful comments.
We have edited the manuscript to address your valuable comments. We believe that your
comments have made the manuscript scientifically suitable for publication.
Mohammed SALEMDEEB
PhD, Electronics and Communication Engineering.
Bartin University, Turkey,
Faculty of Engineering,
Electrical-Electronics Engineering department.
On behalf of all authors.
Reviewer: Kanika Thakur
Basic reporting
The author has written in simple, unambiguous, and competent language. The literature is well-referenced and applicable, with an introduction and history to show meaning. The paper
discusses a new method for LP recognition that involves stacking two CNN networks. Batch
normalization and a non-linear activation layer follow the convolution layer, which is at the core
of the convolution block.
Experimental design
All of the research questions have been presented and answered in a clear and concise manner,
and they are all important and significant in the context.
Validity of the findings
All validation problems have been addressed; they are stable, statistically accurate, and well-controlled.
Comments for the author
I congratulate the authors on their extensive data collection. Furthermore, the manuscript is
written in plain, unambiguous language, and the comments have been satisfactorily addressed.
Thank you a lot.
Reviewer 2 (Anonymous)
Basic reporting
no comment
Experimental design
The authors still only use a training set and a test set, with no validation set. The text doesn't
guarantee in a sufficiently clear way that the results weren't optimized specifically for this test
set, which could mean the data is biased to fit this specific test set.
The training hyperparameters should be optimized on a validation set, and the best parameters
for the validation set should then be used to calculate the test metrics.
Validity of the findings
The previous comment on validation sets must be clearly addressed by the authors. It is not the
same thing to use a validation set and then a test set, and to optimize directly for the test set
performance (which seems to be the case here).
We conducted more training runs (that is why it took some time to resubmit) and tests using
validation sets, and Table 11 has been updated.
For more clarification, we added these sentences at line 368:
“Furthermore, in Table 11, validation sets were also used to guarantee in a sufficiently clear way
that the results were not optimized specifically for those test sets. 70% of the dataset is randomly
split for training, 10% for validation and 20% for testing. The training hyperparameters were
optimized on the validation set, and the best parameters for the validation set were then used to
calculate the error on the test set.”
Furthermore, always prefer to state where the performance is calculated. If the accuracy was
computed on the test set, mention it as 'test accuracy'. This must be abundantly clear to the
reader.
Comments for the Author
no comment
We prefer to measure the performance by stating the test error; the accuracy and the error can be
considered as one metric, because the accuracy is also represented by the average error. We
followed the same way of representing this metric as is used in most of the references.
Reviewer 3 (Anonymous)
Basic reporting
Accept
Experimental design
Authors attended to the suggestions of the reviewers.
Validity of the findings
Authors attended to the suggestions of the reviewers.
Comments for the Author
Authors attended to the suggestions of the reviewers.
Thank you a lot.
" | Here is a paper. Please give your review comments after reading it. |
133 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>A generative model is a statistical model capable of generating new data instances from previously observed ones. In the context of business processes, a generative model creates new execution traces from a set of historical traces, also known as an event log.</ns0:p><ns0:p>Two types of generative business process models have been developed in previous work: data-driven simulation models and deep learning models. Until now, these two approaches have evolved independently, and their relative performance has not been studied. This paper fills this gap by empirically comparing a data-driven simulation approach with multiple deep learning approaches for building generative business process models. The study sheds light on the relative strengths of these two approaches and raises the prospect of developing hybrid approaches that combine these strengths.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Process mining is a family of techniques that allow users to interactively analyze data extracted from enterprise information systems in order to derive insights to improve one or more business processes.</ns0:p><ns0:p>Process mining tools extract business process execution data from an enterprise system and consolidate it in the form of an event log.</ns0:p><ns0:p>In this setting, an event log is a collection of execution traces of a business process. Each trace in an event log consists of a sequence of event records. An event record captures an execution of one activity, which takes place as part of one execution of a business process. For example, in an order-to-cash process, each execution of the process (also known as a case) corresponds to the handling of one purchase order.</ns0:p><ns0:p>Hence, in an event log of this process, each trace contains records of the activities that were performed in order to handle one specific purchase order (e.g. purchase order PO2039). This trace contains one event record per activity execution. Each event record contains the identifier of the case (PO2039), an activity label (e.g. 'Dispatch the Products'), an activity start timestamp (e.g. 2020-11-06T10:12:00), an activity end timestamp (e.g. 2020-11-06T11:54:00), the resource who performed the activity (e.g. the identifier of a clerk at the company's warehouse), and possibly other attributes, such as ID of the client.</ns0:p><ns0:p>A generative model of a business process is a statistical model constructed from an event log, which is able to generate traces that resemble those observed in the log as well as other traces of the process.</ns0:p><ns0:p>Generative process models have several applications in the field of process mining, including anomaly detection <ns0:ref type='bibr' target='#b19'>(Nolle et al., 2018)</ns0:ref>, predictive monitoring <ns0:ref type='bibr' target='#b26'>(Tax et al., 2017)</ns0:ref>, what-if scenario analysis <ns0:ref type='bibr' target='#b3'>(Camargo et al., 2020)</ns0:ref> and conformance checking <ns0:ref type='bibr' target='#b22'>(Sani et al., 2020)</ns0:ref>. Two families of generative models have been studied in the process mining literature: Data-Driven Simulation (DDS) and Deep Learning (DL) models.</ns0:p><ns0:p>DDS models are discrete-event simulation models constructed from an event log. Several authors have proposed techniques for discovering DDS models, ranging from semi-automated techniques <ns0:ref type='bibr' target='#b18'>(Martin et al., 2016)</ns0:ref> to automated ones <ns0:ref type='bibr' target='#b20'>(Rozinat et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b3'>Camargo et al., 2020)</ns0:ref>. A DDS model is generally constructed by first discovering a process model from an event log and then fitting a number of parameters PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55306:1:1:NEW 28 Apr 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science (e.g. mean inter-arrival rate, branching probabilities, etc.) in a way that maximizes the similarity between the traces that the DDS model generates and those in (a subset of) the event log.</ns0:p><ns0:p>On the other hand, DL generative models are machine learning models consisting of interconnected layers of artificial neurons adjusted based on input-output pairs in order to maximize accuracy. 
Generative DL models have been widely studied in the context of predictive process monitoring <ns0:ref type='bibr' target='#b26'>(Tax et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b8'>Evermann et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b16'>Lin et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b27'>Taymouri et al., 2020)</ns0:ref>, where they are used to generate the remaining path (suffix) of an incomplete trace by repeatedly predicting the next event. It has been shown that these models can also be used to generate entire traces <ns0:ref type='bibr' target='#b2'>(Camargo et al., 2019)</ns0:ref> (not just suffixes).</ns0:p><ns0:p>To date, the relative accuracy of these two families of generative process models has not been studied, barring a study that compares DL models vs automatically discovered process models that generate events without timestamps <ns0:ref type='bibr' target='#b25'>(Tax et al., 2020)</ns0:ref>. This paper fills this gap by empirically comparing these approaches using eleven event-logs, which vary in terms of structural and temporal characteristics. Based on the evaluation results, the paper discusses the relative strengths and potential synergies of these approaches.</ns0:p><ns0:p>The paper is organized as follows. Sections 2 and 3 review DDS and DL generative modeling approaches, respectively. Section 4 presents the empirical evaluation setup while Section 5 presents the findings. Section 6 discusses the conceptual trade-offs between DDS and DL approaches in terms of expressiveness and interpretability and relates these trade-offs to the empirical findings. Finally, Section 7 concludes and outlines future work.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>GENERATIVE DATA-DRIVEN PROCESS SIMULATION MODELS</ns0:head><ns0:p>Business Process Simulation (BPS) is a quantitative process analysis technique in which a discrete-event model of a process is stochastically executed a number of times, and the resulting simulated execution traces are used to compute aggregate performance measures such as the average waiting times of activities or the average cycle time of the process <ns0:ref type='bibr' target='#b6'>(Dumas et al., 2018)</ns0:ref>.</ns0:p><ns0:p>Typically, a BPS model consists of a process model enhanced with time and resource-related parameters such as the inter-arrival time of cases and its associated Probability Distribution Function (PDF), the PDFs of each activity's processing times, a branching probability for each conditional branch in the process model, and the resource pool responsible for performing each activity type in the process model <ns0:ref type='bibr' target='#b6'>(Dumas et al., 2018)</ns0:ref>. Such BPS models are stochastically executed by creating new cases according to the inter-arrival time PDF, and by simulating the execution of each case constrained to the control-flow semantics of the process model and to the following activity execution rules: (i) If an activity in a case is enabled, and there is an available resource in the pool associated to this activity, the activity is started and allocated to one of the available resources in the pool; (ii) When the completion time of an activity is reached, the resource allocated to the activity is made available again. Hence, the waiting time of an activity is entirely determined by the availability of a resource. Resources are assumed to be eager: as soon as a resource is assigned to an activity, the activity is started.</ns0:p><ns0:p>A key ingredient for BPS is the availability of a BPS model that accurately reflects the actual dynamics of the process. Traditionally, BPS models are created manually by domain experts by gathering data from interviews, contextual inquiries, and on-site observation. In this approach, the accuracy of the BPS model is limited by the accuracy of the process model used as a starting point.</ns0:p><ns0:p>Several techniques for discovering BPS models from event logs have been proposed <ns0:ref type='bibr' target='#b18'>(Martin et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b20'>Rozinat et al., 2009)</ns0:ref>. These approaches start by discovering a process model from an event log and then enhance this model with simulation parameters derived from the log (e.g. arrival rate, branching probabilities). Below, we use the term DDS model to refer to a BPS model discovered from an event log.</ns0:p><ns0:p>Existing approaches for discovering a DDS from an event log can be classified in two categories.</ns0:p><ns0:p>The first category consists of approaches that provide conceptual guidance to discover BPS models. For example, <ns0:ref type='bibr' target='#b18'>Martin et al. (2016)</ns0:ref> discusses how PM techniques can be used to extract, validate, and tune BPS model parameters, without seeking to provide fully automated support. Similarly, <ns0:ref type='bibr' target='#b29'>Wynn et al. (2008)</ns0:ref> outlines a series of steps to construct a DDS model using process mining techniques. The second category of approaches seek to automate the extraction of simulation parameters. For example, <ns0:ref type='bibr' target='#b20'>Rozinat et al. 
(2009)</ns0:ref> proposes a pipeline for constructing a DDS using process mining techniques. However, in this approach, the tuning of the simulation model (i.e., fitting the parameters to the data) is left to the user.</ns0:p><ns0:p>In this research, we use Simod <ns0:ref type='bibr' target='#b3'>(Camargo et al., 2020)</ns0:ref> as a representative DDS method because, to the best of our knowledge, it is the only fully automated method for discovering and tuning business process simulation models from event logs. The use of methods with automated tuning steps, such as that Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>of <ns0:ref type='bibr' target='#b20'>Rozinat et al. (2009)</ns0:ref>, would introduce two sources of bias in the evaluation: (i) a bias stemming from the manual tuning of simulation parameters, which would have to be done separately for each event log using limited domain knowledge; and (ii) a bias stemming from the fact that the DDS model would be manually tuned while the deep learning models are automatically tuned as part of the model training phase.</ns0:p><ns0:p>By using Simod, we ensure a fair comparison, insofar as we compare a DDS method with automatic data-driven tuning of model parameters with deep learning methods that, likewise, tune their parameters (weights) to fit the data. In the structure discovery stage, Simod extracts a BPMN model from data and guarantees its quality and coherence with the event log. The first step is the Control Flow Discovery, using the SplitMiner algorithm <ns0:ref type='bibr' target='#b1'>(Augusto et al., 2019b)</ns0:ref>, which is known for being one of the fastest, simple, and accurate discovery algorithms. Next, Simod applies Trace alignment to assess the conformance between the discovered process model and each trace in the input log. The tool provides options for handling nonconformant traces via removal, replacement, or repair to ensure full conformance, which is needed in the following stages. Then Simod discovers the model branching probabilities offering two options: assign equal values to each conditional branch or computing the conditional branches' traversal frequencies by replaying the event log over the process model. Once all the structural components are extracted, they are assembled into a single data structure that a discrete event simulator can interpret (e.g., Bimp). The simulator is responsible for reproducing the model at discrete moments, generating an event log as a result.</ns0:p></ns0:div>
<ns0:div><ns0:head>Structure discovery</ns0:head><ns0:p>Then Simod uses a hyperparameter optimization technique to discover the configuration that maximizes the Control-Flow Log Similarity (CFLS) between the produced log and the ground truth.</ns0:p><ns0:p>In the time-related parameters discovery stage, Simod takes as input the structure of the optimized model, extracts all the simulation parameters related to the times perspective, and assembles them in a single BPS model. The extracted parameters correspond to the probability density function (PDF) of Interarrival times, the Resource pools involved in the process, the Activities durations, the instances generation calendars and the resources availability calendars. The PDFs of inter-arrival times and activities durations are discovered by fitting a collection of possible distribution functions to the data series, selecting the one that yields the smallest standard error. The evaluated PDFs correspond to those supported by the BIMP simulator <ns0:ref type='bibr'>(i.e., normal, lognormal, gamma, exponential, uniform, and triangular distributions)</ns0:ref>. The resource pool is discovered using the algorithm proposed by Song and Van der Aalst <ns0:ref type='bibr' target='#b24'>(Song and van der Aalst, 2008)</ns0:ref>; likewise, the resources are assigned to the different activities according to the frequency of execution. Finally, Simod discovers calendar expressions that capture the resources' time availability restricting the hours they can execute tasks. Similarly, the tool discovers case creation timetables that limit when the process instances can be created. Once all these simulation parameters are compiled, Simod again uses the hyperparameter optimization technique to discover the configuration that minimizes the Earth Mover's Distance (EMD) distance between the produced log and the ground truth.</ns0:p><ns0:p>The final product of the two optimization cycles is a model that reflects the structure and the simulation parameters that best represent the time dynamics observed in the ground truth log.</ns0:p></ns0:div>
<ns0:div><ns0:head>3/16</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55306:1:1:NEW 28 Apr 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head n='3'>GENERATIVE DEEP LEARNING MODELS OF BUSINESS PROCESSES</ns0:head><ns0:p>A Deep Learning (DL) model is a network composed of multiple interconnected layers of neurons (perceptrons), which perform non-linear transformations of data <ns0:ref type='bibr' target='#b10'>(Hao et al., 2016)</ns0:ref>. These transformations allow training the network to learn the behaviors/patterns observed in the data. Theoretically, the more layers of neurons there are in the system, the more it becomes possible to detect higher-level patterns in the data thanks to the composition of complex functions <ns0:ref type='bibr' target='#b14'>(LeCun et al., 2015)</ns0:ref>. A wide range of neural network architectures have been proposed in the literature, e.g., feed-forward networks, Convolutional Neural Networks (CNN), Variational Autoencoders (VAE), and Recurrent Neural Networks (RNN). The latter type of architecture is specifically designed for handling sequential data.</ns0:p><ns0:p>DL models have been applied in several sub-fields of process mining, particularly in the context of predictive process monitoring. Predictive process monitoring is a class of process mining techniques that are concerned with predicting, at runtime, some property about the future state of a case, e.g. predicting the next event(s) in an ongoing case or the remaining time until completion of the case. Fig. <ns0:ref type='figure' target='#fig_2'>2</ns0:ref> depicts the main phases for the construction and evaluation of DL models for predictive process monitoring. In the first phase (pre-processing) the events in the log are transformed into (numerical) feature vectors and grouped into sequences, each sequence corresponding to the execution of a case in the process (a trace). Next, a model architecture is selected depending on the prediction target. In this respect, different architectures may be used for predicting the type of the next event, its timestamp, or both. Not surprisingly, given that event logs consist of sequences (traces), various studies have advocated the use of RNNs in the context of predictive process monitoring. The model is then built using a certain training method. In this respect, a distinction can be made between classical generative training methods, which train a single neural network to generate sequences of events, and Generative Adversarial Network (GAN) methods, which train two neural networks by making them play against each other: one network trained to generate sequences and a second network to discriminate between sequences that have been observed in the dataset and sequences that are not present in the dataset. GAN methods have been shown in various applications to outperform classical training methods when a sufficiently large dataset is available, at the expense of higher computational cost. <ns0:ref type='formula'>2017</ns0:ref>) proposed an RNN-based architecture with a classical training method to train models that generate the most likely remaining sequence of events (suffix) starting from a prefix of an ongoing case. However, this architecture cannot handle numerical features, and hence it cannot generate sequences of timestamped events. The approaches of <ns0:ref type='bibr' target='#b16'>Lin et al. 
(2019)</ns0:ref>, and other approaches benchmarked in <ns0:ref type='bibr' target='#b25'>(Tax et al., 2020)</ns0:ref>, also lack of this ability to predict timestamps and durations.</ns0:p><ns0:p>In this paper, we tackle the problem of generating traces consisting not only of event types (i.e. activity labels) but also timestamps. One of the earliest studies to tackle this problem in the context of predictive process monitoring was that of <ns0:ref type='bibr' target='#b26'>Tax et al. (2017)</ns0:ref>, who proposed an approach to predict the type of the next event in an ongoing case, as well as its timestamp, using RNNs with a type of architecture known as Long-Short-Term Memory (LSTM). The same study showed that this approach can be effectively used to generate the remaining sequence of timestamped events, starting from a given prefix of a case. However, this approach cannot handle high dimensional inputs due to its reliance on one-hot encoding of categorical features. As a result, its accuracy deteriorates as the number of categorical features increases. This limitation is lifted in the DeepGenerator approach <ns0:ref type='bibr' target='#b2'>(Camargo et al., 2019)</ns0:ref>, which extends the approach of <ns0:ref type='bibr' target='#b26'>Tax et al. (2017)</ns0:ref> with two mechanisms to handle high-dimensional input, namely n-grams and embeddings, and integrates a mechanism for avoiding temporal instability namely Random Choice next-event selection. A more recent study <ns0:ref type='bibr' target='#b27'>Taymouri et al. (2020)</ns0:ref> proposes to use a GAN method to train an LSTM model capable of predicting the type of the next event and its timestamp. The authors show that this GAN approach outperforms classical training methods (for the task of predicting the next event and timestamp) on certain datasets.</ns0:p></ns0:div>
<ns0:div><ns0:head>4/16</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_10'>2020:11:55306:1:1:NEW 28 Apr 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In the empirical evaluation reported in this paper, we retain the LSTM approach of <ns0:ref type='bibr' target='#b2'>Camargo et al. (2019)</ns0:ref> and the GAN approach of <ns0:ref type='bibr' target='#b27'>Taymouri et al. (2020)</ns0:ref> as representative methods for training generative DL models from event logs. We selected these methods because they have the capability of generating both the type of the next event in a trace and its timestamp. This means that if we iteratively apply these methods starting from an empty sequence, via an approach known as hallucination, we can generate a sequence of events such that each event has one timestamp (the end timestamp). Hence, these methods can be used to produce entire sequences of timestamped events and therefore they can be used to generate event logs that are comparable to those that DDS methods generate, with the difference that the above DL training methods associate only one timestamp to each event whereas DDS methods associate both a start and an end timestamp to each event. Accordingly, for full comparability, we need to adapt the above two DL methods to generate two timestamps per event. In the following sub-sections we describe each of these approach and how we adapted them to fit this requirement.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>DeepGenerator approach</ns0:head><ns0:p>The DeepGenerator approach trains a generative model by using attributes extracted from the original event log, specifically activities, roles, relative times (start and end timestamps), and contextual times (day of the week, time during the day). These generative models are able to produce traces consisting of triplets (event type, role, timestamp). A role refers to a group of resources who are able to perform a given activity (e.g. 'Clerk' or 'Sales Representative'). In this paper, we adapt DeepGenerator to generate sequences of triplets of the form (event-type, start-timestamp, end-timestamp). Each triplet captures the execution of an activity of a given type (event-type) together with the timeframe during which the activity was executed. In this paper, we do not attach roles to events, in order to make the DeepGenerator method fully comparable to Simod as discussed in Sect. 7.</ns0:p><ns0:p>In the pre-processing phase (cf. Fig. <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>), DeepGenerator applies encoding and scaling techniques to transform the event log depending on the data type of each event attribute (categorical vs. continuous).</ns0:p><ns0:p>Categorical attributes (activities and roles) are encoded using embeddings in order to keep the data dimensionality low, as this property enhances the performance of the neural network. Meantime, start and end timestamps are relativized and scaled over a range of [0, 1]. The relativization is carried out by first calculating two features: the activities duration and the time-between-activities. The duration of an activity (a.k.a. the processing time) is the difference between its complete timestamp and its start timestamp. The time-between-activities (a.k.a. the waiting time) is the difference between the start timestamp of an activity and the end timestamp of the immediately preceding activity in the same trace. All relative times are scaled using normalization or log-normalization depending on the variability of the times in the event log.</ns0:p><ns0:p>Once the features are encoded, DeepGenerator executes the sequences creation step to extract n-grams which allow better handling of long sequences. One n-gram is generated for each step of the process execution and this is done for each attribute independently. Hence, DeepGenerator uses four independent inputs: activity prefixes, role prefixes, relativized durations, and relativized time-between-activities.</ns0:p><ns0:p>In the model training phase, one of two possible architectures is selected for training. These architectures, depicted in Fig. <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>, vary depending on whether or not they share intermediate layers. The use of shared layers sometimes helps to better differentiate between execution patterns. DeepGenerator uses LSTM layers or GRU layers. Both of these types of layers are suitable for handling sequential data, with GRU layers sometimes outperforming LSTM layers <ns0:ref type='bibr' target='#b17'>(Mangal et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b4'>Chung et al., 2014)</ns0:ref>.</ns0:p><ns0:p>Finally, the post-processing phase uses the resulting DL model in order to generate a set of traces (i.e. an event log). DeepGenerator takes each generated trace and uses the classical hallucination method to repeatedly ask the DL model to predict the next event given the events observed so far (or given the empty trace in the case of the first event). 
This step is repeated until we observe the 'end of trace' event.</ns0:p><ns0:p>At each step, the DL model predicts multiple possible 'next events', each one with a certain probability.</ns0:p><ns0:p>DeepGenerator selects among these possible events randomly but weighted by the associated probabilities. This mechanism turns out to be the most suitable for the task of generating complete event logs by avoiding getting stuck in the higher probabilities <ns0:ref type='bibr' target='#b2'>(Camargo et al., 2019)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>LSTM-GAN approach</ns0:head><ns0:p>The approach proposed by <ns0:ref type='bibr' target='#b27'>Taymouri et al. (2020)</ns0:ref> trains LSTM generative models using the GAN strategy.</ns0:p><ns0:p>The strategy proposed by the authors consists of two LSTM models, one generative and one discriminative, that are trained simultaneously through a game of adversaries. In this game, the generative model has to learn how to confuse a discriminative model to avoid distinguishing real examples from fake ones. As the game unfolds, the discriminative model becomes more capable of distinguishing between fake and real examples, thus forcing the generator to improve the generated examples. Fig. <ns0:ref type='figure'>4a</ns0:ref> presents the general architecture of the GAN strategy proposed in <ns0:ref type='bibr' target='#b27'>(Taymouri et al., 2020)</ns0:ref>. We performed modifications in every phase of this approach to be able to generate full traces and entire event logs so as to make it fully comparable with DDS methods. In the preprocessing phase (cf. Fig. <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>), the features corresponding to the activity's category and relative times are encoded and transformed. The model uses one-hot encoding for creating a binary column for each activity and returning a sparse matrix. We adapted the model to enable the prediction of two timestamps instead of one.The original method by <ns0:ref type='bibr' target='#b27'>(Taymouri et al., 2020)</ns0:ref> only handles one continuous attribute per event (the end timestamp). We added another continuous attribute to capture the time (in seconds) between the the end of the previous event in the sequence and the start of the current one. This additional attribute is herein called the inter-activity times.Next, the inter-activity times are then rounded up to the granularity of days so as to create a so-called design matrix composed of the one-hot encoded activities and the scaled inter-activity times. Then, we create the prefixes and the expected events in order to train the models. Since the original model was intended to train models starting from a k-sized prefix, all the smaller prefixes were discarded and the prediction of the first event of a trace was not considered. We also adapted the model to be trained to predict zero-size prefixes. For this purpose, we extended the number of prefixes considered by including a dummy start event before each trace and by applying right-padding to the prefixes. This modification of the input implied updating the loss functions in order to consider the additional attribute.</ns0:p><ns0:p>In the model training phase, <ns0:ref type='bibr' target='#b27'>Taymouri et al. (2020)</ns0:ref> trained specialized models to predict the next event from prefixes of a predefined size. While this approach is suitable for predicting the next event, it is not suitable for predicting entire traces of unknown size. Therefore, we train a single model with a</ns0:p></ns0:div>
<ns0:div><ns0:p>In the model training phase, <ns0:ref type='bibr' target='#b27'>Taymouri et al. (2020)</ns0:ref> trained specialized models to predict the next event from prefixes of a predefined size. While this approach is suitable for predicting the next event, it is not suitable for predicting entire traces of unknown size. Therefore, we train a single model with a prefix of size five. This strategy is grounded in the results of the evaluation reported by Sindhgatta et al. (2020), from which the authors concluded that increasing the size of the prefix used by the LSTM models (beyond a size of five events) does not substantially improve the model's predictive accuracy.</ns0:p><ns0:p>Finally, in the post-processing phase, we take the complete predicted suffix to feed back into the model, instead of considering only the first event predicted by the model (see Fig. <ns0:ref type='figure'>4b</ns0:ref>). We carry out this operation to take advantage of the fact that the original generative model is a sequence-to-sequence model, which receives a sequence of size k and predicts a sequence of size k. The empirical evidence reported by <ns0:ref type='bibr' target='#b2'>Camargo et al. (2019)</ns0:ref> shows that concatenating only the last event predicted by the model causes a rapid degradation in the model's long-term precision, as the model gets trapped in always predicting the most probable events. Accordingly, we use weighted random selection to choose the next event type.</ns0:p></ns0:div>
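A minimal sketch of this inference loop (ours; generator.predict is an assumed sequence-to-sequence interface, not the authors' actual API) could look as follows:

    import numpy as np

    def generate_full_trace(generator, k, start_idx, end_idx, max_len=500):
        # Generate one trace by feeding back complete k-sized predicted suffixes.
        rng = np.random.default_rng()
        trace = [start_idx] * k             # dummy-start padding for the first call
        while len(trace) < max_len:
            suffix_probs = generator.predict(trace[-k:])  # k probability vectors
            for probs in suffix_probs:
                probs = np.asarray(probs, dtype=float)
                event = rng.choice(len(probs), p=probs / probs.sum())
                trace.append(event)         # feed back the whole suffix, event by event
                if event == end_idx or len(trace) >= max_len:
                    return trace[k:]        # drop the dummy padding
        return trace[k:]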
<ns0:div><ns0:head n='4'>EVALUATION</ns0:head><ns0:p>This section presents an empirical comparison of DDS and DL generative process models. The evaluation aims at addressing the following questions: (i) what is the relative accuracy of these approaches when it comes to generating traces of events without timestamps? and (ii) what is their relative accuracy when it comes to generating traces of events with timestamps?</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Datasets</ns0:head><ns0:p>We evaluated the selected approaches using eleven event logs that contain both start and end timestamps. In this evaluation, we use real logs from public and private sources, as well as synthetic logs generated from simulation models of real processes:</ns0:p><ns0:p>• The event log of a manufacturing production (MP) process is a public log that contains the steps exported from an Enterprise Resource Planning (ERP) system <ns0:ref type='bibr' target='#b15'>(Levy, 2014)</ns0:ref>.</ns0:p><ns0:p>• The event log of a purchase-to-pay (P2P) process is a public synthetic log generated from a model not available to the authors. 1</ns0:p><ns0:p>• The event log from an Academic Credentials Recognition (ACR) process of a Colombian University was gathered from its BPM system (Bizagi).</ns0:p><ns0:p>• The W subset of the BPIC2012 2 event log, which is a public log of a loan application process from a Dutch financial institution. The W subset of this log is composed of the events corresponding to activities performed by human resources (i.e. only activities that have a duration).</ns0:p><ns0:p>• The W subset of the BPIC2017 3 event log, which is an updated version of the BPIC2012 log. We carried out the extraction of the W subset by following the recommendations reported by the winning teams participating in the BPIC 2017 challenge 4 .</ns0:p><ns0:p>• We used three private logs of real-life processes, each corresponding to a training-data scenario of a different size. The POC log belongs to an undisclosed banking process, and the CALL log belongs to a helpdesk process. Both of them correspond to large-size training data scenarios. The INS log belongs to an insurance claims process and corresponds to a small-size training data scenario. For confidentiality reasons, only the detailed results of these three event logs will be provided.</ns0:p><ns0:p>• We used three synthetic logs generated from simulation models of real-life processes 5 . The selected models are complex enough to represent scenarios in which parallelism, resource contention, or scheduled waiting times occur. From these models, we generated event logs, varying the number of instances to represent greater or lesser availability of training data. The CVS retail pharmacy (CVS) event log is a large-size training data scenario from a simulation model of an exercise described in the book Fundamentals of Business Process Management <ns0:ref type='bibr' target='#b6'>(Dumas et al., 2018)</ns0:ref>. We generated the CFM and CFS event logs from an anonymized confidential process; they were used to represent scenarios of large-size and small-size training data, respectively.</ns0:p></ns0:div>
<ns0:div><ns0:p>Table <ns0:ref type='table' target='#tab_4'>1</ns0:ref> characterizes these logs according to the number of traces and events. The BPI17W and BPI12W logs have the largest number of traces and events, while the MP, CFS and P2P logs have fewer traces but a higher average number of events per trace.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Evaluation measures</ns0:head><ns0:p>We use a generative process model to generate an event log (multiple times) and then we measure the average similarity between the generated logs and a ground-truth event log. To this end, we define four measures of similarity between pairs of logs: Control-Flow Log Similarity (CFLS), Mean Absolute Error (MAE) of cycle times, Earth-Mover's Distance (EMD) of the histograms of activity processing times, and Event Log Similarity (ELS). It is important to clarify that the generation of time and activity sequences is not a classification task. Therefore, the precision and recall metrics traditionally used for predicting the next event do not apply. Instead, we use symmetric distance metrics (i.e., metrics that penalize the differences between a and b in the same way as those from b to a) that measure both precision and recall at the same time, as explained in <ns0:ref type='bibr' target='#b21'>(Sander et al., 2021)</ns0:ref>.</ns0:p><ns0:p>CFLS is defined based on a measure of distance between pairs of traces: one trace coming from the original event log and the other from the generated log. We first convert each trace into a sequence of activities (i.e. we drop the timestamps and other attributes). In this way, a trace becomes a sequence of symbols (i.e. a string). We then measure the difference between two traces using the Damerau-Levenshtein distance, which is the minimum number of edit operations necessary to transform one string (a trace in our context) into another. The supported edit operations are insertion, deletion, substitution, and transposition. Transpositions are allowed without penalty when two activities are concurrent, meaning that they appear in any order, i.e. given two activities, we observe both AB and BA in the log. Next, we normalize the resulting Damerau-Levenshtein distance by dividing the number of edit operations by the length of the longest sequence. We then define the control-flow trace similarity as one minus the normalized Damerau-Levenshtein distance. Given this trace similarity notion, we pair each trace in the generated log with a trace in the original log, in such a way that the sum of the trace similarities between the paired traces is maximal. This pairing is done using the Hungarian algorithm for computing optimal alignments <ns0:ref type='bibr' target='#b13'>(Kuhn, 1955)</ns0:ref>. Finally, we define the CFLS between the real and the generated log as the average similarity of the optimally paired traces.</ns0:p><ns0:p>The cycle time MAE measures the temporal similarity between two logs. The absolute error of a pair of traces T1 and T2 is the absolute value of the difference between the cycle time of T1 and that of T2. The cycle time MAE is the mean of the absolute errors over a collection of paired traces. As for the CFLS measure, we use the Hungarian algorithm to pair each trace in the generated log with a corresponding trace in the original log.</ns0:p></ns0:div>
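For illustration, the optimal trace pairing and the resulting CFLS can be computed along the following lines (a simplified sketch assuming the textdistance and scipy packages; the concurrency-aware transposition rule is omitted for brevity, and the same pairing can be reused to compute the cycle time MAE):

    import numpy as np
    import textdistance
    from scipy.optimize import linear_sum_assignment

    def trace_similarity(t1, t2):
        # 1 - normalized Damerau-Levenshtein distance between activity sequences
        dist = textdistance.damerau_levenshtein(t1, t2)
        return 1.0 - dist / max(len(t1), len(t2))

    def cfls(real_log, generated_log):
        # real_log, generated_log: lists of traces (lists of activity labels)
        sim = np.array([[trace_similarity(r, g) for g in generated_log]
                        for r in real_log])
        rows, cols = linear_sum_assignment(sim, maximize=True)  # Hungarian algorithm
        return sim[rows, cols].mean()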
<ns0:div><ns0:p>The cycle time MAE is a rough measure of the temporal similarity between the traces in the original and the generated log. It does not take into account the timing of the events in a trace -- only the cycle time of the full trace. To complement the cycle time MAE, we use the Earth Mover's Distance (EMD) between the normalized histogram of the mean durations of the activities in the ground-truth log and the same histogram computed from the generated log. The EMD between two histograms H1 and H2 is the minimum number of units that need to be added to, removed from, or transferred across columns in H1 in order to transform it into H2. The EMD is zero if the observed mean activity durations in the two logs are identical, and it tends to one the more they differ.</ns0:p><ns0:p>The above measures focus either on the control-flow or on the temporal perspective. To complement them, we use a measure that combines both perspectives, namely the ELS as defined in <ns0:ref type='bibr' target='#b3'>(Camargo et al., 2020)</ns0:ref>. This measure is defined in the same way as CFLS above, except that it uses a distance measure between traces that takes into account both the activity labels and their timestamps. This distance measure between traces is called Business Process Trace Distance (BPTD). The BPTD measures the distance between traces composed of events that occur in time intervals. This metric is an adaptation of the CFLS metric that, in the case of label matching, assigns a penalty based on the differences in times. BPTD also supports parallelism, which commonly occurs in business processes. To do this, BPTD validates the concurrency relationship between activities by applying the oracle used by the alpha algorithm in process discovery. We call ELS the generalization of BPTD that measures the distance between two event logs using the Hungarian algorithm <ns0:ref type='bibr' target='#b13'>(Kuhn, 1955)</ns0:ref>.</ns0:p></ns0:div>
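For 1-D histograms defined over the same bins of mean activity durations, the EMD reduces to the cumulative difference between the two distributions; a minimal sketch (ours, assuming unit-spaced bins shared by both histograms):

    import numpy as np

    def emd_1d(hist_real, hist_gen):
        # Earth Mover's Distance between two histograms over identical bins
        h1 = np.asarray(hist_real, dtype=float)
        h2 = np.asarray(hist_gen, dtype=float)
        h1, h2 = h1 / h1.sum(), h2 / h2.sum()   # normalize to probability mass
        # Mass that must be moved across columns = area between the two CDFs
        return np.abs(np.cumsum(h1 - h2)).sum()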
<ns0:div><ns0:head n='4.3'>Experiment setup</ns0:head><ns0:p>The aim of the evaluation is to compare the accuracy of DDS models vs DL models discovered from event logs. Fig. <ns0:ref type='figure' target='#fig_5'>5</ns0:ref> presents the pipeline we followed. The use of temporal splits is common in the field of predictive process monitoring (from which the DL techniques included in this study are drawn), as it prevents information leakage <ns0:ref type='bibr' target='#b2'>(Camargo et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b27'>Taymouri et al., 2020)</ns0:ref>.</ns0:p><ns0:p>We use the first 80% of the training fold to construct candidate DDS models and the remaining 20% for validation. We use Simod's hyperparameter optimizer to tune the DDS model (see the tool's two discovery stages in Sect. 2). First, the optimizer in the structure discovery stage was set to explore 15 parameter configurations with five simulation runs per configuration. At this stage, we kept the DDS model that gave the best results on the validation sub-fold in terms of CFLS averaged across the five runs. Second, the optimizer in the time-related parameters discovery stage was set to explore 20 parameter configurations with five simulation runs per configuration. At this stage, we kept the DDS model that gave the best results on the validation sub-fold in terms of EMD averaged across the five runs. As a result of the two stages, Simod finds the model that is best in terms of both structure and time dynamics. We defined the number of</ns0:p></ns0:div>
<ns0:div><ns0:p>optimizer trials in each stage by considering the differences in the size of the search space at each stage (see Simod's model parameters in Table <ns0:ref type='table' target='#tab_7'>2</ns0:ref>). The experimental results show that the best possible value is reached in fewer attempts than expected. In the case of the LSTM-GAN implementation, as proposed by the authors <ns0:ref type='bibr' target='#b27'>(Taymouri et al., 2020)</ns0:ref>, we dynamically set the number of hidden units in each layer to twice the size of the input. Additionally, we use 25 training epochs, a batch size of five, and a prefix size of five.</ns0:p><ns0:p>The above led us to one DDS, one LSTM, one GRU, and one LSTM-GAN model per log. We then generated five logs per retained model. To ensure comparability, each generated log was of the same size (number of traces) as the testing fold of the original log. We then compared each generated log with the testing fold using the ELS, CFLS, EMD and MAE measures defined above. We report the mean of each of these measures across the five logs generated from each model in order to smooth out stochastic variations.</ns0:p></ns0:div>
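Schematically, each tuning stage boils down to a random search of the form below (a sketch with hypothetical helper functions discover, simulate, and score standing in for Simod's discovery step, its simulation engine, and the CFLS or negated EMD metric):

    import random

    def tune_stage(param_samplers, n_trials, n_runs, discover, simulate, score):
        # param_samplers: dict of parameter name -> zero-argument sampler,
        # e.g. {'eta': lambda: random.uniform(0, 1)}
        best_model, best_score = None, float('-inf')
        for _ in range(n_trials):
            config = {name: draw() for name, draw in param_samplers.items()}
            model = discover(config)
            avg = sum(score(simulate(model)) for _ in range(n_runs)) / n_runs
            if avg > best_score:                # keep the best model on validation
                best_model, best_score = model, avg
        return best_model

    # Stage 1: n_trials=15, score=CFLS (higher is better).
    # Stage 2: n_trials=20, score=negated EMD (so that higher is better).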
<ns0:div><ns0:head n='5'>FINDINGS</ns0:head><ns0:p>Fig. <ns0:ref type='figure' target='#fig_7'>7</ns0:ref> presents the evaluation results of the CFLS, MAE, and ELS measures grouped by event log size and source type. Table <ns0:ref type='table' target='#tab_10'>3</ns0:ref> presents the exact values of all metrics sorted by metric, event log size, and source type. The Event-log column identifies the evaluated log, while the GRU, LSTM, LSTM-GAN, and SIMOD columns present the accuracy measures. Note that ELS and CFLS are similarity measures (higher is better), whereas MAE and EMD are error/distance measures (lower is better).</ns0:p></ns0:div>
<ns0:div><ns0:p>The results show a clear dependence of the models' accuracy on the size of the training data. For small logs, Simod presents a greater similarity in the control-flow generation in three of the five evaluated logs, as shown by the CFLS results. In the remaining two logs, the measure is not far from the best reported values. In terms of MAE, Simod obtains the smallest errors in four of the five logs, which leads to greater ELS similarity in four of the five logs. However, for large logs, the LSTM model presents the best CFLS results in five of the six evaluated logs, whereas the GRU model performs better in the remaining one. In terms of MAE, the LSTM model obtains the lowest errors in four of the six logs, whereas the LSTM-GAN model performs better in the remaining two. The difference between the DL and Simod models for the MAE measure is consistent, and dramatic in some cases, such as the CALL log. In this log, Simod generates a difference almost forty times greater than that reported by the DL models. This can be the result of resource contention that is non-existent in the ground truth.</ns0:p><ns0:p>When analyzing the ELS measure, which combines the two perspectives of control flow and time distance, the LSTM model obtains the greatest similarity in five out of six logs and the GRU model in the remaining one. The LSTM-GAN model does not obtain a better result in this metric due to its poor performance in control-flow similarity. The LSTM-GAN model's low performance is explained by the fact that the temporal stability of the models' predictions declines rapidly, despite a higher precision in predicting the next event, as demonstrated in <ns0:ref type='bibr' target='#b27'>(Taymouri et al., 2020)</ns0:ref>. This result also indicates overfitting of the models, which prevents the generalization of this approach for this predictive task.</ns0:p><ns0:p>On the one hand, the results indicate that DDS models perform well when capturing the occurrence and order of activities (control-flow similarity), and that this behavior is independent of the training dataset size. A possible explanation for this result is that event logs of business processes (at least the ones included in this evaluation) follow certain normative pathways captured sufficiently by automatically discovered simulation models. However, deep learning models, and especially LSTM models, outperform the DDS models if a sufficiently large training dataset is available.</ns0:p><ns0:p>On the other hand, deep learning models are more accurate when it comes to capturing the cycle times of the cases in the large logs (cf. the lower MAE for DL models vs. DDS models). Here, we observe that both DDS and DL models achieve similar EMD values, which entails that both types of models predict the processing times of activities with similar accuracy. Therefore, we conclude that the differences in temporal accuracy (cycle time MAE) between DL and DDS models come from the fact that DL models can better predict the waiting times of activities, rather than the processing times (the cycle time of a process instance adds the processing times, i.e., activity durations, and the waiting times <ns0:ref type='bibr' target='#b6'>(Dumas et al., 2018)</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>DISCUSSION</ns0:head><ns0:p>The results of the empirical evaluation reflect the trade-offs between DDS models and deep learning models. Indeed, these two families of models strike different trade-offs between modeling capabilities (expressive power) on the one hand and interpretability on the other.</ns0:p><ns0:p>The results specifically highlight the limitations in the modeling capabilities of DDS models. Such limitations arise both along the control-flow perspective (sequences of events) and along the temporal perspective (the timestamps associated with each event).</ns0:p><ns0:p>From a control-flow perspective, DDS models can only generate sequences that can be fully parsed by a business process model. In the case of Simod, this model is a BPMN model. The choice of modeling notation naturally introduces a representational bias (van der Aalst, 2011). For example, free-choice workflow nets -- which have the expressive power of BPMN models with XOR and AND gateways <ns0:ref type='bibr' target='#b9'>(Favre et al., 2015)</ns0:ref> -- have limitations that prevent them from capturing certain synchronization constructs <ns0:ref type='bibr' target='#b11'>(Kiepuszewski et al., 2003)</ns0:ref>. Adopting a more expressive notation may reduce this representational bias, possibly at the expense of interpretability. Furthermore, any DDS approach relies on an underlying automated process discovery algorithm. For example, Simod relies on the Split Miner algorithm <ns0:ref type='bibr' target='#b1'>(Augusto et al., 2019b)</ns0:ref> to discover BPMN models. Every such algorithm is limited in terms of the class of process models that it can generate. For example, Split Miner and other algorithms based on directly-follows graphs (e.g. Fodina) cannot capture process models with duplicate activity labels (i.e. multiple activity nodes in the model sharing the same label), while the Inductive Miner algorithm cannot capture non-block-structured process models <ns0:ref type='bibr' target='#b0'>(Augusto et al., 2019a)</ns0:ref>. In contrast, deep learning models for sequence generation rely on non-linear functions that model the probability that a given activity occurs after a given sequence prefix. Depending on the type of architecture used and its parameters (e.g. the number of layers, the type of activation function, the learning rate), these models may be able to learn dependencies that cannot be captured by the class of BPMN models generated by a given process discovery algorithm such as Split Miner.</ns0:p><ns0:p>Along the temporal perspective, DDS models make assumptions about the sources of waiting times of activities. Chiefly, DDS models assume that waiting times are caused exclusively by resource contention, and that as soon as a resource is available and assigned to an activity, the resource starts the activity in question (robotic behavior) <ns0:ref type='bibr' target='#b28'>(van der Aalst, 2015)</ns0:ref>. Furthermore, DDS models generally fail to capture inter-dependencies between multiple concurrent cases (besides resource contention), such as batching or prioritization between cases (some cases having a higher priority than others) <ns0:ref type='bibr' target='#b28'>(van der Aalst, 2015)</ns0:ref>. Another limitation relates to the assumption that resources perform one activity at a time, i.e. no multi-tasking <ns0:ref type='bibr' target='#b7'>(Estrada-Torres et al., 2020)</ns0:ref>. In contrast, deep learning models simply try to learn the time to the next activity in a trace based on observed patterns in the data. As such, they may learn to predict delays associated with inter-case dependencies as well as delays caused by exogenous factors, such as workers being busy performing work not related to the simulated process. These observations explain why deep learning models outperform DDS models when it comes to capturing the time between consecutive activities (and thus the total case duration). DDS models are prone to underestimating waiting times, and hence cycle times, because they only take into account waiting times due to resource contention. Meanwhile, deep learning models learn to replicate the distributions of waiting times regardless of their origin.</ns0:p><ns0:p>In other words, the inability of DDS models to accurately capture waiting times can be attributed to the fact that these models rely on the assumption that waiting times can be fully explained by the availability of resources: resource contention is treated as the sole cause of waiting times, and resources are assumed to be eager, as discussed in Section 2 (i.e., resources start an activity as soon as it is allocated to them). Conversely, DL models try to find the best possible fit for the observed waiting times without making any assumptions about the behavior of the resources involved in the process.</ns0:p><ns0:p>On the other hand, DDS models are arguably more interpretable than deep learning models, insofar as they rely on a white-box representation of the process that analysts typically use in practice. This property implies that DDS models can be modified by business analysts to capture what-if scenarios, such as what would happen if a task were removed from the model. Also, DDS models explicitly capture one of the possible causes of waiting times, namely resource contention, while deep learning models do not explicitly capture any such mechanism. As such, DDS models are more amenable to capturing increases or reductions in waiting or processing times that arise when a change is applied to a process. Specifically, DDS models are capable of capturing the additional waiting time (or the reduction in waiting time) that results from higher or lower resource contention, for example due to an increase in the number of cases created per time unit.</ns0:p></ns0:div>
<ns0:div><ns0:head n='7'>CONCLUSION</ns0:head><ns0:p>In this paper, we compared the accuracy of two approaches to discover generative models from event logs: Data-Driven Simulation (DDS) and Deep Learning (DL). The results suggest that DDS models are suitable for capturing the sequence of activities of a process. On the other hand, DL models outperform DDS models when predicting the timing of activities, specifically the waiting times between activities. This observation can be explained by the fact that the simulation models used by DDS approaches assume that waiting times are entirely attributable to resource contention, i.e. to the fact that all resources that are able to perform an enabled activity instance are busy performing other activity instances. In other words, these approaches do not take into account the multitude of sources of waiting times that may arise in practice, such as waiting times caused by batching, prioritization of some cases relative to others, resources being involved in other business processes, or fatigue effects.</ns0:p><ns0:p>A natural direction for future work is to extend existing DDS approaches in order to take into account a wider range of mechanisms affecting waiting times, so as to increase their temporal accuracy. However, the causes of waiting times in business processes may ultimately prove to be so diverse that no DDS approach would be able to capture them in their entirety. An alternative approach would be to combine DDS approaches with DL approaches so as to take advantage of their relative strengths. In such a hybrid approach, the DDS model would capture the control-flow perspective, while the DL model would capture the temporal dynamics, particularly the waiting times. The DDS model would also provide an interpretable model that users can change in order to define 'what-if' scenarios, e.g. a scenario where an activity is removed or a new activity is added. Two challenges need to be overcome to design such a hybrid DDS-DL approach: (i) how to integrate the DDS model with the DL model; and (ii) how to incorporate the information of a what-if scenario into the DL model.</ns0:p><ns0:p>A possible approach to tackle the first of these challenges is to generate sequences of events using a DDS model, or more specifically a stochastic model trained to generate distributions of sequences of activities <ns0:ref type='bibr' target='#b21'>(Sander et al., 2021)</ns0:ref>. In a second stage, the traces generated by such a stochastic model can be extended to incorporate timestamps via a deep learning model trained to predict waiting times, i.e. the time between the moment an activity is enabled and the time it starts. Processing times can then be added using either a deep learning model or a temporal probability distribution, as in DDS approaches. To tackle the second of the above challenges, we need a mechanism to adjust the predictions made by a deep learning model in order to capture a change in the process, e.g. the fact that an activity has been deleted. A possible approach is to adapt existing techniques to incorporate domain knowledge (e.g. the fact that an activity will not occur in a given suffix of a trace) into the output of a DL model <ns0:ref type='bibr' target='#b5'>(Di Francescomarino et al., 2017)</ns0:ref>.</ns0:p><ns0:p>In this paper, we focused on comparing DDS and DL approaches designed to predict sequences of activities together with their start and end timestamps. An event log may contain other attributes, most notably the resource who performs each activity and/or the role of this resource. A possible direction</ns0:p></ns0:div>
<ns0:div><ns0:p>for future work is to compare the relative performance of DDS and DL approaches for the task of generating event logs with resources and/or roles. While there exist deep learning approaches to generate sequences of events with resources <ns0:ref type='bibr' target='#b2'>(Camargo et al., 2019)</ns0:ref>, existing DDS approaches, including Simod, are not able to discover automatically tuned simulation models covering the resource perspective. To design such a DDS approach, we first need to define a loss function that takes into account both the control-flow and the resource perspectives. We also need to incorporate a mechanism to assign a specific (individually identified) resource to each activity instance, while ensuring that the associations between activity instances and resources in the simulated log are reflective of those observed in the original log. In other words, a possible direction for future work is to design a DDS technique that handles roles and resources as first-class citizens and to compare the relative performance of such a DDS technique against equivalent DL techniques.</ns0:p></ns0:div>
<ns0:div><ns0:head>Reproducibility package</ns0:head></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Fig. 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Fig. 1 depicts the steps of the Simod method, namely Structure discovery and Time-related parameters discovery.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Pipeline of Simod to generate process models.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Phases and steps for building DL models</ns0:figDesc><ns0:graphic coords='5,183.09,403.66,330.86,53.75' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Explored DeepGenerator architectures: (a) this architecture concatenates the inputs related to activities and roles, and shares the first layer; (b) this architecture completely shares the first layer. Although the roles' prefixes are encoded and predicted, their accuracy is not evaluated.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Figure 4. LSTM-GAN architecture: (a) Training strategy, (b) Inference strategy.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Experimental pipeline. We use the hold-out method with a temporal split criterion to divide the event logs into two folds: 80% for training and 20% for testing. Next, we use the training fold to train the DDS and the DL models.</ns0:figDesc></ns0:figure>
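For illustration, the temporal hold-out split described in the caption of Figure 5 could be implemented along these lines (a minimal sketch; the trace schema with a start_time attribute is an assumption):

    def temporal_split(log, train_frac=0.8):
        # Hold-out split by case start time, which prevents information leakage
        # from future cases into the training fold.
        ordered = sorted(log, key=lambda trace: trace.start_time)
        cut = int(len(ordered) * train_frac)
        return ordered[:cut], ordered[cut:]   # (training fold, testing fold)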
<ns0:figure xml:id='fig_6'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6 shows the P2P log, for which the best model was found at trial 10 of the first optimization stage and at trial 13 of the second stage.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Evaluation results: the first column presents the CFLS results in similarity units (higher is better), the second column presents the MAE results in distance units (lower is better), and the third column presents the ELS results (higher is better).</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Event logs description</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Size</ns0:cell><ns0:cell>Type of source</ns0:cell><ns0:cell>Event log</ns0:cell><ns0:cell>Num. traces</ns0:cell><ns0:cell>Num. events</ns0:cell><ns0:cell>Num. activities</ns0:cell><ns0:cell>Avg. activities per trace</ns0:cell><ns0:cell>Avg. duration</ns0:cell><ns0:cell>Max. duration</ns0:cell></ns0:row><ns0:row><ns0:cell>LARGE</ns0:cell><ns0:cell>REAL</ns0:cell><ns0:cell>POC</ns0:cell><ns0:cell>70512</ns0:cell><ns0:cell>415261</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>5.89</ns0:cell><ns0:cell>15.21 days</ns0:cell><ns0:cell>269.23 days</ns0:cell></ns0:row><ns0:row><ns0:cell>LARGE</ns0:cell><ns0:cell>REAL</ns0:cell><ns0:cell>BPI17W</ns0:cell><ns0:cell>30276</ns0:cell><ns0:cell>240854</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>7.96</ns0:cell><ns0:cell>12.66 days</ns0:cell><ns0:cell>286.07 days</ns0:cell></ns0:row><ns0:row><ns0:cell>LARGE</ns0:cell><ns0:cell>REAL</ns0:cell><ns0:cell>BPI12W</ns0:cell><ns0:cell>8616</ns0:cell><ns0:cell>59302</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>6.88</ns0:cell><ns0:cell>8.91 days</ns0:cell><ns0:cell>85.87 days</ns0:cell></ns0:row><ns0:row><ns0:cell>LARGE</ns0:cell><ns0:cell>REAL</ns0:cell><ns0:cell>CALL</ns0:cell><ns0:cell>3885</ns0:cell><ns0:cell>7548</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>1.94</ns0:cell><ns0:cell>2.39 days</ns0:cell><ns0:cell>59.1 days</ns0:cell></ns0:row><ns0:row><ns0:cell>LARGE</ns0:cell><ns0:cell>SYNTHETIC</ns0:cell><ns0:cell>CVS</ns0:cell><ns0:cell>10000</ns0:cell><ns0:cell>103906</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>10.39</ns0:cell><ns0:cell>7.58 days</ns0:cell><ns0:cell>21.0 days</ns0:cell></ns0:row><ns0:row><ns0:cell>LARGE</ns0:cell><ns0:cell>SYNTHETIC</ns0:cell><ns0:cell>CFM</ns0:cell><ns0:cell>2000</ns0:cell><ns0:cell>44373</ns0:cell><ns0:cell>29</ns0:cell><ns0:cell>26.57</ns0:cell><ns0:cell>0.76 days</ns0:cell><ns0:cell>5.83 days</ns0:cell></ns0:row><ns0:row><ns0:cell>SMALL</ns0:cell><ns0:cell>REAL</ns0:cell><ns0:cell>INS</ns0:cell><ns0:cell>1182</ns0:cell><ns0:cell>23141</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>19.58</ns0:cell><ns0:cell>70.93 days</ns0:cell><ns0:cell>599.9 days</ns0:cell></ns0:row><ns0:row><ns0:cell>SMALL</ns0:cell><ns0:cell>REAL</ns0:cell><ns0:cell>ACR</ns0:cell><ns0:cell>954</ns0:cell><ns0:cell>4962</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>5.2</ns0:cell><ns0:cell>14.89 days</ns0:cell><ns0:cell>135.84 days</ns0:cell></ns0:row><ns0:row><ns0:cell>SMALL</ns0:cell><ns0:cell>REAL</ns0:cell><ns0:cell>MP</ns0:cell><ns0:cell>225</ns0:cell><ns0:cell>4503</ns0:cell><ns0:cell>24</ns0:cell><ns0:cell>20.01</ns0:cell><ns0:cell>20.63 days</ns0:cell><ns0:cell>87.5 days</ns0:cell></ns0:row><ns0:row><ns0:cell>SMALL</ns0:cell><ns0:cell>SYNTHETIC</ns0:cell><ns0:cell>CFS</ns0:cell><ns0:cell>1000</ns0:cell><ns0:cell>21221</ns0:cell><ns0:cell>29</ns0:cell><ns0:cell>26.53</ns0:cell><ns0:cell>0.83 days</ns0:cell><ns0:cell>4.09 days</ns0:cell></ns0:row><ns0:row><ns0:cell>SMALL</ns0:cell><ns0:cell>SYNTHETIC</ns0:cell><ns0:cell>P2P</ns0:cell><ns0:cell>608</ns0:cell><ns0:cell>9119</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>21.46 days</ns0:cell><ns0:cell>108.31 days</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head /><ns0:label /><ns0:figDesc>Table 2).</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell>Stage</ns0:cell><ns0:cell>Parameter</ns0:cell><ns0:cell>Distribution</ns0:cell><ns0:cell>Values</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Parallelism threshold (ε)</ns0:cell><ns0:cell>Uniform</ns0:cell><ns0:cell>[0...1]</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Structure discovery</ns0:cell><ns0:cell>Percentile for frequency threshold (η)</ns0:cell><ns0:cell>Uniform</ns0:cell><ns0:cell>[0...1]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Conditional branching probabilities</ns0:cell><ns0:cell>Categorical</ns0:cell><ns0:cell>{Equiprobable, Discovered}</ns0:cell></ns0:row><ns0:row><ns0:cell>Simod</ns0:cell><ns0:cell /><ns0:cell>Log repair technique Resource pools similarity threshold</ns0:cell><ns0:cell>Categorical Uniform</ns0:cell><ns0:cell>{Repair, Removal, Replace} [0...1]</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Time-related parameters discovery</ns0:cell><ns0:cell>Resource availability calendar support Resource availability calendar confidence</ns0:cell><ns0:cell>Uniform Uniform</ns0:cell><ns0:cell>[0...1] [0...1]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Instances creation calendar support</ns0:cell><ns0:cell>Uniform</ns0:cell><ns0:cell>[0...1]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Instances creation calendars confidence</ns0:cell><ns0:cell>Uniform</ns0:cell><ns0:cell>[0...1]</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>N-gram size</ns0:cell><ns0:cell>Categorical</ns0:cell><ns0:cell>{5, 10, 15}</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Input scaling method</ns0:cell><ns0:cell>Categorical</ns0:cell><ns0:cell>{Max, Lognormal}</ns0:cell></ns0:row><ns0:row><ns0:cell>LSTM/GRU Training</ns0:cell><ns0:cell /><ns0:cell># units in hidden layer</ns0:cell><ns0:cell>Categorical</ns0:cell><ns0:cell>{50, 100}</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Activation function for hidden layers</ns0:cell><ns0:cell>Categorical</ns0:cell><ns0:cell>{Selu, Tanh}</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Model type</ns0:cell><ns0:cell>Categorical</ns0:cell><ns0:cell>{Shared Categorical, Full Shared}</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Parameter ranges and distributions used for hyperparameter optimization</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head /><ns0:label /><ns0:figDesc>Table 4 links to the repositories of the approaches used in the evaluation. The datasets, generative models, and the raw and summarized results can be found at: https://doi.org/ 10.5281/zenodo.4699983. DeepGenerator https://github.com/AdaptiveBProcess/GenerativeLSTM. Adapted LSTM(GAN) https://github.com/AdaptiveBProcess/LSTM-GAN.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Approach</ns0:cell><ns0:cell>Repository</ns0:cell></ns0:row><ns0:row><ns0:cell>Simod tool</ns0:cell><ns0:cell>https://github.com/AdaptiveBProcess/Simod.</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Source code of the approaches used in the evaluation</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Metric</ns0:cell><ns0:cell>Size</ns0:cell><ns0:cell>Type of source</ns0:cell><ns0:cell>Event Log</ns0:cell><ns0:cell>GRU</ns0:cell><ns0:cell>LSTM</ns0:cell><ns0:cell>LSTM-GAN</ns0:cell><ns0:cell>SIMOD</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>POC</ns0:cell><ns0:cell>0.63141</ns0:cell><ns0:cell>0.67176</ns0:cell><ns0:cell>0.28998</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>LARGE</ns0:cell><ns0:cell>REAL</ns0:cell><ns0:cell>BPI17W BPI12W CALL</ns0:cell><ns0:cell>0.63751 0.58375 0.82995</ns0:cell><ns0:cell>0.71798 0.70228 0.83043</ns0:cell><ns0:cell>0.36629 0.35073 0.24055</ns0:cell><ns0:cell>0.58861 0.53744 0.62911</ns0:cell></ns0:row><ns0:row><ns0:cell>CLFS</ns0:cell><ns0:cell /><ns0:cell>SYNTHETIC</ns0:cell><ns0:cell>CVS CFM</ns0:cell><ns0:cell>0.83369 0.81956</ns0:cell><ns0:cell>0.85752 0.60224</ns0:cell><ns0:cell>0.20898 0.11412</ns0:cell><ns0:cell>0.71359 0.77094</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>INS</ns0:cell><ns0:cell>0.50365</ns0:cell><ns0:cell>0.51299</ns0:cell><ns0:cell>0.25619</ns0:cell><ns0:cell>0.61034</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>REAL</ns0:cell><ns0:cell>ACR</ns0:cell><ns0:cell>0.78413</ns0:cell><ns0:cell>0.78879</ns0:cell><ns0:cell>0.18073</ns0:cell><ns0:cell>0.67959</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>SMALL</ns0:cell><ns0:cell /><ns0:cell>MP</ns0:cell><ns0:cell>0.27094</ns0:cell><ns0:cell>0.23197</ns0:cell><ns0:cell>0.06691</ns0:cell><ns0:cell>0.34596</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>SYNTHETIC</ns0:cell><ns0:cell>CFS P2P</ns0:cell><ns0:cell>0.69543 0.41179</ns0:cell><ns0:cell>0.66782 0.65904</ns0:cell><ns0:cell>0.10157 0.13556</ns0:cell><ns0:cell>0.76648 0.45297</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>POC</ns0:cell><ns0:cell>801147</ns0:cell><ns0:cell>778608</ns0:cell><ns0:cell>603105</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>LARGE</ns0:cell><ns0:cell>REAL</ns0:cell><ns0:cell>BPI17W BPI12W CALL</ns0:cell><ns0:cell>868766 701892 160485</ns0:cell><ns0:cell>603688 327350 174343</ns0:cell><ns0:cell>828165 653656 159424</ns0:cell><ns0:cell>961727 662333 679847</ns0:cell></ns0:row><ns0:row><ns0:cell>MAE</ns0:cell><ns0:cell /><ns0:cell>SYNTHETIC</ns0:cell><ns0:cell>CVS CFM</ns0:cell><ns0:cell>859926 25346</ns0:cell><ns0:cell>667715 15078</ns0:cell><ns0:cell>952004 956289</ns0:cell><ns0:cell>252458</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>INS</ns0:cell><ns0:cell>1586323</ns0:cell><ns0:cell>1516368</ns0:cell><ns0:cell>1302337</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>REAL</ns0:cell><ns0:cell>ACR</ns0:cell><ns0:cell>344811</ns0:cell><ns0:cell>341694</ns0:cell><ns0:cell>296094</ns0:cell><ns0:cell>230363</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>SMALL</ns0:cell><ns0:cell /><ns0:cell>MP</ns0:cell><ns0:cell>335553</ns0:cell><ns0:cell>321147</ns0:cell><ns0:cell>210714</ns0:cell><ns0:cell>298641</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>SYNTHETIC</ns0:cell><ns0:cell>CFS P2P</ns0:cell><ns0:cell>30327 2407551</ns0:cell><ns0:cell>33016 2495593</ns0:cell><ns0:cell>717266 2347070</ns0:cell><ns0:cell>15297</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell 
/><ns0:cell>POC</ns0:cell><ns0:cell>0.58215</ns0:cell><ns0:cell>0.65961</ns0:cell><ns0:cell>0.28503</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>LARGE</ns0:cell><ns0:cell>REAL</ns0:cell><ns0:cell>BPI17W BPI12W CALL</ns0:cell><ns0:cell>0.63643 0.57862 0.79336</ns0:cell><ns0:cell>0.70317 0.67751 0.81645</ns0:cell><ns0:cell>0.35282 0.33649 0.19123</ns0:cell><ns0:cell>0.58412 0.52555 0.59371</ns0:cell></ns0:row><ns0:row><ns0:cell>ELS</ns0:cell><ns0:cell /><ns0:cell>SYNTHETIC</ns0:cell><ns0:cell>CVS CFM</ns0:cell><ns0:cell>0.65160 0.68292</ns0:cell><ns0:cell>0.70355 0.43825</ns0:cell><ns0:cell>0.16854 0.09505</ns0:cell><ns0:cell>0.70154 0.66301</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>INS</ns0:cell><ns0:cell>0.49625</ns0:cell><ns0:cell>0.50939</ns0:cell><ns0:cell>0.23070</ns0:cell><ns0:cell>0.57017</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>REAL</ns0:cell><ns0:cell>ACR</ns0:cell><ns0:cell>0.75635</ns0:cell><ns0:cell>0.45737</ns0:cell><ns0:cell>0.15884</ns0:cell><ns0:cell>0.71977</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>SMALL</ns0:cell><ns0:cell /><ns0:cell>MP</ns0:cell><ns0:cell>0.25019</ns0:cell><ns0:cell>0.21508</ns0:cell><ns0:cell>0.04570</ns0:cell><ns0:cell>0.31024</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>SYNTHETIC</ns0:cell><ns0:cell>CFS P2P</ns0:cell><ns0:cell>0.54433 0.22923</ns0:cell><ns0:cell>0.57392 0.39249</ns0:cell><ns0:cell>0.07930 0.09968</ns0:cell><ns0:cell>0.67526 0.43202</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>POC</ns0:cell><ns0:cell>0.00036</ns0:cell><ns0:cell>0.00011</ns0:cell><ns0:cell>0.00001</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>LARGE</ns0:cell><ns0:cell>REAL</ns0:cell><ns0:cell>BPI17W BPI12W CALL</ns0:cell><ns0:cell>0.00060 0.00077 0.00084</ns0:cell><ns0:cell>0.01010 0.00061 0.15794</ns0:cell><ns0:cell>0.00072 0.00006 0.00090</ns0:cell><ns0:cell>0.00057 0.00002 0.00072</ns0:cell></ns0:row><ns0:row><ns0:cell>EMD</ns0:cell><ns0:cell /><ns0:cell>SYNTHETIC</ns0:cell><ns0:cell>CVS CFM</ns0:cell><ns0:cell>0.61521 0.00472</ns0:cell><ns0:cell>0.57217 0.00828</ns0:cell><ns0:cell>0.40006 0.03529</ns0:cell><ns0:cell>0.13509 0.06848</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>INS</ns0:cell><ns0:cell>0.03343</ns0:cell><ns0:cell>0.00308</ns0:cell><ns0:cell>0.33336</ns0:cell><ns0:cell>0.00001</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>REAL</ns0:cell><ns0:cell>ACR</ns0:cell><ns0:cell>0.49996</ns0:cell><ns0:cell>0.68837</ns0:cell><ns0:cell>0.25012</ns0:cell><ns0:cell>0.58674</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>SMALL</ns0:cell><ns0:cell /><ns0:cell>MP</ns0:cell><ns0:cell>0.12609</ns0:cell><ns0:cell>0.33375</ns0:cell><ns0:cell>0.28577</ns0:cell><ns0:cell>0.31411</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>SYNTHETIC</ns0:cell><ns0:cell>CFS P2P</ns0:cell><ns0:cell>0.08253 0.25306</ns0:cell><ns0:cell>0.10784 0.33747</ns0:cell><ns0:cell>0.06924 0.23898</ns0:cell><ns0:cell>0.03461 0.03888</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Detailed evaluation results</ns0:figDesc><ns0:table /><ns0:note>16/16PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55306:1:1:NEW 28 Apr 2021)</ns0:note></ns0:figure>
</ns0:body>
" | "Response to Reviewers’ Comments
PeerJ Computer Science submission No. 55306
“Discovering Generative Models from Event Logs: Data-driven Simulation vs Deep Learning”
Dear Editors,
We thank you and the reviewers for the highly constructive feedback on our manuscript. Below,
we explain how we have addressed each of the reviewers’ comments. The changes made to
address the reviewers’ comments are marked in red in the revised manuscript.
Yours sincerely,
Manuel Camargo, Marlon Dumas, Oscar Gonzalez-Rojas
Reviewer 1
# Comment
Response/Revision
1 What is missing in my opinion is a comparison of their expressive power. Given that the same event log is used to create both - do these two models actually capture the same amount of information? Are they equal in their modeling strength? I am missing a model-to-model comparison to complement the nice empirical evaluation.
We added a discussion in Section 6 to
compare the two approaches in terms of
expressive power (modeling capabilities) as
well as interpretability - as these two
properties are tightly related.
In the discussion, we distinguish between
limitations along the control-flow level
perspective (sequences of events) and along
the temporal perspective (timestamps
associated to each event).
From a control-flow perspective, DDS models
can only generate sequences that can be
fully parsed by a business process model. In
the case of Simod, this model is a BPMN
model or a (free-choice) workflow net. The
expressive power of such models is limited,
as has been established by various studies
such as Kiepuszewski et al. (2003) Fundamentals of Control flow in workflows.
Furthermore, any DDS approach depends on
an underlying automated process discovery
algorithm. For example, Simod relies on the
Split Miner algorithm to discover BPMN
models. Every automated process discovery
algorithm is limited in terms of the class of
process models that it can produce. For
example the Split Miner and other algorithms
based on directly-follows graphs (e.g. Fodina)
cannot capture process models with duplicate
activities, while the inductive miner cannot
capture non-block-structured process models.
In contrast, deep learning models generate
sequences from non-linear functions that
collectively model the probability that a given
activity occurs after a given sequence prefix.
Depending on the type of architecture used
and the parameters (e.g. the number of
layers, the type of activation function, learning
rate), these models may be able to learn
dependencies that cannot be captured by the
class of BPMN models generated by a given
process discovery algorithm such as Split
Miner.
At the temporal level, DDS models make
assumptions about the sources of waiting
times of activities as spelled out in the
manuscript. Chiefly, DDS models assume that
waiting times are caused exclusively by
resource contention, and they assume that as
soon as a resource is available and assigned
to an activity, the resource will execute the
activity in question. In other words, they
assume that resources follow a robotic
behavior. Also, DDS models generally fail to
capture possible inter-dependencies between
multiple concurrent cases (besides resource
contention) such as batching or prioritization
between cases (some cases having a higher
priority than others). In contrast, deep
learning models simply try to learn the time to
the next-activity in a trace based on observed
patterns in the data. As such, they may learn
to predict delays associated with inter-case
dependencies as well as delays caused by
exogenous effects such as workers being
distracted or being busy performing work not
related to the simulated process. These
observations explain why deep learning
models outperform DDS models when it
comes to capturing the time between
consecutive activities (and thus the total case
duration).
On the other hand, DDS models are more interpretable than deep learning
models, insofar as they rely on a
representation of the process that analysts
would typically use in practice, such as
BPMN. This property implies that DDS
models can be modified by business analysts
to capture what-if scenarios, such as what
would happen is a task was removed from
the model. Also, DDS models explicitly
capture one of the possible causes of waiting
times, specifically resource contention, while
deep learning models do not explicitly capture
any such mechanism. As such, DDS models
are more amenable to capture what-if
scenarios. Specifically, DDS models are
capable of capturing the additional waiting
time (or the reduction in waiting time) that
may result from higher or lower resource
contention.
Reviewer 2
# Comment
Response/Revision
1 The methodologies you evaluate in your
work are limited to specific
implementations of one DDS and two DL
approaches. In the threats of validity you
claim that the other state-of-the-art DDS
methodologies require manual
intervention to be tuned, and that the other
available DL methodologies are left as
future work.
● While I am fine with leaving some of
the DL methods as future work and I
believe that SIMOD is an extremely
innovative approach, I would ask you
that:
a. either you better motivate why you
only use (these) three methodologies
(and this should be clear from the
beginning), while in your title and
introduction you claim to be comparing
the DDS and DL families of
approaches for the discovery of
generative models;
b. or you assess at least another DDS
method in order to provide more
general results.
We used Simod because it is the only fully
automated tool for discovering and tuning
business process simulation models. An
alternative approach by Martin et al. (2016)
requires one to manually apply each step and
to manually select each discovery and
simulation parameter. Simod can be seen as
an automated pipeline based on the
guidelines of Martin et al. (2016) coupled with
an auto-tuning mechanism. In a similar vein,
the method by Rozinat et al. (2009) requires
manual user intervention and tuning. The use
of these alternative methods would introduce
two sources of bias in the evaluation: (i) a
bias stemming from the manual tuning of
simulation parameters; and (ii) a bias
stemming from the fact that the DDS model
would be manually tuned while the deep
learning model is automatically tuned as part
of the model learning step. By using Simod,
we are able to do a fair comparison: We
compare a DDS method with automatic
data-driven tuning of parameters (Simod)
against deep learning methods that
automatically tune their parameters (i.e. the
weights in the neural network).
We make this clarification explicit in Section 2
(highlighted in red font).
2 The meta parameters used to optimise the
hyper parameters of your approach, i.e.,
the number of configurations and the runs
carried out for each configuration, are not
well stated/motivated. For the DDS
method you state to explore 50 random
configurations with 5 runs each. It seems
to be a big amount of models to explore,
how did you come up with these two
parameters (i.e. 50 and 5)? On the other
hand, for the DL method you state you
use random search to determine the
hyperparameters. Are there any meta
parameters for the random search? If so,
can you report and motivate them (e.g. the maximum amount of runs, the maximum time elapsed or the convergence of the retrieved models)?

We have added this explanation in subsection 4.3 (red text).
3 Roles are a very peculiar and interesting
sort of data to be used in process mining.
While you motivated why you do not use
them in Section 3, you do not mention
them in Section 2. How would the
investigated techniques behave with the
generation of event logs not only with
timestamps but also with attributes as
roles? I would ask: a. either you clarify
why you do not take into account the
generation of event logs enriched with
roles (e.g., SIMOD does not support the
generation of roles)
b. or experiment and compare the
techniques of the two families of
approaches for the generation of event
logs with timestamps and roles.
This paper does not address the problem of
generating traces annotated with roles (or
individualized resources) for two reasons.
First, in order to make such a comparison, we
would need to define a loss function that
takes into account both the activities and the
resources/roles account (during parameter
tuning). Simod and other existing DDS
proposals do not incorporate such a loss
function and defining one would be outside
the scope of this paper. Second,
Simod does not support the generation of
sequences with roles or resources. Instead,
Simod heuristically identifies groups of
resources from the input event log using
clustering techniques (proposed in previous
work), and treats these clusters as
resource pools. Hence, the resource pools
that Simod manipulates are synthetic and do
not correspond to roles that an end user
would recognize as such. Furthermore, the
simulator upon which Simod relies does not
treat resources in an individualized manner,
but simply treats them as anonymous
members of a resource pool -- the resource
pool that Simod computes via clustering.
Hence, it is not possible to use Simod to
generate sequences where each event is
linked to one of the resources in the original
log.
In other words, a comparison between DDS
and deep learning along the resource
perspective (or at the roles level) would
require one to design a DDS technique that
would handle roles and resources as
first-class citizens.
We added this clarification in the Conclusion
(last paragraph) and positioned this limitation
as a direction for future work.
Reviewer 3
# Comment
Response/Revision
1 Limited range of DL-based generative
models analysed.
The induction of DL-based generative
models for predicting/simulating process
behaviors is a topical problem, for which
there were recently proposed solutions,
like those in (Lin et al., 2019) and
(Taymouri et al., 2020).
In fact, you have mentioned these works,
but excluded them all from the
experimentation because of their 'inability
to predict timestamps and durations'.
However, you might well decide to partially evaluate these methods by only using the CFLS metrics, provided that a public implementation is available for both methods --actually this seems to hold for the method in (Taymouri et al., 2020). More importantly, it seems to me that the GAN-based method presented in (Taymouri et al., 2020) can predict/generate event timestamps, which can be used to compute activity durations (despite they are not returned directly by the method itself). Indeed, you could train the model of (Taymouri et al., 2020) against traces containing a pair of instantaneous events for each of their activity instances (namely, a start event and a completion event for each activity instance), and then use the trained generator to produce traces having the same structure. I presume that the computation of evaluation metrics concerning activity durations and cycle times can be easily adapted to traces of this form. In my opinion, extending the experimentation to this method would add value to your empirical study, beside making it more complete.

We extended the empirical evaluation (cf. Section 5) to include an adaptation of the GAN technique proposed by Taymouri et al. (2020) as one of the baselines. The modifications made to the technique of Taymouri et al. can be found in Section 3.2.
2 Biased evaluation criteria.
The evaluation metrics computed in the
tests only focus on precision-like aspects,
so that one cannot assess whether the
traces generated by a method really cover
the variety of process behaviors that occur
in the test log. In other words, there may
well be modes of the test data distribution
that can be missed!
I hence suggest that additional, recall-like,
metrics be defined, quantifying how well
the test traces are represented by (the
sample traces yielded) each of the
discovered simulation/generative models.
The task of generating time and activity
sequences is not a classification task;
therefore, the classic precision and recall
metrics (which are classification performance
metrics) are not applicable. Instead, in this
research, we use (symmetric) distance
metrics between the traces in the testing fold
of the original log and the traces in the
simulated log. The symmetry property means
that the distance metric penalizes the
differences between the real log and the
simulated log in the same way as it penalizes
the differences between the simulated log
and the real log. Concretely, any behavior in
the real log that is not in the simulated log
(the counterpart of “lack of fitness” in our
setting) is penalized in the same way as a
behavior in the simulated log that is not
observed in the real log (the counterpart of
“lack of precision” in our setting).
The idea of using a distance metric to
evaluate stochastic process models (of which
simulation models are an extension) has
been advocated in recent work [1].
We added the above clarification in the first
paragraph of Subsection 4.2 (red text).
[1] S.J.J. Leemans, W.M.P. van der Aalst, T.
Brockhoff, A. Polyvyanyy. Stochastic process
mining: Earth movers' stochastic
conformance. Information Systems (2021).
3 Limited range of datasets.
Three of the logs are very small and, thus,
hardly contain sufficient data to train a
deep model appropriately.
More importantly, I am afraid that all the
processes considered in the
experimentation are rather simple and
pretty regular/structured.
Thus, it may be risky to draw, from your
current experimental results, general
conclusions on the relative strength of the
two classes of process simulation
approaches under analysis.
In particular, I am not fully convinced that
DSS models are good (and better than
We extended the number of event logs to
eleven; all of them contain both start and end
timestamps. We use real logs from public and
private sources and synthetic logs generated
from simulation models of real processes.
A detailed description of the event logs was
added in Subsection 4.1.
DL-based ones) at capturing control-flow
behaviors accurately.
I think, indeed, that such a claim would
need to be substantiated by a wider range
of tests, encompassing logs of more
flexible and/or more complex processes
(e.g., featuring long-distance activity
dependencies).
Indeed, it may well be that DL models will
capture the variety of control-flow
behaviors of such processes better than
DSS models hinging on local activity
dependencies and local time distributions
--as partly the CFLS scores that these
methods obtained on the last log seem to
suggest.
I do understand that it is difficult to find
other public logs containing both start and
end times for each event, but adding logs
of less structured processes would really
strengthen the value, coverage and
significance of your empirical study.
To this respect, please consider the
possibility to relax the requirement of
dealing with traces containing only
non-instantaneous events (i.e., traces where
each step encodes the execution of one
activity lasting over the time, and
associated with both start and end
timestamps).
In principle, one could artificially keep all
the activity durations fixed to 0, when
trying to simulate a log that only stores the
start or completion timestamp for each
activity instance. It seems to me that the
MAE and EMD scores would still make
sense in such a simulation setting, and
would give useful information on how
accurately the model generates the event
timestamps (and, indirectly, the cycle
time).
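For concreteness, a minimal sketch of how such MAE/EMD scores over cycle times could be computed when durations are fixed to zero (the trace representation and values here are hypothetical; scipy's one-dimensional Wasserstein distance implements the EMD):

    from scipy.stats import wasserstein_distance

    def cycle_times(log):
        # Cycle time of a trace = last event timestamp - first event timestamp;
        # this stays well defined even when every activity duration is zero.
        return [trace[-1] - trace[0] for trace in log]

    # Hypothetical traces given as sorted lists of event timestamps (hours).
    real_log = [[0.0, 2.0, 5.0], [1.0, 4.0]]
    simulated_log = [[0.0, 1.5, 4.5], [1.0, 5.0]]
    print(wasserstein_distance(cycle_times(real_log), cycle_times(simulated_log)))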
4 Some conclusions look too strong (and
not fully supported by test evidence).
4.a) In the concluding section you say that
'DDS models are suitable for capturing
the sequence of activities (or possibly
Thanks to the increase in the number of
event logs and the re-execution of all the
experiments, the conclusions presented in
this section are now better supported.
other categorical attributes) of a process.'
However, this claim does not seem to be
fully aligned with the test results.
Indeed, the CFLS score of GRU and
LSTM models looks better than that of
DDS on the real-life log BPI2017W, which
is the only log containing more than 10K
traces (and 100K events).
May it be the case that DL models capture
activity sequence as accurately as (or
even better than) DSS models when
applied to complex enough processes
(and provided sufficiently large amounts of
training data)?
On the other hand, on the log of simple
processes featuring few activities, DL
models could be penalized by using
embeddings for the activities (and
low-dimensional categorical event
attributes). There is, indeed, some
evidence in the literature (Everman et al.,
2019) that one-hot encoding schemes
may help obtain more accurate models
than embeddings in such a case --in
principle, low-dimensional embeddings
may lead to loss of useful information.
I suggest that you consider the possibility
to test DL models obtained with one-hot
encodings for the activities (and, possibly,
for the users/roles), when the number of
them is small.
4.b) Still in the concluding section, you say
that 'DL models outperform DDS models
when predicting the timing of activities,
specifically the waiting times between
activities.'
However, the assumption of
eager-resources, related to the generation
of waiting times, is rather rough.
In my opinion, some way of overcoming
this limitation of your DSS models should
be prospected, which does not necessarily
rely on exploiting DL models.
4a) The conclusions presented in the first
version of the manuscript remain consistent
with what was observed in the new
experiments.
4b) The exploration of strategies to improve
the representation of waiting times in DDS
models is a direction for future work. It is possible
to carry out future work in order to lift the
limitations of existing DDS techniques, for
example to capture batching, prioritization,
interactions between multiple processes,
multi-tasking, fatigue effects, etc. However,
the causes of waiting times in business
processes may ultimately prove to be so
diverse, that no DDS approach would be able
to capture them in their entirety. This is why
we propose to explore an alternative
approach where deep learning models would
be used to supplement DDS models in order
to capture waiting times based on the
patterns present in the data.
5 Some lack of technical details and of
critical discussion.
- Which specific families of PDFs are used
to model inter-arrival times and activity
durations?
- The discussion on hybrid approaches
combining DSS and DL methods, in order
to exploit the strengths of both
synergistically, is rather shallow from a
technical viewpoint. Some more technical
reflections in this respect could add
value to your work.
We added details about the supported
probability distributions used to represent the
interarrival times and the activity durations.
In the Conclusion section, we elaborated
more on the idea of designing a hybrid
technique that combines DDS and DL.
" | Here is a paper. Please give your review comments after reading it. |
134 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>In the traditional irrigation process, a huge amount of water is consumed, which leads to water wastage. To reduce the wastage of water in this tedious task, an intelligent irrigation system is the need of the hour. In the era of machine learning (ML) and the Internet of Things (IoT), it is a great advantage to build an intelligent system that performs this task automatically with minimal human effort. In this paper, an IoT enabled machine learning (ML) trained recommendation system (IoT-IIRS) is proposed for efficient water usage with nominal intervention from farmers. IoT devices are deployed in the crop field to collect the ground and environmental details precisely. The gathered data are forwarded and stored in a cloud-based server, which applies ML approaches to analyze those data and provide irrigation suggestions to the farmer. To make the system robust and adaptive, an inbuilt feedback mechanism is added to this recommendation system.</ns0:p><ns0:p>From the experimentation, it is found that the proposed system performs quite well on both datasets: our own collected data and the NIT Raipur crop dataset.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Water is the essential natural resource for agriculture, and it is limited in nature <ns0:ref type='bibr' target='#b14'>(Kamienski et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b36'>Wang et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b26'>Sahoo et al., 2019)</ns0:ref>. In a country like India, a huge share of water is used for irrigation purposes <ns0:ref type='bibr' target='#b24'>(Nawandar and Satpute, 2019)</ns0:ref>. Crop irrigation is a noteworthy factor in determining the plant yield, depending upon multiple climatic conditions such as air temperature, soil temperature, humidity, and soil moisture <ns0:ref type='bibr' target='#b2'>(Bavougian and Read, 2018)</ns0:ref>. Farmers primarily depend on personal monitoring and experience for harvesting the fields <ns0:ref type='bibr' target='#b7'>(Glaroudis et al., 2020)</ns0:ref>. The important thing is that water needs to be maintained in the field <ns0:ref type='bibr' target='#b22'>(Liu et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b5'>Fremantle and Scott, 2017)</ns0:ref>. The scarcity of water in these modern days is a pressing issue, and it is already affecting people around the world <ns0:ref type='bibr' target='#b18'>(LaCanne and Lundgren, 2018;</ns0:ref><ns0:ref type='bibr' target='#b29'>Schleicher et al., 2017)</ns0:ref>. The situation may become worse in the coming years.</ns0:p></ns0:div>
<ns0:div><ns0:p>The uniform water distribution of traditional irrigation systems is not optimal. Hence, research efforts have been started towards efficient agricultural monitoring systems <ns0:ref type='bibr' target='#b17'>(Kim et al., 2008)</ns0:ref>. In this regard, standalone monitoring stations are being developed. For instance, the 'MSP 430' has been developed with a microcontroller along with a set of meteorological sensors. In addition to this, a wireless sensor-based monitoring system has been developed. It was made of several wireless sensor nodes and a gateway <ns0:ref type='bibr' target='#b10'>(Gutiérrez et al., 2013a)</ns0:ref>. It has been implemented as an easy solution with better spatial and temporal determinations. Further, there is a demand for automating the irrigation system. Automation enables machines to apply some intelligence by reading historical data and accordingly analyze and predict the output. This mechanism is more effective than the traditional rule-based algorithm <ns0:ref type='bibr' target='#b4'>(Chlingaryan et al., 2018)</ns0:ref>. From this point onwards, Machine Learning (ML) and Artificial Intelligence (AI) play an important role.</ns0:p><ns0:p>The applications of ML in the agriculture domain are numerous <ns0:ref type='bibr' target='#b13'>(Jha et al., 2019)</ns0:ref>. From crop selection and yielding to crop disease prediction, different ML techniques like Artificial Neural Networks (ANN), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), and decision trees have shown huge success <ns0:ref type='bibr' target='#b6'>(Ge et al., 2019)</ns0:ref>. After the great success of the combination of WSN and ML techniques, there is a requirement for further automation without human intervention. The development of Machine to Machine (M2M) communication and the Internet of Things (IoT) allows devices to communicate with each other without much human intervention. These days, the usage of mobile devices is increasing; at the same time, cloud computing is becoming a popular technology.</ns0:p><ns0:p>Moreover, there is a high demand to analyze real-time data based on historical information for irrigating the fields. There is limited research on M2M systems in which devices communicate to perform analysis and make recommendations more intelligently. Realizing the water scarcity issue and, at the same time, the technological advancement motivated us to design a fully automated irrigation system. This system must be smart enough so that it can adapt to the local climatic conditions and precisely predict the decision on irrigation in a reliable way.</ns0:p><ns0:p>There are many aspects of an efficient automatic irrigation system. As weather data is an important parameter for making irrigation decisions, the system must be smart enough to integrate the forecasted weather data <ns0:ref type='bibr' target='#b9'>(Goldstein et al., 2018)</ns0:ref>. The next important aspect is to estimate the soil features properly so that the prediction error can be minimized in the irrigation recommendation system <ns0:ref type='bibr' target='#b16'>(Khoa et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Once the weather data along with soil and environmental parameters are forecasted, the data can be used to make final decisions regarding irrigation. For this task, we need an efficient binary classifier to decide whether to irrigate or not. The advanced ML techniques and IoT may provide a solution for it.
The two main motivations for this work are efficient water usage in irrigation to reduce the wastage of water and to minimize human intervention as much as possible.</ns0:p><ns0:p>The main objective of the work is to reduce water wastage in the agricultural field by introducing an IoT enabled ML trained recommendation system (IoT-IIRS). In this prototype, the analyzed information can be sent from the cloud server to the farmer's mobile handset beforehand. This system makes the decision of whether to water the field or not simpler for farmers. The main contributions of this work are stated below:</ns0:p><ns0:p>1. An IoT enabled ML trained recommendation system (IoT-IIRS) is proposed for efficient water usage.</ns0:p><ns0:p>2. IoT devices are deployed in the crop field to collect the ground and environmental details precisely. The gathered data such as air temperature, soil temperature, humidity, and soil moisture are forwarded through an Arduino and stored in a cloud-based server, which applies ML algorithms such as SVM, regression tree, and agglomerative clustering to analyze those data and provide irrigation suggestions to the farmer.</ns0:p><ns0:p>3. To make the system robust and adaptive, an inbuilt feedback mechanism is added to this recommendation system. 4. From the experimentation, it is found that the proposed system performs well on both datasets: our own collected data (PME, 2021) and the NIT Raipur crop dataset <ns0:ref type='bibr'>(rai, 2020)</ns0:ref>.</ns0:p><ns0:p>The remaining part of the paper is organized as follows. The next section introduces the related work. The proposed methodology section discusses the working steps of the solution framework. The results of the experiments are discussed in the result section. Finally, the last section ends with concluding remarks and further scope for improvement.</ns0:p></ns0:div>
<ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>Communication is one of the most important aspects of the implementation of an automatic and intelligent irrigation system. In the last few years, significant research has been carried out on smart irrigation.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b28'>(Salas et al., 2014)</ns0:ref>, the authors suggested the implementation of GPRS (general packet radio service) communication as a gateway between a WSN (wireless sensor network) and the Internet. Numerous data transmission techniques have been applied in closed-loop watering systems. These are used to apply the required amount of water in the desired place in due time to conserve natural resources. <ns0:ref type='bibr'>Bhanu et al.</ns0:ref> presented a system to build a wireless sensor network-based soil moisture controller, which determines the water demand by comparing the soil moisture with a preset threshold value <ns0:ref type='bibr' target='#b3'>(Bhanu et al., 2014)</ns0:ref>.</ns0:p><ns0:p>Field authentication tests are regularly performed on distinct soils to estimate the soil moisture and water quantity in the soil to develop a productive watering system. If the preserved data do not match the measured soil data, an interrupt is passed to the pressure unit to stop the irrigation. A solar power-based intelligent irrigation system is proposed to provide the required amount of water to the crop field <ns0:ref type='bibr' target='#b25'>(Rehman et al., 2017)</ns0:ref>. Soil and humidity sensors are deployed to measure the wet and dry states of the soil. Once sensing is completed, the sensor node transmits the signal to the microcontroller. Further, it forwards that signal to the relay to switch the motor on and off. In <ns0:ref type='bibr' target='#b15'>(Kaur and Deepali, 2017)</ns0:ref>, the authors presented a WSN-based smart irrigation system for efficient water usage with the help of automated remote sensing and persistent analysis of soil parameters and environmental conditions using machine learning. <ns0:ref type='bibr'>Hema et al.</ns0:ref> proposed an approach to estimate the native real-time weather parameters for interpolation with the help of an Automated Weather Station (ASW) <ns0:ref type='bibr' target='#b12'>(Hema and Kant, 2014</ns0:ref>). This intelligent system provides past, present, and future predictions utilizing nearby ASW data and controls the irrigation process during conditions like rainfall. To control the irrigation, soil moisture and ASW data are exploited for error correction, where the interpolated value is compared with the soil moisture data. In <ns0:ref type='bibr' target='#b1'>(Ashwini, 2018)</ns0:ref>, the authors stated that a smart irrigation system employing WSN and GPRS modules optimizes water utilization for any agricultural crop. This approach comprises a distributed wireless sensor network along with different sensors, such as moisture and temperature sensors. Gateway components are employed to transfer data from the sensor unit to the base station. A direct order is sent to the actuator for regulating the irrigation process and handling data from the sensor unit. According to the need and conditions of the field, different algorithms have been used in the system. It is programmed in a microcontroller that sends commands through an actuator to regulate water quantity with a valve unit.
The entire framework is powered by photo-voltaic panels, and duplex communication takes place through cellular networks.</ns0:p><ns0:p>Web applications control irrigation through regular monitoring and irrigation scheduling. In <ns0:ref type='bibr' target='#b21'>(Leh et al., 2019)</ns0:ref>, the authors designed both hardware and software by analyzing the routing protocols of the sensor network. Mobile phones and wireless PDAs help to monitor the soil moisture content; as a result, the irrigation system is controlled. Mohapatra et al. proposed an irrigation system based on WSN <ns0:ref type='bibr' target='#b23'>(Mohapatra et al., 2019</ns0:ref>). The designed system uses fuzzy logic and a neural network to save water efficiently. The used fuzzy neural network is an integrated set of fuzzy logic reasoning and the self-learning ability of a neural network.</ns0:p><ns0:p>Sensor nodes measure temperature, humidity, soil moisture, and light intensity data. A LAN or WAN helps to transfer the collected data to the irrigation control system via gateway nodes. The electromagnetic valve is controlled for precision irrigation based on the collected data. To predict the soil moisture, the authors in <ns0:ref type='bibr' target='#b8'>(Goap et al., 2018)</ns0:ref> developed an algorithm that works based on field sensor data and weather forecasting data. The algorithm uses the support vector regression model and k-means clustering. This algorithm also provides a suggestion regarding irrigation based on the level of soil moisture. Collected device information and the output of the algorithm are stored in a MySQL database at the server end. In <ns0:ref type='bibr' target='#b11'>(Gutiérrez et al., 2013b)</ns0:ref>, the authors presented a system that uses a camera to capture images. Captured images are processed to determine the water content of the soil. Depending upon the level of water in the soil, water is pumped into the crop field. The camera is controlled from an Android application.</ns0:p><ns0:p>The camera captures RGB pictures of soil using an anti-reflective glass window to find the wet and dry areas. The WiFi connection of the smartphone is used to transmit the estimated value to the gateway through a router to control the water pump. Machine to machine communication is applied as a robust mechanism for effective water management during the farm irrigation process <ns0:ref type='bibr' target='#b30'>(Shekhar et al., 2017)</ns0:ref>. In <ns0:ref type='bibr' target='#b35'>(Vij et al., 2020)</ns0:ref>, the authors proposed a distributed network environment using IoT, machine learning, and WSN technologies for efficient water usage and reducing soil erosion. Soil moisture prediction is one of the most important tasks for an automatic irrigation system. Many researchers have contributed various methodologies and algorithms for this task <ns0:ref type='bibr' target='#b0'>(Adeyemi et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b32'>Sinwar et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b31'>Singh et al., 2020)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>PROPOSED METHODOLOGY</ns0:head><ns0:p>The proposed architecture along with its description is presented in this section. It is a three-tier architecture. The solution architecture of our smart irrigation system is given in Fig. <ns0:ref type='figure'>1</ns0:ref>. The details of each level are described below.</ns0:p><ns0:p>[Figure 1. Solution framework]</ns0:p><ns0:p>• Crop Field Level: The first one is the crop field level, where different sensors are deployed in the field. Various sensors like soil moisture (EC-1258), soil temperature (DS18B20), air temperature (DHT11), and humidity (DHT11) are used to gather all these soil and environmental attributes. All these sensors collect data and send them to the Arduino. Then the Arduino forwards the sensor data to the cloud server. All these sensor data are collected twice a day and forwarded for cloud storage using a micro-controller device. The average value is calculated and stored as the final reading for that particular day. For eliminating the inter-dependency among the used parameters, the Pearson correlation has been computed. It is found that no strong correlation exists.</ns0:p><ns0:p>The Arduino is also connected to the motor pump to turn it on/off for irrigation. Here, we use the Arduino microcontroller because it requires low energy for its functioning.</ns0:p><ns0:p>• Cloud Level: The second level is the cloud level, where the cloud server is used to provide service to the user. The sensors' data are stored in the database. Then the data are fed to the machine learning-based model for analysis. This ML unit is the heart of this intelligent system, which has two sections. One is the regression model, used to predict the soil and environmental parameters in advance. By doing so, it can be used effectively for better performance of the system. The parameters that are considered from forecasted weather data are the atmospheric pressure, precipitation, solar radiation, and wind speed. Further, these predicted values are passed through a clustering model to reduce the prediction errors. The other ML-based model takes the results of the clustering model along with the forecasted weather data as input. This binary classification model categorizes the predicted samples into two predefined classes: irrigation required (Y) or not required (N). Then the results of these ML models are stored in the database for future actions. The last component of the cloud-based server is the handler, which is used for coordination between the user and field units. Based on the suggestion of the ML model, the handler sends irrigation suggestions to the user via the Android application. Based on the literature from agricultural research, the formula used for calculating the water requirement is given in Equation <ns0:ref type='formula' target='#formula_1'>1</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_1'>EV_o * C_f = W_need<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>• User Level: At the user level, the user interacts with an Android application to enter the details about the farmer and crop. Farmer credentials are used to authenticate the user through the login operation. In crop details, the farmer may have to provide the information by selecting the drop-down menus like a session, crop name, total crop days, date of sowing, etc. Through this application, the farmer may get all relevant information about the crop and field. Thus, upon receiving the sensor data along with the irrigation suggestion on the Android application, the user may issue the on/off command to the microcontroller. Here, the system is an adaptive one that takes feedback from the user for each suggestion made by the handler. If the farmer does not follow the recommendation, then feedback is sent to the server for updating. Hence, the system will be fine-tuned subsequently based on user feedback. The microcontroller, upon receiving the on/off command from the user, performs the motor on/off operation for the supply of water to plants. Thus, we have an automatic irrigation system, which can be used to increase the productivity of the crop by providing an optimal amount of water. The working steps of the proposed solution framework are shown in Algorithm 1.</ns0:p><ns0:p>Algorithm 1 Working steps for solution framework.</ns0:p><ns0:p>Input: Authentication details for login to the Android application; all relevant crop information from the drop-down list. Output: Motor turn on/off. 1: Collect all sensor data at regular intervals through the Arduino. 2: Save the sensor readings in the cloud server database. 3: Using the forecasted weather data, the ML analyzer model analyzes these stored sensor data to check whether irrigation is required or not. 4: The recommendation of the ML model is forwarded to the Android application through the handler. 5: Based on the ML recommendation and sensor readings, the user will inform the Arduino to send an on/off signal to the motor. 6: The user may follow the recommendation and irrigate the field. 7: If the user does not follow the recommendation, then feedback will be sent and stored in the database for the corresponding sensor readings.</ns0:p></ns0:div>
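As a worked illustration of Equation 1, a minimal Python sketch follows (the numeric values are hypothetical examples, not measurements from the paper):

    def water_needed(evaporation_rate, crop_factor):
        # Equation 1: W_need = EV_o * C_f
        # evaporation_rate: rate of evaporation (EV_o), e.g. in mm/day
        # crop_factor: dimensionless crop coefficient (C_f)
        return evaporation_rate * crop_factor

    # Hypothetical example: 5.0 mm/day evaporation and a crop factor of 0.85
    print(water_needed(5.0, 0.85))  # -> 4.25 (mm/day of water needed)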
<ns0:div><ns0:head>Machine Learning Model</ns0:head><ns0:p>As mentioned earlier, the machine learning analyzer is the main building block of our proposed system. On the stored sensor data, the regression tree algorithm has been applied to predict future soil and environmental data. Regression trees may grasp non-linear relationships and are reasonably robust to outliers. These predicted parameters are further improved using the agglomerative clustering algorithm. It may be suitable for this task as the clusters are not supposed to be globular. The forecasted weather data are combined with the predicted soil data and fed into the classification model. The classifier then categorizes each data sample according to whether irrigation is required or not at that time. This way, the ML technique helps the farmer with the suggestion for irrigation on the crop field. The details of this recommendation architecture are shown in Fig. <ns0:ref type='figure'>2</ns0:ref>. The working steps of the ML model are shown in Algorithm 2.</ns0:p></ns0:div>
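For illustration, a minimal scikit-learn sketch of this regression-clustering-classification chain (the synthetic arrays, feature choices, and cluster count of 5 are hypothetical stand-ins; the paper's exact feature set and hyperparameters are not reproduced here):

    import numpy as np
    from sklearn.ensemble import AdaBoostRegressor
    from sklearn.cluster import AgglomerativeClustering
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Hypothetical sensor history: soil moisture, soil temp, air temp, humidity.
    X_history = rng.random((150, 4))
    y_moisture_next = rng.random(150)                 # next-day soil moisture
    y_irrigate = (X_history[:, 0] < 0.4).astype(int)  # toy irrigation labels

    # 1. Regression stage: an AdaBoost-boosted regression tree predicts the
    #    next-day soil parameter from the stored sensor readings.
    regressor = AdaBoostRegressor(n_estimators=50).fit(X_history, y_moisture_next)
    predicted_soil = regressor.predict(X_history[-1:])

    # 2. Clustering stage: agglomerative clustering partitions the training
    #    space into subspaces that can be used to correct the raw predictions.
    cluster_labels = AgglomerativeClustering(n_clusters=5).fit_predict(X_history)

    # 3. Classification stage: an SVM combines sensor features with forecasted
    #    weather data and outputs irrigate (1) / do not irrigate (0).
    forecast = rng.random((150, 2))                   # e.g. precipitation, wind
    classifier = SVC(kernel="rbf").fit(np.hstack([X_history, forecast]), y_irrigate)
    decision = classifier.predict(np.hstack([X_history[-1:], forecast[-1:]]))
    print("irrigation required" if decision[0] else "irrigation not required")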
<ns0:div><ns0:head>Regression Tree (RT)</ns0:head><ns0:p>A regression tree is constructed via a procedure known as binary iterative splitting, which is a repetitive process that partitions the data into branches, sub-branches, and so on. Here, each decision node in the tree holds an assessment of several input variables' values. The leaf nodes of the regression tree carry the predicted output (response) variable values <ns0:ref type='bibr' target='#b34'>(Torres-Barrán et al., 2019)</ns0:ref>. Here, the AdaBoost algorithm is applied using its library.</ns0:p></ns0:div>
<ns0:div><ns0:head>Agglomerative Clustering (AC)</ns0:head><ns0:p>Agglomerative clustering is a type of hierarchical clustering which works in a bottom-up fashion <ns0:ref type='bibr' target='#b33'>(Stashevsky et al., 2019)</ns0:ref>. A fundamental assumption in hierarchical agglomerative clustering is that the merge operation is monotonic. Here, each observation begins in its own cluster, and pairs of clusters are merged as one moves up the hierarchy. This clustering may improve the performance of classical regression by partitioning the training sample space into subspaces.</ns0:p></ns0:div>
<ns0:div><ns0:head>Support Vector Machine (SVM)</ns0:head><ns0:p>SVM is a supervised machine learning model which works very well for many classification tasks <ns0:ref type='bibr' target='#b20'>(Lebrini et al., 2019)</ns0:ref>. Once the SVM is fed with sets of labeled training data for each class, it can categorize new samples. For nonlinear classification, it performs well with a limited number of labeled training data.</ns0:p><ns0:p>To power these devices, one power bank is used as the source of power. This power source may be replaced with a small solar panel with a battery in the future. The Arduino is used because of its low power consumption.</ns0:p><ns0:p>[Figure 2. Machine learning based model architecture]</ns0:p></ns0:div>
<ns0:div><ns0:head>RESULTS AND DISCUSSION</ns0:head></ns0:div>
<ns0:div><ns0:p>The performance of the proposed model has been evaluated using Python. All the ML models have been implemented using the Scikit-learn Python library. To validate the effectiveness of the system, our own collected data (PME, 2021) along with the Crops dataset of NIT Raipur (rai, 2020) have been used. We collected data like soil moisture, soil temperature, humidity, air temperature, and UV radiation by using the relevant sensors. The stored readings of each day are the average of all readings on that day. We gathered 150 samples from our simulation area. In the NIT Raipur dataset, there are 501 samples available. As mentioned earlier, we integrated forecasted weather data by using weather APIs <ns0:ref type='bibr'>(res, 2020; imd, 2020)</ns0:ref>. Through this, we have collected data like next-day rainfall, amount of rainfall, precipitation, etc. of that particular area to make our irrigation accuracy even better. As the evening is the best time to irrigate, each day in the evening the machine learning model runs, and based on its results the handler sends irrigation suggestions to the farmer along with the stored and forecasted parameters. For the classification task, the five-fold cross-validation method is used for better generalization. The efficiency of the system is estimated based on the most extensively used measures, such as precision, recall, F1-measure, and accuracy (A). Mathematically, these measures are represented in Equation 2 to Equation <ns0:ref type='formula' target='#formula_3'>4</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_3'>Precision (P_r) = T_p / (T_p + F_p) (2) Recall (R_c) = T_p / (T_p + F_n) (3) F1-measure (F1) = (2 * P_r * R_c) / (P_r + R_c)<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>where T_p, F_p, and F_n denote true positives, false positives, and false negatives, respectively. The results of different machine learning models for the classification of irrigation required or not required are given in Table <ns0:ref type='formula' target='#formula_1'>1</ns0:ref> and Fig. <ns0:ref type='figure' target='#fig_2'>4</ns0:ref> to Fig. <ns0:ref type='figure' target='#fig_5'>7</ns0:ref>. The experimental results show that the performance of our system is quite satisfactory for this automation task. The performance of this system may further improve with experience as new data are collected. The user feedback will further fine-tune this system even if there are a few wrong suggestions. From the experimental results, it is quite clear that the SVM-based model outperforms the other classification models on both datasets. Between these two datasets, the NIT Raipur crop dataset performed better, which may be due to the larger number of samples present in it.</ns0:p></ns0:div>
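A minimal sketch of the five-fold evaluation described above (the feature matrix and labels here are synthetic placeholders, not the paper's datasets):

    import numpy as np
    from sklearn.model_selection import cross_validate
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    X = rng.random((150, 5))                   # hypothetical sensor features
    y = (X[:, 0] + X[:, 1] < 1.0).astype(int)  # toy irrigate / don't labels

    # Five-fold cross-validation reporting the same measures as Equations 2-4.
    scores = cross_validate(SVC(kernel="rbf"), X, y, cv=5,
                            scoring=("accuracy", "precision", "recall", "f1"))
    for name in ("accuracy", "precision", "recall", "f1"):
        print(name, round(scores["test_" + name].mean(), 4))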
<ns0:div><ns0:head>CONCLUSION AND FUTURE WORK</ns0:head><ns0:p>In this work, a smart irrigation system prototype for efficient usage of water and minimal human intervention has been proposed. The proposed recommendation system includes regression of soil and environmental attributes, which are further improved with the help of agglomerative clustering. Forecasted weather data are integrated with these predicted attributes to reduce the irrigation error. Finally, the classification model is used to categorize the combined set of attributes into irrigation required or not.</ns0:p><ns0:p>Based on these results, the system gives the farmer a recommendation for the next irrigation. If the farmer rejects the recommendation, then feedback is sent to the system. Further, it updates and fine-tunes the model subsequently.</ns0:p><ns0:p>From the experiment, it is concluded that the SVM model outperforms the other classification models on both datasets. As ML-based models are data-hungry, there is a strong intuition that the proposed system will perform even better with more samples. Further, this system can be extended for deciding on spraying appropriate chemicals for proper crop growth.</ns0:p></ns0:div>
<ns0:div><ns0:head>ACKNOWLEDGMENTS</ns0:head></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 2. Figure 3.</ns0:head><ns0:label>23</ns0:label><ns0:figDesc>Figure 2. Machine learning based model architecture</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>This work was supported by the Collaborative Research Scheme (CRS) of the National Project Implementation Unit (NPIU), MHRD, Government of India. The authors wish to thank the Department of Computer Application, NIT Raipur for making available the Crops dataset.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Performance evaluation by taking different ML approaches (70:30 ratio training and testing) using our collected data.</ns0:figDesc><ns0:graphic coords='10,150.08,69.22,396.87,283.48' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Performance evaluation by taking different ML approaches (70:30 ratio training and testing) using NIT Raipur Crop dataset.</ns0:figDesc><ns0:graphic coords='10,150.08,405.43,396.87,283.48' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Performance evaluation by taking different ML approaches (5-fold cross validation) using our collected data.</ns0:figDesc><ns0:graphic coords='11,150.08,69.22,396.87,283.48' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Performance evaluation by taking different ML approaches (5-fold cross validation) using NIT Raipur Crop dataset.</ns0:figDesc><ns0:graphic coords='11,150.08,405.43,396.87,283.48' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='8,142.61,205.51,411.83,296.20' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Suggestion for irrigation required or not required</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>70:30 ratio of</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell cols='2'>5-fold cross</ns0:cell></ns0:row><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Classifier</ns0:cell><ns0:cell cols='2'>training and testing</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell cols='2'>validation</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>A</ns0:cell><ns0:cell>P r</ns0:cell><ns0:cell>R c</ns0:cell><ns0:cell>F1</ns0:cell><ns0:cell>A</ns0:cell><ns0:cell>P r</ns0:cell><ns0:cell>R c</ns0:cell><ns0:cell>F1</ns0:cell></ns0:row><ns0:row><ns0:cell>Our sensor collected data</ns0:cell><ns0:cell cols='8'>Naive Bayes Decision Tree (C4.5) 85.74 85.12 83.81 84.46 85.83 85.19 83.88 84.53 83.48 82.67 80.95 81.80 83.61 82.77 80.99 81.87 SVM 87.29 86.77 85.42 86.09 87.45 86.85 85.51 86.17</ns0:cell></ns0:row><ns0:row><ns0:cell>NIT Raipur Crop dataset</ns0:cell><ns0:cell cols='8'>Naive Bayes Decision Tree (C4.5) 86.15 85.75 83.89 84.81 86.29 85.86 83.95 84.89 84.37 83.35 82.63 82.99 84.51 83.44 82.68 83.06 SVM 88.05 87.44 86.52 86.98 88.22 87.55 86.59 87.07</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "
02 Jan 2021
Dr. Abdel Hamid Soleman
Academic Editor, Peer Computer Science
RE: CS-2020:10:54857:0:1:REVIEW
Dear Dr. Soleman,
Thanks for the insightful comments from the reviewers. We have carefully addressed their concerns and made corresponding revisions according to their advice. We hope the reviewers of PeerJ Computer Science find our responses and amendments satisfactory. The following are the answers we have made in response to the reviewers' comments:
Reviewer #1
The title is not in accordance with the work done, for example, I am unable to find the implementation of IoT in the manuscript.
Response:
Thank you for the comment. Many IoT devices like the Arduino and sensors are used to collect and forward the ground and environmental information to the cloud for further processing. Hence, the term IoT-IIRS has been devised. To make the title more meaningful, the prototype model has been incorporated in Figure 3.
The literature review in the introduction is very poor.
Response:
Thank you for the comment. As suggested, the introduction has been expanded. Additionally, we have added content from line no. 44 to line no. 62. To support the related work in the introduction section, the following references have been added.
• Kim, Yunseop, Robert G. Evans, and William M. Iversen. 'Remote sensing and control of an irrigation system using a distributed wireless sensor network.' IEEE transactions on instrumentation and measurement 57.7 (2008): 1379-1387.
• Gutiérrez, Joaquín, et al. 'Automated irrigation system using a wireless sensor network and GPRS module.' IEEE transactions on instrumentation and measurement 63.1 (2013): 166-176.
• Jha, Kirtan, et al. 'A comprehensive review on automation in agriculture using artificial intelligence.' Artificial Intelligence in Agriculture 2 (2019): 1-12.
• Ge, Xiangyu, et al. 'Combining UAV-based hyperspectral imagery and machine learning algorithms for soil moisture content monitoring.' PeerJ 7 (2019): e6926.
The quality of the figures is very poor.
Response:
Thank you for bringing this to our notice. Figure 1, Figure 2, and Figures 4-6 have been redrawn and incorporated in the revised manuscript.
The authors claim that 'a smart irrigation system prototype for efficient usage of water and minimal human intervention has been proposed'. However, I am unable to find this prototype in the paper.
Response:
As suggested, the prototype of the model has been included in Figure 3, and the necessary discussion has been added in lines 202-205.
Reviewer#2
The English writing should be improved.
Response:
As suggested, English has been improved in the revised version of the paper.
Your introduction needs more detail. I suggest you improve lines 55-57 to show why you think the existing researches are inadequate.
Response:
As suggested, the introduction has been improved with more recent works. Additionally, we have added more content to support the work (from line no. 44 to line no. 62). To support the related work in the introduction section, the following references have been added.
• Kim, Yunseop, Robert G. Evans, and William M. Iversen. 'Remote sensing and control of an irrigation system using a distributed wireless sensor network.' IEEE transactions on instrumentation and measurement 57.7 (2008): 1379-1387.
• Gutiérrez, Joaquín, et al. 'Automated irrigation system using a wireless sensor network and GPRS module.' IEEE transactions on instrumentation and measurement 63.1 (2013): 166-176.
• Jha, Kirtan, et al. 'A comprehensive review on automation in agriculture using artificial intelligence.' Artificial Intelligence in Agriculture 2 (2019): 1-12.
• Ge, Xiangyu, et al. 'Combining UAV-based hyperspectral imagery and machine learning algorithms for soil moisture content monitoring.' PeerJ 7 (2019): e6926.
The limitations of the existing system have been discussed from line no. 60 to line no. 64.
The authors spend a lot of ink on related work (lines 75-129), but how exactly are these works related to the proposed method is not very clear. Figure 2 should be improved.
Response: As suggested, an explanation has been provided (line no. 77 to line no. 80) in the revised paper. We have redrawn Figure 2.
The writing of the paper is quite fluent except that there are a few grammar mistakes. For instance, the situation may lead to worse in the coming days.
Response:
Thank you for the suggestion. This time the paper has been corrected with utmost care, with help from a native English speaker.
It is difficult to understand the motivation of the paper.
Response:
There are mainly two motivations for this work to us.
1. Efficient water usage to reduce water wastage
2. Reduce human intervention as much as possible with an intelligent recommendation system.
Moreover, there is a high demand to analyze real-time data based on historical information for irrigating the fields. Research on M2M systems that help to design more intelligent systems is limited. Realizing the water scarcity issue and, at the same time, the technological advancement motivated us to design a fully automated irrigation system. This system must be smart enough so that it can adapt to the local climatic conditions and precisely predict the decision on irrigation in a reliable way.
The motivation of the work has been discussed in lines 62 to 67.
Why some references in the article are in bracket while others are not.
Response:
In the revised version, all references have been correctly cited.
" | Here is a paper. Please give your review comments after reading it. |
136 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>In the traditional irrigation process, a huge amount of water is consumed, which leads to water wastage. To reduce the wastage of water in this tedious task, an intelligent irrigation system is urgently needed. The era of machine learning (ML) and the Internet of Things (IoT) makes it a great advantage to build an intelligent system that performs this task automatically with minimal human effort. In this study, an IoT enabled ML-trained recommendation system is proposed for efficient water usage with the nominal intervention of farmers. IoT devices are deployed in the crop field to collect the ground and environmental details precisely. The gathered data are forwarded and stored in a cloud-based server, which applies ML approaches to analyze data and suggest irrigation to the farmer. To make the system robust and adaptive, an inbuilt feedback mechanism is added to this recommendation system. The experimentation reveals that the proposed system performs quite well on our own collected dataset and the NIT Raipur crop dataset.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Water is the essential natural resource for agriculture, and it is limited in nature <ns0:ref type='bibr' target='#b14'>(Kamienski et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b36'>Wang et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b26'>Sahoo et al., 2019)</ns0:ref>. In a country like India, a huge share of water is used for irrigation <ns0:ref type='bibr' target='#b24'>(Nawandar and Satpute, 2019)</ns0:ref>. Crop irrigation is a noteworthy factor in determining plant yield, depending upon multiple climatic conditions such as air temperature, soil temperature, humidity, and soil moisture <ns0:ref type='bibr' target='#b2'>(Bavougian and Read, 2018)</ns0:ref>. Farmers primarily rely on personal monitoring and experience for harvesting fields <ns0:ref type='bibr' target='#b7'>(Glaroudis et al., 2020)</ns0:ref>. Water needs to be maintained in the field <ns0:ref type='bibr' target='#b22'>(Liu et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b5'>Fremantle and Scott, 2017)</ns0:ref>. Scarcity of water in these modern days is a pressing issue. Such a scarcity is already affecting people worldwide <ns0:ref type='bibr' target='#b18'>(LaCanne and Lundgren, 2018;</ns0:ref><ns0:ref type='bibr' target='#b29'>Schleicher et al., 2017)</ns0:ref>. The situation may worsen in the coming years.</ns0:p></ns0:div>
<ns0:div><ns0:p>The uniform water distribution of traditional irrigation systems is not optimal. Hence, research effort has been exerted towards efficient agricultural monitoring systems <ns0:ref type='bibr' target='#b17'>(Kim et al., 2008)</ns0:ref>. In this regard, a standalone monitoring station is under development. For instance, a mixed signal processor (MSP430) has been developed with a microcontroller along with a set of meteorological sensors. A wireless sensor-based monitoring system has also been developed. It is made of several wireless sensor nodes and a gateway <ns0:ref type='bibr' target='#b10'>(Gutiérrez et al., 2013a)</ns0:ref>. It has been implemented as an easy solution with better spatial and temporal determinations. In addition, there is a demand for automating the irrigation system. Automation enables machines to apply intelligence by reading historical data and accordingly analyze and predict the output. This mechanism is more effective than the traditional rule-based algorithm <ns0:ref type='bibr' target='#b4'>(Chlingaryan et al., 2018)</ns0:ref>. From here onwards, machine learning (ML) and artificial intelligence (AI) play an important role. The applications of ML in the agriculture domain are numerous <ns0:ref type='bibr' target='#b13'>(Jha et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Starting from crop selection and yielding to crop disease prediction, different ML techniques like artificial neural networks (ANN), support vector machine (SVM), k-nearest neighbor (k-NN), and decision trees have shown huge success <ns0:ref type='bibr' target='#b6'>(Ge et al., 2019)</ns0:ref>. After the great success of the combination of WSN and ML techniques, there is a requirement for more automation without human intervention. The development of machine to machine (M2M) communication and the Internet of Things (IoT) allows devices to communicate with one another without much human intervention. These days, the usage of mobile devices is increasing; at the same time, cloud computing is becoming a popular technology. The existing water monitoring systems use wireless sensors for monitoring the soil condition for irrigation. These systems merely capture the data from the land and subsequently control the electric motor for watering the land.</ns0:p><ns0:p>Moreover, there is a high demand to analyze real-time data based on historical information for irrigating fields. Research on M2M systems is limited, particularly regarding communication among devices to more intelligently perform analysis and recommendation. Realizing the water scarcity issue and, at the same time, the technological advancement, we are motivated to design a fully automated irrigation system. This system must be smart enough so that it can adapt to the local climatic conditions and precisely predict the decision on irrigation in a reliable way.</ns0:p><ns0:p>There are many aspects of an efficient automatic irrigation system. As weather data is an important parameter for making irrigation decisions, the system must be smart enough to integrate the forecasted weather data <ns0:ref type='bibr' target='#b9'>(Goldstein et al., 2018)</ns0:ref>.
The next important aspect is to estimate the soil features properly so that the prediction error can be minimized in the irrigation recommendation system <ns0:ref type='bibr' target='#b16'>(Khoa et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Once the weather data along with soil and environmental parameters are forecasted, the data can be used to make final decisions regarding irrigation. For this task, we need an efficient binary classifier to decide whether to irrigate or not. The advanced ML techniques and IoT may provide a solution. The two main motivations for this work are efficient water usage in irrigation to reduce water wastage and to minimize human intervention as much as possible.</ns0:p><ns0:p>The main objective of the work is to reduce water wastage in the agricultural field by introducing an IoT-enabled ML-trained recommendation system (IoT-IIRS). In this prototype, the analyzed information can be sent from the cloud server to the farmer's mobile handset beforehand. This system makes the decision of whether to water the field or not simple for farmers. The main contributions of this work are as follows:</ns0:p><ns0:p>1. An IoT-IIRS is proposed for efficient water usage.</ns0:p><ns0:p>2. IoT devices are deployed in the crop field to collect the ground and environmental details precisely. The gathered data such as air temperature, soil temperature, humidity, and soil moisture are forwarded through an Arduino and stored in a cloud-based server, which applies ML algorithms such as SVM, regression tree, and agglomerative clustering to analyze those data and suggest irrigation to the farmer. 3. To make the system robust and adaptive, an inbuilt feedback mechanism is added to this recommendation system. 4. Experimentation reveals that the proposed system performs well on our own collected dataset and the NIT Raipur crop dataset.</ns0:p><ns0:p>The remaining part of the paper is organized as follows. The next section introduces the related work. The proposed methodology section discusses the working steps of the solution framework. The results of the experiments are discussed in the result section. Finally, the last section ends with concluding remarks and further scope for improvement.</ns0:p></ns0:div>
<ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>Communication is one of the most important aspects of the implementation of an automatic and intelligent irrigation system. In the last few years, significant research has been carried out on smart irrigation. In <ns0:ref type='bibr' target='#b28'>(Salas et al., 2014)</ns0:ref>, the authors suggested the implementation of general packet radio service (GPRS) communication as a gateway between a wireless sensor network (WSN) and the Internet. Numerous data transmission techniques have been applied in closed-loop watering systems. These are used to apply the required amount of water in the desired place in due time to conserve natural resources. <ns0:ref type='bibr'>Bhanu et al. presented</ns0:ref> a system to build a wireless sensor network-based soil moisture controller, which determines the water demand by comparing the soil moisture with a preset threshold value <ns0:ref type='bibr' target='#b3'>(Bhanu et al., 2014)</ns0:ref>. Field authentication tests are regularly performed on distinct soils to estimate the soil moisture and water quantity in the soil to develop a productive watering system. If the preserved data do not match the measured soil data, then an interrupt is passed to the pressure unit to stop the irrigation. A solar power-based intelligent irrigation system is proposed to provide the required amount of water to the crop field <ns0:ref type='bibr' target='#b25'>(Rehman et al., 2017)</ns0:ref>. Soil and humidity sensors are deployed to measure the wet and dry states of the soil. Once sensing is completed, the sensor node transmits the signal to the microcontroller. In addition, it forwards that signal to the relay to switch the motor on and off. In <ns0:ref type='bibr' target='#b15'>(Kaur and Deepali, 2017)</ns0:ref>, the authors presented a WSN-based smart irrigation system for efficient water usage with the help of automated remote sensing and persistent analysis of soil parameters and environmental conditions using ML. <ns0:ref type='bibr'>Hema et al. proposed</ns0:ref> an approach to estimate the native real-time weather parameters for interpolation with the help of an automated weather station (ASW) <ns0:ref type='bibr' target='#b12'>(Hema and Kant, 2014</ns0:ref>). This intelligent system provides past, present, and future predictions utilizing nearby ASW data and controls the irrigation process during conditions like rainfall. To control the irrigation, soil moisture and ASW data are exploited for error correction, where the interpolated value is compared with the soil moisture data. In <ns0:ref type='bibr' target='#b1'>(Ashwini, 2018)</ns0:ref>, the authors stated that a smart irrigation system employing WSN and GPRS modules optimizes water utilization for any agricultural crop. This approach comprises a distributed WSN along with different sensors, such as moisture and temperature sensors. Gateway components are employed to transfer data from the sensor unit to the base station. A direct order is sent to the actuator for regulating the irrigation process and handling data from the sensor unit. According to the need and conditions of the field, different algorithms are used in the system. It is programmed in a microcontroller that sends commands through an actuator to regulate water quantity with a valve unit. The entire framework is powered by photo-voltaic panels, where duplex communication takes place through cellular networks. Web applications control irrigation through regular monitoring and irrigation scheduling.
In <ns0:ref type='bibr' target='#b20'>(Leh et al., 2019)</ns0:ref>, the authors designed hardware and software by analyzing the routing protocols of the sensor network. Mobile phones and wireless personal digital assistants (PDAs) help to monitor the soil moisture content; as a result, the irrigation system is controlled. Mohapatra et al. proposed an irrigation system based on WSN <ns0:ref type='bibr' target='#b23'>(Mohapatra et al., 2019</ns0:ref>). The designed system uses fuzzy logic and a neural network to save water efficiently. The used fuzzy neural network is an integrated set of fuzzy logic reasoning and the self-learning ability of a neural network. Sensor nodes measure temperature, humidity, soil moisture, and light intensity data. A LAN or WAN helps to transfer the collected data to the irrigation control system via gateway nodes. The electromagnetic valve is controlled for precision irrigation based on the collected data. To predict the soil moisture, the authors in <ns0:ref type='bibr' target='#b8'>(Goap et al., 2018)</ns0:ref> developed an algorithm that works based on field sensor and weather forecasting data. The algorithm uses the support vector regression model and k-means clustering. This algorithm also provides a suggestion regarding irrigation based on the level of soil moisture. Collected device information and the output of the algorithm are stored in a MySQL database at the server end. In <ns0:ref type='bibr' target='#b11'>(Gutiérrez et al., 2013b)</ns0:ref>, the authors presented a system that uses a camera to capture images. Captured images are processed to determine the water content of the soil. Depending on the level of water in the soil, water is pumped into the crop field. The camera is controlled from an Android application. The camera captures RGB pictures of soil using an anti-reflective glass window to find the wet and dry areas. The WiFi connection of the smartphone is used to transmit the estimated value to the gateway through a router to control the water pump. M2M communication is applied as a robust mechanism for effective water management during farm irrigation <ns0:ref type='bibr' target='#b30'>(Shekhar et al., 2017)</ns0:ref>. In <ns0:ref type='bibr' target='#b35'>(Vij et al., 2020)</ns0:ref>, the authors proposed a distributed network environment using IoT, ML, and WSN technologies for efficient water usage and reduced soil erosion.</ns0:p></ns0:div>
<ns0:div><ns0:p>Soil moisture prediction is one of the most important tasks for an automatic irrigation system. Many researchers have contributed various methodologies and algorithms for this task <ns0:ref type='bibr' target='#b0'>(Adeyemi et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b32'>Sinwar et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b31'>Singh et al., 2020)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>PROPOSED METHODOLOGY</ns0:head><ns0:p>The proposed three-tier architecture is described in this section, along with the role of each tier. The solution architecture of our smart irrigation system is given in Fig. <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. The details of each level are described below.</ns0:p><ns0:p>[Figure 1 labels, recovered from the garbled figure-text extraction: Crop field level (Arduino; soil moisture, soil temperature, UV and humidity sensors; water pump; save sensor readings in DB). Cloud (server) level (ML analyzer; results of ML in DB; handler; forecasted weather data via a weather API). User level (Android app; user information; crop information; view data and command the Arduino whether to turn the pump on/off).]</ns0:p></ns0:div>
<ns0:div><ns0:p>Algorithm 1 Working steps for the solution framework.</ns0:p><ns0:p>Input: Authentication details for login to the Android application; all relevant crop information from the drop-down list. Output: Motor turned on/off. 1: Collect all sensor data at regular intervals through the Arduino.</ns0:p><ns0:p>2: Save the sensor readings in the cloud server database.</ns0:p><ns0:p>3: Using the forecasted weather data, the ML analyzer model analyzes these stored sensor data to check whether irrigation is required or not. 4: The recommendation of the ML model is forwarded to the Android application through the handler. 5: Based on the ML recommendation and the sensor readings, the user instructs the Arduino to send an on/off signal to the motor. 6: The user may follow the recommendation and irrigate the field. 7: If the user does not follow the recommendation, then feedback is sent and stored in the database for the corresponding sensor readings.</ns0:p><ns0:p>The stored sensor data are forwarded to the ML unit for analysis. This ML unit is the heart of this intelligent system and has two sections. One is the regression model that is used to predict the soil and environmental parameters in advance; by doing so, it can be used effectively to improve the performance of the system. The parameters considered from the forecasted weather data are the atmospheric pressure, precipitation, solar radiation, and wind speed. These predicted values are passed through a clustering model to reduce the prediction errors. The other ML-based model takes the results of the clustering model along with the forecasted weather data as input. This binary classification model categorizes the predicted samples into two predefined classes: irrigation required (Y) or not required (N). The results of these ML models are stored in the database for future actions. The last component of the cloud-based server is the handler, used for coordination between the user and field units. Based on the suggestion of the ML model, the handler sends irrigation suggestions to the user via the Android application.</ns0:p><ns0:p>Based on the agricultural literature, the formula used for calculating the water requirement is given in Equation <ns0:ref type='formula'>1</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_1'>EV_o * C_f = W_need (1)</ns0:formula><ns0:p>where EV_o = rate of evaporation,</ns0:p><ns0:formula xml:id='formula_2'>C_f = crop factor,</ns0:formula><ns0:p>and W_need = amount of water needed.</ns0:p><ns0:p>• User Level: At the user level, the user interacts with an Android application to enter the details about the farmer and crop. Farmer credentials are used to authenticate the user through a login operation. In the crop details, the farmer provides information by selecting drop-down menus for the season, crop name, total crop days, date of sowing, and so on. Through this application, the farmer can get all relevant information about the crop and field. Thus, upon receiving the sensor data along with the irrigation suggestion on the Android application, the user may issue the on/off command to the microcontroller. The system is an adaptive one that takes feedback from the user for each suggestion made by the handler: if the farmer does not follow the recommendation, then feedback is sent to the server so that the model can be updated. The system is fine-tuned subsequently based on user feedback. The microcontroller, upon receiving the on/off command from the user, switches the motor on or off for the supply of water to the plants. 
Thus, we have an automatic irrigation system, which can be used to increase the productivity of the crop by providing an optimal amount of water. The working steps of the proposed solution framework are shown in Algorithm 1.</ns0:p></ns0:div>
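As a concrete illustration of Equation (1), the short Python sketch below computes W_need; the evaporation rate and crop factor used here are hypothetical placeholder values, not figures taken from the paper.

def water_need(ev_o, c_f):
    """Daily water requirement W_need = EV_o * C_f (units follow EV_o)."""
    return ev_o * c_f

# Hypothetical inputs: EV_o = 5.0 mm/day, C_f = 0.8 for the current stage.
print(f"W_need = {water_need(5.0, 0.8):.2f} mm/day")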
<ns0:div><ns0:head>ML Model</ns0:head><ns0:p>As mentioned earlier, the ML analyzer is the main building block of our proposed system. On the stored sensor data, the regression tree (RT) algorithm is applied to predict future soil and environmental data.</ns0:p><ns0:p>RTs can capture non-linear relationships and are reasonably robust to outliers. These predicted parameters are further improved using the agglomerative clustering (AC) algorithm, which may be suitable for this task as the clusters are not supposed to be globular. The forecasted weather data are combined with the predicted soil data to be fed into the classification model. Then, the classifier categorizes each data sample according to whether irrigation is required or not at that time. This way, the ML technique helps the farmer with the suggestion for irrigation on the crop field. The details of this recommendation architecture are given in Fig. <ns0:ref type='figure'>2</ns0:ref>.</ns0:p><ns0:p>The working steps of the ML model are detailed in Algorithm 2.</ns0:p></ns0:div>
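To make the RT -> AC -> SVM flow concrete, here is a hedged scikit-learn sketch (the paper states that scikit-learn is used, but the synthetic data, hyperparameters, and the cluster-mean smoothing step below are illustrative assumptions, not the authors' code):

import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.cluster import AgglomerativeClustering
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_hist = rng.random((150, 5))     # past sensor readings (150 daily samples)
y_next = rng.random(150)          # next-day soil moisture to be predicted

# 1) Regression tree predicts the future soil/environmental parameter.
rt = DecisionTreeRegressor(max_depth=5).fit(X_hist, y_next)
pred = rt.predict(X_hist).reshape(-1, 1)

# 2) Agglomerative clustering partitions the predictions; replacing each
#    prediction by its cluster mean smooths the prediction error.
labels = AgglomerativeClustering(n_clusters=4).fit_predict(pred)
refined = np.array([pred[labels == c].mean() for c in labels]).reshape(-1, 1)

# 3) SVM classifies (refined prediction + forecasted weather) into
#    irrigation required (1) or not required (0).
forecast = rng.random((150, 4))   # pressure, precipitation, radiation, wind
X_clf = np.hstack([refined, forecast])
y_irrigate = (refined.ravel() < 0.4).astype(int)   # toy labels
clf = SVC(kernel="rbf").fit(X_clf, y_irrigate)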
<ns0:div><ns0:head>RT</ns0:head><ns0:p>An RT is constructed via a procedure known as binary recursive splitting, a repetitive process that partitions the data into branches, sub-branches, and so on. Each decision node in the tree assesses the values of several input variables. The leaf nodes of the RT carry the predicted output (response) variable <ns0:ref type='bibr' target='#b34'>(Torres-Barrán et al., 2019)</ns0:ref>. Here, the AdaBoost algorithm is applied using its library implementation.</ns0:p></ns0:div>
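A minimal sketch of the boosted regression tree, assuming the library in question is scikit-learn (named in the paper's results section); the tree depth and number of estimators are illustrative assumptions:

from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor

# Regression trees boosted with AdaBoost; in scikit-learn < 1.2 the
# keyword below is base_estimator rather than estimator.
rt_model = AdaBoostRegressor(
    estimator=DecisionTreeRegressor(max_depth=4),
    n_estimators=50,
    random_state=0,
)
# rt_model.fit(X_train, y_train); y_future = rt_model.predict(X_new)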
<ns0:div><ns0:head>AC</ns0:head><ns0:p>AC is a type of hierarchical clustering that works in a bottom-up manner <ns0:ref type='bibr' target='#b33'>(Stashevsky et al., 2019)</ns0:ref>.</ns0:p><ns0:p>A fundamental assumption in hierarchical AC is that the merge operation is monotonic. Each observation begins in its own cluster, and clusters are merged as one moves up the hierarchy. This clustering may improve the performance of classical regression by partitioning the training sample space into subspaces.</ns0:p></ns0:div>
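A minimal bottom-up clustering sketch; the linkage choice and cluster count are illustrative assumptions, not taken from the paper:

import numpy as np
from sklearn.cluster import AgglomerativeClustering

points = np.random.default_rng(1).random((20, 2))
# Every point starts in its own cluster; pairs of clusters are merged
# monotonically until n_clusters remain, so clusters need not be globular.
labels = AgglomerativeClustering(n_clusters=3, linkage="average").fit_predict(points)
print(labels)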
<ns0:div><ns0:head>SVM</ns0:head><ns0:p>SVM is a supervised ML model that works very well for many classification tasks <ns0:ref type='bibr' target='#b19'>(Lebrini et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Once an SVM is fed with sets of labeled training data for each class, it can categorize new samples. For non-linear classification, it performs well with a limited amount of labeled training data.</ns0:p><ns0:p>[Figure 2 labels, recovered from the garbled figure-text extraction: forecasted weather data; collected sensor data; regression tree training model; predicted sensor data; agglomerative clustering algorithm to predict more accurate sensor parameters with minimal error; final predicted sensor data; weather API; SVM classifier training model; suggestion on whether irrigation is required or not.]</ns0:p></ns0:div>
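A minimal non-linear SVM sketch on toy data; the RBF kernel and the synthetic labels are illustrative assumptions:

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.random((60, 3))                     # toy feature vectors
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)   # toy irrigate / do-not-irrigate labels
model = SVC(kernel="rbf", C=1.0).fit(X, y)  # non-linear decision boundary
print(model.predict(X[:5]))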
<ns0:div><ns0:p>A power bank is used as the source of power. This power source may be replaced with a small solar panel and a battery in the future. The Arduino is used because of its low power consumption. A prototype of the Android application is shown in Fig. <ns0:ref type='figure'>4</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>RESULTS AND DISCUSSION</ns0:head><ns0:p>The performance of the proposed model has been evaluated using Python. All the ML models have been implemented using the Scikit-learn Python library. To validate the effectiveness of the system, our own collected data (GCEKIoTCommunity) 1 along with the Crops dataset of NIT Raipur (NitrrMCACommunity) 2 have been used. We have collected data such as soil moisture, soil temperature, humidity, air temperature, and UV radiation by using the relevant sensors. The stored reading for each day is the average of all readings on that day. We gathered 150 samples from our simulation area. In the NIT Raipur dataset, there are 501 samples available. As mentioned earlier, we have integrated the forecasted weather data from the Indian Meteorological Department (IMD) site 3 by using a weather application programming interface (API) 4 . Through this, we have collected data such as next-day rainfall, amount of rainfall, precipitation, and so on for a particular area to make our irrigation accuracy even better. As the evening is the best time to irrigate, the ML model runs each day in the evening. Based on its results, the handler sends irrigation suggestions to the farmer along with the stored and forecasted parameters. For the classification task, the five-fold cross-validation method is used for better generalization. The efficiency of the system is estimated based on the most extensively used measures, such as precision, recall, F1-measure, and accuracy (A). Mathematically, these measures are represented in Equations 2 to 4.</ns0:p><ns0:formula xml:id='formula_4'>Precision (P_r) = T_p / (T_p + F_p) (2) Recall (R_c) = T_p / (T_p + F_n) (3) F1-measure (F1) = (2 * P_r * R_c) / (P_r + R_c) <ns0:label>(4)</ns0:label></ns0:formula><ns0:p>where T_p, F_p, and F_n denote the true positives, false positives, and false negatives, respectively. The results of the different ML models for the classification of whether irrigation is required or not are given in <ns0:ref type='table' target='#tab_0'>Table (1)</ns0:ref> and also depicted in Fig. <ns0:ref type='figure'>5</ns0:ref> and Fig. <ns0:ref type='figure'>6</ns0:ref>. The experimental results show that the performance of our system is quite satisfactory for this automation task. The performance of the system may improve further with experience as new data are collected. The user feedback will further fine-tune the system even if there are a few wrong suggestions. The experimental results clearly demonstrate that the SVM-based model outperforms the other classification models on both datasets. Of the two datasets, the NIT Raipur crop dataset performs better, which may be due to the larger number of samples present in it.</ns0:p></ns0:div>
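The evaluation in Equations (2) to (4), together with five-fold cross-validation, can be reproduced in outline as follows (synthetic data; the classifier settings are illustrative assumptions, not the authors' configuration):

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.metrics import precision_score, recall_score, f1_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.random((150, 6))
y = (X[:, 0] > 0.5).astype(int)

clf = SVC(kernel="rbf")
print("5-fold accuracy:", cross_val_score(clf, X, y, cv=5).mean())

y_pred = clf.fit(X, y).predict(X)
print("P_r:", precision_score(y, y_pred))  # T_p / (T_p + F_p), Eq. (2)
print("R_c:", recall_score(y, y_pred))     # T_p / (T_p + F_n), Eq. (3)
print("F1 :", f1_score(y, y_pred))         # 2*P_r*R_c / (P_r + R_c), Eq. (4)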
<ns0:div><ns0:head>CONCLUSION AND FUTURE WORK</ns0:head><ns0:p>In this work, a smart irrigation system prototype for efficient usage of water and minimal human intervention is proposed. The proposed recommendation system includes regression of soil and environmental attributes, which are further improved with the help of AC. Forecasted weather data are integrated with these predicted attributes to reduce the irrigation error. Finally, the classification model is used to categorize the combined set of attributes and suggest whether irrigation is required or not. Based on these results, the system gives the farmer a recommendation for the next irrigation. If the farmer rejects the recommendation, then feedback is sent to the system, which updates and fine-tunes the model subsequently. The experiments reveal that the SVM model outperforms the other classification models on both datasets. As ML-based models are data hungry, there is a strong expectation that the proposed system will perform even better with more samples. This system can be further extended to decide on spraying appropriate chemicals for proper crop growth.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>The proposed methodology section discusses the working steps for the solution framework.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Solution framework</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .Figure 3 .Figure 4 .</ns0:head><ns0:label>234</ns0:label><ns0:figDesc>Figure 2. Architecture of ML-based model</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 5 .Figure 6 .</ns0:head><ns0:label>56</ns0:label><ns0:figDesc>Figure 5. Performance evaluation of proposed model on our own collected dataset (a) with 70:30 ratio (b) with 5-fold cross-validation</ns0:figDesc><ns0:graphic coords='10,152.45,63.78,186.11,141.73' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Suggestion for irrigation C4.5) 86.15 85.75 83.89 84.81 86.29 85.86 83.95 84.89 SVM 88.05 87.44 86.52 86.98 88.22 87.55 86.59 87.07</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>70:30 ratio of</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell cols='2'>5-fold cross</ns0:cell></ns0:row><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Classifier</ns0:cell><ns0:cell /><ns0:cell cols='2'>training and testing</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell cols='2'>validation</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>A</ns0:cell><ns0:cell>P r</ns0:cell><ns0:cell>R c</ns0:cell><ns0:cell>F1</ns0:cell><ns0:cell>A</ns0:cell><ns0:cell>P r</ns0:cell><ns0:cell>R c</ns0:cell><ns0:cell>F1</ns0:cell></ns0:row><ns0:row><ns0:cell>Our sensor collected data</ns0:cell><ns0:cell cols='8'>Naive Bayes Decision Tree (C4.5) 85.74 85.12 83.81 84.46 85.83 85.19 83.88 84.53 83.48 82.67 80.95 81.80 83.61 82.77 80.99 81.87 SVM 87.29 86.77 85.42 86.09 87.45 86.85 85.51 86.17</ns0:cell></ns0:row><ns0:row><ns0:cell>NIT Raipur Crop dataset</ns0:cell><ns0:cell>Naive Bayes Decision Tree (</ns0:cell><ns0:cell cols='7'>84.37 83.35 82.63 82.99 84.51 83.44 82.68 83.06</ns0:cell></ns0:row></ns0:table><ns0:note>1 https://github.com/GCEKIoTCommunity/Irrigation-Dataset 2 https://github.com/NitrrMCACommunity/Irrigation-Dataset 3 https://mausam.imd.gov.in/ 4 https://restapitutorial.com/ 8/11 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54857:3:0:NEW 25 Apr 2021)</ns0:note></ns0:figure>
</ns0:body>
" | "Article ID: 54857
Title: IoT-IIRS: Internet of Things Based-Intelligent Irrigation Recommendation System Using Machine Learning Approach for Efficient Water Usage
To
The editor-in-chief
PeerJ Computer Science journal
The authors would like to thank the editor-in-chief for allowing us to further improve our manuscript.
The authors would also like to thank all the anonymous reviewers for their constructive comments and suggestions to further improve the quality of our article. Based on these recommendations, we provide the compliance report here. The modifications are shown in “BLUE” in the revised manuscript. We hope the editors and reviewers find our responses and amendments satisfactory.
Reviewer 1 (Anonymous)
Basic reporting
1. The paper has been revised, but I didn't understand how the authors have implemented IoT using a GSM board. Moreover, there are a lot of '?' in the paper and I didn't understand why.
Response: Here, we are using the GSM module to communicate between the Arduino and the server. The GSM module uses a SIM card for Internet connectivity to the server through GPRS. Regarding the usage of '?', we have used 1-2 question marks where a decision is posed (to decide whether irrigation is required or not).
Experimental design
1. Why have you used the Arduino UNO, since there are a lot of embedded boards which are dedicated to IoT applications, such as the Arduino Nano 33 IoT, ESP32, Raspberry Pi...
Response: We have used the Arduino UNO as the embedded board because it is very energy efficient: it consumes less power to operate in comparison to the other boards available in the market (mentioned in the last line of page 7).
Reviewer 3 (Anonymous)
Basic reporting
1. The manuscript has a typing error: in the Cloud Level sub-section, before equation 1, there is an extra letter in the word 'literawture'; please correct it.
Response: Thank you for marking it. It is suitably corrected.
2. Acronyms should be defined where they are first used (e.g. NIT, MSP430, API). In the abstract, NIT is not defined; in the introduction section, MSP430 is not defined; and in the results and discussion section the acronym API is not defined. Please add these definitions.
Response: All acronyms are suitably defined.
3. Some references are not clear in the manuscript; please check them. For instance, (res, 2020), (imd, 2020), (rai, 2020), (PME, 2021).
Response: As all of these are websites, we have removed them from the reference list and added them as footnotes (page 8).
4. The variables of equation 1 should be defined (e.g. E, V_o, C_f, and W_need).
Response: Thank you so much for bringing this mistake to our notice. All the variables are suitably defined just below equation (1).
5. In Figures 1 to 7 you could improve the legends (the numbers on the bars) by changing their size or number style.
Response: Suitably modified.
6. In order to clarify, the authors should add a figure of the experimental prototype working in real conditions and a figure of the web application.
Response: Suitably added. A figure of the experimental prototype working in real conditions and a figure of the Android application have been incorporated as Figure 3(b) and Figure 4, respectively.
Experimental design
1. The Crop Field Level subsection mentions that the microcontroller is connected to the motor. The question is: how is the motor connected to the microcontroller? Do you use a control circuit or an integrated circuit? Can you explain this?
Response: We have connected the microcontroller to the motor using an integrated circuit. Initially, we connect the breadboard to the Arduino analog pin. Then transistors are placed in suitable positions, with the emitter connected to ground, and diodes are connected. Finally, we make a connection from the Arduino 5V power pin to the breadboard and connect one pin of the motor to the 5V supply and the other to ground.
2. What is the energy consumption of the Crop Field Level sensors when sending messages, during data acquisition, and in the sleep condition?
Response: The current consumption of the different sensors used is between 3 and 13 mA. In the sleep or idle condition these sensors draw around 4 to 5 mA, whereas during the data transmission period they draw the maximum, about 12 to 13 mA.
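For context, a back-of-envelope runtime estimate from the figures quoted above; the duty cycle and power-bank capacity below are hypothetical assumptions, not values from the paper:

capacity_mah = 2000.0      # hypothetical power-bank capacity
i_idle, i_tx = 4.5, 12.5   # mA, midpoints of the quoted idle/transmit ranges
tx_fraction = 0.05         # assume the node transmits 5% of the time

avg_ma = (1 - tx_fraction) * i_idle + tx_fraction * i_tx
print(f"average draw: {avg_ma:.2f} mA")
print(f"estimated runtime: {capacity_mah / avg_ma:.0f} hours")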
Validity of the findings
1. In my opinion, the strategy implemented in the work is not clear. For instance, Goap et al. reported IoT-based smart irrigation using machine learning; in my opinion, that work is very similar to yours. Please explain clearly the differences and your contribution.
Response: It may look similar, but there are significant contributions in our manuscript. In the process we have used more robust techniques to improve the accuracy. We have applied the agglomerative clustering algorithm, which ultimately improves the performance of our proposed system. Along with one popularly used dataset, we have collected our own data from the drought-prone Kalahandi district of Odisha state, India.
" | Here is a paper. Please give your review comments after reading it. |
137 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Matrix Depot is a Julia software package that provides easy access to a large and diverse collection of test matrices. Its novelty is threefold. First, it is extensible by the user, and so can be adapted to include the user's own test problems. In doing so it facilitates experimentation and makes it easier to carry out reproducible research. Second, it amalgamates in a single framework two different types of existing matrix collections, comprising parametrized test matrices (including Hansen's set of regularization test problems and Higham's Test Matrix Toolbox) and real-life sparse matrix data (giving access to the University of Florida sparse matrix collection). Third, it fully exploits the Julia language. It uses multiple dispatch to help provide a simple interface and, in particular, to allow matrices to be generated in any of the numeric data types supported by the language.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>Introduction</ns0:head><ns0:p>In 1969 Gregory and Karney published a book of test matrices <ns0:ref type='bibr' target='#b13'>[14]</ns0:ref>. They stated that 'In order to test the accuracy of computer programs for solving numerical problems, one needs numerical examples with known solutions. The aim of this monograph is to provide the reader with suitable examples for testing algorithms for finding the inverses, eigenvalues, and eigenvectors of matrix.' At that time it was common for journal papers to be devoted to introducing and analyzing a particular test matrix or class of matrices, examples being the papers of Clement <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref> (in the first issue of SIAM Review), Pei <ns0:ref type='bibr' target='#b23'>[24]</ns0:ref> (occupying just a quarter of a page), and Gear <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref>.</ns0:p><ns0:p>Today, test matrices remain of great interest, but not for the same reasons as fifty years ago. Testing accuracy using problems with known solutions is less common because a reference solution correct to machine precision can usually be computed at higher precision without difficulty. The main uses of test matrices nowadays are for exploring the behavior of mathematical quantities (such as eigenvalue bounds) and for measuring the performance of one or more algorithms with respect to accuracy, stability, convergence rate, speed, or robustness.</ns0:p><ns0:p>Various collections of matrices have been made available in software. As well as giving easy access to matrices these collections have the advantage of facilitating reproducibility of experiments <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref>, whether by the same researcher months later or by different researchers.</ns0:p><ns0:p>An early collection of parametrizable matrices was given by Higham <ns0:ref type='bibr' target='#b19'>[20]</ns0:ref> and made available in MATLAB form. The collection was later extended and distributed as a MATLAB toolbox <ns0:ref type='bibr' target='#b20'>[21]</ns0:ref>. Many of the matrices in the toolbox were subsequently incorporated into the MATLAB gallery function. Marques, Vömel, Demmel, and Parlett <ns0:ref type='bibr' target='#b22'>[23]</ns0:ref> present test matrices for tridiagonal eigenvalue problems (already recognized as important by Gregory and Karney, who devoted the last chapter of their book to such matrices). The Harwell-Boeing collection of sparse matrices <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref> has been widely used, and is incorporated in the University of Florida Sparse Matrix Collection <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref>, <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref>, which contains over 2700 matrices from practical applications, including standard and generalized eigenvalue problems from <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. Among other MATLAB toolboxes we mention the CONTEST toolbox <ns0:ref type='bibr' target='#b24'>[25]</ns0:ref>, which produces adjacency matrices describing random networks, and the NLEVP collection of nonlinear eigenvalue problems <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>.</ns0:p><ns0:p>The purpose of this work is to provide a test matrix collection for Julia <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>, <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>, a new dynamic programming language for technical computing. 
The collection, called Matrix Depot, exploits Julia's multiple dispatch features to enable all matrices to be accessed by one simple interface. Moreover, Matrix Depot is extensible. Users can add matrices from the University of Florida Sparse Matrix Collection 1 and Matrix Market; they can code new matrix generators and incorporate them into Matrix Depot; and they can define new groups of matrices that give easy access to subsets of matrices. The parametrized matrices can be generated in any appropriate numeric data type, such as</ns0:p><ns0:p>• floating-point types Float16 (half precision: 16 bits), Float32 (single precision: 32 bits), and Float64 (double precision: 64 bits);</ns0:p><ns0:p>• integer types Int32 (signed 32-bit integers), UInt32 (unsigned 32-bit integers), Int64 (signed 64-bit integers), and UInt64 (unsigned 32-bit integers);</ns0:p><ns0:p>• Complex, where the real and imaginary parts are of any Real type (the same for both);</ns0:p><ns0:p>• Rational (ratio of integers); and</ns0:p><ns0:p>• arbitrary precision type BigFloat (with default precision 256 bits), which uses the GNU MPFR Library <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref>.</ns0:p><ns0:p>This paper is organized as follows. We start by giving a brief demonstration of Matrix Depot in Section 2. Then we explain the design and implementation of Matrix Depot in Section 3, giving details on how multiple dispatch is exploited; how the collection is stored, accessed, and documented; and how it can be extended. In Section 4 we describe the two classes of matrices in Matrix Depot: parametrized test matrices and real-life sparse matrix data. Concluding remarks are given in Section 5.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>A Taste of Matrix Depot</ns0:head><ns0:p>To download Matrix Depot, in a Julia REPL (read-eval-print loop) run the command 1 The University of Florida Sparse Matrix Collection is to be renamed as The SuiteSparse Matrix Collection.</ns0:p></ns0:div>
<ns0:div><ns0:head>> Pkg . add (' MatrixDepot ')</ns0:head><ns0:p>Then import Matrix Depot into the local scope.</ns0:p></ns0:div>
<ns0:div><ns0:head>> using MatrixDepot</ns0:head><ns0:p>Now the package is ready to be used. First, we find out what matrices are in Matrix Depot. All the matrices and groups in the collection are shown. It is also possible to obtain just the list of matrix names.</ns0:p><ns0:p>> matrixdepot (' all ') 56 -element Array { ASCIIString ,1}: ' baart ' ' binomial ' ' blur ' ' cauchy ' ' chebspec ' ' chow ' ' circul ' ' clement ' ' companion ' ' deriv2 ' ... ' spikes ' ' toeplitz ' ' tridiag ' ' triw ' ' ursell ' ' vand ' ' wathen ' ' wilkinson ' ' wing ' Here, '...' denotes that we have omitted some of the output in order to save space. Next, we check the input options of the Hilbert matrix hilb.</ns0:p></ns0:div>
<ns0:div><ns0:head>> matrixdepot (' hilb ')</ns0:head><ns0:p>Hilbert matrix ================ The Hilbert matrix has (i , j ) element 1/( i +j -1). It is notorious for being ill conditioned . It is symmetric positive definite and totally positive . Note that an optional first argument type can be given; it defaults to Float64. The string of equals signs on the third line in the output above is Markdown notation for a header. Julia interprets Markdown within documentation, though as we are using typewriter font for code examples here, we display the uninterpreted source. We generate a 4 × 6 Hilbert matrix with elements in the default double precision type and then in Rational type. A list of all the symmetric matrices in the collection is readily obtained.</ns0:p><ns0:p>> matrixdepot (' symmetric ') 21 -element Array { ASCIIString ,1}: ' cauchy ' ' circul ' ' clement ' ' dingdong ' ' fiedler ' ' hankel ' ' hilb ' ' invhilb ' ' kms ' ' lehmer ' ' minij ' ' moler ' ' oscillate ' ' pascal ' ' pei ' ' poisson ' ' prolate ' ' randcorr ' ' tridiag ' ' wathen ' ' wilkinson '</ns0:p><ns0:p>Here, symmetric is one of several predefined groups, and multiple groups can be intersected. For example, the for loop below prints the smallest and largest eigenvalues of all the 4 × 4 matrices in Matrix Depot that are symmetric positive definite and (potentially) ill conditioned.</ns0:p><ns0:p>> for name in matrixdepot (' symmetric ' , ' pos -def ' , ' ill -cond ') A = full ( matrixdepot ( name , 4)) @printf '%9 s : smallest eigval = %0. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>' gravity ' ' grcar ' ' hadamard ' ' hankel ' ' chebspec ' ' chow ' ' baart ' ' binomial ' ' blur '</ns0:p><ns0:p>Access by number provides a convenient way to run a test on subsets of matrices in the collection. However, the number assigned to a matrix may change if we include new matrices in the collection.</ns0:p><ns0:p>In order to run tests in a way that is repeatable in the future it is best to group matrices into subsets using the macro @addgroup, which stores them by name. For example, the following command will group test matrices frank, golub, gravity, grcar, hadamard, hankel, chebspec, chow, baart, binomial, and blur into test1.</ns0:p><ns0:p>> @addgroup test1 = matrixdepot (15:20 , 5 , 6 , 1:3)</ns0:p><ns0:p>After reloading the package, we can run tests on these matrices using group test1. Here we compute the 2-norms. Since blur (an image deblurring test problem) generates a sparse matrix and the matrix 2-norm is currently not implemented for sparse matrices in Julia, we use full to convert the matrix to dense format.</ns0:p><ns0:p>> for name in matrixdepot (' test1 ') A = full ( matrixdepot ( name , 4)) @printf '%9 s has 2 -norm %0. To download the test matrix SNAP/web-Google from the University of Florida sparse matrix collection (see Section 4.2 for more details), we first download the data with > matrixdepot (' SNAP / web -Google ' , : get ) and then generate the matrix with > matrixdepot (' SNAP / web -Google ' , : r ) 916428 x916428 sparse matrix with 5105039 Float64 entries : Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_0'>[11343 , 1] = 1.0 [11928 , 1] = 1.0 [15902 , 1] = 1.0 [29547 , 1] = 1.0<ns0:label>6</ns0:label></ns0:formula><ns0:p>Computer Science Note that the omission marked '...' was in this case automatically done by Julia based on the height of the terminal window. Matrices loaded in this way are inserted into the list of available matrices, and assigned a number. After downloading further matrices HB/1138_bus, HB/494_bus, and Bova/rma10 the list of matrices is as follows. 3 Package Design and Implementation</ns0:p><ns0:p>In this section we describe the design and implementation of Matrix Depot, focusing particularly on the novel aspects of exploitation of multiple dispatch, extensibility of the collection, and userdefinable grouping of matrices.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Exploiting Multiple Dispatch</ns0:head><ns0:p>Matrix Depot makes use of multiple dispatch in Julia, an object-oriented paradigm in which the selection of a function implementation is based on the types of each argument of the function. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The generic function matrixdepot has eight different methods, where each method itself is a function that handles a specific case. This is neater and more convenient than writing eight 'case' statements, as is necessary in many other languages. As a result, matrixdepot is a versatile function that can be used for a variety of purposes, including returning matrix information and generating matrices from various input parameters.</ns0:p><ns0:p>In the following example we see how multiple dispatch handles different numbers and types of arguments for the Cauchy matrix. Groups : [' inverse ' , ' ill -cond ' , ' symmetric ' , ' pos -def '] References :</ns0:p><ns0:p>N . J . Multiple dispatch is also exploited in programming the matrices. For example, the Hilbert matrix is implemented as function hilb{T}(::Type{T}, m::Integer, n::Integer) H = zeros(T, m, n) for j = 1:n, i = 1:m @inbounds H[i,j] = one(T)/ (i + j -one(T)) end return H end hilb{T}(::Type{T}, n::Integer) = hilb(T, n, n) hilb(args...) = hilb(Float64, args...)</ns0:p><ns0:p>The function hilb has three methods, which enable one to request, for example, hilb(4,2) for a 4 × 2 Hilbert matrix of type Float64, or simply (thanks to the final two lines) hilb(4) for a 4 × 4 Hilbert matrix of type Float64. The keyword @inbounds tells Julia to turn off bounds checking in the following expression, in order to speed up execution. Note that in Julia it is not necessary to vectorize code to achieve good performance <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>.</ns0:p><ns0:p>All the matrices in Matrix Depot can be generated using the function call Manuscript to be reviewed</ns0:p><ns0:p>Computer Science matrixdepot('matrix_name', p1, p2, ...),</ns0:p><ns0:p>where matrix_name is the name of the test matrix, and p1, p2, . . . , are input arguments depending on matrix_name. The help comments for each matrix can be viewed by calling function matrixdepot('matrix_name'). We can access the list of matrix names by number, range, or a mixture of numbers and range.</ns0:p><ns0:p>1. matrixdepot(i) returns the name of the ith matrix;</ns0:p><ns0:p>2. matrixdepot(i:j) returns the names of the ith to jth matrices, where i < j;</ns0:p><ns0:p>3. matrixdepot(i:j, k, m) returns the names of the ith, (i + 1)st, . . ., jth, kth, and mth matrices.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Matrix Representation</ns0:head><ns0:p>Matrix names in Matrix Depot are represented by Julia strings. For example, the Cauchy matrix is represented by 'cauchy'. Matrix names and matrix groups are stored as hash tables (Dict).</ns0:p><ns0:p>In particular, there is a hash table matrixdict that maps each matrix name to its underlying function and a hash table matrixclass that maps each group to its members. The majority of parametrized matrices are dense matrices of type Array{T,2}, where T is the element type of the matrix. Variables of the Array type are stored in column-major order. A few matrices are stored as sparse matrices (see also matrixdepot('sparse')), in the Compressed Sparse Column (CSC) format; these include neumann (a singular matrix from the discrete Neumann problem) and poisson (a block tridiagonal matrix from Poisson's equation). Tridiagonal matrices are stored in the built-in Julia type Tridiagonal, which is defined as follows.</ns0:p><ns0:p>immutable Tridiagonal { T } <: AbstractMatrix { T } dl :: Vector { T } # sub -diagonal d :: Vector { T } # diagonal du :: Vector { T } # sup -diagonal du2 :: Vector { T } # supsup -diagonal for pivoting end</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3'>Matrix Groups</ns0:head><ns0:p>A group is a subset of matrices in Matrix Depot. There are ten predefined groups, described in Table <ns0:ref type='table' target='#tab_8'>1</ns0:ref>, most of which identify matrices with particular properties. Each group is represented by a string. For example, the group of random matrices is represented by 'random'. Matrices can be accessed by group names, as was illustrated in Section 1.</ns0:p><ns0:p>The macro @addgroup is used to add a new group of matrices to Matrix Depot and the macro @rmgroup removes an added group. All the predefined matrix groups are stored in the hash table matrixclass. The macro @addgroup essentially adds a new key-value combination to the hash table usermatrixclass. Using a separate hash table prevents the user from contaminating the predefined matrix groups.</ns0:p><ns0:p>Being able to create groups is a useful feature for reproducible research <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref>. For example, if we have implemented algorithm alg01 and we used circul, minij, and grcar as test matrices for alg01, we could type </ns0:p></ns0:div>
<ns0:div><ns0:head>Group Description</ns0:head><ns0:p>all All the matrices in the collection. data The matrix has been downloaded from the University of Florida Sparse Collection or the Matrix Market Collection. eigen Part of the eigensystem of the matrix is explicitly known. ill-cond The matrix is ill-conditioned for some parameter values.</ns0:p><ns0:p>inverse The inverse of the matrix is known explicitly. pos-def The matrix is positive definite for some parameter values.</ns0:p><ns0:p>random The matrix has random entries. regprob The output is a test problem for regularization methods. sparse The matrix is sparse. symmetric The matrix is symmetric for some parameter values.</ns0:p><ns0:p>> @addgroup alg01_group = [' circul ' , ' minij ' , ' grcar ']</ns0:p><ns0:p>This adds a new group to Matrix Depot (we need to reload the package to see the changes). We can then run alg01 on the test matrices by > for name in matrixdepot ( alg01_group ) A = matrixdepot ( name , n ) # n is the dimension of the matrix . @printf ' Test result for %9 s is %0.3 e ' name alg01 ( A ) end</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.4'>Adding New Matrix Generators</ns0:head><ns0:p>Generators are Julia functions that generate test matrices. When Matrix Depot is first loaded, a directory myMatrixDepot is created. It contains two files, group.jl and generator.jl, where group.jl is used for storing all the user defined groups (see Section 3.3) and generator.jl is used for storing generator declarations.</ns0:p><ns0:p>Julia packages are simply Git repositories 2 . The directory myMatrixDepot is untracked by Git, so any local changes to files in myMatrixDepot do not make the MatrixDepot package 'dirty'. In particular, all the newly defined groups or matrix generators will not be affected when we upgrade to a new version of Matrix Depot. Matrix Depot automatically loads all Julia files in myMatrixDepot. This feature allows a user to simply drop generator files into myMatrixDepot without worrying about how to link them to Matrix Depot.</ns0:p><ns0:p>A new generator is declared using the syntax include_generator(FunctionName, 'fname', f). This adds the new mapping 'fname' → f to the hash table matrixdict, which we recall maps each matrix name to its underlying function. Matrix Depot will refer to function f using string 'fname' so that we can call function f by matrixdepot('fname'...). The user is free to define new data types and return values of those types. Moreover, as with any Julia function, multiple values can be returned by listing them after the return statement.</ns0:p><ns0:p>For example, suppose we have the following Julia file rand.jl, which contains two generators randsym and randorth and we want to use them from Matrix Depot. The triple quotes in the file delimit the documentation for the functions. * n : the dimension of the matrix ''' randorth ( n ) = qr ( randn (n , n )) <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref> We can copy the file rand.jl to the directory myMatrixDepot and add the following two lines to generator.jl.</ns0:p><ns0:p>inclu de_gener ator ( FunctionName , ' randsym ' , randsym ) inclu de_gener ator ( FunctionName , ' randorth ' , randorth )</ns0:p><ns0:p>This includes the functions randsym and randorth in Matrix Depot, as we can see by looking at the matrix list (the new entries are numbered 43 and 45). The new generators can be used just like the built-in ones. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science We can also add group information with the function include_generator. The following lines are put in generator.jl.</ns0:p><ns0:p>inclu de_gener ator ( Group , ' random ' , randsym ) inclu de_gener ator ( Group , ' random ' , randorth )</ns0:p><ns0:p>This adds the functions randsym and randorth to the group random, as we can see with the following query (after reloading the package).</ns0:p><ns0:p>> matrixdepot (' random ') 10 -element Array { ASCIIString ,1}: ' golub ' ' oscillate ' ' randcorr ' ' rando ' ' randorth ' ' randsvd ' ' randsym ' ' rohess ' ' rosser ' ' wathen '</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.5'>Documentation</ns0:head><ns0:p>The Matrix Depot documentation is created using the documentation generator Sphinx 3 and is hosted at Read the Docs 4 . Its primary goals are to provide examples of usage of Matrix Depot and to give a brief summary of each matrix in the collection. Matrices are listed alphabetically with hyperlinks to the documentation for each matrix. Most parametrized matrices are presented with heat map plots, which are produced using the Winston package 5 , with the color range determined by the smallest and largest entries of the matrix. For example, Figure <ns0:ref type='figure' target='#fig_7'>1</ns0:ref> shows how the Wathen matrix is documented in Matrix Depot. Manuscript to be reviewed We now describe the matrices that are provided with, or can be downloaded into, Matrix Depot.</ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div>
<ns0:div><ns0:head n='4.1'>Parametrized Matrices</ns0:head><ns0:p>In Matrix Depot v0.5.5, there are 58 parametrized matrices (including the regularization problems described in the next section), most of which originate from the Test Matrix Toolbox <ns0:ref type='bibr' target='#b20'>[21]</ns0:ref>. All these matrices can be generated as matrixdepot('matrix_name', n), where n is the dimension of the matrix. Many matrices can have more than one input parameter, and multiple dispatch provides a convenient mechanism for taking different actions for different argument types. For example, the tridiag function generates a tridiagonal matrix from vector arguments giving the subdiagonal, diagonal, and superdiagonal vectors, but a tridiagonal Toeplitz matrix can be obtained by supplying scalar arguments that specify the dimension of the matrix, the subdiagonal, the diagonal, and the superdiagonal. If a single, scalar argument n is supplied then an n-by-n tridiagonal Toeplitz matrix with subdiagonal and superdiagonal −1 and diagonal 2 is constructed. This matrix arises in applying central differences to a second derivative operator, and the inverse and the condition number are known explicitly <ns0:ref type='bibr'>[22, sec. 28.5</ns0:ref>].</ns0:p><ns0:p>Here is an example of the different usages of tridiag. </ns0:p><ns0:formula xml:id='formula_1'>-1 0 0 -1 2 -1 0 0 -1 2 -1 0 0 -1<ns0:label>2</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:head n='4.1.1'>Test Problems for Regularization Methods</ns0:head><ns0:p>A mathematical problem is ill-posed if the solution is not unique or an arbitrarily small perturbation of the data can cause an arbitrarily large change in the solution. Regularization methods are an important class of methods for dealing with such problems <ns0:ref type='bibr' target='#b15'>[16]</ns0:ref>, <ns0:ref type='bibr' target='#b18'>[19]</ns0:ref>. One means of generating test problems for regularization methods is to a given ill-posed problem. Matrix Depot contains a group of regularization test problems derived from Hansen's MATLAB Regularization Tools <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref>, <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref>, <ns0:ref type='bibr' target='#b17'>[18]</ns0:ref> that are mostly discretizations of Fredholm integral equations of the first kind:</ns0:p><ns0:formula xml:id='formula_2'>1 0 K(s, t)f (t) dt = g(s), 0 ≤ s ≤ 1.</ns0:formula><ns0:p>The regularization test problems form the group regprob.</ns0:p><ns0:p>> matrixdepot (' regprob ') 12 -element Array { ASCIIString ,1}: ' baart ' ' blur ' ' deriv2 ' ' foxgood ' ' gravity ' ' heat ' ' parallax ' ' phillips ' ' shaw ' ' spikes ' ' ursell ' ' wing '</ns0:p><ns0:p>Each problem is a linear system Ax = b where the matrix A and vectors x and b are obtained by discretization (using quadrature or the Galerkin method) of K, f , and g. By default, we generate only A, which is an ill-conditioned matrix. The whole test problem will be generated if the parameter matrixonly is set to false, and in this case the output has type RegProb, which is defined as The full name of a matrix in Matrix Market comprises three parts: the collection name, the set name, and the matrix name. For example, the full name of the matrix BCSSTK14 in the set BCSSTRUC2 from the Harwell-Boeing Collection is Harwell-Boeing/bcsstruc2/bcsstk14. Note that both set name and matrix name are in lower case.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>,] dim : the dimension of the matrix . * [ type ,] row_dim , col_dim : the row and column dimensions . Groups : [' inverse ' , ' ill -cond ' , ' symmetric ' , ' pos -def '] References : M . D . Choi , Tricks or treats with the Hilbert matrix , Amer . Math . Monthly , 90 (1983) , pp . 301 -312. N . J . Higham , Accuracy and Stability of Numerical Algorithms , second edition , Society for Industrial and Applied Mathematics , Philadelphia , PA , USA , 2002; sec . 28.1.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>> 8 PeerJ</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>matrixdepot (' cauchy ') Cauchy matrix ============= Given two vectors x and y , the (i , j ) entry of the Cauchy matrix is 1/( x [ i ]+ y [ j ]). Input options : * [ type ,] x , y : two vectors . * [ type ,] x : a vector . y defaults to x . Comput. Sci. reviewing PDF | (CS-2015:12:8291:1:0:NEW 15 Mar 2016) Manuscript to be reviewed Computer Science * [ type ,] dim : the dimension of the matrix . x and y default to [1: dim ;].</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>= = = = = = = = = = = = = = = = = = = = Input options : * n : the dimension of the matrix ''' function randsym ( n ) A = zeros (n , n ) for j = 1: n for i = 1: j A [i , j ] = randn () if i != j ; A [j , i ] = A [i , j ] end end end return A end ''' random orthogonal matrix = = = = = = = = = = = = = = = = = = = = = = = = Input options :</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>></ns0:head><ns0:label /><ns0:figDesc>matrixdepot (' randsym ') random symmetric matrix = = = = = = = = = = = = = = = = = = = = = = = Input options : * n : the dimension of the matrix > matrixdepot (' randsym ' ' randorth ') random orthogonal matrix = = = = = = = = = = = = = = = = = = = = = = = = Input options : * n : the dimension of the matrix 13 PeerJ Comput. Sci. reviewing PDF | (CS-2015:12:8291:1:0:NEW 15 Mar 2016)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1: Documentation for the Wathen matrix</ns0:figDesc><ns0:graphic coords='16,62.25,137.44,487.50,492.70' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>></ns0:head><ns0:label /><ns0:figDesc>matrixdepot (' tridiag ') Tridiagonal Matrix = = = = = = = = = = = == = = = = = = = Construct a tridiagonal matrix of type Tridiagonal . Input options : * [ type ,] v1 , v2 , v3 : v1 and v3 are vectors of subdiagonal and superdiagonal elements , respectively , and v2 is a vector of diagonal elements . * [ type ,] dim , x , y , z : dim is the dimension of the matrix , x , y , z are scalars . x and z are the subdiagonal and superdiagonal elements , respectively , and y is the diagonal elements . * [ type ,] dim : x = -1 , y = 2 , z = -1. This matrix is also known as the second difference matrix . Groups : [' inverse ' , ' ill -cond ' , ' pos -def ' , ' eigen '] References : J . Todd , Basic Numerical Mathematics , Vol . 2: Numerical Algebra , Birkhauser , Basel , and Academic Press , New York , 1977 , p . 155. > matrixdepot (' tridiag ' , [2 ,5 ,6;] , ones (4) , [3 ,4 ,1;]) 4 x4 Tridiagonal { Float64 }: 1.0 3.0 0.0 0.0 2.0 1.0 4.0 0.0 0.0 5.0 1.0 1.0 16 PeerJ Comput. Sci. reviewing PDF | (CS-2015:12:8291:1:0:NEW 15 Mar 2016)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>immutable RegProb{T} A::AbstractMatrix{T} # matrix of interest b::AbstractVector{T} # right-hand side x::AbstractVector{T} # the solution to Ax = b end If r is a generated test problem, then r.A, r.b, and r.x are the matrix A and vector x and b respectively. If the solution is not provided by the problem, the output is stored as type RegProbNoSolution, which is defined as immutable RegProbNoSolution{T} A::AbstractMatrix{T} # matrix of interest b::AbstractVector{T} # right-hand side end For example, the test problem wing can be generated as follows. > matrixdepot (' wing ') A Problem with a Discontinuous Solution = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = Input options : * [ type ,] dim , t1 , t2 , [ matrixonly ]: the dimension of matrix is dim . t1 and t2 are two real scalars such that 0 < t1 < t2 < 1. If matrixonly = false , the matrix A and vectors b and x in the linear system Ax = b will be generated ( matrixonly = true by default ). * [ type ,] n , [ matrixonly ]: t1 = 1/3 and t2 = 2/3.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>download: /home/weijian/.julia/v0.4/MatrixDepot/src/../data/uf/Gset/G11.tar.gz (G11; wget progress-bar output trimmed) ...</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head /><ns0:label /><ns0:figDesc>For example, the following two functions are used for accessing matrices by number and range respectively, where matrix_name_list() returns a list of matrix names. The second function calls the first function in the inner loop.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>function matrixdepot ( num :: Integer )</ns0:cell></ns0:row><ns0:row><ns0:cell>matrixstrings = matrix_name_list ()</ns0:cell></ns0:row><ns0:row><ns0:cell>n = length ( matrixstrings )</ns0:cell></ns0:row><ns0:row><ns0:cell>if num > n</ns0:cell></ns0:row><ns0:row><ns0:cell>error (' There are $ ( n ) parameterized matrices ,</ns0:cell></ns0:row><ns0:row><ns0:cell>but you asked for the $ ( num ) -th .')</ns0:cell></ns0:row><ns0:row><ns0:cell>end</ns0:cell></ns0:row><ns0:row><ns0:cell>return matrixstrings [ num ]</ns0:cell></ns0:row><ns0:row><ns0:cell>end</ns0:cell></ns0:row><ns0:row><ns0:cell>function matrixdepot ( ur :: UnitRange )</ns0:cell></ns0:row><ns0:row><ns0:cell>matrixnamelist = AbstractString []</ns0:cell></ns0:row><ns0:row><ns0:cell>for i in ur</ns0:cell></ns0:row><ns0:row><ns0:cell>push !( matrixnamelist , matrixdepot ( i ))</ns0:cell></ns0:row><ns0:row><ns0:cell>end</ns0:cell></ns0:row><ns0:row><ns0:cell>return matrixnamelist</ns0:cell></ns0:row><ns0:row><ns0:cell>end</ns0:cell></ns0:row></ns0:table><ns0:note>> methods ( matrixdepot ) # 8 methods for generic function ' matrixdepot ': matrixdepot () ... matrixdepot ( name :: AbstractString ) ... matrixdepot ( name :: AbstractString , method :: Symbol ) ... matrixdepot ( props :: AbstractString ...) ... matrixdepot ( name :: AbstractString , args ...) ... matrixdepot ( num :: Integer ) ... matrixdepot ( ur :: UnitRange {T <: Real }) ... matrixdepot ( vs :: Union { Integer , UnitRange {T <: Real }}...) ...</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Predefined groups.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:note place='foot' n='2'>Git is a free and open source distributed version control system.</ns0:note>
<ns0:note place='foot' n='3'>http://sphinx-doc.org/ 4 http://matrixdepotjl.readthedocs.org 5 https://github.com/nolta/Winston.jl</ns0:note>
<ns0:note place='foot' n='6'>https://github.com/weijianzhang/MatrixDepot.jl 7 https://codecov.io/</ns0:note>
<ns0:note place='foot' n='8'>https://github.com/JuliaGraphs/LightGraphs.jl</ns0:note>
</ns0:body>
" | "Reply to Referees
Peer J Computer Science
“Matrix Depot: An Extensible Test Matrix Collection for Julia”
Weijian Zhang and Nicholas J. Higham
March 15, 2016
We thank the reviewers for their consideration. Referee 2 did not suggest any changes. We
reply in detail to Referee 1, whose excellent suggestions we have adopted.
I don’t understand, at least at the abstract, why (2a) and (2b) are different.
It seems to me that the regularization problems are simply a special case of the
generatable, parameterized matrices. That is, (2a) and (2b) seem the same to me.
(2b) just seems to be a particular set of generatable matrices, which also have the
right-hand side b and given solution x. But other matrices in (2a) could presumably
give A,b, and x. Likewise, many matrices in the UF collection also have both A and
b.
We agree and have adjusted the paper so that (2b) is treated as part of (2a).
Why treat (2b) as anything special? It seems to me that your matrix depot will
break when someone wants to include matrix problems from another domain that
include more than just r.A, r.b, and r.x (why not r.eigs, ...? r.coord for matrices
arising from a 2D/3D discretization, for which you want to keep the 2D or 3D
coordinates of each node/row/col of the matrix?).
Functions in Julia can return multiple values, and we now mention this on page 12. In addition, the user is free to define their own data types to include a variety of fields if needed.
This is now addressed in the paper. In Matrix Depot, we have defined type RegProb and
RegProbNoSolution (for regularization problems where the solutions are unknown).
For issue (3), I’m interested to know what you will do with a matrix from (2c) the
mix of applications. Suppose you have a matrix given to you with a fixed precision.
What would it mean to (say) create a quad precision version of that? When the
matrix only has 64 bit floating-point values at best?
Similarly, what if you ask for a matrix from the UF collection in rational form?
Do nearest rational representations get created? That sounds like it could be numerically hazardous, unless you use a lossless translation from double precision to a
ratio of integers. The rational format would be really ugly for 0.3333333333 which
might be one epsilon away from 1/3.
We can only generate the parametrized matrices in different numeric data types. This is now
clearly stated on page 2.
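For illustration only (Matrix Depot itself is written in Julia), the referee's point about double-to-rational conversion is easy to reproduce with Python's fractions module, whose float constructor performs exactly the kind of lossless translation mentioned:

from fractions import Fraction

x = Fraction(0.3333333333)   # exact, lossless conversion of the stored double
print(x)                     # an unwieldy ratio of large integers, not 1/3
print(x == Fraction(1, 3))   # False: the double is not exactly one third
print(Fraction(1, 3))        # the value the user probably intended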
As an aside, I will be renaming the ”University of Florida Sparse Matrix Collection” to ”The SuiteSparse Matrix Collection (formerly known as the University of
Florida Sparse Matrix Collection).” The web site will move from its current ufl.edu
address to a new one at tamu.edu. The content of the collection will remain the
same. Can Julia be easily modified, perhaps by the user, to reflect a change in URL
to the SuiteSparse aka UF collection? You might want to make a note of the upcoming change, in citation [8] perhaps. I will likely be able to continue to mirror
the collection at both sites, however.
Perhaps a comment is useful: ”In case the URL of the external collection changes,
you can give MatrixDepot the new URL by ...”.
A footnote has been added on page 2 to state the future renaming. The package is tested
regularly and we will promptly update the package if the URL of the external collection changes.
Does the Julia download of matrices from the UF collection preserve the meta
data in each UF Problem struct? Such as ”notes”? ”title”? ”author”? etc? Or
does that get lost? Some Problems have problem-specific meta data. For example, the
IMDB movie database has names for each row and column of the matrix. Another
Problem (Moqri/MISKnowledgemap) is a set of documents, and the abstract of each
document is included in the Problem. (matrix ID 2663).
We have updated the package. Now all the metadata is downloaded and can be accessed easily.
By default (meta = false), we only generate the matrix data. But if we set the keyword argument
meta = true, all the metadata will be returned in the form of a dictionary.
What do you do with the explicit zeros that are present in some sparse matrices?
Those are in the *.mtx file, for the Matrix Market format. I assume you download
that copy. Are they preserved in Julia? This is a minor nuance, but an important
one. You don't have to address this in the paper, just in the code. I preserve them
for MATLAB by including another binary sparse matrix, Problem.Zeros, which is 1
in the (i,j) position if the given matrix has an explicitly provided entry whose value
is zero there. Does that information get preserved? It's important to
keep it; that structure is important to the matrix problem.
Yes, they are preserved in Julia. If a matrix in the Matrix Market format has explicit zeros, the zeros are retained as stored entries in the generated sparse matrix.
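This matches the behavior of Julia's own sparse matrix type, which keeps explicitly stored zeros; a small stand-alone illustration (current Julia, independent of Matrix Depot):

    using SparseArrays

    # nnz counts stored entries, including explicitly stored zeros.
    A = sparse([1, 2, 3], [1, 2, 3], [1.0, 0.0, 3.0])
    nnz(A)             # 3 -- the stored zero at (2,2) is preserved
    count(!iszero, A)  # 2 -- mathematically nonzero entries
    dropzeros(A)       # would discard the stored zero if that were wanted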
The overall gist of my review is this. The matrix depot for Julia is user-extendible, which is great. Can the *content* of each matrix problem also be extendible, to include extra metadata that is specific to each problem? Can this be done without needing to rewrite the matrix depot interface? If so, the 'regularization problem' becomes just one in a host of possible special problems. Is that possible in Julia? I think such a framework would greatly strengthen the power of this matrix depot.
Yes, this is possible. As mentioned above, the user can write functions that return more than one output. They can also define their own data types to include extra metadata for their convenience.
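As a hypothetical illustration of such an extension (all names below are ours, not part of Matrix Depot), nothing in the interface needs to change:

    using SparseArrays

    # A user-defined problem type carrying domain-specific extra fields.
    struct PDEProblem{T}
        A::SparseMatrixCSC{T,Int}
        b::Vector{T}
        coord::Matrix{Float64}   # 2D/3D node coordinates (cf. r.coord above)
    end

    # An ordinary generator function returns it directly:
    mypde(n) = PDEProblem(spdiagm(0 => fill(2.0, n)), ones(n), rand(n, 2))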
" | Here is a paper. Please give your review comments after reading it. |
138 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Demand for high-speed wireless broadband internet service is ever increasing. Multipleinput-multiple-output (MIMO) Wireless LAN (WLAN) is becoming a promising solution for such high-speed internet service requirements. This paper proposes a novel algorithm to efficiently model the address generation circuitry of the MIMO WLAN interleaver. The interleaver used in the MIMO WLAN transceiver has three permutation steps involving floor function whose hardware implementation is most challenging due to the absence of corresponding digital hardware. In this work, we propose an algorithm with a mathematical background for the address generator, eliminating the need for floor function. The algorithm is converted into digital hardware for implementation on the reconfigurable FPGA platform. Hardware structure for the complete interleaver, including the read address generator and memory module, is designed and modeled in VHDL using Xilinx Integrated Software Environment (ISE) utilizing embedded memory and DSP blocks Spartan 6 FPGA. The functionality of the proposed algorithm is verified through exhaustive software simulation using ModelSim software. Hardware testing is carried out on Zynq 7000 FPGA using Virtual Input Output (VIO) and Integrated Logic Analyzer (ILA) core.</ns0:p><ns0:p>Comparisons with few recent similar works, including the conventional Look-Up Table (LUT) based technique, show the superiority of our proposed design in terms of maximum improvement in operating frequency by 196.83%, maximum reduction in power consumption by 74.27%, and reduction of memory occupancy by 88.9%. In the case of throughput, our design can deliver 8.35 times higher compared to IEEE 802.11n requirement.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>Introduction</ns0:head><ns0:p>The increasing use of multimedia services and the growth of graphics-based web contents have escalated the demand for high-speed wireless broadband communications. The use of more than one antenna at the transmitter and/or at the receiver aims to improve the transmission/reception rate substantially. Orthogonal Frequency Division Multiplexing (OFDM) is becoming a popular technique for high data rate wireless transmission <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. OFDM may be combined with multiple antennas at both the access point and mobile terminal to increase diversity gain and/or enhance system capacity on a time-varying multipath fading channel, resulting in a Multiple-Input-Multiple-Output (MIMO) OFDM system <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref> <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>. transceiver are described by some researchers in <ns0:ref type='bibr' target='#b12'>[12]</ns0:ref> <ns0:ref type='bibr' target='#b13'>[13]</ns0:ref>. Another recent work on the design of an FPGA-based address generator for a multi-standard interleaver is reported in the literature <ns0:ref type='bibr' target='#b14'>[14]</ns0:ref>. Actually, the authors made a combined implementation of the address generator for WLAN (802.11a/b/g), WiMAX, and 3GPP LTE, but not of the MIMO WLAN (802.11n). However, the implementations of <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref>[10] <ns0:ref type='bibr' target='#b11'>[11]</ns0:ref>[12] <ns0:ref type='bibr' target='#b13'>[13]</ns0:ref> <ns0:ref type='bibr' target='#b14'>[14]</ns0:ref> are not specifically focused on interleaver/deinterleaver and do not contain detailed implementation results leaving scope for design optimization with respect to resource utilization, providing compact design, resulting in higher throughput and reduced power consumption.</ns0:p><ns0:p>Very few papers reporting the hardware implementation of MIMO WLAN interleaver are available in the literature. In <ns0:ref type='bibr' target='#b15'>[15]</ns0:ref>, Zhang et al. presented a de-interleaver address generator implementation on a 0.13µm CMOS platform. The authors claimed that the implementation was also done on the FPGA platform but without any implementation result. 2-D translation of the interleaver equations for hardware simplicity was proposed in <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>. The final expressions derived are very complex and do not clearly explain the hardware design issues, especially for 64-QAM.</ns0:p><ns0:p>The implementation platform of this work is reported to be 65nm CMOS technology. Another recent work <ns0:ref type='bibr' target='#b16'>[16]</ns0:ref> reported by the authors of <ns0:ref type='bibr' target='#b15'>[15]</ns0:ref> claimed betterment over their previous work in terms of reduction in complexity and improvement in maximum operating frequency keeping the same implementation platform. The improvement claimed by the authors is due to exchanging steps between interleaver and de-interleaver. In <ns0:ref type='bibr' target='#b17'>[17]</ns0:ref>, the authors presented an FPGA-based implementation of the complete MIMO PHY modulator for IEEE 802.11n WLAN. 
The  The complete MIMO WLAN interleaver, including the proposed address generator algorithm, is transformed into digital hardware and implemented on Spartan 6 FPGA <ns0:ref type='bibr' target='#b18'>[18]</ns0:ref> using Xilinx ISE 12.1.</ns0:p><ns0:p> To reduce the resource and power consumption and to enhance the throughput, the embedded resources of FPGA like dual-port Block RAM <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref> and DSP blocks (DSP48A1) <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref> have been successfully interfaced and utilized in the hardware model. This approach makes the design very compact and highly efficient. Comparison with existing similar works endorses the superiority of our proposed design in terms of multiple FPGA parameters.</ns0:p><ns0:p> The functionality test of the address generator has been verified using ModelSim XE-III software.</ns0:p><ns0:p> Further, hardware testing of the algorithm has also been carried out using Virtual Input and Output (VIO) and Integrated Logic Analyser (ILA) module on Zynq 7000 FPGA board, which further validates the proposed algorithm.</ns0:p><ns0:p> The proposed design has been compared with a few recent implementations <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>, <ns0:ref type='bibr' target='#b15'>[15]</ns0:ref>, <ns0:ref type='bibr' target='#b16'>[16]</ns0:ref> by converting them into FPGA equivalent implementation using <ns0:ref type='bibr' target='#b21'>[21]</ns0:ref>. The comparison shows the superiority of the proposed design in terms of operating frequency, power consumption, and memory occupancy.</ns0:p><ns0:p> The performance of the proposed interleaver is compared with IEEE 802.11n. The comparison results show that the proposed interleaver delivers much higher throughput than the maximum throughput requirement of IEEE 802.11n.</ns0:p><ns0:p>The rest of the paper is organized as follows. Section 2 presents the theoretical background of interleaving in MIMO WLAN. Section 3 presents the proposed algorithm, including the mathematical background for the address generator. Description about the transformation of the proposed algorithm into hardware has been made in Section 4. Simulation results followed by FPGA implementation details have been reported in Sections 5 and 6, respectively. The concluding remarks are given in Section 7. In a MIMO WLAN transceiver, the encoded data stream obtained from the convolutional encoder is fed to a special type of block interleaver. Interleaving in 802.11n is a three-step process in which the first two steps provide spatial interleaving and the final step performs frequency interleaving <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>. The interleaving steps are defined in the form of three blocks shown in Fig. <ns0:ref type='figure' target='#fig_3'>1</ns0:ref>. The first step (B 1 ) ensures that adjacent coded bits are mapped onto non-adjacent subcarriers, while the second step (B 2 ) is responsible for the mapping of adjacent coded bits alternately onto less or more significant bits of the constellation, thus avoiding long runs of lowly reliable bits. If more than one spatial stream exists in the 802.11n physical layer, the third step, called frequency rotation (B 3 ), will be applied to the additional spatial streams. The frequency rotation ensures that the consecutive carriers used across spatial streams are not highly correlated.</ns0:p></ns0:div>
<ns0:div><ns0:head>Fig. 1. Block diagram of steps involved in interleaving process for MIMO WLAN</ns0:head><ns0:p>Here N is the block size corresponding to the number of coded bits per allocated sub-channels per OFDM symbol. d represents the number of columns in the interleaver, whose value is 13 and 18 for 20MHz and 40 MHz BW <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref>, respectively. The parameter s is defined as s = max (1, N BPSCS ), whereas N BPSCS is the number of coded bits per sub-carrier and takes values 1, 2, 4, or 6 for BPSK, QPSK, 16-QAM, or 64-QAM, respectively. i ss is the index of the spatial stream and N rot is the parameter used for defining different rotation for the 20MHz and 40MHz case. The operator % and represent modulo function and floor function, respectively.  </ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>Proposed Algorithm for Address Generator of Interleaver</ns0:head><ns0:p>The permutation steps described in the B 1 , B 2, and B 3 blocks of Fig. <ns0:ref type='figure' target='#fig_3'>1</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:head>Table 1(c) Part of interleaver write addresses with N bpscs = 6, N = 312, i ss =3, BW = 20MHz</ns0:head><ns0:p>The general validity of the proposed mathematical formulation can be established with the help of <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref>. As far as spatial permutation is concerned, the steps involved in IEEE 802.16e <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref> and IEEE 802.11n <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref> are identical. Additionally, the latter undergoes frequency rotation using the frequency interleaving step, as described by B 3 in Fig. <ns0:ref type='figure' target='#fig_3'>1</ns0:ref> for spatial streams other than the first.</ns0:p><ns0:p>Further, analysis of the 3 rd step results that the entire term beyond j k (i.e., J rot ) remains constant for a particular spatial stream and can be expressed as <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref> (1)</ns0:p><ns0:formula xml:id='formula_0'>𝑟 𝑘 = [𝑗 𝑘 -𝐽 𝑟𝑜𝑡 ]%𝑁 where J rot = [ {(𝑖 𝑠𝑠 -1) * 2}%3 + 3 ⌊ 𝑖 𝑠𝑠 -1 3 ⌋] * 𝑁 𝑟𝑜𝑡 * 𝑁 𝐵𝑃𝑆𝐶𝑆</ns0:formula><ns0:p>As the first stream for all modulation schemes undergoes no frequency rotation, hence</ns0:p><ns0:formula xml:id='formula_1'>𝑟 𝑘 = [𝑗 𝑘 -0]%𝑁 = [𝑗 𝑘 ]%𝑁 = 𝑗 𝑘</ns0:formula><ns0:p>For subsequent streams, the value of J rot differs for each spatial stream, modulation schemes, and BWs. All such possible values of J rot are listed in Table <ns0:ref type='table'>3</ns0:ref>. The expression of j k so derived for all modulation schemes in <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref> if substituted in Eq. 1 gives three new equations. The final expressions obtained and the proposed mathematical formulations developed in this work generate the same results and are identical to results obtained through direct implementation of B 1 to B 3 steps. (2)</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 2(a) Proposed algorithm for N bpscs = 1 or 2 (BPSK / QPSK) with all N, i ss , and BW</ns0:head></ns0:div>
<ns0:div><ns0:head>Table 2(b)</ns0:head><ns0:formula xml:id='formula_2'>k n ( 16 -QAM ) = { D * (i + I) + (j + J) when {j < (D -J)}[{i < (C - D * (i + I) + (j + J + 1) when [{j < (D -J)}&(j%2 = 0)]&[{i < (C -I)}&(i%2 = 1)] D * {i -(C -I)} + (j + J) when {j < (D -J)}& [{i ≥ (C -I)}&(i%2 = 0)] D * (i -(C -I)) + (j + J + 1) when [{j < (D -J)}&(j%2 = 0)]& [{i ≥ (C -I)}&(i%2 = 1)] D * (i + I) + (j + J -1) when [{j < (D -J)}&(j%2 = 1)]&[{i < (C -I)}&(i %2 = 1)] D * (i -(C -I)) + (j + J -1) when [{j < (D -J)}&(j%2 = 1)] & [{i ≥ (C -I)}&(i%2 = 1)] D * (i + I + 1) + {j -(D -J)} when {j ≥ (D -J)} & [{i < (C -I -1)}&(i%2 = 0)] D * (i + I + 1) + {j -(D -J -1)} when [{j ≥ (D -J)}&(j%2 = 0)] & [{i < (C -I -1)}&(i%2 = 1)] D * {i -(C -I -1)} + {j -(D -J)} when {j >= (D -J)} & [{i ≥ (C -I -1)}&(i%2 = 0)] D * {i -(C -I -1)} + {j -(D -J -1)} when [{j ≥ (D -J)}&(j%2 = 0)]& [{i ≥ (C -I -1</ns0:formula></ns0:div>
<ns0:div><ns0:head>)}&(i%2 = 1)] D * (i + I + 1) + {j -(D -J + 1)} when [{j ≥ (D -J)} &(j%2 = 1)] & [{i < (C -I -1)}&(i%2 = 1)] D * {i -(C -I -1)} + {j -(D -J + 1)} when [{j ≥ (D -J)}&(j%2 = 1)] & [{i ≥ (C -I -1)}&(i%2 = 1)]</ns0:head><ns0:p>(3) </ns0:p></ns0:div>
<ns0:div><ns0:head>Table 3 Values of J rot for all modulation schemes, spatial streams, and BWs</ns0:head></ns0:div>
<ns0:div><ns0:head>{ D * (i + I) + (j + J) when {j < (D -J)}[{i < (C -I)} (i% D * (i + I) + (j + J + 2) when [{j < (D -J)} (j%3 = 0)][{i < D * (i + I) + (j + J + 1) when [{j < (D -J)}&(j%3 ≠ 2)]&[{i < (C -I)}&(i%3 = 2)] D * {i -(C -I)} + (j + J) when {j < (D -J)}&[{i ≥ (C -I)}&(i%3 = 0)] D * {i -(C -I)} + (j + J + 2) when [{j < (D -J)}&(j%3 = 0)]&[{i ≥ (C -I)}&(i%3 = 1)] D * {i -(C -I)} + (j + J + 1)</ns0:head><ns0:p>when</ns0:p><ns0:formula xml:id='formula_3'>[{j < (D -J)}&(j%3 ≠ 2)]&[{i ≥ (C -I)}&(i%3 = 2)] D * (i + I) + (j + J -1) when [{j < (D -J)}&(j%3 ≠ 0)]&[{i < (C -I)}&(i%3 = 1)] D * {i -(C -I)} + (j + J -1) when [{j < (D -J)}&(j%3 ≠ 0)]&[{i ≥ (C -I)}&(i % 3 = 1)] D * (i + I) + (j + J -2) when [{j < (D -J)}&(j%3 = 2)]&[{i < (C -I)}&(i%3 = 2)] D * {i -(C -I)} + (j + J -2 ) when [{j < (D -J)}&(j%3 = 2)]&[{i ≥ (C -I)}&(i%3 = 2)] D * (i + I + 1) + {j -(D -J )} when {j ≥ (D -J)}&[{i < (C -I -1)}&(i%3 = 0)] D * (i + I + 1) + {j -(D -J -2)} when [{j ≥ (D -J)}&(j%3 = 0)]&[</ns0:formula></ns0:div>
<ns0:div><ns0:head>{i < (C -I -1)}&(i%3 = 1)] D * (i + I + 1) + {j -(D -J -1)} when [{j ≥ (D -J)}&(j%3 ≠ 2)]&[{i < (C -I -1)}&(i%3 = 2)] D * {i -(C -I -1)} + {j -(D -J)} when {j ≥ (D -J)}&[{i ≥ (C -I -1)}&(i%3 = 0)] D * {i -(C -I -1)} + {j -(D -J -2)} when [{j ≥ (D -J)}&(j%3 = 0)]&[{i ≥ (C -I -1)}&(i%3 = 1)] D * {i -(C -I -1)} + {j -(D -J -1)} when [{j ≥ (D -J)}&(j%3 ≠ 2)]&[{i ≥ (C -I -1)}&( i % 3 = 2)] D * (i + I + 1) + {j -(D -J + 1)} when [{j ≥ (D -J)}&(j%3 ≠ 0)]&[{i < (C -I -1)}&(i%3 = 1)] D * {i -(C -I -1)} + {j -(D -J + 1)} when [{j ≥ (D -J)}&(j%3 ≠ 0)]&[{i ≥ (C -I -1)}&(i%3 = 1)] D * (i + I + 1) + {j -(D -J + 2)} when [{j ≥ (D -J)}&(j%3 = 2)] &[{i < (C -I -1)}&(i%3 = 2)] D * {i -(C -I -1)} + {j -(D -J + 2)} when [{j ≥ (D -J)}&(j%3 = 2)] &[{i ≥ (C -I -1)}&(i%3 = 2)]</ns0:head><ns0:p>(4)</ns0:p><ns0:p>The work reported in this paper includes interleaver design for all four modulation schemes (i.e., BPSK, QPSK, 16-QAM, and 64-QAM) as defined in the IEEE 802.11n standard. However, the proposed algorithm may be generalized, as follows, to include any other modulation scheme beyond the above standard.</ns0:p><ns0:p>1) Define the number of coded bits per sub-carrier (N bpscs ) for the modulation scheme beyond the above standard and compute s = max (1, N bpscs ). </ns0:p></ns0:div>
<ns0:div><ns0:head>Manuscript to be reviewed</ns0:head><ns0:p>Computer Science all values of N. All other values offset x to be computed using the correlation between the subsequent addresses. 6) Group the above mathematical expressions (obtained from step 5) according to the specific modulation scheme. These expressions exclude floor function, hence, suitable for implementation on the hardware platform. 7) Functional verification of the algorithm may be carried out by comparing the addresses of steps 3 and 6 using suitable software.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>Transformation into Hardware</ns0:head><ns0:p>This section describes the transformation of the proposed address generator algorithm into digital hardware. The top-level view of the complete interleaver consisting of the proposed address generator and memory block is shown in Fig. <ns0:ref type='figure' target='#fig_6'>2</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Memory Block</ns0:head><ns0:p>The detailed arrangement of the memory block for one spatial stream having a similar structure as in <ns0:ref type='bibr' target='#b23'>[23]</ns0:ref> is shown in Fig. <ns0:ref type='figure'>3</ns0:ref>. The structure is generic and is applicable to all spatial streams. It receives three inputs from the address generator block; write address (WA x ), read address (RA x ), and sel x . The requirement of two memory blocks for block interleaving is accomplished here with a dual-port memory (with Port A and B) where the read and write operations can be performed simultaneously. As seen in Fig. <ns0:ref type='figure'>3</ns0:ref>, the first 288H locations are used as Port A and the next 288H locations as Port B. An adder is used to insert the bias of 288H while generating addresses for Port B. When one port is being written, another one is read, and vice versa.</ns0:p><ns0:p>Swapping between read/write operations at the end of a cycle is performed using the signal sel x , which is generated using a toggle flip flop.</ns0:p></ns0:div>
<ns0:div><ns0:head>Fig. 2. Top level view of complete interleaver</ns0:head></ns0:div>
<ns0:div><ns0:head n='4.2'>Address Generator</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:1:0:NEW 31 Mar 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The address generator is the heart of the interleaver. The encoding schemes used in this work for the two inputs, BW and N cbpsc of address generator, are described in Table <ns0:ref type='table' target='#tab_10'>4</ns0:ref>. The i ss1 -i ss4 represent the four different spatial streams of the address generator each consisting of write (WA x ), read (RA x ) address, and select signal (sel x ) output. As shown in Fig. <ns0:ref type='figure'>4</ns0:ref>, in the write address generator, a multiplexer is used to route the desired WA x from four possible sources based on the value of N cbpsc for a particular spatial stream, I ssx .</ns0:p><ns0:p>Figure <ns0:ref type='figure'>5</ns0:ref>(a) and (b) show the hardware used for the generation of row count (JCOUNT) and column count (ICOUNT), respectively, using up-counters and comparators. As per <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref>, column number is defined as, C = 13/18, for BW = 20/40MHz. Circuit arrangement for the generation of row number, D using BW and N cbpsc is shown in Fig. <ns0:ref type='figure'>6</ns0:ref>. Similarly, Fig. <ns0:ref type='figure'>7</ns0:ref>(a) and (b) describe hardware used for the generation of ICOUNT < (C-I x ), ICOUNT ≥ (C-I x ), JCOUNT < (D-J x ), and JCOUNT ≥ (D-J x ) signals. Here I x and J y is the column and row offset value respectively, used while computing the addresses and is defined in Table <ns0:ref type='table'>5</ns0:ref>. </ns0:p></ns0:div>
<ns0:div><ns0:head>Table 4(a) Encoding of BW</ns0:head></ns0:div>
<ns0:div><ns0:head>Table 4(b) Encoding of N cbpsc</ns0:head></ns0:div>
<ns0:div><ns0:head>Fig. 5. Scheme showing generation of (a) row count and (b) column count</ns0:head><ns0:p>The hardware required for the generation of RA x is shown in Fig. <ns0:ref type='figure'>8</ns0:ref>. Like the write address generator, the structure developed for the generation of RA x is also generic and is applicable to Manuscript to be reviewed Manuscript to be reviewed </ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head n='5'>Simulation Results</ns0:head><ns0:p>The digital hardware of the MIMO WLAN interleaver is translated into a VHDL program using Xilinx ISE 12.1. The proposed design of interleaver is simulated, and the functionality verification is done using ModelSim XE-III. The address generation circuitry of the interleaver is tested for all BWs, spatial streams, and modulation schemes, out of which two results (for BW = 0, N bpscs = 00 and BW = 1, N bpscs = 11) are presented in Fig. <ns0:ref type='figure' target='#fig_12'>11</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:head>I ss4</ns0:head><ns0:p>). The write address sequence generated by the proposed interleaver for spatial stream 1 (i.e., int_add_1) is 0, 4, 8, 12, … Similarly, the address sequence for spatial stream 2 (i.e., int_add_2) is 26, 30, 34, and so on. The last address sequence (i.e., int_add_4) of Fig. <ns0:ref type='figure' target='#fig_12'>11</ns0:ref>(a) tallies with the address sequences shown in Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref>(a). Automatic address verification has also been carried out between the addresses generated by our proposed algorithm and the addresses obtained through steps B 1 -B 3 of Section 2 involving floor function by running a separate MATLAB program. This verification further endorses the correctness of the proposed algorithm. </ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>FPGA Implementation Results</ns0:head><ns0:p>The proposed design of the interleaver is transformed into a VHDL model using Xilinx ISE 12.1 and is implemented on Xilinx Spartan-6 FPGA. Despite our exhaustive literature survey, any similar implementation on the FPGA platform has not been noticed. As a result, the conventional LUT-based approach has been implemented on the same FPGA platform utilizing Block RAM (BRAM) to house the address LUTs. Four dual port BRAM memory blocks are used to implement the interleaver memory in both designs. Comparative analysis of the two implementations in terms of device utilization is made in Table <ns0:ref type='table'>7</ns0:ref>. The betterment of the proposed technique can be quantified in terms of embedded memory utilization (88.9% memory PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:1:0:NEW 31 Mar 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science block saving) and operating speed (37.8% speed improvement). The use of DSP blocks as multiplier improves the performance of the circuit by reducing delay. The circuit works at a maximum clock frequency (f) of 208.7MHz with 28.62mW of total power consumption, which includes static and dynamic power. The use of FPGA's embedded DSP blocks (DSP48A1s) as a multiplier and embedded dual-port memory (BRAM) helps to reduce the memory access time and in turn, improves the throughput of the system. Resource-efficiency and compact design are the key contributors in reducing the power consumption of the interleaver. In addition, Spartan 6 FPGA itself is known for better power efficiency, increased productivity, and higher performance implementation platform.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 7 Device Utilization Summary</ns0:head><ns0:p>The hardware testing of the address generator for the MIMO WLAN interleaver has been performed using VIO and ILA. VIO and ILA are the customizable cores that facilitate both monitoring and driving internal FPGA signals in real-time. Fig. <ns0:ref type='figure' target='#fig_6'>12</ns0:ref> shows the block level design of the test environment using VIO and ILA wherein the proposed address generator block An external clock (clk) signal drives all the modules synchronously.</ns0:p></ns0:div>
<ns0:div><ns0:head>Fig. 12. Test arrangement of the address generator using VIO and ILA</ns0:head><ns0:p>The throughputs of the proposed interleaver for all four modulation schemes are computed using <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref> and presented in Table <ns0:ref type='table' target='#tab_1'>8</ns0:ref>. Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_4'>Tp = f x N bpscs x i ss (5)</ns0:formula></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The last column of Table <ns0:ref type='table' target='#tab_1'>8</ns0:ref> justifies our high throughput claim of the proposed interleaver. This provides the opportunity to implement the proposed design in relatively slower and lower-cost FPGAs as well, thereby providing a cost-effective solution.</ns0:p><ns0:p>Besides, a comparison with few works has been made based on the equivalence drawn between FPGA and ASIC implementations in <ns0:ref type='bibr' target='#b21'>[21]</ns0:ref>. The comparative study of the proposed implementation regarding key FPGA parameters shows betterment over other similar recent works and is presented in Table <ns0:ref type='table' target='#tab_13'>9</ns0:ref> and Fig. <ns0:ref type='figure' target='#fig_3'>13</ns0:ref>. The proposed circuit shows betterment over <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>, <ns0:ref type='bibr' target='#b15'>[15]</ns0:ref>, <ns0:ref type='bibr' target='#b16'>[16]</ns0:ref>, and LUT-based technique in terms of maximum operating frequency. In terms of power consumption, our implementation is found to be most efficient among the design in reference <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>, <ns0:ref type='bibr' target='#b15'>[15]</ns0:ref>, <ns0:ref type='bibr' target='#b16'>[16]</ns0:ref> of Fig. <ns0:ref type='figure' target='#fig_3'>13</ns0:ref>. As direct implementation of floor function is not possible, improvement in terms of memory block used (BRAM) and clock frequency over the LUT-based technique may be considered as the performance improvement of our novel algorithm due to the elimination of floor function from the interleaver address generator circuitry.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_13'>9</ns0:ref> Comparative study with similar works Fig. <ns0:ref type='bibr' target='#b13'>13</ns0:ref> Performance comparison with <ns0:ref type='bibr' target='#b15'>[15]</ns0:ref>, <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>, <ns0:ref type='bibr' target='#b16'>[16]</ns0:ref>, and LUT-based work</ns0:p><ns0:p>Massive MIMO system, a key technology being deployed in the 5G system, employs an array of a large number of transmitting antennas at the base station to achieve high throughput has been investigated to compare our FPGA implementation results. Tan et al. <ns0:ref type='bibr' target='#b24'>[24]</ns0:ref> have demonstrated CMOS implementation of the message-passing detector (MPD) designed for a 256-QAM massive MIMO system supporting 32 concurrent mobile users in each time-frequency resource with 2.76 Gbps throughput. As far as throughput is concerned, our proposed interleaver on the FPGA platform shows a competitive result with that of <ns0:ref type='bibr' target='#b24'>[24]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='7'>Conclusions</ns0:head><ns0:p>This Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Part of interleaver write addresses</ns0:head><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>implementation is further extended to ASIC with 65nm CMOS technology. The authors tabulated the FPGA and ASIC implementation results of the complete MIMO PHY modulator for IEEE 802.11n WLAN without mention of the interleaver's resource occupancy or power consumption separately. The above-mentioned issues have opened up further research scope in improving the implementation of MIMO WLAN interleaver that complies with the high throughput data transmission requirements. In this paper, we propose a novel design of interleaver used in a 4x4 MIMO WLAN transceiver. The proposed address generator algorithm eliminates the requirement of floor function from the address generator of the MIMO WLAN interleaver. The key contributions of this work are: PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:1:0:NEW 31 Mar 2021) Manuscript to be reviewed Computer Science  The mathematical modeling of the new algorithm has been derived with general validity.  The novel address generator algorithm has been generalized to accommodate more modulation schemes if required.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:1:0:NEW 31 Mar 2021) Manuscript to be reviewed Computer Science 2 Interleaving in IEEE 802.11n</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>schemes, spatial streams, and BWs, are represented by (2)-(4).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Table 1 (</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>a) Part of interleaver write addresses with N bpscs = 1, N = 52, i ss =4, BW = 20MHz Table 1(b) Part of interleaver write addresses with N bpscs = 4, N = 208, i ss =2, BW = 20MHz</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Proposed algorithm for N bpscs = 4 (16-QAM) with all N, i ss , and BW PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:1:0:NEW 31 Mar 2021) Manuscript to be reviewed Computer Science Table 2(c) Proposed algorithm for N bpscs = 6 (64-QAM) with all N, i ss , and BW k n(QPSK -BPSK) = { D * (i + I) + (j + J) when j < (D -J) and i < (C -I) D * {i -(C -I)} + (j + J) when j < (D -J) and i ≥ (C -I) D * ( i + I + 1 ) + { j -( D -J )} when j ≥ (D -J) and i < (C -I -1) D * {i -(C -I -1)} + {j -(D -J)} when j ≥ (D -J) and i ≥ (C -I -1)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>k n ( 64 -QAM ) = PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:1:0:NEW 31 Mar 2021) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>2 )</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Define interleaver depth (N), number of columns (d), and compute the intermediate addresses after spatial interleaving (j k ) by implementing B 1 -B 2 steps. 3) Compute the final memory addresses (r k ) by implementing step B 3 with appropriate values of frequency rotation parameter (N rot ) corresponding to permissible bandwidths (BWs) for all four values of spatial streams (i ss ). 4) Arrange the addresses obtained in step 3 in (N/N rot ) x d tabular form with j and i as row and column number, respectively. 5) Identify the correlation between the subsequent addresses and re-arrange each address in (N/N rot )*(i ± offset 1 ) + (j ± offset 2 ) format. The offset x = 0 with i ss = 1 for PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:1:0:NEW 31 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Fig. 3 .Fig. 4 .</ns0:head><ns0:label>34</ns0:label><ns0:figDesc>Fig. 3. Internal structure of memory block</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>all the spatial streams. The first and second level multiplexers select one of the values of interleaver depth from the inputs with BW and mod_typ signal. The rd_count is a 10-bit up counter and generates RA x . While progressing through the count values, when the rd_count value equals the output of M 1, a reset pulse is generated by the comparator, and rd_count goes to the initial state to start another cycle.PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:1:0:NEW 31 Mar 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Fig. 6 .Fig. 7 Table 5</ns0:head><ns0:label>675</ns0:label><ns0:figDesc>Fig. 6. Scheme for generation of number of rows (D)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Fig. 8 .Fig. 9 . 2 Fig. 10 .</ns0:head><ns0:label>89210</ns0:label><ns0:figDesc>Fig. 8. Circuit for generation of read address (RA x )</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>(a)-(b). The last four signals (int_add_1 to int_add_4) of Fig. 11(a) and (b) show the sequence of write addresses generated in synchronization with a clock signal (clk) for all the four spatial streams of the interleaver (I ss1 -</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Fig. 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Fig. 11. Write addresses (WA x ) for (a) BW=0 (20MHz), N bpscs = 00 (BPSK), (b) BW=1 (40MHz), N bpscs = 11 (64-QAM)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc>WLAN_MIMO_NEW_0) is placed in the middle of the VIO (left side) and ILA (right side) blocks. The VIO injects user-defined RESET, BW, and NBPSCS signals. The outputs generated by the address generator (INT_ADD_1, 2, 3, and 4) are fed to the ILA and VIO for verification.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc>work demonstrates the design and implementation of novel interleaver hardware on the FPGA platform to be used in OFDM-based MIMO WLAN applications. A new algorithm has been proposed for the address generator of the interleaver eliminating the requirement of floor PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:1:0:NEW 31 Mar 2021) Manuscript to be reviewed Computer Science function, and is supported by the mathematical formulation with general validity. The algorithm is transformed into the digital circuit and is modeled using VHDL software. Simulation results and hardware testing verify the functionality of the proposed algorithm. Hardware implementation of the VHDL model using Xilinx ISE is done and is tested on Xilinx Spartan 6 FPGA. Efficient design and use of FPGA's embedded resources during implementation enables betterment over a few recent similar works and conventional design in terms of multiple FPGA parameters and interleaver throughput.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='38,42.52,178.87,525.00,213.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='39,42.52,178.87,270.00,307.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='40,42.52,178.87,290.25,231.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='41,42.52,178.87,245.25,223.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='42,42.52,178.87,463.50,114.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='43,42.52,178.87,247.50,371.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='45,42.52,181.57,328.50,264.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='46,42.52,181.57,337.50,339.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='47,42.52,204.52,414.75,536.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='48,42.52,204.52,525.00,375.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='49,42.52,178.87,525.00,124.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='50,42.52,178.87,525.00,219.00' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 6 (b)</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Encryption of signals II6 and JJ6</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 8</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Throughput comparison with IEEE 802.11n </ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:1:0:NEW 31 Mar 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 (on next page)</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 1 (</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>a) Part of interleaver write addresses with N bpscs = 1, N = 52, i ss =4, BW = 20MHz</ns0:cell></ns0:row><ns0:row><ns0:cell>Table 1(b) Part of interleaver write addresses with N bpscs = 4, N = 208, i ss =2, BW = 20MHz</ns0:cell></ns0:row><ns0:row><ns0:cell>Table 1(c) Part of interleaver write addresses with N bpscs = 6, N = 312, i ss =3, BW = 20MHz</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:1:0:NEW 31 Mar 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>1 2 Table 1 ( a )</ns0:head><ns0:label>21a</ns0:label><ns0:figDesc>Part of interleaver write addresses with N bpscs = 1, N = 52, i ss =4, BW = 20MHz 3</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Column no(i)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Row no(j)</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>…</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>12</ns0:cell></ns0:row><ns0:row><ns0:cell>0</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>17</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>…</ns0:cell><ns0:cell>49</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>9</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>18</ns0:cell><ns0:cell>22</ns0:cell><ns0:cell>…</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>10</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>19</ns0:cell><ns0:cell>27</ns0:cell><ns0:cell>…</ns0:cell><ns0:cell>51</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>11</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>28</ns0:cell><ns0:cell>…</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>12</ns0:cell></ns0:row><ns0:row><ns0:cell>4 5</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 1 (</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>b) Part of interleaver write addresses with N bpscs = 4, N = 208, i ss =2, BW = 20MHz 7</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Column no(i)</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 1 (</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>c) Part of interleaver write addresses with N bpscs = 6, N = 312, i ss =3, BW = 20MHz 11</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Column no(i)</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:1:0:NEW 31 Mar 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 2 (on next page)</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Proposed algorithm</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 2 (</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>a) Proposed algorithm for N bpscs = 1 or 2 (BPSK / QPSK) with all N, i ss and BW</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 2 (</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>b) Proposed algorithm for N bpscs = 4 (16-QAM) with all N, i ss and BW Table 2(c) Proposed algorithm for N bpscs = 6 (64-QAM) with all N, i ss and BW</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 4 (</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>a) Encoding of BW</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Bandwidth (BW)</ns0:cell><ns0:cell>Encoded bit</ns0:cell></ns0:row><ns0:row><ns0:cell>20MHz</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>40MHz</ns0:cell><ns0:cell>1</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 4 (</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>b) Encoding of N cbpsc</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Modulation Scheme</ns0:cell><ns0:cell>Encoded bits</ns0:cell></ns0:row><ns0:row><ns0:cell>(N cbpsc )</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>BPSK (N cbpsc = 1)</ns0:cell><ns0:cell>00</ns0:cell></ns0:row><ns0:row><ns0:cell>QPSK (N cbpsc = 2)</ns0:cell><ns0:cell>01</ns0:cell></ns0:row><ns0:row><ns0:cell>16-QAM (N cbpsc = 4)</ns0:cell><ns0:cell>10</ns0:cell></ns0:row><ns0:row><ns0:cell>64-QAM (N cbpsc = 6)</ns0:cell><ns0:cell>11</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Throughput comparison with IEEE 802.11n</ns0:figDesc><ns0:table><ns0:row><ns0:cell>This work</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:1:0:NEW 31 Mar 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 9 (on next page)</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Comparative study with similar works</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:1:0:NEW 31 Mar 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_14'><ns0:head>Table 9</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Comparative study with similar works</ns0:figDesc><ns0:table><ns0:row><ns0:cell>FPGA Parameters</ns0:cell><ns0:cell>This work</ns0:cell><ns0:cell>[15]</ns0:cell><ns0:cell>[7]</ns0:cell><ns0:cell>[16]</ns0:cell><ns0:cell>LUT Based</ns0:cell></ns0:row><ns0:row><ns0:cell>Maximum clock frequency, f</ns0:cell><ns0:cell>208.7 MHz</ns0:cell><ns0:cell>109.38MHz</ns0:cell><ns0:cell>70.31MHz</ns0:cell><ns0:cell>125MHz</ns0:cell><ns0:cell>151.45MHz</ns0:cell></ns0:row><ns0:row><ns0:cell>Power consumption, P</ns0:cell><ns0:cell>28.62mW</ns0:cell><ns0:cell>111.24mW</ns0:cell><ns0:cell>48mW</ns0:cell><ns0:cell>Not available</ns0:cell><ns0:cell>28.62mW</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "
31st Mar, 2021
Dear Editor,
Thank you for allowing a resubmission of our manuscript, with an opportunity to address the reviewers’ comments.
We thank you and all the reviewers for the generous comments on the manuscript and valuable suggestions for improvement.
We have edited the manuscript significantly to address all the concerns. The point-by-point response to the editor’s and reviewers’ comments are given below.
We believe that the manuscript is now suitable for publication in PeerJ Computer Science.
Best regards,
Pijush Kanti Dutta Pramanik
Dept. of Computer Science & Engineering
National Institute of Technology, Durgapur, India
(On behalf of all authors)
Response to Reviewer’s Comments
Reviewer 1
Basic reporting
In this work, the authors propose a novel algorithm to efficiently model the address generation circuitry of MIMO WLAN interleaver. The interleaver used in MIMO WLAN transceiver has three steps of permutation involving floor function whose hardware implementation is considered to be most challenging due to the absence of corresponding digital hardware. Experimental results are promising, comparisons with few recent similar works including the conventional Look-Up Table (LUT) based technique demonstrated the superiority of the proposed design in terms of maximum improvement in operating frequency by 196.83%, maximum reduction in power consumption by 74.27% and reduction of memory occupancy by 88.9%.
Experimental design
Without comments
Validity of the findings
1 - More comparisons with previous works could be a nice complement for the current manuscript. Then, the authors could demonstrate how the proposed approach outperform the current state of the art.
2 - A post-synthesis simulation could be a nice complement for the current manuscript.
Response:
1 – Extensive literature survey is made once again to identify similar recent works if any. Two more recent works have been identified and discussed in the literature review section (Introduction section). But due to not having FPGA implementation results for interleaver alone, the results couldn’t be compared with our work.
2- Table 7 presents post-synthesis simulation results only.
Comments for the author
In general, the manuscript is interesting, well organized and deserves to be published. There are a little grammatical/style error. In my opinion, a grammar/style revision has to be carried out before the manuscript can be considered for publication.
Response:
Based on the suggestion of the reviewer, the entire paper is scrutinized for possible grammatical/style error and corrected. The authors are thankful to the reviewer for his/her valuable comments.
Reviewer: Vaibhav Rupapara
Basic reporting
In this work, according to the authors, they propose an algorithm with the mathematical background for the address generator eliminating the need for floor function. The algorithm is converted into digital hardware for implementation on the reconfigurable FPGA platform. Hardware structure for the complete interleaver including the reading address generator and the memory module is designed and modeled in VHDL using Xilinx Integrated Software Environment (ISE) utilizing embedded memory and DSP blocks of the target FPGA. in the end, the functionality of the proposed algorithm is verified through exhaustive software simulation. The authors has done work for high-speed communication which is an appreciate able work domain.
I have a concern that can be helpful to improve the quality of the manuscript.
Experimental design
1- Experimental approach of the study is not clear, the author should summarize it in a clear pattern so it can be easy for the reader to understand it.
2- In the abstract, the author mentions that they verified the functionality of the proposed algorithm through exhaustive software simulation, please elaborate it why? is there any other method to verify? if yes then why not others?
Response:
1 – As per the suggestion of the reviewer, the introduction section is re-written to summarize work and experimental approach so that it is easy to understand by the readers.
2- To verify the validity of the proposed algorithm, we performed extensive software simulation. In addition, as per the suggestion of the reviewer, the design is tested on Xilinx Zynq-7000 FPGA using Virtual Input Output (VIO) and Integrated Logic Analyser (ILA) tools as well. Fig. 12 shows the block design of this testing arrangement. The output so obtained also conform to the simulation results presented in Fig 11.
Validity of the findings
No comment
Comments for the author
1- The major issue is the presentation of content it should be better and understandable.
2- English grammar and typos should be checked thoroughly.
3- Some keypoint should be add in the introduction section for the ease of the reader to understand the article's contribution.
Response:
1- Suggestion of the reviewer is well accepted and attempts have been made to improve the readability of the paper.
2- Based on the suggestion of the reviewer, the entire paper is scrutinized for possible grammatical/style error and corrected.
3- Introduction section has been re-written to summarize work and experimental approach so that it is easy to understand by the readers.
The authors are thankful to the reviewer for his/her valuable comments.
Reviewer 3
Basic reporting
.
Experimental design
.
Validity of the findings
.
Comments for the author
Highlights the main scientific contributions in section 1.
Section 5 and 6 are the main section of the paper, however they are extremely poor. Authors must
insert more information and results.
Table 8: If the maximum throughput requirement of IEEE 802.11n is 600 Mbps which is advantage in work with more? The authors must clarify this information.
Table 9: The power consumption increases non-linearly with frequency cube (P ≈ F3). How, does the proposal have small consumption then literature? The authors must clarify this information. Is this information dynamic or static power?
Response:
Introduction section has been re-written to highlights the main scientific contributions (Section 1).
As per the suggestion of the reviewer, Section 5 and 6 have been improved with additional information and results.
In this work, the design has been implemented on Xilinx Spartan-6 FPGA. Due to our efficient hardware design which utilizes the embedded resources like memories and DSP blocks, the throughput is reported to be higher than the requirement. This provides opportunity to implement the proposed design in relatively slower and lower cost FPGAs as well thereby providing a cost-effective solution.
The proposed address generator is implemented on Xilinx Spartan6 FPGA. Following are the reasons for being the design to be working in higher speed as well as consuming lower power:
1) Use of one embedded dual port memory (BRAM) instead of external memory/ two single port memory. This reduces memory access time and speed up the operation.
2) Use of embedded DSP block (DSP48A1s) of the FPGA as multiplier. Multiplication otherwise is computationally intensive operation, leading to slower operation, higher resource utilization and larger power consumption.
3) Moreover, Spartan 6 FPGA itself offer better power efficiency increased productivity and higher performance.
The power consumption data reported in the paper is the total power consumption which includes dynamic and static power.
" | Here is a paper. Please give your review comments after reading it. |
139 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Demand for high-speed wireless broadband internet service is ever increasing. Multipleinput-multiple-output (MIMO) Wireless LAN (WLAN) is becoming a promising solution for such high-speed internet service requirements. This paper proposes a novel algorithm to efficiently model the address generation circuitry of the MIMO WLAN interleaver. The interleaver used in the MIMO WLAN transceiver has three permutation steps involving floor function whose hardware implementation is the most challenging task due to the absence of corresponding digital hardware. In this work, we propose an algorithm with a mathematical background for the address generator, eliminating the need for floor function. The algorithm is converted into digital hardware for implementation on the reconfigurable FPGA platform. Hardware structure for the complete interleaver, including the read address generator and memory module, is designed and modeled in VHDL using Xilinx Integrated Software Environment (ISE) utilizing embedded memory and DSP blocks of Spartan 6 FPGA. The functionality of the proposed algorithm is verified through exhaustive software simulation using ModelSim software. Hardware testing is carried out on Zynq 7000 FPGA using Virtual Input Output (VIO) and Integrated Logic Analyzer (ILA) core. Comparisons with few recent similar works, including the conventional Look-Up Table (LUT) based technique, show the superiority of our proposed design in terms of maximum improvement in operating frequency by 196.83%, maximum reduction in power consumption by 74.27%, and reduction of memory occupancy by 88.9%. In the case of throughput, our design can deliver 8.35 times higher compared to IEEE 802.11n requirement.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>Introduction</ns0:head><ns0:p>The increasing use of multimedia services and the growth of graphics-based web contents have escalated the demand for high-speed wireless broadband communications. The use of more than one antenna at the transmitter and/or at the receiver aims to substantially improve the transmission/reception rate. Orthogonal Frequency Division Multiplexing (OFDM) is becoming a popular technique for high data rate wireless transmission <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. OFDM may be combined with multiple antennas at both the access point and the mobile terminal to increase diversity gain and/or to enhance the system capacity on a time-varying multipath fading channel, resulting in a Multiple-Input-Multiple-Output (MIMO) OFDM system <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref> <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>.</ns0:p><ns0:p>The key contributions of this work are:</ns0:p><ns0:p> The mathematical modeling of the new algorithm has been derived with general validity.</ns0:p><ns0:p> The novel address generator algorithm has been generalized to accommodate more modulation schemes if required.</ns0:p><ns0:p> The complete MIMO WLAN interleaver, including the proposed address generator algorithm, is transformed into digital hardware and implemented on Spartan 6 FPGA <ns0:ref type='bibr' target='#b18'>[18]</ns0:ref> using Xilinx ISE 12.1.</ns0:p><ns0:p> To reduce the resource and power consumption and to enhance the throughput, the embedded resources of FPGA like dual-port Block RAM <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref> and DSP blocks (DSP48A1) <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref> have been successfully interfaced and utilized in the hardware model. This approach makes the design very compact and highly efficient. Comparison with existing similar works endorses the superiority of our proposed design in terms of multiple FPGA parameters.</ns0:p><ns0:p> The functionality test of the address generator has been verified using ModelSim XE-III software.</ns0:p><ns0:p> Further, hardware testing of the algorithm has also been carried out using Virtual Input and Output (VIO) and Integrated Logic Analyser (ILA) module on Zynq 7000 FPGA board, which further validates the proposed algorithm.</ns0:p><ns0:p> The proposed design has been compared with a few recent implementations <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>, <ns0:ref type='bibr' target='#b15'>[15]</ns0:ref>, <ns0:ref type='bibr' target='#b16'>[16]</ns0:ref> by converting them into FPGA equivalent implementation using <ns0:ref type='bibr' target='#b21'>[21]</ns0:ref>. The comparison shows the superiority of the proposed design in terms of operating frequency, power consumption, and memory occupancy.</ns0:p><ns0:p> The performance of the proposed interleaver is compared with IEEE 802.11n. The comparison results show that the proposed interleaver delivers much higher throughput than the maximum throughput requirement of IEEE 802.11n.</ns0:p><ns0:p>The rest of the paper is organized as follows. Section 2 presents the theoretical background of the interleaving process in the MIMO WLAN transceiver. Section 3 presents the proposed algorithm, including the mathematical background for the address generator. A description of the transformation of the proposed algorithm into digital hardware has been made in Section 4.</ns0:p></ns0:div>
<ns0:div><ns0:head>Simulation results followed by FPGA implementation details have been reported in Sections 5</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:2:0:NEW 29 Apr 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science and 6, respectively. The concluding remarks are given in Section 7.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>Interleaving in IEEE 802.11n</ns0:head><ns0:p>In a MIMO WLAN transceiver, the encoded data stream obtained from the convolutional encoder is fed to a special type of block interleaver. Interleaving in 802.11n is a three-step process in which the first two steps provide spatial interleaving, and the final step performs frequency interleaving <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>. The interleaving steps are defined in the form of three blocks shown in Fig. <ns0:ref type='figure'>1</ns0:ref>. The first step (B 1 ) ensures that adjacent coded bits are mapped onto non-adjacent subcarriers, while the second step (B 2 ) is responsible for the mapping of adjacent coded bits alternately onto less or more significant bits of the constellation, thus avoiding long runs of lowly reliable bits. If more than one spatial stream exists in the 802.11n physical layer, the third step, called frequency rotation (B 3 ), will be applied to the additional spatial streams. The frequency rotation ensures that the consecutive carriers used across spatial streams are not highly correlated.</ns0:p></ns0:div>
<ns0:div><ns0:head>Fig. 1. Block diagram of steps involved in interleaving process for the MIMO WLAN</ns0:head><ns0:p>Here, N is the block size corresponding to the number of coded bits per allocated sub-channels per OFDM symbol. d represents the number of columns in the interleaver, whose values are 13 and 18 for 20MHz and 40MHz BW <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref>, respectively. The parameter s is defined as s = max (1, N BPSCS ), whereas N BPSCS is the number of coded bits per sub-carrier and takes the values 1, 2, 4, and 6 for BPSK, QPSK, 16-QAM, and 64-QAM, respectively. i ss is the index of the spatial stream, and N rot is the parameter used for defining different rotations for the 20MHz and 40MHz cases. The operator % and represent modulo function and floor function, respectively.  </ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>Proposed Algorithm for Address Generator of Interleaver</ns0:head><ns0:p>The permutation steps described in the B 1 , B 2, and B 3 blocks of Fig. <ns0:ref type='figure'>1</ns0:ref> (1)</ns0:p><ns0:formula xml:id='formula_0'>k n ( 16 -QAM ) =</ns0:formula></ns0:div>
<ns0:div><ns0:head>{ D * (i + I) + (j + J) when {j < (D -J)}[{i < (C -D * (i + I) + (j + J + 1) when [{j < (D -J)}&(j%2 = 0)]&[{i < (C -I)}&(i%2 = 1)] D * {i -(C -I)} + (j + J) when {j < (D -J)}& [{i ≥ (C -I)}&(i%2 = 0)] D * (i -(C -I)) + (j + J + 1) when [{j < (D -J)}&(j%2 = 0)]& [{i ≥ (C -I)}&(i%2 = 1)] D * (i + I) + (j + J -1) when [{j < (D -J)}&(j%2 = 1)]&[{i < (C -I)}&(i %2 = 1)] D * (i -(C -I)) + (j + J -1) when [{j < (D -J)}&(j%2 = 1)] & [{i ≥ (C -I)}&(i%2 = 1)] D * (i + I + 1) + {j -(D -J)} when {j ≥ (D -J)} & [{i < (C -I -1)}&(i%2 = 0)] D * (i + I + 1) + {j -(D -J -1)} when [{j ≥ (D -J)}&(j%2 = 0)] & [{i < (C -I -1)}&(i%2 = 1)] D * {i -(C -I -1)} + {j -(D -J)} when {j >= (D -J)} & [{i ≥ (C -I -1)}&(i%2 = 0)] D * {i -(C -I -1)} + {j -(D -J -1)} when [{j ≥ (D -J)}&(j%2 = 0)]& [{i ≥ (C -I -1)}&(i%2 = 1)] D * (i + I + 1) + {j -(D -J + 1)} when [{j ≥ (D -J)} &(j%2 = 1)] & [{i < (C -I -1)}&(i%2 = 1)] D * {i -(C -I -1)} + {j -(D -J + 1)} when [{j ≥ (D -J)}&(j%2 = 1)] & [{i ≥ (C -I -1)}&(i%2 = 1)]</ns0:head><ns0:p>(2) </ns0:p></ns0:div>
<ns0:div><ns0:head>{ D * (i + I) + (j + J) when {j < (D -J)}[{i < (C -I)} (i% D * (i + I) + (j + J + 2) when [{j < (D -J)} (j%3 = 0)][{i < D * (i + I) + (j + J + 1) when [{j < (D -J)}&(j%3 ≠ 2)]&[{i < (C -I)}&(i%3 = 2)] D * {i -(C -I)} + (j + J) when {j < (D -J)}&[{i ≥ (C -I)}&(i%3 = 0)] D * {i -(C -I)} + (j + J + 2) when [{j < (D -J)}&(j%3 = 0)]&[{i ≥ (C -I)}&(i%3 = 1)] D * {i -(C -I)} + (j + J + 1)</ns0:head><ns0:p>when</ns0:p><ns0:formula xml:id='formula_1'>[{j < (D -J)}&(j%3 ≠ 2)]&[{i ≥ (C -I)}&(i%3 = 2)] D * (i + I) + (j + J -1) when [{j < (D -J)}&(j%3 ≠ 0)]&[{i < (C -I)}&(i%3 = 1)] D * {i -(C -I)} + (j + J -1) when [{j < (D -J)}&(j%3 ≠ 0)]&[{i ≥ (C -I)}&(i % 3 = 1)] D * (i + I) + (j + J -2) when [{j < (D -J)}&(j%3 = 2)]&[{i < (C -I)}&(i%3 = 2)] D * {i -(C -I)} + (j + J -2 ) when [{j < (D -J)}&(j%3 = 2)]&[{i ≥ (C -I)}&(i%3 = 2)] D * (i + I + 1) + {j -(D -J )} when {j ≥ (D -J)}&[{i < (C -I -1)}&(i%3 = 0)] D * (i + I + 1) + {j -(D -J -2)} when [{j ≥ (D -J)}&(j%3 = 0)]&[</ns0:formula></ns0:div>
<ns0:div><ns0:head>{i < (C -I -1)}&(i%3 = 1)] D * (i + I + 1) + {j -(D -J -1)} when [{j ≥ (D -J)}&(j%3 ≠ 2)]&[{i < (C -I -1)}&(i%3 = 2)] D * {i -(C -I -1)} + {j -(D -J)} when {j ≥ (D -J)}&[{i ≥ (C -I -1)}&(i%3 = 0)] D * {i -(C -I -1)} + {j -(D -J -2)} when [{j ≥ (D -J)}&(j%3 = 0)]&[{i ≥ (C -I -1)}&(i%3 = 1)] D * {i -(C -I -1)} + {j -(D -J -1)} when [{j ≥ (D -J)}&(j%3 ≠ 2)]&[{i ≥ (C -I -1)}&( i % 3 = 2)] D * (i + I + 1) + {j -(D -J + 1)} when [{j ≥ (D -J)}&(j%3 ≠ 0)]&[{i < (C -I -1)}&(i%3 = 1)] D * {i -(C -I -1)} + {j -(D -J + 1)} when [{j ≥ (D -J)}&(j%3 ≠ 0)]&[{i ≥ (C -I -1)}&(i%3 = 1)] D * (i + I + 1) + {j -(D -J + 2)} when [{j ≥ (D -J)}&(j%3 = 2)] &[{i < (C -I -1)}&(i%3 = 2)] D * {i -(C -I -1)} + {j -(D -J + 2)} when [{j ≥ (D -J)}&(j%3 = 2)] &[{i ≥ (C -I -1)}&(i%3 = 2)]</ns0:head><ns0:p>(</ns0:p><ns0:p>The general validity of the proposed mathematical formulation can be established with the help of <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref>. As far as spatial permutation is concerned, the steps involved in IEEE 802.16e <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref> and IEEE 802.11n <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref> are identical. Additionally, the latter undergoes frequency rotation using the frequency interleaving step, as described by B 3 in Fig. <ns0:ref type='figure'>1</ns0:ref> for spatial streams other than the first.</ns0:p><ns0:p>Further, analysis of the 3 rd step results that the entire term beyond j k (i.e., J rot ) remains constant for a particular spatial stream and expressed by Eq. 4 <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>.</ns0:p><ns0:p>(4)</ns0:p><ns0:formula xml:id='formula_3'>𝑟 𝑘 = [𝑗 𝑘 -𝐽 𝑟𝑜𝑡 ]%𝑁 where J rot = [ {(𝑖 𝑠𝑠 -1) * 2}%3 + 3 ⌊ 𝑖 𝑠𝑠 -1 3 ⌋] * 𝑁 𝑟𝑜𝑡 * 𝑁 𝐵𝑃𝑆𝐶𝑆</ns0:formula><ns0:p>As the first stream for all modulation schemes undergoes no frequency rotation, hence</ns0:p><ns0:formula xml:id='formula_4'>𝑟 𝑘 = [𝑗 𝑘 -0]%𝑁 = [𝑗 𝑘 ]%𝑁 = 𝑗 𝑘</ns0:formula><ns0:p>For subsequent streams, the value of J rot differs for each spatial stream, modulation schemes, and BWs. All such possible values of J rot are listed in Table <ns0:ref type='table' target='#tab_5'>3</ns0:ref>. The expression of j k so derived for all modulation schemes in <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref> if substituted in Eq. The work reported in this paper includes interleaver design for all four modulation schemes (i.e., BPSK, QPSK, 16-QAM, and 64-QAM) as defined in the IEEE 802.11n standard. However, the proposed algorithm may be generalized, as follows, to include any other modulation scheme beyond the above standard. 1) Define the number of coded bits per sub-carrier (N bpscs ) for the modulation scheme beyond the above standard and compute s = max (1, N bpscs ). Manuscript to be reviewed Computer Science 6) Group the above mathematical expressions (obtained from step 5) according to the specific modulation scheme. These expressions exclude floor function, hence, suitable for implementation on the hardware platform. 7) Functional verification of the algorithm may be carried out by comparing the addresses of steps 3 and 6 using suitable software.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>Transformation into Hardware</ns0:head><ns0:p>This section describes the transformation of the proposed address generator algorithm into digital hardware. The top-level view of the complete interleaver consisting of the proposed address generator and memory block is shown in Fig. <ns0:ref type='figure' target='#fig_4'>2</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Memory Block</ns0:head><ns0:p>The detailed arrangement of the memory block for one spatial stream having a similar structure as in <ns0:ref type='bibr' target='#b23'>[23]</ns0:ref> is shown in Fig. <ns0:ref type='figure'>3</ns0:ref>. The structure is generic and applies to all spatial streams. It receives three inputs from the address generator block; write address (WA x ), read address (RA x ), and sel x .</ns0:p><ns0:p>The requirement of two memory blocks for block interleaving is accomplished here with a dualport memory (with Port A and B) where the read and write operations can be performed simultaneously. As seen in Fig. <ns0:ref type='figure'>3</ns0:ref>, the first 288H locations are used as Port A, and the next 288H locations as Port B. An adder is used to insert the bias of 288H while generating the addresses for Port B. When one port is being written, another one is read, and vice versa. Swapping between read/write operations at the end of a cycle is performed using the signal sel x , generated using a toggle flip flop.</ns0:p></ns0:div>
<ns0:div><ns0:head>Fig. 2. A top-level view of the complete interleaver</ns0:head></ns0:div>
<ns0:div><ns0:head n='4.2'>Address Generator</ns0:head><ns0:p>The address generator is the heart of the interleaver. The encoding schemes used in this work for the two inputs, BW and N cbpsc of the address generator, are described in Manuscript to be reviewed Computer Science (WA x ), read (RA x ) address, and select signal (sel x ) output. As shown in Fig. <ns0:ref type='figure'>4</ns0:ref>, in the write address generator, a multiplexer is used to route the desired WA x from four possible sources based on the value of N cbpsc for a particular spatial stream, I ssx .</ns0:p><ns0:p>Figure <ns0:ref type='figure'>5</ns0:ref>(a) and (b) show the hardware used for the generation of row-count (JCOUNT) and column-count (ICOUNT), respectively, which consist of up-counters and comparators. As per <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref>, the column number is defined as C = 13 and 18, for BW = 20 MHz and 40MHz, respectively. Circuit arrangement for the generation of row number, D using BW and N cbpsc is shown in Fig. <ns0:ref type='figure' target='#fig_7'>6</ns0:ref>. Similarly, Fig. <ns0:ref type='figure'>7</ns0:ref> <ns0:ref type='table'>5</ns0:ref>. </ns0:p></ns0:div>
<ns0:div><ns0:head>Table 4(a) Encoding of BW Table 4(b) Encoding of N cbpsc</ns0:head></ns0:div>
<ns0:div><ns0:head>Fig. 5. Scheme showing the generation of (a) row-count and (b) column-count</ns0:head><ns0:p>The hardware required for the generation of RA x is shown in Fig. <ns0:ref type='figure'>8</ns0:ref>. Like the write address generator, the structure developed for the generation of RA x is also generic and is applicable to all the spatial streams. The first and second level multiplexers select one of the values of interleaver depth from the inputs with BW and mod_typ signal. The rd_count is a 10-bit up counter and generates RA x . While progressing through the count values, when the rd_count value equals the output of M 1, a reset pulse is generated by the comparator, and rd_count goes to the initial state to start another cycle. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>Fig. 7 Arrangement showing the generation of (a) ICOUNT < (C-I x ) and ICOUNT ≥ (C-I x ) (b) JCOUNT < (D-J y ) and JCOUNT ≥ (D-J y )</ns0:head><ns0:p>Table <ns0:ref type='table'>5</ns0:ref> Definition of I x and J y for all streams and BW Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science 5 Simulation Results</ns0:head><ns0:p>The digital hardware of the MIMO WLAN interleaver is translated into a VHDL program using Xilinx ISE 12.1. The proposed design of the interleaver is simulated, and the functionality verification is done using ModelSim XE-III. The address generation circuitry of the interleaver is tested for all BWs, spatial streams, and modulation schemes, out of which two results (for BW = 0, N bpscs = 00 and BW = 1, N bpscs = 11) are presented in Fig. <ns0:ref type='figure' target='#fig_11'>11</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>FPGA Implementation Results</ns0:head><ns0:p>The proposed design of the interleaver is transformed into a VHDL model using Xilinx ISE 12.1 and is implemented on Xilinx Spartan-6 FPGA. Despite our exhaustive literature survey, any similar implementation on the FPGA platform has not been noticed. As a result, the conventional LUT-based approach has been implemented on the same FPGA platform utilizing Block RAM (BRAM) to house the address LUTs. Four dual port BRAM memory blocks are used to implement the interleaver memory in both designs. Comparative analysis of the two implementations in terms of device utilization is made in Table <ns0:ref type='table'>7</ns0:ref>. The betterment of the proposed technique can be quantified in terms of embedded memory utilization (88.9% memory block saving) and operating speed (37.8% speed improvement). The use of DSP blocks as multiplier improves the performance of the circuit by reducing delay. The circuit works at a PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:2:0:NEW 29 Apr 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science maximum clock frequency (f) of 208.7MHz with 28.62mW of total power consumption, which includes static and dynamic power. The use of FPGA's embedded DSP blocks (DSP48A1s) as a multiplier and embedded dual-port memory (BRAM) helps to reduce the memory access time and, in turn, improves the throughput of the system. Resource efficiency and compact design are the key contributors to reducing the power consumption of the interleaver. In addition, Spartan 6 FPGA itself is known for better power efficiency, increased productivity, and higher performance implementation platform.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 7 Device Utilization Summary</ns0:head><ns0:p>The hardware testing of the address generator for the MIMO WLAN interleaver has been performed using VIO and ILA. VIO and ILA are the customizable cores that facilitate both monitoring and driving internal FPGA signals in real-time. Fig. <ns0:ref type='figure' target='#fig_4'>12</ns0:ref> shows the block level design of the test environment using VIO and ILA wherein the proposed address generator block An external clock (clk) signal drives all the modules synchronously.</ns0:p></ns0:div>
<ns0:div><ns0:head>Fig. 12. Test arrangement of the address generator using VIO and ILA</ns0:head><ns0:p>The throughputs of the proposed interleaver for all four modulation schemes are computed using Eq. 5 and presented in Table <ns0:ref type='table' target='#tab_7'>8</ns0:ref>. Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_5'>Tp = f x N bpscs x i ss (5)</ns0:formula></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Besides, a comparison with few works has been made based on the equivalence drawn between FPGA and ASIC implementations in <ns0:ref type='bibr' target='#b21'>[21]</ns0:ref>. The comparative study of the proposed implementation regarding key FPGA parameters shows betterment over other similar recent works and is presented in Table <ns0:ref type='table' target='#tab_19'>9</ns0:ref> and Fig. <ns0:ref type='figure'>13</ns0:ref>. The proposed circuit shows betterment over <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>, <ns0:ref type='bibr' target='#b15'>[15]</ns0:ref>, <ns0:ref type='bibr' target='#b16'>[16]</ns0:ref>, and LUT-based technique in terms of maximum operating frequency. In terms of power consumption, our implementation is found to be the most efficient among the designs presented in <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>, <ns0:ref type='bibr' target='#b15'>[15]</ns0:ref>, and <ns0:ref type='bibr' target='#b16'>[16]</ns0:ref> of Fig. <ns0:ref type='figure'>13</ns0:ref>. As direct implementation of floor function is not possible, improvement in terms of memory block used (BRAM) and clock frequency over the LUT-based technique may be considered as the performance improvement of our novel algorithm due to the elimination of floor function from the interleaver address generator circuitry.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_19'>9</ns0:ref> Comparative study with similar works Fig. <ns0:ref type='bibr' target='#b13'>13</ns0:ref> Performance comparison with <ns0:ref type='bibr' target='#b15'>[15]</ns0:ref>, <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>, <ns0:ref type='bibr' target='#b16'>[16]</ns0:ref>, and LUT-based work</ns0:p><ns0:p>Massive MIMO system, a key technology being deployed in the 5G system, employs an array of a large number of transmitting antennas at the base station to achieve high throughput has been investigated to compare our FPGA implementation results. Tan et al. <ns0:ref type='bibr' target='#b24'>[24]</ns0:ref> have demonstrated CMOS implementation of the message-passing detector (MPD) designed for a 256-QAM massive MIMO system supporting 32 concurrent mobile users in each time-frequency resource with 2.76 Gbps throughput. As far as throughput is concerned, our proposed interleaver on the FPGA platform shows a competitive result with that of <ns0:ref type='bibr' target='#b24'>[24]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='7'>Conclusions</ns0:head><ns0:p>This work demonstrates the design and implementation of novel interleaver hardware on the Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Part of interleaver write addresses</ns0:head><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>schemes, spatial streams, and BWs, are represented by Eq. 1-3.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>k n(QPSK -BPSK) = { D * (i + I) + (j + J) when j < (D -J) and i < (C -I) D * {i -(C -I)} + (j + J) when j < (D -J) and i ≥ (C -I) D * ( i + I + 1 ) + { j -( D -J )} when j ≥ (D -J) and i < (C -I -1) D * {i -(C -I -1)} + {j -(D -J)} when j ≥ (D -J) and i ≥ (C -I -1)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>k n ( 64 -QAM ) = PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:2:0:NEW 29 Apr 2021) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>1 gives three new equations. The final expressions obtained and the proposed mathematical formulations developed in this work PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:2:0:NEW 29 Apr 2021) Manuscript to be reviewed Computer Science generate the same results and are identical to results obtained through direct implementation of B 1 to B 3 steps.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>2 )</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Define interleaver depth (N), number of columns (d), and compute the intermediate addresses after spatial interleaving (j k ) by implementing B 1 -B 2 steps. 3) Compute the final memory addresses (r k ) by implementing step B 3 with appropriate values of frequency rotation parameter (N rot ) corresponding to the permissible bandwidths (BWs) for all four values of spatial streams (i ss ). 4) Arrange the addresses obtained in step 3 in (N/N rot ) x d tabular form with j and i as row and column numbers, respectively. 5) Identify the correlation between the subsequent addresses and re-arrange each address in (N/N rot )*(i ± offset 1 ) + (j ± offset 2 ) format. The offset x = 0 with i ss = 1 for all values of N. All other values of offset x to be computed using the correlation between the subsequent addresses. PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:2:0:NEW 29 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>(a) and (b) describe hardware used for the generation of ICOUNT < (C-I x ), ICOUNT ≥ (C-I x ), JCOUNT < (D-J x ), and JCOUNT ≥ (D-J x ) signals. Here I x and J y are the column and row offset values, respectively, which are used while computing the addresses, and are defined in Table</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Fig. 3 .Fig. 4 .</ns0:head><ns0:label>34</ns0:label><ns0:figDesc>Fig. 3. Internal structure of memory block</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Fig. 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Fig. 6. Scheme for generation of number of rows (D)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figures 9</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figures 9 and 10 show the rest of the circuit details required to generate WA x with BPSK/QPSK, 16-QAM, and 64-QAM modulation schemes. In these figures, the adders (A 1 , A 2 , and A 3 ) receive two inputs; one from the row-count part (purple colored) and the other from the columncount part (blue colored) of the circuit. In Fig. 9, the JCOUNT + J y signal is generated by an adder (A 4 ), whereas the two subtractors (S 1 and S 2 ) generate the signal JCOUNT -(D-J y ). Based on the value of JCOUNT < (D-J y ) signal, the multiplexer (M 2 ) routes one of these signals to the input of the A 1 . Similar hardware structures can be found to generate signals like ICOUNT + I x , ICOUNT + I x + 1, ICOUNT -(C -I x ), etc., in the column-count part. The column-count part's output gets multiplied with D in the multiplier (ML 1 ) to generate the second input of A 1 . In Fig. 10, the circuit details for generating signals like ICOUNT + I x , ICOUNT -(C -I x ), JCOUNT + J y , JCOUNT -(D-J y ), etc., are not shown to avoid repetition and clumsiness. The condition for the generation of select inputs (II4, JJ4, II6, and JJ6) for the multiplexers of Fig. 10, are described and encoded in Table 6(a) and (b).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Fig. 8 .Fig. 9 . 2 Fig. 10 .</ns0:head><ns0:label>89210</ns0:label><ns0:figDesc>Fig. 8. Circuit for the generation of read address (RA x )</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>(a)-(b). The last four signals (int_add_1 to int_add_4) of Fig. 11(a) and (b) show the sequence of write addresses generated in synchronization with a clock signal (clk) for all the four spatial streams of the interleaver (I ss1 -I ss4). The write address sequence generated by the proposed interleaver for spatial stream 1 (i.e., int_add_1) is 0, 4, 8, 12, … Similarly, the address sequence for spatial stream 2 (i.e., int_add_2) is 26, 30, 34, and so on. The last address sequence (i.e., int_add_4) of Fig.11(a) tallies with the address sequences shown in Table1(a). Automatic address verification has also been carried out between the addresses generated by our proposed algorithm and the addresses obtained through steps B 1 -B 3 of Section 2 involving floor function by running a separate MATLAB program. This verification further endorses the correctness of the proposed algorithm.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Fig. 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Fig. 11. Write addresses (WA x ) for (a) BW=0 (20MHz), N bpscs = 00 (BPSK), (b) BW=1 (40MHz), N bpscs = 11 (64-QAM)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc>WLAN_MIMO_NEW_0) is placed in the middle of the VIO (left side) and ILA (right side) blocks. The VIO injects user-defined RESET, BW, and NBPSCS signals. The outputs generated by the address generator (INT_ADD_1, 2, 3, and 4) are fed to the ILA and VIO for verification.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>FPGA platform to be</ns0:head><ns0:label /><ns0:figDesc>used in OFDM-based MIMO WLAN applications. A new algorithm has been proposed for the address generator of the interleaver eliminating the requirement of floor function, and is supported by the mathematical formulation with general validity. The algorithm is transformed into the digital circuit and is modeled using VHDL software. Simulation results PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:2:0:NEW 29 Apr 2021) Manuscript to be reviewed Computer Science and hardware testing verify the functionality of the proposed algorithm. Hardware implementation of the VHDL model using Xilinx ISE is done and is tested on Xilinx Spartan 6 FPGA. Efficient design and use of FPGA's embedded resources during implementation enables betterment over a few recent similar works and conventional design in terms of multiple FPGA parameters and the interleaver throughput.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='38,42.52,178.87,525.00,213.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='39,42.52,178.87,270.00,307.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='40,42.52,178.87,290.25,231.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='41,42.52,178.87,245.25,223.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='42,42.52,178.87,463.50,114.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='43,42.52,178.87,247.50,371.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='45,42.52,181.57,328.50,264.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='46,42.52,181.57,337.50,339.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='47,42.52,204.52,414.75,536.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='48,42.52,204.52,525.00,375.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='49,42.52,178.87,525.00,124.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='50,42.52,178.87,525.00,219.00' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 (a) Part of</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>interleaver write addresses with N bpscs = 1, N = 52, i ss =4, BW = 20MHz</ns0:cell></ns0:row><ns0:row><ns0:cell>Table 1(b) Part of interleaver write addresses with N bpscs = 4, N = 208, i ss =2, BW = 20MHz</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 (c) Part</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>of interleaver write addresses with N bpscs = 6, N = 312, i ss =3, BW = 20MHz</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 (a)</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Proposed algorithm for N bpscs = 1 or 2 (BPSK / QPSK) with all N, i ss , and BW</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 (b)</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Proposed algorithm for N bpscs = 4 (16-QAM) with all N, i ss , and BW</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 (c)</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Proposed algorithm for N bpscs = 6 (64-QAM) with all N, i ss , and BW</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Values of J rot for all modulation schemes, spatial streams, and BWs</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The i ss1 -i ss4 represent the four different spatial streams of the address generator, each consisting of write</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:2:0:NEW 29 Apr 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 8</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Throughput comparison with IEEE 802.11nThe last column of Table8justifies our high throughput claim of the proposed interleaver. This provides the opportunity to implement the proposed design in relatively slower and lower-cost FPGAs as well, thereby providing a cost-effective solution.</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:2:0:NEW 29 Apr 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 1 (on next page)</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 1 (</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>a) Part of interleaver write addresses with N bpscs = 1, N = 52, i ss =4, BW = 20MHz</ns0:cell></ns0:row><ns0:row><ns0:cell>Table 1(b) Part of interleaver write addresses with N bpscs = 4, N = 208, i ss =2, BW = 20MHz</ns0:cell></ns0:row><ns0:row><ns0:cell>Table 1(c) Part of interleaver write addresses with N bpscs = 6, N = 312, i ss =3, BW = 20MHz</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:2:0:NEW 29 Apr 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>1 2 Table 1 ( a )</ns0:head><ns0:label>21a</ns0:label><ns0:figDesc>Part of interleaver write addresses with N bpscs = 1, N = 52, i ss =4, BW = 20MHz 3</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Column no(i)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Row no(j)</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>…</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>12</ns0:cell></ns0:row><ns0:row><ns0:cell>0</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>17</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>…</ns0:cell><ns0:cell>49</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>9</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>18</ns0:cell><ns0:cell>22</ns0:cell><ns0:cell>…</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>10</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>19</ns0:cell><ns0:cell>27</ns0:cell><ns0:cell>…</ns0:cell><ns0:cell>51</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>11</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>28</ns0:cell><ns0:cell>…</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>12</ns0:cell></ns0:row><ns0:row><ns0:cell>4 5</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 1 (</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>b) Part of interleaver write addresses with N bpscs = 4, N = 208, i ss =2, BW = 20MHz 7</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Column no(i)</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 1 (</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>c) Part of interleaver write addresses with N bpscs = 6, N = 312, i ss =3, BW = 20MHz 11</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Column no(i)</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:2:0:NEW 29 Apr 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 2 (on next page)</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Proposed algorithm</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_14'><ns0:head>Table 2 (</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>a) Proposed algorithm for N bpscs = 1 or 2 (BPSK / QPSK) with all N, i ss and BW</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_15'><ns0:head>Table 2 (</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>b) Proposed algorithm for N bpscs = 4 (16-QAM) with all N, i ss and BW Table 2(c) Proposed algorithm for N bpscs = 6 (64-QAM) with all N, i ss and BW</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_16'><ns0:head>Table 4 (</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>a) Encoding of BW</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Bandwidth (BW)</ns0:cell><ns0:cell>Encoded bit</ns0:cell></ns0:row><ns0:row><ns0:cell>20MHz</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>40MHz</ns0:cell><ns0:cell>1</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_17'><ns0:head>Table 4 (</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>b) Encoding of N cbpsc</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Modulation Scheme</ns0:cell><ns0:cell>Encoded bits</ns0:cell></ns0:row><ns0:row><ns0:cell>(N cbpsc )</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>BPSK (N cbpsc = 1)</ns0:cell><ns0:cell>00</ns0:cell></ns0:row><ns0:row><ns0:cell>QPSK (N cbpsc = 2)</ns0:cell><ns0:cell>01</ns0:cell></ns0:row><ns0:row><ns0:cell>16-QAM (N cbpsc = 4)</ns0:cell><ns0:cell>10</ns0:cell></ns0:row><ns0:row><ns0:cell>64-QAM (N cbpsc = 6)</ns0:cell><ns0:cell>11</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_18'><ns0:head>Table 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Throughput comparison with IEEE 802.11n</ns0:figDesc><ns0:table><ns0:row><ns0:cell>This work</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:2:0:NEW 29 Apr 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_19'><ns0:head>Table 9 (on next page)</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Comparative study with similar works</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54917:2:0:NEW 29 Apr 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_20'><ns0:head>Table 9</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Comparative study with similar works</ns0:figDesc><ns0:table><ns0:row><ns0:cell>FPGA Parameters</ns0:cell><ns0:cell>This work</ns0:cell><ns0:cell>[15]</ns0:cell><ns0:cell>[7]</ns0:cell><ns0:cell>[16]</ns0:cell><ns0:cell>LUT Based</ns0:cell></ns0:row><ns0:row><ns0:cell>Maximum clock frequency, f</ns0:cell><ns0:cell>208.7 MHz</ns0:cell><ns0:cell>109.38MHz</ns0:cell><ns0:cell>70.31MHz</ns0:cell><ns0:cell>125MHz</ns0:cell><ns0:cell>151.45MHz</ns0:cell></ns0:row><ns0:row><ns0:cell>Power consumption, P</ns0:cell><ns0:cell>28.62mW</ns0:cell><ns0:cell>111.24mW</ns0:cell><ns0:cell>48mW</ns0:cell><ns0:cell>Not available</ns0:cell><ns0:cell>28.62mW</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "
29th April, 2021
Dear Editor,
Thank you for allowing a resubmission of our manuscript, with a minor revision.
As suggested by you and the reviewer, we carefully revised the manuscript for any grammatical mistakes and typos.
The point-by-point response to the editor’s and reviewers’ comments are given below.
We believe that the manuscript is now suitable for publication in PeerJ Computer Science.
Best regards,
Pijush Kanti Dutta Pramanik
Dept. of Computer Science & Engineering
National Institute of Technology, Durgapur, India
(On behalf of all authors)
Response to Reviewer’s Comments
Editor (Li-minn Ang)
Minor revisions
Please look at the following comment 'typos are still a problem in the manuscript'.
Response: As per your and reviewer’s suggestion, the entire paper is scrutinized for possible grammatical mistakes and typos, and corrected, where ever found.
Reviewer 2 (Vaibhav Rupapara)
Basic reporting
The author proposes an algorithm to efficiently model the address generation circuitry of the MIMO WLAN interleaved. The interleaved used in the MIMO WLAN transceiver has three permutation steps involving floor function whose hardware implementation is most challenging due to the absence of corresponding digital hardware. They proposed an algorithm with a mathematical background for the address generator, eliminating the need for floor function. The algorithm is converted into digital hardware for implementation on the reconfigurable FPGA platform. Hardware structure for the complete interleaved, including the read address generator and memory module, is designed and modeled in VHDL using Xilinx Integrated Software Environment (ISE) utilizing embedded memory and DSP blocks Spartan 6 FPGA.
Author work in a good manner in revision on raised point and tackle the things very well.
Experimental design
The experimental explanation and flow are cleared in revision.
Response: Thank you for the acknowledgement.
Validity of the findings
The author presents results well which improves the quality of the manuscript.
Response: Thank you for the encouraging words.
Comments for the author
The manuscript is much strong after revision but typos are still a problem in the manuscript.
Response: As per your suggestion, the entire paper is scrutinized for possible grammatical mistakes and typos, and corrected, where ever found.
" | Here is a paper. Please give your review comments after reading it. |
140 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Shapley values have become increasingly popular in the machine learning literature, thanks to their attractive axiomatisation, flexibility, and uniqueness in satisfying certain notions of 'fairness'. The flexibility arises from the myriad potential forms of the Shapley value game formulation. Amongst the consequences of this flexibility is that there are now many types of Shapley values being discussed, with such variety being a source of potential misunderstanding. To the best of our knowledge, all existing game formulations in the machine learning and statistics literature fall into a category, which we name the model-dependent category of game formulations. In this work, we consider an alternative and novel formulation which leads to the first instance of what we call model-independent Shapley values. These Shapley values use a measure of non-linear dependence as the characteristic function. The strength of these Shapley values is in their ability to uncover and attribute non-linear dependencies amongst features. We introduce and demonstrate the use of the energy distance correlations, affine-invariant distance correlation, and Hilbert-Schmidt independence criterion as Shapley value characteristic functions. In particular, we demonstrate their potential value for exploratory data analysis and model diagnostics. We conclude with an interesting expository application to a medical survey data set.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>There are many different meanings of the term 'feature importance', even in the context of Shapley values. Indeed, the meaning of a Shapley value depends on the underlying game formulation, referred to by <ns0:ref type='bibr' target='#b16'>Merrick and Taly (2019)</ns0:ref> as the explanation game. Although, this is so far rarely discussed explicitly in the existing literature. In general, Shapley value explanation games can be distinguished as either belonging to the model-dependent category or the model-independent category. The latter category is distinguished by an absence of assumptions regarding the data generating process (DGP). Here, the term model-dependent refers to when the Shapley value depends on a choice of fitted model (such as the output of a machine learning algorithm), or on a set of fitted models (such as the set of sub-models of a linear model).</ns0:p><ns0:p>Shapley values that uncover non-linear dependencies (Sunnies) is, to the best of our knowledge, the only Shapley-based feature importance method that falls into the model-independent category. In this category, feature importance scores attempt to determine what is a priori important, in the sense of understanding the partial dependence structures within the joint distribution describing the DGP. We show that these methods that generate model-independent feature importance scores can appropriately be used as model diagnostic procedures, as well as procedures for exploratory data analysis.</ns0:p><ns0:p>Existing methods in the model-dependent category, on the other hand, seek to uncover what is perceived as important by the model (or class of models), either with regards to a performance measure (e.g., a goodness-of-fit measure) or for measuring local influences on model predictions. Model-dependent definitions of feature importance scores can be distinguished further according as to whether they depend PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53755:1:1:NEW 16 Mar 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science on a fitted (i.e., trained) model or on an unfitted class of models. We refer to these as within-model scores and between-model scores, respectively. This distinction is important, since the objectives are markedly different.</ns0:p><ns0:p>Within-model Shapley values seek to describe how the model reacts to a variety of inputs, while, e.g., accounting for correlated features in the training data by systematically setting 'absent' features to a reference input value, such as a conditional expectation. There are many use cases for within-model Shapley values, such as providing transparency to model predictions, e.g. for explaining a specific credit decision or detecting algorithmic discrimination <ns0:ref type='bibr' target='#b4'>(Datta et al., 2016)</ns0:ref>, as well as understanding model structure, measuring interaction effects and detecting concept drift <ns0:ref type='bibr' target='#b13'>(Lundberg et al., 2020)</ns0:ref>.</ns0:p><ns0:p>All within-model Shapley values that we are aware of fall into the class of single reference games, described by <ns0:ref type='bibr' target='#b16'>Merrick and Taly (2019)</ns0:ref>. 
These include SAGE <ns0:ref type='bibr' target='#b2'>(Covert et al., 2020)</ns0:ref>; SHAP <ns0:ref type='bibr' target='#b14'>(Lundberg and Lee, 2017a)</ns0:ref>; Shapley Sampling Values <ns0:ref type='bibr' target='#b29'>(Štrumbelj and Kononenko, 2013)</ns0:ref>; Quantitative Input Influence <ns0:ref type='bibr' target='#b4'>(Datta et al., 2016)</ns0:ref>; Interactions-based Method for Explanation (IME) <ns0:ref type='bibr'>(Štrumbelj et al., 2009)</ns0:ref>; and</ns0:p><ns0:p>TreeExplainer <ns0:ref type='bibr' target='#b13'>(Lundberg et al., 2020)</ns0:ref>. Note that some within-model feature importance methods, such as SHAP, can be described as model agnostic methods, since they may be applied to any trained model.</ns0:p><ns0:p>Regardless, such values are dependent on a prior choice of fitted model.</ns0:p><ns0:p>In contrast to within-model Shapley values, between-model Shapley values seek to determine which features influence an outcome of the model fitting procedure, itself, by repeatedly refitting the model to compute each marginal contribution. Such scores have been applied, for example, as a means for feature importance ranking in regression models. These include Shapley Regression Values <ns0:ref type='bibr' target='#b12'>(Lipovetsky and Conklin, 2001)</ns0:ref>, <ns0:ref type='bibr'>ANOVA Shapley values (Owen and Prieur, 2017)</ns0:ref>, and our prior work <ns0:ref type='bibr' target='#b5'>(Fryer et al., 2020)</ns0:ref>.</ns0:p><ns0:p>The existing between-model feature importance scores are all global feature importance scores, since they return a single Shapley value for each feature, over the entire data set. Sunnies is also a global score, though not a between-model score.</ns0:p><ns0:p>A number of publications and associated software have been produced recently to efficiently estimate or calculate SHAP values. Tree SHAP, Kernel SHAP, Shapley Sampling Values, Max Shap, Deep Shap, Linear-SHAP and Low-Order-SHAP are all methods for either approximating or calculating SHAP values. However, these efficient model-dependent methods for calculating or approximating SHAP values are developed for local within-model scores, and are not suitable for Sunnies, which is a global and model-independent score. While Sunnies does not fit under the model-dependent frameworks for efficient estimation, Shapley values in general can be approximated via a consistent Monte Carlo algorithm introduced by <ns0:ref type='bibr' target='#b28'>Song et al. (2016)</ns0:ref>. While efficient approximations do exist, computational details are not the focus of this paper, where we focus on the concept and relevance of Sunnies.</ns0:p><ns0:p>In Section 2, we introduce the concept of the Shapley value and its decomposition. We then introduce the notion of attributed dependence on labels (ADL), and briefly demonstrate the behaviour of the R 2 characteristic function on a data set with non-linear dependence, to motivate our alternative measures of non-linear dependence in place of R 2 . In Section 2.2, we describe three such measures: the Hilbert Schmidt Independence Criterion (HSIC), the Distance Correlation (DC) and the Affine-Invariant Distance Correlation (AIDC). We use these as characteristic functions throughout the remainder of the work, although we focus primarily on the DC.</ns0:p><ns0:p>The DC, HSIC and AIDC do not constitute an exhaustive list of the available measures of non-linear dependence. We do not provide here a comparison of their strengths and weaknesses. 
Instead, our objective is to propose and demonstrate a variety of use cases for the general technique of computing Shapley values for model-independent measures of statistical dependence.</ns0:p><ns0:p>In Section 3, we demonstrate the value of ADL for exploratory data analysis, using a simulated DGP that exhibits mutual dependence without pairwise dependence. We also leverage this example to compare ADL to popular pairwise and model-dependent measures of dependence, highlighting a drawback of the pairwise methods, and of the popular XGBoost built-in 'feature importance' score. We also show that SHAP performs favourably here. In Section 4, we introduce the concepts of attributed dependence on predictions (ADP) and attributed dependence on residuals (ADR). Using simulated DGPs, we demonstrate the potential for ADL, ADP and ADR to uncover and diagnose model misspecification and concept drift.</ns0:p><ns0:p>For the concept drift demonstration (Section 4.1.1), we see that ADL provides comparable results to SAGE and SHAP, but without the need for a fitted model. Conclusions are drawn in Section Section 6.</ns0:p></ns0:div>
<ns0:div><ns0:head>2/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53755:1:1:NEW 16 Mar 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head n='2'>SHAPLEY DECOMPOSITION</ns0:head><ns0:p>In approaching the question: 'How do the different features X = (X 1 , . . . , X d ) in this data set affect the outcome Y ?', the concept of a Shapley value is useful. The Shapley value has a long history in the theory of cooperative games, since its introduction in <ns0:ref type='bibr' target='#b27'>Shapley (1953)</ns0:ref>, attracting the attention of various Nobel prize-winning economists (cf. <ns0:ref type='bibr' target='#b25'>Roth, 1988)</ns0:ref>, and enjoying a recent surge of interest in the statistics and machine learning literature. <ns0:ref type='bibr' target='#b27'>Shapley (1953)</ns0:ref> formulated the Shapley value as the unique game theoretic solution concept, which satisfies a set of four simple and apparently desirable axioms: efficiency, additivity, symmetry and the null player axiom. For a recent monograph, defining these four axioms and introducing solution concepts in cooperative games, consult <ns0:ref type='bibr' target='#b0'>Algaba et al. (2019)</ns0:ref>.</ns0:p><ns0:p>As argued by <ns0:ref type='bibr' target='#b12'>Lipovetsky and Conklin (2001)</ns0:ref>; <ns0:ref type='bibr' target='#b11'>Israeli (2007)</ns0:ref>; <ns0:ref type='bibr' target='#b10'>Huettner and Sunder (2012)</ns0:ref>, we can think of the outcome C(S) of a prediction or regression task as the outcome of a cooperative game, in which the set S = {X 1 , . . . , X d } of data features represent a coalition of players in the game. The function C is known as the characteristic function of the game. It maps elements S, in the power set 2 [d] of players, to a set of payoffs (or outcomes) and thus fully describes the game. Let d be the number of players. The marginal contribution of a player v ∈ S to a team S is defined as C(S ∪ {v}) −C(S). The average marginal contribution of player v, over the set S k of all teams of size k that exclude v, is</ns0:p><ns0:formula xml:id='formula_0'>C k (v) = 1 |S k | ∑ S∈S k [C(S ∪ {v}) −C(S)] ,<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where</ns0:p><ns0:formula xml:id='formula_1'>|S k | = d−1 k .</ns0:formula><ns0:p>The Shapley value of player v, then, is given by</ns0:p><ns0:formula xml:id='formula_2'>φ v (C) = 1 d d−1 ∑ k=0 C k (v) ,<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>i.e., φ v (C) is the average of C k (v) over all team sizes k.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1'>Attributed Dependence on Labels</ns0:head><ns0:p>The characteristic function C(S) in (1) produces a single payoff for the features with indices in S. In the context of statistical modelling, the characteristic function will depend on Y and X. To express this we introduce the notation X| S = (X j ) j∈S as the projection of the feature vector onto the coordinates specified by S, and we write the characteristic function C Y (S) with subscript Y to clarify its dependence on Y as well as X (via S). Now, we can define a new characteristic function R Y in terms of the popular coefficient of multiple correlation R 2 , as</ns0:p><ns0:formula xml:id='formula_3'>R Y (S) = R 2 (Y, X| S ) = 1 − | Cor(Y, X| S )| | Cor(X| S )| ,<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>where | • | and Cor(•) are the determinant operator and correlation matrix, respectively (cf. <ns0:ref type='bibr' target='#b5'>Fryer et al., 2020)</ns0:ref>.</ns0:p><ns0:p>The set of Shapley values of all features in X, using characteristic function C, is known as the Shapley decomposition of C amongst the features in X. For example, the Shapley decomposition of R Y , from (3),</ns0:p><ns0:formula xml:id='formula_4'>is the set {φ v (R Y ) : v ∈ [d]}, calculated via (2).</ns0:formula><ns0:p>In practice, the joint distribution of (Y, X T ) is unknown, so the Shapley decomposition of C is estimated via substitution of an empirical characteristic function Ĉ in (1). In this context, we work with an n × |S| data matrix X| S , whose ith row is the vector x| S = (x i j ) j∈S , representing a single observation from X| S .</ns0:p><ns0:p>As a function of this observed data, along with the vector of observed labels y = (y i ) i∈ <ns0:ref type='bibr'>[n]</ns0:ref> , the empirical characteristic function Ĉy produces an estimate of C Y that, with (1), gives the estimate φ v ( Ĉy ), which we refer to as the Attributed Dependence on Labels (ADL) for feature v.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1.1'>Recognising dependence: Example 1</ns0:head><ns0:p>For example, the empirical R 2 characteristic function Ry is given by</ns0:p><ns0:formula xml:id='formula_5'>Ry (S) = 1 − |ρ(y, X| S )| |ρ(X| S )| , (<ns0:label>4</ns0:label></ns0:formula><ns0:formula xml:id='formula_6'>)</ns0:formula><ns0:p>where ρ is the empirical Pearson correlation matrix. <ns0:ref type='formula' target='#formula_7'>5</ns0:ref>), cross section with 100 least squares lines of best fit, each produced from a random sample of size 1000, from the simulated population of size 10,000. The estimate of R 2 is 0.0043, with 95% bootstrap confidence interval (0.001, 0.013) over 100 fits. The R 2 is close to 0 despite the presence of strong (non-linear) dependence.</ns0:p><ns0:p>Regardless of whether we use a population measure or an estimate, the R 2 measures only the linear relationship between the response (i.e., labels) Y and features X. This implies the R 2 may perform poorly as a measure of dependence in the presence of non-linearity. The following example from a non-linear DGP demonstrates this point.</ns0:p><ns0:p>Suppose the features X j , j ∈ [d] are independently uniformly distributed on [−1, 1]. Given a diagonal matrix A = diag(a 1 , . . . , a d ), let the response variable Y be determined by the quadratic form</ns0:p><ns0:formula xml:id='formula_7'>Y = X T AX = a 1 X 2 1 + . . . + a d X 2 d .<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>Then, the covariance Cov(Y, X j ) = 0 for all j ∈</ns0:p><ns0:formula xml:id='formula_8'>[d]. This is because Cov(X T AX, X j ) = d ∑ j=1</ns0:formula><ns0:p>Cov(X 2 j , X j ) = 0, since E[X j ] = 0 and E[X 3 j ] = 0. In Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>, we display the X 4 cross section of 10,000 observations generated from (5) with d = 5 and A = diag(0, 2, 4, 6, 8), along with the least squares line of best fit and associated R 2 value. We visualize the results for the corresponding Shapley decomposition in Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>.</ns0:p><ns0:p>As expected, we see that the R 2 is not able to capture the non-linear dependence structure of (5), and thus neither is its Shapley decomposition.</ns0:p><ns0:p>We note that improvements on the results in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref> can be obtained by choosing a suitable linearising transformation of the features or response prior to calculating R 2 , but such a transformation is not known to be discernible from data in general, except in the simplest cases.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>Measures of non-linear dependence</ns0:head><ns0:p>In the following, we describe three measures of non-linear dependence that, when used as a characteristic function C, have the following properties.</ns0:p><ns0:p>• Independence is detectable (in theory), i.e., if C(S) = 0, then the variables Y and X| S are independent.</ns0:p><ns0:p>Equivalently, dependence is visible, i.e., if Y and X| S are dependent, then C(S) = 0. Note that this property does not guarantee that dependence is visible in any single attribution by the Shapley value to one feature, since these characteristic functions may decrease in |S|. It does, however, guarantee that dependence is visible in the sum of <ns0:ref type='bibr'>Shapley values, C([d]</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>4/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53755:1:1:NEW 16 Mar 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science • C is model-independent. Thus, no assumptions are made about the DGP and no associated feature engineering or transformation of X or Y is necessary.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2.1'>Distance correlation and affine invariant distance correlation</ns0:head><ns0:p>The distance correlation, and its affine invariant adaptation, were both introduced by <ns0:ref type='bibr' target='#b31'>Székely et al. (2007)</ns0:ref>.</ns0:p><ns0:p>Unlike the Pearson correlation, the distance correlation between Y and X is zero if and only if Y and X are statistically independent. However, the distance correlation is equal to 1 only if the dimensions of the linear spaces spanned by Y and X are equal, almost surely, and Y is a linear function of X.</ns0:p><ns0:p>First, the population distance covariance between the response Y and feature vector X is defined as a weighted L 2 norm of the difference between the joint characteristic function 1 , f Y X and the product of marginal characteristic functions f Y f X . In essence, this is a measure of squared deviation from the assumption of independence, i.e., the hypothesis that</ns0:p><ns0:formula xml:id='formula_9'>f Y X = f Y f X .</ns0:formula><ns0:p>The empirical distance covariance V 2 n is based on Euclidean distances between sample elements, and can be computed from data matrices Y, X as</ns0:p><ns0:formula xml:id='formula_10'>V 2 (Y, X) = n ∑ i, j=1 A(Y) i j A(X) i j ,<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>where the matrix function A(W) for W ∈ {Y, X} is given by</ns0:p><ns0:formula xml:id='formula_11'>A(W) i j = B(W) i j − 1 n n ∑ i=1 B(W) i j − 1 n n ∑ j=1 B(W) i j + 1 n 2 n ∑ i, j=1 B(W) i j ,</ns0:formula><ns0:p>where || • || denotes the Euclidean norm, and</ns0:p><ns0:formula xml:id='formula_12'>B(W) is the n × n distance matrix with B(W) i j = ||w i − w j ||,</ns0:formula><ns0:p>where w i denotes the ith observation (row) of W. Here, Y is in general a matrix of observations, with 1 In this context, we refer to the characteristic function of a probability distribution. We would like to make the reader aware that this is a different use of the term 'characteristic function' than that used to describe a cooperative game in the context of Shapley values, as in (1).</ns0:p></ns0:div>
<ns0:div><ns0:p>Notice the difference between Y and y, where the latter is the (single column) label vector introduced in Section 2.1.</ns0:p><ns0:p>The empirical distance correlation R̂ is given by</ns0:p><ns0:formula xml:id='formula_13'>R̂ 2 (Y, X) = V̂ 2 (Y, X) / √( V̂ 2 (Y, Y) V̂ 2 (X, X) ) ,<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>for V̂ 2 (Y, Y) V̂ 2 (X, X) > 0, and R̂(Y, X) = 0 otherwise. For our purposes, we define the distance correlation characteristic function estimator</ns0:p><ns0:formula xml:id='formula_14'>D̂ y (S) = R̂ 2 (y, X| S ) .<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>A transformation of the form x → Ax + b for a matrix A and vector b is called affine. Affine invariance of the distance correlation is desirable, particularly in the context of hypothesis testing, since statistical independence is preserved under the group of affine transformations. When Y and X are first scaled as Y ′ = Y S Y −1/2 and X ′ = X S X −1/2 , where S Y and S X denote the respective sample covariance matrices, the distance correlation V̂ (Y ′ , X ′ ) becomes invariant under any affine transformation of Y and X <ns0:ref type='bibr'>(Székely et al., 2007, Section 3.2)</ns0:ref>. Thus, the empirical affine invariant distance correlation is defined by</ns0:p><ns0:formula xml:id='formula_16'>R̂ ′ (Y, X) = R̂(Y S Y −1/2 , X S X −1/2 ) ,<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>and we define the associated characteristic function estimator D̂ ′ y in the same manner as (8). Monte Carlo studies regarding the properties of these measures are given by <ns0:ref type='bibr' target='#b31'>Székely et al. (2007)</ns0:ref>.</ns0:p></ns0:div>
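A minimal numpy sketch of (6)-(8) follows (an assumed implementation, written for clarity rather than efficiency; it materialises full n × n distance matrices, so it is only practical for moderate n). The affine invariant variant (9) is obtained by first whitening Y and X with their sample covariance matrices before calling dcor2:

```python
import numpy as np

def _centred_dist(W):
    """Double-centred Euclidean distance matrix A(W) of a data matrix W."""
    W = np.atleast_2d(W.T).T                      # accept 1-D vectors as (n, 1)
    B = np.linalg.norm(W[:, None, :] - W[None, :, :], axis=2)
    return B - B.mean(axis=0) - B.mean(axis=1)[:, None] + B.mean()

def dcov2(Y, X):
    """Empirical distance covariance V^2(Y, X); the 1/n^2 factor cancels in (7)."""
    return (_centred_dist(Y) * _centred_dist(X)).mean()

def dcor2(Y, X):
    """Empirical squared distance correlation, as in (7)."""
    denom = dcov2(Y, Y) * dcov2(X, X)
    return dcov2(Y, X) / np.sqrt(denom) if denom > 0 else 0.0

def D_y(y, X, S):
    """Distance correlation characteristic function estimator, as in (8)."""
    return dcor2(y, X[:, list(S)]) if len(S) else 0.0
```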
<ns0:div><ns0:head n='2.2.2'>Hilbert-Schmidt independence criterion</ns0:head><ns0:p>The Hilbert-Schmidt Independence Criterion (HSIC) is a kernel-based independence criterion, first introduced by <ns0:ref type='bibr' target='#b6'>Gretton et al. (2005a)</ns0:ref>. Kernel-based independence detection methods have been adopted in a wide range of areas, such as independent component analysis <ns0:ref type='bibr' target='#b7'>(Gretton et al., 2007)</ns0:ref>. The link between energy distance-based measures, such as the distance correlation, and kernel-based measures, such as the HSIC, was established by <ns0:ref type='bibr' target='#b26'>Sejdinovic et al. (2013)</ns0:ref>. There, it is shown that the HSIC is a certain formal extension of the distance correlation.</ns0:p><ns0:p>The HSIC makes use of the cross-covariance operator, C Y X , between random vectors Y and X, which generalises the notion of a covariance. The response Y and feature vector X are each mapped to functions in a Reproducing Kernel Hilbert Space (RKHS), and the HSIC is defined as the Hilbert-Schmidt (HS) norm ||C Y X || 2 HS of the cross-covariance operator between these two spaces <ns0:ref type='bibr' target='#b8'>(Gretton et al., 2005a,b, 2007)</ns0:ref>. Given two kernels ℓ, k, associated to the RKHS of Y and X, respectively, and their empirical evaluation matrices L, K with row i and column j elements ℓ i j = ℓ(y i , y j ) and k i j = k(x i , x j ), where y i , x i denote the ith observation (row) in data matrices Y and X, respectively, the empirical HSIC can be calculated as</ns0:p><ns0:formula xml:id='formula_17'>HSIC(Y, X) = (1/n 2 ) ∑ n i, j k i j ℓ i j + (1/n 4 ) ∑ n i, j,q,r k i j ℓ qr − (2/n 3 ) ∑ n i, j,q k i j ℓ iq .<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>As in Section 2.2.1, notice the difference between Y and y, where the latter is the (single column) label vector introduced in Section 2.1. Intuitively, this approach endows the cross-covariance operator with the ability to detect non-linear dependence, and the HS norm measures the combined magnitude of the resulting dependence. For a thorough discussion of positive definite kernels, with a machine learning emphasis, see the work of <ns0:ref type='bibr' target='#b9'>Hein and Bousquet (2004)</ns0:ref>.</ns0:p><ns0:p>Calculating the HSIC requires selecting a kernel. The Gaussian kernel is a popular choice that has been subjected to extensive testing in comparison to other kernel methods (see, e.g., <ns0:ref type='bibr' target='#b6'>Gretton et al., 2005a)</ns0:ref>. For our purposes, we define the empirical HSIC characteristic function by</ns0:p><ns0:formula xml:id='formula_18'>Ĥ y (S) = HSIC(y, X| S ) ,<ns0:label>(11)</ns0:label></ns0:formula><ns0:p>and use a Gaussian kernel. Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref> shows the Shapley decomposition of Ĥ amongst the features generated from (5), again with d = 5 and A = diag(0, 2, 4, 6, 8). The decomposition has been normalised for comparability with the other measures of dependence presented in the figure. The HSIC can also be generalised to provide a measure of mutual dependence between any finite number of random vectors <ns0:ref type='bibr' target='#b22'>(Pfister et al., 2016)</ns0:ref>.</ns0:p></ns0:div>
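A corresponding sketch of the biased empirical HSIC (10) with Gaussian kernels is given below. The bandwidth choice (the median heuristic) is our assumption, as the text only specifies that a Gaussian kernel is used:

```python
import numpy as np

def _gaussian_gram(W):
    """Gaussian kernel Gram matrix of a data matrix W (median-heuristic bandwidth)."""
    W = np.atleast_2d(W.T).T
    sq = ((W[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)  # squared distances
    pos = sq[sq > 0]
    sigma2 = np.median(pos) if pos.size else 1.0             # assumed bandwidth rule
    return np.exp(-sq / (2.0 * sigma2))

def hsic(Y, X):
    """Biased empirical HSIC; trace(K H L H) / n^2 equals the three sums in (10)."""
    K, L = _gaussian_gram(X), _gaussian_gram(Y)
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n                      # centring matrix
    return np.trace(K @ H @ L @ H) / n**2

def H_y(y, X, S):
    """HSIC characteristic function estimator, as in (11)."""
    return hsic(y, X[:, list(S)]) if len(S) else 0.0
```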
<ns0:div><ns0:head n='3'>EXPLORATION</ns0:head><ns0:p>In machine learning problems, complete formal descriptions of the DGP are often impractical. However, there are advantages to gaining some understanding of the dependence structure. In particular, such an understanding is useful when inference about the data generating process is desired, such as in the contexts of causal inference, scientific inquiries (in general), or in qualitative investigations (cf. <ns0:ref type='bibr' target='#b19'>Navarro, 2018)</ns0:ref>. In a regression or classification setting, the dependence structure between the features and response is an immediate point of focus. As we demonstrate in Section 3.0.1, the dependence structure cannot always be effectively probed by computing measures of dependence between labels and feature subsets, even when the number of marginal contributions is relatively small. In such cases, the Shapley value may not only allow us to summarise the interactions from many marginal contributions, but also to fairly distribute strength of dependence to the features.</ns0:p><ns0:p>Attributed dependence on labels (ADL) can be used for exploration in the absence of, or prior to, a choice of model, but ADL can also be used in conjunction with a model, for example, to support, and even validate, model explanations. Even when a machine learning model is not parsimonious enough to be considered explainable, stakeholders in high-risk settings may depend on the general statement that 'feature X i is important for determining Y '. However, it is not always clear, in practice, whether such a statement about feature importance is being used to describe a property of the model, or a property of the DGP. In the following example, we demonstrate that ADL can be used to make statements about the DGP and to help qualify statements about a model.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.0.1'>Recognising dependence: Example 2</ns0:head><ns0:p>Consider a DGP involving the XOR function of two binary random variables X 1 , X 2 , with distributions given by P(X 1 = 1) = P(X 2 = 1) = 1/2. The response is given by</ns0:p><ns0:formula xml:id='formula_19'>Y = XOR(X 1 , X 2 ) = X 1 (1 − X 2 ) + X 2 (1 − X 1 ) .<ns0:label>(12)</ns0:label></ns0:formula><ns0:p>Notice that P(Y = i|X k = j) = P(Y = i), for all i, j ∈ {0, 1} and k ∈ {1, 2}. Thus, in this example, Y is completely statistically independent of each individual feature. However, since Y is determined entirely in terms of (X 1 , X 2 ), it is clear that Y is statistically dependent on the pair. Thus, the features individually appear to have little impact on the response, yet together they have a strong impact when their mutual influence is considered.</ns0:p><ns0:p>Faced with a sample from (Y, X 1 , X 2 ), when the DGP is unknown, a typical exploratory practice is to take a sample correlation matrix to estimate Cor(Y, X), producing all pairwise sample correlations as estimates of Cor(Y, X i ), for i ∈ [d]. A similar approach, in the presence of suspected non-linearity, is to produce all pairwise distance correlations, or all pairwise HSIC values, rather than all pairwise correlations. Both of the above approaches are model-independent. For comparison, consider a pairwise model-dependent approach: fitting individual single-feature models M i , for i ∈ [d], that each predict Y as a function of one feature X i , and reporting a measure of model performance for each of the d models, standardised by the result of a null feature model, that is, a model with no features (one that may, for example, guess labels completely at random, or may use empirical moments of the response distribution to inform its guesses, ignoring X entirely).</ns0:p></ns0:div>
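The independence pattern in (12) is easy to check numerically. The sketch below is our illustration; it assumes numpy and reuses the dcor2 helper from the distance correlation sketch in Section 2.2.1 (a smaller sample than in Table 1 is used, since the pairwise distance matrices require O(n 2 ) memory):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000
X = rng.integers(0, 2, size=(n, 2)).astype(float)
Y = X[:, 0] * (1 - X[:, 1]) + X[:, 1] * (1 - X[:, 0])  # Y = XOR(X1, X2), as in (12)

print(dcor2(Y, X[:, [0]]))  # ~0: Y is independent of X1 alone
print(dcor2(Y, X[:, [1]]))  # ~0: Y is independent of X2 alone
print(dcor2(Y, X))          # clearly positive: Y depends on the pair (X1, X2)
```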
<ns0:div><ns0:p>Table <ns0:ref type='table'>1</ns0:ref>. Importances of features X 1 and X 2 assigned by various methods, using a sample size of 10,000 from DGP (12). For pairwise XGBoost, we take the difference in mean squared prediction error between each XGBoost model and the null model (which always guesses 1). Pairwise dependence includes pairwise DC, HSIC, AIDC and Pearson correlation, which all give the same result of 0, due to statistical independence.</ns0:p><ns0:p>Although the XGBoost classifier easily achieves a perfect classification accuracy on a validation set, the associated XGBoost gain for X 1 is Gain(X 1 ) ≈ 0, while Gain(X 2 ) ≈ 1, or vice versa. In other words, the full weight of the XGBoost feature importance under XOR is given to either one or the other feature.</ns0:p></ns0:div>
<ns0:div><ns0:p>This is intuitively misleading, as both features are equally important in determining XOR, and any single one of the two features is alone not sufficient to achieve a classification accuracy greater than random guessing. In practice, ADL can help identify such flaws in other model explanation methods.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>DIAGNOSTICS</ns0:head><ns0:p>In the following diagnostics sections, we present results using the distance correlation. However, similar results can also be obtained using the HSIC and the AIDC.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Model attributed dependence</ns0:head><ns0:p>Given a fitted model f , with associated predictions Ŷ = f (X), we seek to attribute shortcomings of the fitted model to individual features. We can do this by calculating the Shapley decomposition of the estimated strength of dependence between the model residuals ε = Y − Ŷ , and the features X. In other words, feature v receives the attribution φ v (C ε ), estimated by φ v ( Ĉ e ), where e = y − ŷ. We refer to this as the Attributed Dependence on Residuals (ADR) for feature v.</ns0:p><ns0:p>A different technique, for diagnosing model misspecification, is to calculate the Shapley decomposition of the estimated strength of dependence between Ŷ and X, so that each feature v receives attribution φ v ( Ĉ ŷ ). We call this the Attributed Dependence on Predictions (ADP), for feature v. This picture of the model generated dependence structure may then be compared, for example, to the observed dependence structure given the ADL {φ v ( Ĉ y ) : v ∈ [d]}. The diagnostic goal, then, may be to check that, for all v,</ns0:p><ns0:formula xml:id='formula_20'>|φ v ( Ĉ ŷ ) − φ v ( Ĉ y )| < δ ,<ns0:label>(13)</ns0:label></ns0:formula><ns0:p>for some tolerance δ. In other words, a diagnostic strategy making use of ADP is to compare estimates of feature importance under the model's representation of the joint distribution, to estimates of feature importance under the empirical joint distribution, and thus to individually inspect each feature for an apparent change in predictive relevance.</ns0:p><ns0:p>We note that these techniques, ADP and ADR, are agnostic to the chosen model. All that is needed is the model outputs and the corresponding model inputs; the inner workings of the model are irrelevant for attributing dependence on predictions and residuals to individual features in this way.</ns0:p></ns0:div>
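Computationally, ADL, ADP and ADR share one exact Shapley solver and differ only in the target vector handed to the characteristic function. The following naive sketch (our own illustration, exponential in d and therefore suitable only for small feature sets) makes this explicit, reusing the estimator D̂ y of (8):

```python
from itertools import combinations
from math import factorial

def shapley(d, C):
    """Exact Shapley values phi_1..phi_d of a set function C over subsets of [d]."""
    phi = [0.0] * d
    for v in range(d):
        others = [u for u in range(d) if u != v]
        for k in range(d):
            w = factorial(k) * factorial(d - k - 1) / factorial(d)
            for S in combinations(others, k):
                phi[v] += w * (C(set(S) | {v}) - C(set(S)))
    return phi

# With the distance correlation estimator D_y of (8) as characteristic function:
# ADL = shapley(d, lambda S: D_y(y,         X, sorted(S)))  # dependence on labels
# ADP = shapley(d, lambda S: D_y(y_hat,     X, sorted(S)))  # ... on predictions
# ADR = shapley(d, lambda S: D_y(y - y_hat, X, sorted(S)))  # ... on residuals
```

Swapping D_y for H_y (or the affine invariant variant) changes only the measure of dependence, not the attribution machinery.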
<ns0:div><ns0:head n='4.1.1'>Demonstration with concept drift</ns0:head><ns0:p>We illustrate the ADR and ADL techniques together with a simple and intuitive synthetic demonstration involving concept drift, where the DGP changes over time, impacting the mean squared prediction error (MSE) of a deployed XGBoost model. The model is originally trained with the assumption that the DGP is static, and the performance of the model is monitored over time with the intention of detecting violations of this assumption, as well as attributing any such violation to one or more features. A subset of the deployed features can be selected for scrutiny, by considering removal only of those selected features from the model. To highlight this, our simulated DGP has 50 features, and we perform diagnostics on 4 out of those 50 features.</ns0:p><ns0:p>For comparison, we compute SAGE values of the model mean squared error <ns0:ref type='bibr' target='#b2'>(Covert et al., 2020)</ns0:ref> and we compute the mean SHAP values of the logarithm of the model loss function <ns0:ref type='bibr' target='#b13'>(Lundberg et al., 2020)</ns0:ref>. We will refer to the latter as SHAPloss. For SAGE and SHAPloss values, we employ a DGP similar to (<ns0:ref type='formula' target='#formula_21'>14</ns0:ref>), with sample size 1000, but with X i ≡ 0 for all i > 4. These features were nullified for tractability of the SAGE computation, since, unlike for Sunnies, the authors are not aware of any established method for selectively computing SAGE values of a subset of the full feature set. SAGE and SHAPloss were chosen for their popularity and ability to provide global feature importance scores.</ns0:p><ns0:p>At the initial time t = 0, we define the DGP as a function of temporal increments t ∈ N ∪ {0},</ns0:p><ns0:formula xml:id='formula_21'>Y = X 1 + X 2 + (1 + t/10) X 3 + (1 − t/10) X 4 + ∑ 50 i=5 X i ,<ns0:label>(14)</ns0:label></ns0:formula><ns0:p>where X i ∼ N(0, 4), for i = 1, 2, 3, 4, and X i ∼ N(0, 0.05), for 5 ≤ i ≤ 50. Features 1 through 4 are the most effectual to begin with, and we can imagine that these were flagged as important during model development, justifying the additional diagnostic attention they enjoy after deployment. We see from (<ns0:ref type='formula' target='#formula_21'>14</ns0:ref>) that, after deployment, i.e., during periods 1 ≤ t ≤ 10, the effect of X 4 decreases linearly to 0, while the effect of X 3 increases proportionately over time. In what follows, these changes are clearly captured by the residual and response dependence attributions of those features, using the DC characteristic function.</ns0:p><ns0:p>The results, with a sample size of n = 1000, from the DGP in (<ns0:ref type='formula' target='#formula_21'>14</ns0:ref>), are presented in Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>. According to the ADL (top), X 4 shows early signs of significantly reduced importance φ 4 ( Ĉ y ), as X 3 shows an increase in importance φ 3 ( Ĉ y ), which is roughly symmetrical to the decrease in φ 4 ( Ĉ y ). The ADR (bottom) shows early significant signs that X 3 is disproportionately affecting the residuals, with high φ 3 ( Ĉ e ). The increase in residual attribution φ 4 ( Ĉ e ) is also evident, though the observation φ 4 ( Ĉ e ) < φ 3 ( Ĉ e ) suggests that the drift impact from X 3 is the larger of the two.</ns0:p><ns0:p>The resulting SAGE and mean SHAPloss values are presented in Figure <ns0:ref type='figure' target='#fig_4'>4</ns0:ref>.
Interestingly, the behaviours of SHAPloss and SAGE are (up to scale and translation) analogous to the behaviour of ADL, rather than ADR, despite the model-independence of ADL. A reason for this, in this example, is that the feature with higher (resp. lower) dependence on Y contributes less (resp. more) to the residuals.</ns0:p></ns0:div>
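For reproducibility of the setup (a sketch under our own assumptions, not the authors' exact code), the drifting DGP (14) can be simulated as below; numpy is assumed, and we read N(0, 4) as variance 4 (standard deviation 2):

```python
import numpy as np

def simulate_drift(t, n=1000, seed=2):
    """One sample of size n from the DGP (14) at time step t (0 <= t <= 10)."""
    rng = np.random.default_rng(seed + t)
    X = np.column_stack([
        rng.normal(0.0, 2.0, size=(n, 4)),              # X1..X4: variance 4
        rng.normal(0.0, np.sqrt(0.05), size=(n, 46)),   # X5..X50: variance 0.05
    ])
    beta = np.ones(50)
    beta[2] = 1.0 + t / 10.0   # coefficient of X3 grows ...
    beta[3] = 1.0 - t / 10.0   # ... while that of X4 decays to 0
    return X @ beta, X
```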
<ns0:div><ns0:head n='4.1.2'>Demonstration with misspecified model</ns0:head><ns0:p>To illustrate the ADL, ADP and ADR techniques, we demonstrate a case where the model is misspecified on the training set, due to model bias. The inadequacy of this misspecified model is then detected on the validation set. Unlike the example given in Section 4.1.1, the DGP is unchanging between the two data sets.</ns0:p><ns0:p>The key technique used in this demonstration is the comparison of differences between ADL (calculated in the absence of any model) and ADP (calculated using the output of a fitted model), in order to identify any differences in the attributions between dependence on labels and the dependence on the predictions produced by the misspecified model. Such a comparison, between model absence and model outputs, is not possible using purely model-dependent Shapley values.</ns0:p><ns0:p>To make this example intuitive, we avoid using a complex model such as XGBoost, in favour of a linear regression model. Since the simulated DGP is also linear, this example allows a simple comparison between the correct model and the misspecified model. The DGP in this example is</ns0:p><ns0:formula xml:id='formula_22'>Y = X 1 + X 2 + 5 X 3 X 4 X 5 + ε ,<ns0:label>(15)</ns0:label></ns0:formula><ns0:p>where X 1 , X 2 , X 3 ∼ N(0, 1) are continuous, ε ∼ N(0, 0.1) is a small random error, and X 4 , X 5 ∼ Bernoulli(1/2) are binary. Hence, we can make the interpretation that the effect of X 3 is modulated by X 4 and X 5 , such that X 3 is effective only if X 4 = X 5 = 1. For this demonstration, we fit a misspecified linear model</ns0:p><ns0:formula xml:id='formula_23'>EY = β 0 + β X ,</ns0:formula><ns0:p>where X T = (X i ) d i=1 is the vector of features, and β 0 , β = (β i ) d i=1 are real coefficients. This is a simple case where the true DGP is unknown to the analyst, who therefore seeks to summarise the 240 marginal contributions from 5 features into 5 Shapley values.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref> shows the outputs for attributed dependence on labels, residuals and predictions, via ordinary least squares estimation. From these results, we make the following observations:</ns0:p><ns0:p>(i) For X 3 the ADP is significantly higher than the ADL.</ns0:p><ns0:p>(ii) For X 4 and X 5 the ADP is significantly lower than the ADL.</ns0:p><ns0:p>(iii) For X 1 , X 2 there is no significant difference between ADP and ADL.</ns0:p><ns0:p>(iv) For X 1 , X 2 ADR is negative, while X 3 , X 4 , X 5 have positive ADR.</ns0:p><ns0:p>Observations (i) and (ii) suggest that the model EY = β 0 + β X overestimates the importance of X 3 and underestimates the importance of X 4 and X 5 .</ns0:p>
<ns0:p>Observations (iii) and (iv) suggest that the model may adequately represent X 1 and X 2 , but that X 3 , X 4 and X 5 are significantly more important for determining structure in the residuals than X 1 and X 2 . A residuals versus fits plot may be useful for confirming that this structure is present and of large enough magnitude to be considered relevant.</ns0:p><ns0:p>Having observed the result in Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref>, for the misspecified linear model EY = β 0 + β X, we now fit the correct model: EY = β 0 + β X + X 3 X 4 X 5 , which includes the three-way interaction effect X 3 X 4 X 5 . The results, shown in Figure <ns0:ref type='figure' target='#fig_7'>6</ns0:ref>, show no significant difference between the ADL and ADP for any of the features, and no significant difference in ADR between the features.</ns0:p></ns0:div>
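A sketch of the DGP (15) and of the two least squares fits compared in this demonstration (numpy assumed; ε is read as having variance 0.1; the correct model simply appends the X 3 X 4 X 5 interaction column):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000
Xc = rng.normal(size=(n, 3))                         # X1, X2, X3 ~ N(0, 1)
Xb = rng.integers(0, 2, size=(n, 2)).astype(float)   # X4, X5 ~ Bernoulli(1/2)
X = np.column_stack([Xc, Xb])
y = X[:, 0] + X[:, 1] + 5.0 * X[:, 2] * X[:, 3] * X[:, 4] \
    + rng.normal(0.0, np.sqrt(0.1), size=n)          # DGP (15)

ones = np.ones((n, 1))
design_miss = np.column_stack([ones, X])                               # EY = b0 + bX
design_full = np.column_stack([ones, X, X[:, 2] * X[:, 3] * X[:, 4]])  # + interaction
for D in (design_miss, design_full):
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    print(np.round(beta, 2))
```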
<ns0:div><ns0:head n='5'>APPLICATION TO DETECTING GENDER BIAS</ns0:head><ns0:p>We analyse a mortality data set produced by the U.S. Centers for Disease Control (CDC) via the National Health and Nutrition Examination Survey (NHANES I) and the NHANES I Epidemiologic Follow-up Study (NHEFS) <ns0:ref type='bibr' target='#b3'>(Cox, 1998)</ns0:ref>. The data set consists of 79 features from medical examinations of 14,407 individuals, aged between 25 and 75 years, followed between 1971 and 1992. Amongst these people, 4,785 deaths were recorded before 1992. A version of this data set was also recently made available in the SHAP package <ns0:ref type='bibr' target='#b15'>(Lundberg and Lee, 2017b)</ns0:ref>. The same data were recently analysed in Lundberg et al. (Section 2.7, 2020) (see also https://github.com/suinleelab/treeexplainer-study/tree/master/notebooks/mortality).</ns0:p><ns0:p>We use a Cox proportional hazards objective function in XGBoost, with learning rate (eta) 0.002, maximum tree depth 3, subsampling ratio 0.5, and 5000 trees. Our training set contained 3,370 observations. The 16 features used for model fitting are listed in Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>. Of these features, we focus on the Shapley values for a subset of well-established risk factors for mortality: age, physical activity, systolic blood pressure, cholesterol and BMI. Note that the results presented here are purely intended as a proof of concept: the results have not been investigated in a controlled study and none of the authors are experts in medicine. We do not intend for our results to be treated as a work of medical literature.</ns0:p></ns0:div>
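For concreteness, the stated configuration corresponds to something like the following sketch in the Python xgboost API (the 'survival:cox' objective, and the library's convention of encoding right-censoring by negating survival times, are features of xgboost itself; the data-preparation names below are hypothetical):

```python
import xgboost as xgb

params = {"objective": "survival:cox",  # Cox proportional hazards objective
          "eta": 0.002, "max_depth": 3, "subsample": 0.5}
# dtrain = xgb.DMatrix(X_train, label=times_signed)  # hypothetical preprocessed data,
#                                                    # negative times = censored
# booster = xgb.train(params, dtrain, num_boost_round=5000)  # 5000 trees
```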
<ns0:div><ns0:p>We decompose dependence on the labels, model predictions and residuals, amongst the three features: age, systolic blood pressure (SBP) and physical activity (PA), displaying the resulting ADL, ADP and ADR for each of the three test data sets in Figure <ns0:ref type='figure'>7</ns0:ref> (using the DC characteristic function). From this analysis we make the following observations.</ns0:p><ns0:p>(i) Age has a significantly higher attributed dependence on residuals compared with each of the other features, across all three test sets. This suggests that age may play an important role in the structure of the model's residuals. This observation is supported by the dumbbells for age, which suggest a significant and sizeable difference between attributed dependence on predictions and attributed dependence on labels; that is, we have evidence that the model's predictions show a greater attributed dependence on age than the labels do.</ns0:p><ns0:p>(ii) For SBP, we observe no significant difference between ADL and ADP for the balanced and all male test sets. However, in the all female test set, we do see a significant and moderately sized reduction in the attributed dependence on SBP for the model's predictions compared with that of labels. This suggests that the model may represent the relationship between SBP and log relative risk of mortality less effectively on the all female test set than on the other two test sets. This observation is supported by the attributed dependence on residuals for SBP, which is significantly higher in the all female test set compared to the other two sets.</ns0:p><ns0:p>(iii) For PA, we see a low attributed dependence on residuals, and a non-significant difference between ADL and ADP, for all three test sets. Thus we do not have any reason, from this investigation, to suspect that the effect of physical activity is being poorly represented by the model.</ns0:p><ns0:p>The results regarding potential heterogeneity due to gender and systolic blood pressure are not surprising given that we expect, a priori, there to be a relationship between systolic blood pressure and risk of mortality <ns0:ref type='bibr' target='#b23'>(Port et al., 2000a)</ns0:ref>, and that studies also indicate this relationship to be non-linear <ns0:ref type='bibr' target='#b1'>(Boutitie et al., 2002)</ns0:ref>, as well as dependent on age and gender <ns0:ref type='bibr' target='#b24'>(Port et al., 2000b)</ns0:ref>. Furthermore, the mortality risk also depends on age and gender, independently of blood pressure <ns0:ref type='bibr' target='#b24'>(Port et al., 2000b)</ns0:ref>. We also expect physical activity to be important in predicting mortality risk <ns0:ref type='bibr' target='#b18'>(Mok et al., 2019)</ns0:ref>. Figure <ns0:ref type='figure'>7</ns0:ref>. (a) Shapley decomposition of attributed dependence on labels (ADL, pink) and predictions (ADP, blue), and (b) residuals (ADR, orange) for the three features age, physical activity (PA) and systolic blood pressure (SBP), on three different test data sets consisting of an equal proportion of females and males ('balanced'), only males ('all male') and only females ('all female'). Attributions were based on the DC characteristic function.</ns0:p></ns0:div>
<ns0:div><ns0:p>We provide two demonstrations of the diagnostic methods: in Section 4.1.1, we use a data generating process which changes over time, and where the deployed model was trained at one initial point in time. Here, Sunnies successfully uncovers changes in the dependence structures of interest, and attributes them to the correct features, early in the dynamic process. The second demonstration, in Section 4.1.2, shows how we use the attributed dependence on labels, model predictions and residuals, to detect which features' dependencies or interactions are not being correctly captured by the model. Implicit in these demonstrations is the notion that the information from many marginal contributions is being summarised into a human digestible number of quantities. For example, in Section 4.1.2, the 240 marginal contributions from 5 features are summarised as 5 Shapley values, in each of ADL, ADP and ADR, facilitating the simple graphical comparison in Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref>.</ns0:p><ns0:p>There is a practical difference between model-independent and model-dependent methods, highlighted in Section 4.1.2, when comparing the dependence structure in a data set, to the dependence structure captured by a model. Model-independent methods can be applied to model predictions and residuals, and can also be applied to the data labels themselves. Thus, techniques using model-independent Shapley values will be markedly different from model-dependent methods in both design and interpretation. Indeed, consider that there is a different interpretation between (a) the decomposition of a measure of statistical dependence, e.g., as a measure of distance between the joint distribution functions, with and without the independence assumption, and (b) the attribution of a measure of the functional dependence of a model on the value of its inputs.</ns0:p><ns0:p>While the DC does provide a population level (asymptotic) guarantee that dependence will be detected, it must be noted that, as discussed in Section 2.2.1, the DC tends to be greater for a linear association than for a non-linear association. These are not strengths, or weaknesses, of using a measure of non-linear statistical dependence as the Shapley value characteristic function (i.e., the method we call Sunnies) but rather of the particular choice of characteristic function in this method. Work is needed to investigate other measures of statistical dependence in place of DC, HSIC or AIDC, and to provide a comparison between these methods, including a detailed analysis of strengths, limitations and computational efficiency. In this paper, we have not focused on such a detailed experimental evaluation and comparison, but on the exposition of the Sunnies method itself.</ns0:p><ns0:p>Finally, in Section 5, we apply Sunnies to a study on mortality data, with the aim of detecting effects caused by gender differences. We find that, when the model is trained on a gender balanced data set, a significant difference is detected between the model's representation of the dependence structure via its predictions (ADP) and the dependence structure on the labels (ADL); a difference which is significant for females and not for males, even though the training data was gender balanced. Although we do not claim that our result is causal, it does provide evidence regarding the potential of Sunnies to uncover and attribute discrepancies that may otherwise go unnoticed, in real data.</ns0:p></ns0:div>
<ns0:div><ns0:p>A well-known limitation when working with Shapley values is their exponential computational time complexity. Ideally, in Section 5, we would have calculated Shapley values of all 17 features. However, it is important to note that we do not need to calculate Shapley values of all features if there is prior knowledge available regarding interesting or important features, or if features can be partitioned into independent blocks. To illustrate the idea of taking advantage of independent blocks, suppose we have a model with 15 features. If we know in advance that these features partition into 3 independent blocks of 5 features, then we can decompose the pairwise dependence of each block into 5 Shapley values. In this way, 15 Shapley values are computed from 240 within-block marginal contributions, rather than the full number of 32,768 marginal contributions.</ns0:p><ns0:p>Finally, note that we have made the distinction that Shapley feature importance methods may or may not be model-dependent, but this distinction holds for model explanation methods in general. We believe that complete and satisfactory model explanations should ideally include a description from both categories.</ns0:p><ns0:p>All code and data necessary to produce the results in this manuscript are available on github.com/ex2o/sunnies.</ns0:p></ns0:div>
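The counting claim above can be verified directly, since a Shapley value over d features sums d · 2^(d−1) marginal contributions:

```python
d_block, n_blocks, d_full = 5, 3, 15
within = n_blocks * d_block * 2 ** (d_block - 1)   # 3 blocks x 5 features x 2^4
print(within)        # 240 within-block marginal contributions
print(2 ** d_full)   # 32768 coalitions for the unpartitioned 15-feature game
```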
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Legend for Figure 2: Hilbert-Schmidt indep. cr., Distance correlation, Affine invariant dist. corr.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Shapley decompositions using the four measures of dependence described in Section 2.2, normalised for comparability, with sample size 1000 over 1000 iterations.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Attributed dependence on (a) labels (ADL) and (b) residuals (ADR), using the DC characteristic function; results for times t ∈ {0, . . . , 10}, from a simulation with sample size 1000 from the DGP (14). The bootstrap confidence bands are the 95% middle quantiles (Q 0.975 − Q 0.025 ) from 100 subsamples of size 1000. The ADL of features X 3 and X 4 appears to increase / decay, respectively, over time, leading to significantly different ADL compared to the other features. We also see that X 3 and X 4 have significantly higher ADR than the other features.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. SAGE and SHAPloss values for the modified simulated DGP (14) (see Section 4.1.1), with sample size 1000. The results are comparable to ADL (see Figure3). Features with high SAGE value contribute less to residuals, and vice versa for the SHAPloss values.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. (a) Barbell plots showing differences between attributed dependence on labels (ADL) and attributed dependence on predictions (ADP), based on the DC characteristic function, for each feature, for the misspecified model EY = β 0 + β X with DGP (15). Larger differences indicate that the model fails to capture the dependence structure effectively. (b) Bar chart representing attributed dependence on residuals (ADR) for the test set. The shaded rectangles represent bootstrap confidence intervals, taken as the 95% middle quantile (Q 0.975 − Q 0.025 ) from 100 resamples of size 1000. Non-overlapping rectangles indicate significant differences. Point markers represent individual observations from each of the 100 resamples.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. (a) Barbell plots showing differences between attributed dependence on labels (ADL) and attributed dependence on predictions (ADP), based on the DC characteristic function, for each feature, for the correctly specified model with DGP (15). The shaded rectangles represent bootstrap confidence intervals, taken as the 95% middle quantile (Q 0.975 − Q 0.025 ) from 100 resamples of size 1000. Overlapping rectangles indicate non-significant differences, suggesting no evidence of misspecification. Point markers represent individual observations from each of the 100 resamples. (b) Bar chart representing attributed dependence on residuals (ADR) for the test set. Compare to Figure 5.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>As demonstrated by the results in Table 1, it is not possible for pairwise methods to capture interaction effects and mutual dependencies between features. However, Shapley feature attributions can overcome this limitation, both in the case of Sunnies and in the case of SHAP. By taking an exhaustive permutations-based approach, Shapley values are able to effectively deal with partial dependencies and interaction effects amongst features. Note that all the Sunnies marginal contributions can, in this example, be derived from Table 1: the pairwise results state that D̂ Y ({1}) = D̂ Y ({2}) = D̂′ Y ({1}) = D̂′ Y ({2}) = Ĥ Y ({1}) = Ĥ Y ({2}) = 0, and from Table 1 we can also derive D̂ Y ({1, 2}) = D̂′ Y ({1, 2}) = 2 × 0.265 = 0.53 and Ĥ Y ({1, 2}) = 2 × 0.16 = 0.32. The discrete XOR example demonstrates that ADL captures important symmetry between features, while pairwise methods fail to do so.</ns0:figDesc><ns0:table /><ns0:note>The results in the final two rows of Table 1 are produced as follows: we train an XGBoost classifier on the discrete XOR problem in (12). Then, to ascertain the importance of each of the features X 1 and X 2 in determining the target class, we use the XGBoost 'feature importance' method, which defines a feature's gain as 'the improvement in accuracy brought by a feature to the branches it is on' (see https://xgboost.readthedocs.io/en/latest/R-package/discoverYourData.html).</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>Common experiences from users suggest that the XGBoost feature importance method can be unstable for less important features and in the presence of strong correlations between features (see, e.g., https://stats.stackexchange.com/questions/279730/). However, in the current XOR example, features X 1 and X 2 are statistically independent (thus uncorrelated) and have the maximum importance that two equally important features can share (that is, together they produce the response deterministically).</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell>Result X 1</ns0:cell><ns0:cell>Result X 2</ns0:cell></ns0:row><ns0:row><ns0:cell>SHAP</ns0:cell><ns0:cell>3.19</ns0:cell><ns0:cell>3.19</ns0:cell></ns0:row><ns0:row><ns0:cell>Shapley DC</ns0:cell><ns0:cell>0.265</ns0:cell><ns0:cell>0.265</ns0:cell></ns0:row><ns0:row><ns0:cell>Shapley AIDC</ns0:cell><ns0:cell>0.265</ns0:cell><ns0:cell>0.265</ns0:cell></ns0:row><ns0:row><ns0:cell>Shapley HSIC</ns0:cell><ns0:cell>0.16</ns0:cell><ns0:cell>0.16</ns0:cell></ns0:row><ns0:row><ns0:cell>Pairwise XGB</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Pairwise dependence</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>XGB feature importance</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>The 16 features used for fitting a Cox proportional hazards model to NHANES I and NHEFS data.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Feature name</ns0:cell><ns0:cell>Feature name</ns0:cell></ns0:row><ns0:row><ns0:cell>Age</ns0:cell><ns0:cell>Sex</ns0:cell></ns0:row><ns0:row><ns0:cell>Race</ns0:cell><ns0:cell>Serum albumin</ns0:cell></ns0:row><ns0:row><ns0:cell>Serum cholesterol</ns0:cell><ns0:cell>Serum iron</ns0:cell></ns0:row><ns0:row><ns0:cell>Serum magnesium</ns0:cell><ns0:cell>Serum protein</ns0:cell></ns0:row><ns0:row><ns0:cell>Poverty index</ns0:cell><ns0:cell>Physical activity</ns0:cell></ns0:row><ns0:row><ns0:cell>Red blood cells</ns0:cell><ns0:cell>Diastolic blood pressure</ns0:cell></ns0:row><ns0:row><ns0:cell>Systolic blood pressure</ns0:cell><ns0:cell>Total iron binding capacity</ns0:cell></ns0:row><ns0:row><ns0:cell>Transferrin saturation</ns0:cell><ns0:cell>Body mass index</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>After distinguishing between model-dependent and model-independent Shapley values, in Section 2.2, we introduce energy distance-based and kernel-based characteristic functions, for the Shapley game formulation, as measures of non-linear dependence. We assign the name 'Sunnies' to Shapley values that arise from such measures. In Section 2.1.1 and Section 3, we demonstrate that the resulting model-independent Shapley values provide reasonable results compared to a number of alternatives on certain DGPs. The alternatives investigated are the XGBoost built-in feature importance score, pairwise measures of non-linear dependence, and the R 2 characteristic function. The investigated DGPs are a quadratic form, for its simple non-linearity; and an XOR functional dependence, for its absence of pairwise statistical dependence. These examples are simple but effective, as they act as counter-examples to the validity of the targeted measures of dependence to which we draw comparison. In Section 4, we demonstrate how the Shapley value decomposition, with these non-linear dependence measures as characteristic function, can be used for model diagnostics. In particular, we see a variety of interesting examples, where model misspecification and concept drift can be identified and attributed to specific features. We approach model diagnostics from two angles, by scrutinising two quantities: the dependence attributed to the model's predictions (ADP), and the dependence between the model residuals and the input features (ADR). These are proofs of concept, and the techniques of ADL, ADP and ADR require development to become standard tools. However, the examples highlight the techniques' potential, and we hope that this encourages greater interest in them.</ns0:figDesc><ns0:table /><ns0:note>6 DISCUSSION AND FUTURE WORK</ns0:note></ns0:figure>
</ns0:body>
" | "TO BOTH REVIEWERS: Thank you for your comments; we believe that the manuscript is now
much improved upon addressing your reviews. Additional and edited text are coloured blue for
your convenience. Please see below for point-by-point responses to the individual comments.
The Reviewers’ comments are repeated in black and our comments are coloured blue.
Reviewer 1 (Roshan Tourani)
Basic reporting
+ Couple of typos it seems.
Line 250: Delete X3. Done. Thank you.
Line 296: Change Age to SBP. Done. Thank you.
+ Paragraph starting at line 63 can be improved.
We know computational complexity is a big issue using Shapley values. Discussed at lines 63
and 343.
If proposed Sunnies can use any of the ideas proposed for efficient computation, state it. If not,
also state it explicitly, saying future work is needed for larger data sets.
This has now been addressed; please see paragraph starting line 70, and the small edits before
this paragraph.
Experimental design
+ Equation (5) with d=4 and A=diag(0,2,4,6) is used for figure 2. But there are five features in
figure 2. I think there is a typo somewhere.
Thank you for spotting this typo. We have now made the adjustments: d = 5 and diag(0,2,4,6,8).
+ Concept drift experiment (line 228) can use some more context.
Comparison methods might help, e.g. XGBoost feature importance (which you can say is
model-dependent), or pairwise correlations (linear or non-linear).
Or maybe state that clearly a linear (or non-linear) pairwise correlation would do here, but the
point here is that your method is general. I might be overthinking this!
We thank the Reviewer for this suggestion. We now include comparisons to the SAGE and
SHAP methods in Section 4.1.1.
Validity of the findings
Please bear with me while I elaborate more on complexity issue again.
Line 343 you mention it is useful to partition features to independent blocks which is technically
true.
But a big question arises, if researchers in a domain can partition features to (non-interacting)
independent blocks (of size of about five features), why not just study non-linear independence
relations (using DC, AIDC, or HSIC), or even more traditional, why not try interaction terms in
regression models and check likelihood scores? After all, there are only handful of features in
each independent block.
Let me put it another way. If we only have small number of features
why not look at each term C(SU{v})-C(S) separately, (which probably you've saved when
calculating non-linear Shapley values), rather than adding these up to get Shapley value and
lose the local information? i.e. simply using non-linear correlations you listed, but not doing the
summation over subsets S and reporting single value per feature.
The suggested explanation is now provided on lines 389-393. See also lines 423-427 and
183-187. To illustrate taking advantage of independent blocks, suppose we have a model with
15 features. If we know in advance that these features partition into 3 independent blocks of 5
features, then we can decompose the pairwise dependence of each block into 5 Shapley
values. In this way, 15 Shapley values are computed from 240 within-block marginal
contributions, rather than the full number of 32,768 marginal contributions.
Comments for the Author
+ Example 2 (line 175). I think it'd be informative to also list DC, AIDC, HSIC among {X1}, {X2},
{X1, X2} and Y.
These values can be derived from the table. We have now included an explanation of this point on
lines 216-219.
+ To convince the audience that non-linear Shapley values are helpful, you kind of have to show
them at least one example that the number of features is high and simply looking at a list of DC,
AIDC, or HSIC is not practical.
This point has now been clarified, since it is the case in Section 4.1.2. We have added further
explanation on lines 300-301. Also see the text on lines 389-393 (as per our answer to a
previous comment).
+ The misspecified model (line 241) is a very clear and essential example! Here one has to
move away from (pairwise or linear) and consider ('more global' and non-linear) hence your
non-linear Shapley.
But again, this example is also too small! One can look at the list of nonlinear correlations rather
than adding them up into Shapley.
We have addressed this point on lines 300-301 and 389-393 (as per our answer to a previous
comment). Since feature X_3 is modulated by X_4 and X_5 (and indeed since the DGP is
unknown to the analyst) the analyst may compute the Shapley values to produce Figure 5 rather
than attempting to digest the 240 marginal contributions.
+ (Plug alert!) In our paper (S. Ma, -- 2020), we showed that Shapley values can be misleading
for learning local causal neighborhoods, using two simple examples of Markovian graphs (i.e.
faithfulness assumption were met).
The general statement (which is our current belief) is related to the issues above. Basically,
adding up all C(SU{v})-C(S) terms, one loses the important local information about v and certain
S. At least for causal discovery, Shapley's way of adding C(SU{v})-C(S) terms, though
mathematically attractive with additivity on C and all, seems arbitrary, or I shall say,
counterproductive.
We thank the reviewer for the reference, which we found to be very insightful. In our recent work
[1] we consider the pathology that Reviewer has alluded to as well as other potential
pathologies and pitfalls regarding the incorrect interpretation of the Shapley value, especially in
relation to feature selection.
[1] Fryer, Daniel, Inga Strümke, and Hien Nguyen. 'Shapley values for feature selection: The
good, the bad, and the axioms.' arXiv preprint arXiv:2102.10936 (2021).
Reviewer 2 (Gabriel Erion)
Basic reporting
No comment.
Experimental design
I like the proposed method but believe it needs a much more rigorous investigation ('Rigorous
investigation performed to a high technical & ethical standard). There are several experiments
where Shapley value based, model-dependent methods exist for the chosen problems and
should be compared against. It is not necessary, especially given PeerJ's explicit willingness to
publish negative or inconclusive results, to outperform these other methods, but it is necessary
to include them in experiments.
3.0.1 - Here a decision tree perfectly learns the XOR function but the built-in feature
importances do not correctly assign importance. The take-away message seems to be that
model-dependent explanations don't correctly handle interactions like XOR or at least that they
are flawed in scenarios like this one, and I don't think this is true. The authors cite
TreeExplainer and SHAP, and the SHAP repository actually uses XOR as an example where
XGBoost's built-in importance does not correctly assign credit but TreeSHAP/TreeExplainer
does. So I think it's necessary to include a comparison to TreeSHAP/TreeExplainer here (which
should correctly assign credit).
https://shap.readthedocs.io/en/latest/example_notebooks/tree_explainer/Understanding%20Tre
e%20SHAP%20for%20Simple%20Models.html
We thank the Reviewer for the positive comments and the constructive suggestion. Following
the Reviewer’s comment, we now consider the application of the SHAP values to our example.
We have now included SHAP results in Table 1 and alluded to the SHAP results on line 214.
Minor comment - I don't think it's ever explained what the 'X1' and 'X2' columns of Table 1 are
-- presumably the importances of X1 and X2 under various methods.
Thank you for this observation. This point has now been clarified in the caption of Table 1.
4.1.1 - Model monitoring and explaining the loss are great experiments; however, this is also
done in TreeExplainer, which should be referenced and compared against.
We have now included a reference to this use of TreeExplainer on line 53.
This comparison is especially important because TreeExplainer is cited earlier in the paper and
tree models are used for this experiment. TreeExplainer demonstrates how to (1) calculate
Shapley values of the model's loss and (2) use loss explanations to perform model monitoring in
the presence of feature drift. It's hard to justify publishing a section on explaining model
mistakes with feature drift in trees without referencing existing experiments that do exactly that. I
think the results will be informative and useful regardless of their exact outcome. It is also
possible to calculate Shapley values with standard model-agnostic SHAP methods by treating
the function to be explained as the model loss rather than the model output, and I think it would
be useful (1) try doing so and (2) discuss how the ADR method differs from that approach.
We thank the Reviewer for this suggestion. Our response to this point is presented in Figure 4
as well as on lines 264-270 and 283-286.
4.1.2 - Some minor comments: Point (iii) should not include X3.
This has now been fixed. Thank you.
Also, I'm not sure lines 261-263 contribute to the main points of the section. Which of the linear
models is both accurate and parsimonious? I think this is meant to be the one that includes the
correct interaction term. However, the paper doesn't include R^2 or other metrics to compare
accuracy of these models, and even if such metrics were present I'm not sure there's an
important broader point to be made, since the section isn't really about how to train
sparse/accurate ML models.
We thank the Reviewer for this comment. Following the Reviewer’s suggestions, we have
decided to remove the lines in question.
Overall, the experimental results are interesting, but feel incomplete without including
comparisons to some existing methods for answering similar questions.
Validity of the findings
No comment.
Comments for the Author
I like this paper -- I think the method is a great idea and the paper contributes to important
discussions in ML about explaining the model vs explaining the dataset. I do believe there are
large experimental gaps in the current version of the paper, and that the paper should not be
published without addressing these gaps and acknowledging other important methods (e.g.,
Shapley values of the model loss) for answering the questions the paper poses. However, if
these gaps are adequately addressed and sufficient additional experiments are run, the paper
would be a valuable resource for the ML community. My most important concerns are the
experimental concerns noted under (2) Experimental Design. Several other questions and
concerns:
We thank the Reviewer for the positive comment and hope that our additional experiments and
discussions address the Reviewer’s concerns.
1 - I am not sure there is enough discussion of why we should consider HSIC, AIDC, etc to be
non-parametric and not consider explanations based on a complex model (gradient boosting,
deep models, etc) to be non-parametric.
a) Sufficiently complex models are often themselves described as non-parametric.
We agree that our use of the expression “non-parametric” was a potential point of confusion and
not well defined (to clarify, we meant non-parametric in the sense that the methods are not
dependent on a parameterised class of functions). We have now opted to not use this
terminology. Please see our new lines 32-35 regarding this point.
b) No free lunch theorems, etc, in ML make me think that there is probably not a perfect solution
to feature dependence just as there is no perfect solution to prediction. I would probably be
more comfortable with an explicit discussion of the pros and cons of model-dependent versus
model-independent Shapley methods than with an overly broad claim that model-independent
methods make no assumptions about the DGP. c) Some specific pros and cons that come to
mind: a strength of the model-independent methods is the guarantee in 2.2 that dependencies
will always be detected. A strength of model-dependent methods is that they may be better
suited to measure the *strength* of interaction -- for example, from 2.2.1, distance correlation
should counterintuitively be higher for a small linear association than for a large nonlinear one,
which would likely not hold for an XGBoost model explanation. More discussion of the value of
the chosen properties would be very useful -- maybe there are certain cases where it's
particularly important that we never accidentally miss an important feature, in which the
guarantee from 2.2 is particularly important. In other cases maybe other properties are more
important and other methods should be used. Overall I am not concerned about whether
model-independent explanations are useful -- they definitely seem to be! But I think more
discussion of the assumptions made in this approach and how they differ from the assumptions
in model-dependent explanations would be helpful.
We thank the Reviewer for this insight. Please see lines 394-411 in the discussion regarding our
response to this point. Also, regarding the guarantee you mentioned in 2.2, we have also added
a clarification to this point on lines 139-142.
2 - I think it would help to explain why we need to use Shapley values with HSIC or AIDC as
characteristic functions, rather than simply applying them out of the box. For example, a raw
pairwise correlation vector is often used to understand relationships between features and
labels 'in the data'. Why not simply calculate, e.g., AIDC between all features and labels? Why
is the Shapley value helpful? Personally, I have an intuition for why marginal effects differ from
Shapley values for ML models but don't have a similar intuition in the case of HSIC and AIDC.
This point is addressed in Section 3, and it is the focus of Examples 1 and 2. We have now
provided further clarification by the addition of the lines 183-187.
3 - In Section 4, ADL is frequently discussed and plotted but it is not clear to me how it is
calculated (i.e., with HSIC or AIDC, etc). I may have missed these details but I think it would be
helpful to clarify exactly what method is being used in the plots. I think it is essential to clarify
this before publication.
We have clarified the use of the DC on lines 240-241 and 276, and in the captions of Figures
3, 5 and 6. ADL itself is defined in Section 2.1.
" | Here is a paper. Please give your review comments after reading it. |
142 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>In recent years, Machine Learning (ML) researchers have shifted their focus towards biological problems that are difficult to analyse with standard approaches. Large initiatives such as TCGA have allowed the use of omic data for the training of these algorithms. In order to study the state of the art, this review covers the main works that have used ML with TCGA data. Firstly, the principal discoveries made by the TCGA consortium are presented. Once these bases have been established, we address the main objective of this study: the identification and discussion of those works that have used TCGA data to train different ML approaches. After a review of more than 100 different papers, it has been possible to make a classification according to the following three pillars: the type of tumour, the type of algorithm and the predicted biological problem. One of the conclusions drawn in this work is a high density of studies based on two major algorithms: Random Forest and Support Vector Machines. We also observe the rise in the use of deep artificial neural networks. It is worth emphasizing the increase in integrative models for multi-omic data analysis. The different biological conditions are a consequence of molecular homeostasis, driven by protein coding regions, regulatory elements and the surrounding environment. It is notable that a large number of works make use of gene expression data, which has been found to be the data type preferred by researchers when training the different models. The biological problems addressed have been classified into five types: prognosis prediction, tumour subtypes, Microsatellite Instability (MSI), immunological aspects and certain pathways of interest. A clear trend was detected in the prediction of these conditions according to the type of tumour. This is why a greater number of works have focused on the BRCA cohort, while survival-specific works, for example, centred on the GBM cohort due to its large number of events. Throughout this review, it will be possible to go in depth into the works and the methodologies used to study TCGA cancer data. Finally, it is intended that this work will serve as a basis for future research in this field of study.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The appearance of the carcinogenic phenotype is the consequence of an alteration of one or more genes. In addition, the appearance of subtypes occurs in different ways in individuals of a population. Hence, a major problem that arises in cancer is the difficulty of its genetic diagnosis. Unlike Mendelian diseases, where the disease develops due to the alteration in the function of a single gene, the development of cancer is a consequence of the epistatic behaviour of genes. There is already an extremely large search space in the identification of alterations in a single gene, including exonic and intronic mutations, single nucleotide polymorphisms (SNPs), copy number variants, indels, post-transcriptional alterations, post-translational alterations, three-dimensional assembly of the protein, epigenetic modifications, etc. Thus, the search space for alterations when we encounter a subgroup of 40 genes is immense. When we do not know exactly which genes are involved, we have to search among the more than 20,000 coding regions or even in the whole genome sequence. In these cases the search space grows to incalculable levels. All this complexity is the result of intermolecular communications in and among cells, a phenomenon that constitutes an environment of molecular communication that is extremely complicated to understand and identify.</ns0:p><ns0:p>In order to lay the foundations and achieve great advances in the prevention, early detection, stratification and successful treatment of cancer, it is necessary to identify the complete set of changes generated by each type of cancer in its genome. Further, researchers must understand how these changes interact with the cancer microenvironment, intra- and intercellularly, for the disease to manifest itself. Hence, the National Cancer Institute (NCI) and the National Human Genome Research Institute (NHGRI) of the United States established The Cancer Genome Atlas (TCGA), with the aim of obtaining comprehensive multidimensional genomic maps of all key changes in several types and subtypes of cancer <ns0:ref type='bibr' target='#b90'>(Network et al., 2008)</ns0:ref>. An initial pilot project in 2006 confirmed that an atlas of these changes could be specifically created for different types of cancer. Subsequently, TCGA has collected tissues from more than 11,000 cancer and healthy patients, an endeavour that allows the study of more than 33 types and subtypes of cancer, including 10 rare cancers. The most interesting aspect of this initiative is that all the information is free and accessible to any researcher who wants to focus their efforts on the disease. The different types of data presented by the TCGA project are summarised in Table <ns0:ref type='table'>1</ns0:ref>, and Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> shows, for each cancer type, the percentage that each data type represents in the subtype's total. Data are provided open access to the community, a factor that facilitates the generation of novel models without requiring an initial financial investment to obtain the data. Therefore, there are increasingly specific models for the analysis of omics data. In particular, the rise and success of machine learning (ML) techniques in processing large amounts of data is revolutionising bioinformatics and conventional forms of genetic diagnosis. These methods have focused on making predictions by using general learning algorithms to find patterns in complex, large and hard-to-handle problems. In addition, these ML methods work really well with very large datasets, even when the number of variables in each observation is much greater than the total number of observations (n << p).</ns0:p><ns0:p>This survey presents the state-of-the-art research on TCGA analysis using machine learning. Efforts have involved both supervised and unsupervised learning problems, as well as survival analysis, on data ranging from multi-omic human cancer data to imaging. Therefore, review articles are needed to give an overview of machine learning-based analysis of TCGA data, to highlight the findings and to discuss future research lines, so that the obtained knowledge is useful and can be translated to clinical practice.</ns0:p><ns0:p>There are few published review articles on machine learning for biomedical genomic analysis <ns0:ref type='bibr' target='#b69'>(Leung et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b61'>Karczewski and Snyder, 2018)</ns0:ref>. These review articles are from before 2018, do not present a discussion of TCGA data or of machine learning results, and do not take a multi-omic and imaging point of view on different biological questions. To the best of our knowledge, no survey has been conducted on Machine Learning analysis of TCGA using multi-level cancer data. Thus, this survey aims to present a comprehensive summary of the previous machine learning approaches applied to TCGA during the span of 2008-2020. The contributions of this review are:</ns0:p><ns0:p>• An exhaustive review of the main results obtained by the TCGA consortium using conventional approaches, in order to understand whether machine learning is increasing the knowledge in the area.</ns0:p><ns0:p>• A review of the machine learning results obtained by the TCGA consortium itself.</ns0:p><ns0:p>• A classification of supervised, unsupervised and clustering methods that may point researchers to new approaches or new problems.</ns0:p><ns0:p>• Identification of the data types most used in machine learning research on TCGA.</ns0:p><ns0:p>• A comprehensive discussion of the biological questions solved by machine learning algorithms: prognosis, immunological phenotype, pathways, MSI status, and subtype prediction.</ns0:p><ns0:p>• A deeper examination of the most used TCGA cohort: Breast Cancer Adenocarcinoma (BRCA).</ns0:p><ns0:p>• We point to data integration approaches as the future trend in TCGA analysis using machine learning.</ns0:p><ns0:p>We believe that researchers in machine learning, bioinformatics, biology, computational biology and data integration would benefit from the findings of this exhaustive and comprehensive review.</ns0:p></ns0:div>
<ns0:div><ns0:head>• Inclusion criteria:</ns0:head><ns0:p>manuscripts written in English and published in journals indexed in Pubmed (to ensure the health-science specialisation) and in Scopus, using TCGA as the main source of data</ns0:p><ns0:p>• Exclusion criteria:</ns0:p><ns0:p>manuscripts using machine learning only marginally or without solid biological conclusions; manuscripts in preprint form without peer review</ns0:p></ns0:div>
<ns0:div><ns0:head>Article selection</ns0:head><ns0:p>The TCGA consortium papers were identified on the consortium website and were included. Initially, 345 papers were identified in Pubmed and Scopus using the search keywords. These were then filtered by the inclusion/exclusion criteria. In addition, duplicated papers retrieved from multiple sources were removed.</ns0:p><ns0:p>Finally, more than 150 articles were included.</ns0:p></ns0:div>
<ns0:div><ns0:head>TCGA CONSORTIUM</ns0:head><ns0:p>TCGA began as a pilot project for three years, with a focus on the characterisation of three types of human cancer: glioblastoma multiforme (GBM), lung squamous cell carcinoma (LUSC) and ovarian cancer. In 2018, a series of works was published in Cell in which the samples recruited throughout the project were exhaustively analysed. These studies led to the identification and examination of mechanisms that underlie all types of tumours. These findings allow researchers to draw conclusions about tumour origins, molecular biology and subtyping. In this series of publications, and in order to understand the molecular biology underlying cancer, the TCGA consortium cross-checked general molecular aspects in all tumour types. To this end, they exhaustively studied, in the more than 10,000 samples stored in their repository, the process of alternative splicing <ns0:ref type='bibr' target='#b59'>(Kahles et al., 2018)</ns0:ref>, and they identified the specific variants <ns0:ref type='bibr' target='#b54'>(Huang et al., 2018)</ns0:ref> and driver genes <ns0:ref type='bibr' target='#b5'>(Bailey et al., 2018)</ns0:ref> that generate greater predisposition to tumour development. They also analysed the effect of enhancer activation on different tumour types <ns0:ref type='bibr' target='#b16'>(Chen et al., 2018a)</ns0:ref> and the effect of aneuploidy <ns0:ref type='bibr' target='#b130'>(Taylor et al., 2018)</ns0:ref>. They also catalogued the variants of the 10 pathways that are most frequently altered in most tumours <ns0:ref type='bibr' target='#b116'>(Sanchez-Vega et al., 2018)</ns0:ref>, in addition to alterations in genes related to the ubiquitin <ns0:ref type='bibr' target='#b44'>(Ge et al., 2018)</ns0:ref>, DNA damage repair <ns0:ref type='bibr' target='#b64'>(Knijnenburg et al., 2018)</ns0:ref> and MYC <ns0:ref type='bibr' target='#b117'>(Schaub et al., 2018)</ns0:ref> pathways.</ns0:p><ns0:p>The consortium also features a strong technology component; they published an integrated pan-cancer clinical data resource from TCGA with the aim of driving the analysis of high-quality survival results <ns0:ref type='bibr' target='#b74'>(Liu et al., 2018a)</ns0:ref>. In addition, they conducted studies where they used ML and deep learning algorithms to identify stemness features in tumour cells <ns0:ref type='bibr' target='#b78'>(Malta et al., 2018)</ns0:ref>, to predict Ras pathway activation <ns0:ref type='bibr' target='#b136'>(Way et al., 2018)</ns0:ref> and to detect tumour-infiltrating lymphocytes using images <ns0:ref type='bibr' target='#b114'>(Saltz et al., 2018)</ns0:ref>. In <ns0:ref type='bibr' target='#b33'>(Ellrott et al., 2018)</ns0:ref> they described the Multi-Center Mutation Calling project, which aims to generate a complete encyclopaedia of somatic mutations from TCGA data that allows a robust analysis for different tumour types. They performed different studies that proposed new classifications among tumours. For example, they identified new immune tumour types across the 33 types of cancer that differ by somatic aberrations, microenvironment and survival <ns0:ref type='bibr' target='#b131'>(Thorsson et al., 2018)</ns0:ref>. Furthermore, they classified tumours based on metabolic expression and subsequently proposed different subtypes that were not previously contemplated <ns0:ref type='bibr' target='#b105'>(Peng et al., 2018)</ns0:ref>.
In addition, they carried out exhaustive studies on groupings of tumours according to their origin in order to elucidate new therapeutic targets that might be useful for gastrointestinal adenocarcinomas <ns0:ref type='bibr' target='#b75'>(Liu et al., 2018b)</ns0:ref>, gynaecological tumours and breast cancers <ns0:ref type='bibr' target='#b7'>(Berger et al., 2018)</ns0:ref> and squamous carcinomas <ns0:ref type='bibr' target='#b11'>(Campbell et al., 2018)</ns0:ref>. In these papers, they performed clustering techniques to subtype patients into new groups for treatment or diagnosis. Finally, they studied tumours by cell <ns0:ref type='bibr' target='#b50'>(Hoadley et al., 2018)</ns0:ref> and tissue <ns0:ref type='bibr' target='#b51'>(Hoadley et al., 2014)</ns0:ref> of origin.</ns0:p><ns0:p>There are many results reported by TCGA that have had a very important impact on oncology. The results obtained by the consortium show a roadmap to follow and open countless avenues where new research groups, until now unable to carry out their research globally, will be able to report important results in this field.</ns0:p></ns0:div>
<ns0:div><ns0:head>MACHINE LEARNING AS A SOURCE OF NEW KNOWLEDGE</ns0:head><ns0:p>ML is the process by which machines acquire the ability to learn an action or behaviour. These processes are defined by different algorithms that enable the computer to learn a behaviour <ns0:ref type='bibr'>(classify, identify, etc.)</ns0:ref> and extract patterns from the data. These patterns are ultimately inherent knowledge of the problem to be analysed that the algorithms can extract and learn to identify. Subsequently, given a new case, these techniques can evaluate and predict to which group it is most likely to belong, always in accordance with prior knowledge. It is therefore critical that such techniques are applied with careful experimental design <ns0:ref type='bibr' target='#b40'>(Fernandez-Lozano et al., 2016)</ns0:ref> and that the data are as accurate as possible to define the problem. These techniques will learn and maximally exploit the intrinsic knowledge that underlies the data.</ns0:p><ns0:p>Depending on how this information extraction process is performed, we can speak of different approaches: supervised and unsupervised learning. Although in practice there are more types of learning, we will only focus on these two, mainly because these approaches have been the most widely used in biomedicine.</ns0:p></ns0:div>
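<ns0:p>As a minimal illustration of the two paradigms described above, the sketch below trains a supervised classifier on a simulated expression-like matrix and, separately, clusters the same patients without labels. All values are simulated and the code is only a conceptual sketch, not taken from any of the reviewed works.</ns0:p>

```python
# Minimal sketch contrasting supervised and unsupervised learning on a
# synthetic "expression-like" matrix (features >> samples, as in omics data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2000))   # 100 patients x 2000 gene-level features
y = rng.integers(0, 2, size=100)   # e.g. tumour vs normal labels

# Supervised: learn a mapping from expression to a clinical label.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# Unsupervised: group patients without labels (e.g. subtype discovery).
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
```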
<ns0:div><ns0:head>The TCGA consortium and ML</ns0:head><ns0:p>The TCGA consortium has analysed cancer based on ML algorithms, sometimes with novel approaches specifically designed for the TCGA data. TCGA researchers recently presented a new ML method that can predict the degree of differentiation of certain tumour tissues <ns0:ref type='bibr' target='#b78'>(Malta et al., 2018)</ns0:ref>. In this case, using data from non-differentiated stem cells and their differentiated progenitors (data obtained from public repositories), they constructed two classes of indicators that reflect epigenetic and genetic expression traits of the cells. Once they constructed these descriptors, they used a variant of one-class logistic regression to classify the different TCGA samples according to their degree of differentiation, a crucial characteristic for the development of the tumour and its invasive potential.</ns0:p><ns0:p>Another study <ns0:ref type='bibr' target='#b136'>(Way et al., 2018)</ns0:ref> used three types of omics platforms (expression, copy number and mutation) to predict the activation of the Ras pathway, which has been widely studied throughout oncological research. This model predicted whether this pathway was activated using RNAseq expression data. From the copy number and mutation data, the researchers were able to label the patients in order to design a supervised learning problem. It was thereby observed that certain omic patterns can be predicted from different omic data, which enables the prediction of a significant number of characteristics in tumours. This approach was also performed in another study by modifying the target in order to predict the activation of the TP53 pathway <ns0:ref type='bibr' target='#b64'>(Knijnenburg et al., 2018)</ns0:ref>.</ns0:p><ns0:p>In another study, deep learning based on convolutional neural networks (CNN) mapped tumour-infiltrating lymphocytes (TIL) based on haematoxylin and eosin (H&E) images. In this case, 13 types of TCGA tumours exhibited almost perfect performance when differentiating these cell types <ns0:ref type='bibr' target='#b114'>(Saltz et al., 2018)</ns0:ref>. In this work, the TCGA consortium highlighted the importance of the images it stores and questioned their relatively limited use by different researchers in comparison with omics platforms. The images in the TCGA repository will be discussed in the following sections.</ns0:p></ns0:div>
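<ns0:p>The label-building setup described for <ns0:ref type='bibr' target='#b136'>(Way et al., 2018)</ns0:ref> can be sketched as follows. This is a hedged, simplified reconstruction of the general idea, not the authors' code: the data, gene calls and names are illustrative assumptions.</ns0:p>

```python
# Hedged sketch of the supervised setup described above: pathway-activation
# labels are derived from mutation/copy-number calls, and a classifier is
# trained on RNA-seq expression. All data below are simulated stand-ins.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
samples = [f"TCGA-{i:04d}" for i in range(200)]
expr = pd.DataFrame(rng.normal(size=(200, 500)), index=samples)  # RNA-seq features
mutated = pd.Series(rng.integers(0, 2, 200), index=samples)      # e.g. Ras-gene mutation call
amplified = pd.Series(rng.integers(0, 2, 200), index=samples)    # e.g. copy-number gain call

# A sample is labelled "pathway active" if it carries a mutation or an amplification.
y = ((mutated == 1) | (amplified == 1)).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print("CV AUROC:", cross_val_score(model, expr.values, y, cv=5, scoring="roc_auc").mean())
```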
<ns0:div><ns0:head>Popular ML models with TCGA data</ns0:head><ns0:p>The TCGA consortium has relied on both supervised and unsupervised ML techniques to extract new knowledge from its data. However, it is interesting to identify the work developed by other researchers who have used TCGA data. The approaches taken and the results obtained from the various published works will be discussed below.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> shows the proportion of published papers according to the type of algorithm and the type of omic data used. We reviewed more than 100 papers that have used ML approaches with TCGA data. For each one, we identified the algorithm and the data type(s). Almost half of the identified works used variants of the support vector machine (SVM) or tree-based algorithms, followed by linear models, as can be seen in Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>.a. On the other hand, Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>.b clearly shows that gene expression data is the most abundant data type used in ML research. Other data types such as images, methylation, miRNA and copy number have been used, but mostly in combination with gene expression data.</ns0:p><ns0:p>The findings of this review highlight the low variability of reported research and analytical methods. It is true that the most used algorithms, Random Forest (RF) and SVM, as well as the most used type of omic data (expression), have reported promising results in the biomedical field during the last years. We believe that the low variability in the approaches established by researchers is mainly due to two reasons. First, the intrinsic characteristics of biomedical data, and specifically of omic data, present a much greater number of features than observations. This fact is generally not ideal for the training of ML algorithms. In this sense, the choice of algorithm is mainly determined by the type of omic data being analysed. In the context mentioned above, certain algorithms are able to handle some characteristics of the data better than others. For example, neural networks are more sensitive to the lack of observations than RF, SVM or linear models. Given that the vast majority of identified works have used expression data, it is logical to observe a high density of works that have used RF or SVM type algorithms. On the other hand, those works that have used image data are more likely to use neural networks. Secondly, there is no doubt that the possibilities in the exploitation of these data by ML algorithms are yet to be discovered. A certain slowdown in the arrival of ML-based applications in the field of biomedicine has been detected. This is partly due to the complexity of the omic data, and the need for specialists in this field for its modelling and good practical use. Possible applications that could revolutionise the field of biomedicine could be the use of NLP (Natural Language Processing) algorithms for the analysis of Whole Genome Sequencing (WGS) data.</ns0:p><ns0:p>After all, if there is something to highlight in the results observed in the figures, it is that there is more and more work integrating different omic data. Even so, this trend is not reflected in Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>.a, in which a variety of algorithms and/or new standardised methodologies that can solve this problem is not observed. This is the great challenge that biomedicine will face in the coming years, one that could generate very useful predictions for tackling complex diseases such as cancer.</ns0:p></ns0:div>
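<ns0:p>A minimal sketch of the typical pipeline behind these RF/SVM works, where the number of features far exceeds the number of samples, might look as follows; the data are simulated and feature selection is refit inside each cross-validation fold to avoid leakage.</ns0:p>

```python
# Sketch of an n << p expression-classification pipeline: univariate feature
# selection followed by an SVM, evaluated with cross-validation (simulated data).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 20000))   # 150 samples, 20,000 gene-level features
y = rng.integers(0, 2, size=150)

pipe = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=100),  # keep the 100 most discriminative genes
    SVC(kernel="rbf", C=1.0),
)
print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```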
<ns0:div><ns0:head>A general perspective of unsupervised learning with TCGA data</ns0:head><ns0:p>In oncology, clustering methods are extremely useful for subtyping or reclassification of patients in a particular cohort. Over the years, the classic clustering methods have been most widely used, including partitioning clustering or hierarchical clustering. Even today, they are widely used with their respective variations. For example, the TCGA consortium has used them to subtype different tumours (see publications in Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>). The problem with these algorithms is that they can only model a single set of data, and the concatenation of different types of data does not perform adequately. The complexity of the tumour is manifested at distinct biological levels; hence, methods that can accept different types of data are preferable. Thus, researchers developed a new integrative clustering method based on a joint latent variable model (iCluster) <ns0:ref type='bibr' target='#b122'>(Shen et al., 2009)</ns0:ref> and used it with TCGA data <ns0:ref type='bibr' target='#b120'>(Shen et al., 2012)</ns0:ref>. iCluster fits a regularised latent-variable clustering model that generates an integrated cluster assignment based on joint inference across data types. In addition, the implementation in several programming languages is very intuitive. On the other hand, an extended version (iClusterPlus) was also developed <ns0:ref type='bibr' target='#b80'>(Mo et al., 2013)</ns0:ref>.</ns0:p><ns0:p>One of the most important works using this method was <ns0:ref type='bibr' target='#b27'>(Curtis et al., 2012)</ns0:ref>, identifying 12 different breast tumour subtypes.</ns0:p><ns0:p>In addition to the aforementioned works, there are many examples of iCluster use with TCGA. For instance, in <ns0:ref type='bibr' target='#b141'>(Xie et al., 2019)</ns0:ref> an integrative analysis was carried out with iCluster through RNAseq and proteomics data to analyse the OV subtype. The results showed two clusters with different survival rates; the method identified 18 mRNAs and 38 proteins as distinct molecules among subtypes. Another study proposed a modified iCluster model to discover key processes in the tumour collection through unsupervised integration of multiple types of molecular data and functional annotations <ns0:ref type='bibr' target='#b8'>(Bismeijer et al., 2018)</ns0:ref>. Further, <ns0:ref type='bibr' target='#b79'>(Mo et al., 2017)</ns0:ref> described a novel modification (iClusterBayes) capable of jointly modelling omics data of continuous and discrete data types for the identification of tumour subtypes and relevant omics characteristics. In the work of <ns0:ref type='bibr' target='#b62'>(Kim et al., 2017)</ns0:ref>, they modified this procedure to subtype patients using sequential double regularisation. Another pathway-based variant incorporates pathway data to group patients into cancer subtypes <ns0:ref type='bibr' target='#b77'>(Mallavarapu et al., 2019)</ns0:ref>. Additionally, Jean-Quartier et al. (2021) clustered GBM patients into several age subgroups with different age-related biomarkers. Finally, the method developed in <ns0:ref type='bibr' target='#b100'>(Nguyen et al., 2017)</ns0:ref>, named PINS, allows automatic omics data integration and molecular patient stratification.</ns0:p><ns0:p>With the above, the trend in genome research is evident.
An increasing number of works are attempting to integrate into their models the greater amount of information provided by the different omic data. Due to the complexity of cancer, stratifying patients according to a single source of information is becoming obsolete. Therefore, it is vitally important to improve models that are capable of multi-omic integration, as is the case with iCluster. Moreover, there is a need for novel approaches to automated medical decision pipelines that build on machine learning, information fusion and explainability <ns0:ref type='bibr' target='#b52'>(Holzinger et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b6'>Barredo Arrieta et al., 2020)</ns0:ref>.</ns0:p></ns0:div>
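<ns0:p>iCluster itself is distributed as an R package; the following Python sketch is only a conceptual stand-in for this kind of latent-variable integration. Each omic block is scaled, projected into a shared low-dimensional space and clustered jointly; the real iCluster instead fits a joint regularised latent variable model.</ns0:p>

```python
# Conceptual multi-omic integration sketch (not the iCluster algorithm):
# scale each omic block, concatenate, reduce to a shared latent space, cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
expression = rng.normal(size=(120, 5000))   # simulated RNA-seq block
methylation = rng.normal(size=(120, 3000))  # simulated methylation block
mirna = rng.normal(size=(120, 400))         # simulated miRNA block

# Scale each block so no single omic dominates, then concatenate and reduce.
blocks = [StandardScaler().fit_transform(b) for b in (expression, methylation, mirna)]
latent = PCA(n_components=10, random_state=0).fit_transform(np.hstack(blocks))
subtypes = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(latent)
print("patients per putative subtype:", np.bincount(subtypes))
```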
<ns0:div><ns0:head>Medical imaging as a data source for ML algorithms</ns0:head><ns0:p>An important event occurred in 2012 during the celebration of the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) <ns0:ref type='bibr' target='#b112'>(Russakovsky et al., 2015)</ns0:ref>. A deep learning model (specifically, a CNN) halved the second best error rate in the image classification task. The goal of this challenge was the detection of objects and the classification of images using a large-scale database. Furthermore, deep learning algorithms can automatically find the best subset of features that describe the nuances of images. In addition, transfer learning was born: an attempt to reuse the representation learned on one problem to solve another. Deep learning techniques are on the rise in cancer research, namely for object detection and image classification. Initiatives such as TCGA offer the possibility of training deep learning models by making a large quantity of biomedical images available for research. Specifically, TCGA provides two types of images: tissue slide and digital imaging and communications in medicine (DICOM) images. DICOM images, such as X-rays or computed tomography (CT), are used to extract quantitative characteristics from the images, and algorithms are trained to identify those characteristics. Histopathological images are used for direct image processing.</ns0:p><ns0:p>As discussed in previous sections, the TCGA consortium has used deep learning methods <ns0:ref type='bibr' target='#b114'>(Saltz et al., 2018)</ns0:ref>. Specifically, they used CNN to detect tumour-infiltrating lymphocytes (TILs) based on H&E images in 13 tumour types. They reported a local spatial structure in the TIL patterns and their correlation with overall survival. These densities and spatial structures vary among tumour types, immune subtypes and molecular tumour subtypes. Spatial infiltration of lymphocytes might reflect particular aberration states of tumour cells.</ns0:p><ns0:p>Based on these findings, several studies have used this and other repositories to create their own models. It is important to distinguish among data types. On the one hand, there are works that have used radiological images for the classification of stages of gliomas <ns0:ref type='bibr' target='#b103'>(Park et al., 2019)</ns0:ref>. In this work, they did not use the radiological images directly; rather, they extracted 250 characteristics from them to train their models, obtaining an area under the receiver operating characteristic curve (AUROC) of 72%. Notably, validating this model with very heterogeneous cohorts such as TCGA considerably reduced its performance. These results indicate that manual extraction of characteristics does not provide sufficient generalisation.</ns0:p><ns0:p><ns0:ref type='bibr' target='#b127'>(Sun et al., 2018b)</ns0:ref> utilised contrast-enhanced CT images and RNASeq data to assess CD8 cell infiltration in tumour biopsies. They first extracted features from both types of data to ultimately keep eight features and train an elastic-net regularised regression method. They used this signature to predict the response to anti-programmed cell death protein 1 (PD1) or anti-programmed death-ligand 1 (PDL1) treatments. Magnetic resonance imaging (MRI), integrated with expression data, was used to predict the methylation status of the MGMT promoter, which has been related to better outcomes in GBM patients (accuracy of 73%; <ns0:ref type='bibr' target='#b60'>(Kanas et al., 2017)</ns0:ref>).</ns0:p><ns0:p><ns0:ref type='bibr' target='#b41'>(Fischer et al., 2018)</ns0:ref> reported a new method for histopathological image analysis, sparse coding, using a dictionary optimised for biomedical images. They stated that they generally obtained better performance rates compared to transfer learning. In <ns0:ref type='bibr' target='#b146'>(Yu et al., 2016)</ns0:ref>, they predicted the prognosis of non-small cell lung tumours. Using the CellProfiler software, they extracted 9,879 quantitative characteristics and trained different algorithms, such as SVM or random forest. Finally, with a variant of the SVM algorithm, they achieved an AUROC of 81%.
Besides, a low-complexity method was developed for classification and disease grading in histopathological images. This method, discriminative feature-oriented dictionary learning (DFDL), learns class-specific dictionaries in such a way that, under a sparsity constraint, the learned dictionaries can represent a new image of their class in a simplified way, while being unable to represent samples from other classes. <ns0:ref type='bibr' target='#b26'>(Coudray et al., 2018)</ns0:ref> used histopathology images of lung cancers to classify squamous cell carcinomas, adenocarcinomas and normal samples with an AUROC of 97%. In the work of <ns0:ref type='bibr' target='#b14'>(Cheerla and Gevaert, 2019)</ns0:ref>, they were able to extract information from several datasets and obtain a model capable of predicting patient prognosis. In <ns0:ref type='bibr' target='#b34'>(Ertosun and Rubin, 2015)</ns0:ref>, gliomas were subtyped with CNN algorithms using raw images for this task; there was more than 90% accuracy for glioma classification and almost 80% for glioma grade identification. <ns0:ref type='bibr' target='#b109'>(Rendleman et al., 2019)</ns0:ref> used a CNN to evaluate distinct histological tumour growth patterns such as solid, micropapillary, acinar and cribriform (84% accuracy). An important work was developed in <ns0:ref type='bibr' target='#b56'>(Janowczyk et al., 2019a)</ns0:ref>. They developed an unsupervised encoder to compress four data modalities, including whole slide images (WSIs), into a single feature vector for each patient. The model was trained with TCGA data to predict single-cancer overall survival, achieving a C-index of 0.78 overall.</ns0:p><ns0:p>It is important to highlight the need to pre-process the histopathological images before their analysis. This step is crucial to achieve great performance in the models. The images housed in TCGA are not homogeneous in size, shape and brightness. Therefore, it is necessary to use a pre-processing stage in order to standardise all the images before the analysis. Open-source tools such as HistoQC <ns0:ref type='bibr' target='#b57'>(Janowczyk et al., 2019b)</ns0:ref> are relevant for the extraction of knowledge and the good use of images in research.</ns0:p></ns0:div>
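<ns0:p>As an illustration of the patch-level CNN approach recurring in these imaging works, the following is a deliberately small PyTorch sketch; random tensors stand in for pre-processed, normalised patches, and real pipelines use far deeper architectures.</ns0:p>

```python
# Tiny CNN sketch for histopathology patch classification (tumour vs non-tumour).
# Random tensors below are placeholders for standardised image patches.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):
        return self.head(self.features(x))

model = PatchCNN()
patches = torch.randn(8, 3, 224, 224)      # batch of 224x224 RGB patches
labels = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(patches), labels)
loss.backward()                            # one illustrative training step
print("loss:", float(loss))
```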
<ns0:div><ns0:head>Biological questions solved by ML algorithms</ns0:head><ns0:p>In addition to all the existing omics data in TCGA, the inclusion of the clinical information from each patient increases the ability to generate analytical models. The dependent variable in supervised learning problems can potentially be any of the 100 clinical outcomes offered by TCGA, depending on the biological question to be answered. For classification problems, researchers have information on the anatomical division of the neoplasm, the clinical stage of the patient, TNM status, MSI status, ethnicity, age and gender, and survival and/or relapse of the tumour, among others. Thus, we can infer whether we can predict the anatomical division of the tumour or its clinical stage from the methylation marks of the patients (among other possibilities). For regression, we can use the initial age at diagnosis or the prognosis of the patient by means of the Karnofsky Performance Status Scale. Also, independently of clinical data, classification and regression models could be created to determine other omics outcomes, for instance, overexpression of driver genes, methylation status or mutation types. In addition, the data stored in the TCGA repository allow any potential researcher to study survival in the cohort: it presents data on vital status and the days that have elapsed between events, such as the death of the patient or other events of interest (relapse or disease-free survival).</ns0:p><ns0:p>The quantification of the number of papers published for each cancer subtype is shown in Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref>. As shown in this figure, the most used cohorts are those with the highest number of samples: BRCA, LUAD and OV.</ns0:p><ns0:p>The great number of dimensions and observations, together with the large number of available clinical variables (pathological state, TNM classification status, drug effect, treatment response, etc.), generates an ideal data analysis environment for the use of both supervised and unsupervised ML techniques. For supervised learning problems, contingent on the dependent variable to be predicted, these may be regression (patient survival time, expression of a specific gene or individual age) or classification (classification of patients according to some driver gene status, disease or metastasis stages, etc.) problems. In terms of unsupervised learning problems, most work focuses on finding new subtypes of the disease.</ns0:p><ns0:p>As for the other tumours, there is a significant decrease in the number of publications, mainly due to the number of samples collected. This fact is due to the intrinsic functioning of the ML algorithms, which, because they work on the basis of examples, are able to generalise better as the number of observations in their training phase increases. We can also observe in Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref> how several works use different cohorts in the same analysis. After reviewing these papers, two trends have been observed in this type of article. Firstly, there are those that train models to predict cross-sectional and/or basic conditions of tumours. For example, <ns0:ref type='bibr' target='#b41'>(Fischer et al., 2018)</ns0:ref> predicts MSI status from histopathological images. In this case, the different TCGA cohorts are treated together for the training of the models. On the other hand, other works have been identified in which the cohorts are used independently. These works are mainly based on model improvements or the development of new technologies that are then tested with each cohort. This is the example of <ns0:ref type='bibr' target='#b17'>(Chen et al., 2018b)</ns0:ref>, where they develop a new model of autoencoders for the search of new genetic signatures. This model is later validated in each of the available TCGA cohorts. Another example is <ns0:ref type='bibr' target='#b15'>(Cheerla and Gevaert, 2017)</ns0:ref>, where they obtain a model that recommends the type of treatment from miRNA expression data.
This recommendation is validated in the different TCGA cohorts. Therefore, there are many approaches available to researchers for exploiting this type of data.</ns0:p><ns0:p>In this review, we classify the identified works into five major groups according to the biological problem solved. Although there are more than 100 variables in the TCGA clinical database, there is very little variability observed in the type of analysed problems.</ns0:p><ns0:p>In order to observe the distribution of publications according to this classification across the different types of tumours, see Figure <ns0:ref type='figure' target='#fig_7'>4</ns0:ref>. Figure <ns0:ref type='figure' target='#fig_7'>4</ns0:ref> shows the distribution of the published papers according to the different types of tumours and the type of biological problem. The different biological problems show a different distribution according to tumour type. It can be seen how prognosis prediction is more common in the GBM cohort. GBM is a type of tumour with high mortality rates, so it is a cohort with numerous events from which robust ML models can be created.</ns0:p><ns0:p>Following GBM, the OV and LUAD cohorts were the most used. Furthermore, it is observed that this type of problem is addressed in different cohorts. This is not the case for MSI prediction, as few tumours are defined by MSI status; the most common ones in this case are COAD, READ, STAD and UCEC. Paying attention to the prediction of subtypes, we see that the BRCA cohort is the most used. Regarding the immunological phenotype, the works have mainly used cohorts of solid tumours, which are the ones that present the best response to treatments with immunological therapies. Finally, few tumours have been addressed in the prediction of pathways; the works identified used the OV, COAD, LUAD and BRCA cohorts. The following sections review the works according to the five classes identified.</ns0:p></ns0:div>
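<ns0:p>The mapping from clinical variables to supervised targets outlined above can be sketched as follows; the column names are illustrative assumptions, not the actual TCGA clinical identifiers.</ns0:p>

```python
# Hedged sketch of how a clinical table can define different supervised targets.
import pandas as pd

clinical = pd.DataFrame({
    "stage": ["I", "III", "II", "IV"],
    "age_at_diagnosis": [54, 67, 49, 72],
    "vital_status": ["Alive", "Dead", "Alive", "Dead"],
    "days_to_last_followup_or_death": [1200, 340, 2100, 90],
})

y_classification = clinical["stage"]                        # e.g. stage prediction
y_regression = clinical["age_at_diagnosis"]                 # e.g. age prediction
y_event = (clinical["vital_status"] == "Dead").astype(int)  # survival event indicator
y_time = clinical["days_to_last_followup_or_death"]         # time to event/censoring
print(pd.concat([y_event, y_time], axis=1))
```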
<ns0:div><ns0:head>Prognosis prediction</ns0:head><ns0:p>The prognosis in the different types of cancer varies greatly due to their heterogeneity, their environment and their unique behaviour in each patient. It is therefore crucial to be able to predict the events that will develop in the patient and have a direct effect on the prognosis of the cancer. These events can be deaths, recurrence and/or relapse events, metastases or the classification of patients into specific stages.</ns0:p><ns0:p>Numerous studies have been identified that have addressed this field of study with ML-based analyses.</ns0:p></ns0:div>
<ns0:div><ns0:p>Within this category, many of the papers identified have aimed to predict events related to patient survival time. Furthermore, it has been observed that expression data are the most used in this type of problem, due to their better performance in predictions, together with methylation data <ns0:ref type='bibr' target='#b125'>(Stephen and Lewis, 2013)</ns0:ref>. In <ns0:ref type='bibr' target='#b140'>(Wong et al., 2019)</ns0:ref> they use them as input to a deep learning network, while in <ns0:ref type='bibr' target='#b37'>(Fatai and Gamieldien, 2018)</ns0:ref> they use the SVM algorithm. In both problems they obtained gene signatures that were highly correlated with the survival events of the patients. Other works have addressed this type of problem by integrating expression data with other data sets. For example, in <ns0:ref type='bibr' target='#b145'>(Yasser et al., 2018)</ns0:ref>, using FS techniques, they obtained subgroups of features from copy number, methylation and expression data. In <ns0:ref type='bibr' target='#b147'>(Zhang et al., 2016)</ns0:ref>, they add a layer of complexity, also integrating miRNA data by means of multiple kernel learning and FS techniques. This technique was also used in <ns0:ref type='bibr' target='#b124'>(Srivastava et al., 2013)</ns0:ref> for the integration of expression and miRNA data. On the other hand, one paper used only lncRNA data, which was capable of predicting survival events beyond 19 months <ns0:ref type='bibr' target='#b22'>(Cheng, 2018)</ns0:ref>. Works were also identified that have addressed this problem from histopathological images. In these cases, they extract characteristics from the images in order to train different types of ML algorithms and predict survival times and/or events <ns0:ref type='bibr' target='#b55'>(Ing et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b146'>Yu et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b106'>Powell et al., 2017)</ns0:ref>.</ns0:p><ns0:p>In addition to survival events and times, there are other events that are interesting to predict for clinical practice, in this case tumour recurrence events, i.e. when the tumour is detected again after a treatment process. Knowing the probability of a cancer relapse in a given patient is therefore of interest to clinicians. Using the ML approach, this problem has been addressed mainly with transcriptomic expression data. For example, in <ns0:ref type='bibr'>(Wang et al., 2018)</ns0:ref>, from miRNA, lncRNA and mRNA data, they identified 36 features capable of classifying with 91% accuracy whether a tumour will recur or not. In <ns0:ref type='bibr' target='#b149'>(Zhou et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b126'>Sun et al., 2018a;</ns0:ref><ns0:ref type='bibr' target='#b142'>Xu et al., 2017)</ns0:ref>, similar performances were obtained with RNASeq data, while <ns0:ref type='bibr' target='#b137'>(Wei, 2018)</ns0:ref> predicts metastasis processes from RNASeq data. On the other hand, in <ns0:ref type='bibr' target='#b38'>(Feng et al., 2018)</ns0:ref> they predicted recurrence based on data from the tumour microenvironment.</ns0:p><ns0:p>Usually the different types of tumours are classified into different stages, which correlate with different prognoses. Therefore, this has been another problem addressed by the researchers, for which ML has been able to offer a solution. Again, the RNASeq data were the most used to address this problem.
In <ns0:ref type='bibr' target='#b35'>(Fan et al., 2018)</ns0:ref>, they obtained a signature of 12 genes capable of distinguishing lung cancer patients with different risks, while in <ns0:ref type='bibr' target='#b21'>(Chen et al., 2017)</ns0:ref> they identified pathways of interest capable of classifying the different stages of lung adenocarcinoma. Meanwhile, in <ns0:ref type='bibr' target='#b143'>(Yang et al., 2018)</ns0:ref>, they used only the features corresponding to lncRNAs, obtaining a signature of six lncRNAs capable of classifying patients with melanoma according to their stages.</ns0:p></ns0:div>
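<ns0:p>A minimal survival-analysis sketch on simulated data is shown below, here with a Cox model from the lifelines package as one common choice; the reviewed works often feed survival labels to RF, SVM or deep networks instead.</ns0:p>

```python
# Minimal Cox proportional-hazards sketch on simulated survival data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "gene_signature_score": rng.normal(size=200),  # hypothetical expression signature
    "age": rng.integers(40, 85, size=200),
    "time": rng.exponential(1000, size=200),       # days to event or censoring
    "event": rng.integers(0, 2, size=200),         # 1 = death observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()  # hazard ratios for the signature and age
```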
<ns0:div><ns0:head>Immunological phenotype prediction</ns0:head><ns0:p>Currently, some of the most successful and promising therapies against cancer are drugs directed at immune checkpoints (immune checkpoint inhibitors, ICI). These drugs block checkpoint proteins, which are produced by certain immune cells to prevent immune responses from becoming too strong. The activation of these checkpoints can prevent the cells of our immune system from killing the cancer cells. Most tumour types benefit from this type of therapy, although some do not respond in the same way. This is the case of HGSOC tumours. In <ns0:ref type='bibr' target='#b30'>(Dai et al., 2018)</ns0:ref>, they analysed genomic data from HGSOC patients to predict the immune phenotype of the tumour microenvironment. After a comparison with the analysis of other solid tumours, such as BLCA, SKCM, KIRC, LUSC and LUAD, they identified ten dominant factors that determine the immunogenicity of HGSOCs. Using ML they were able to classify tumours with high and low cytolytic activity, noting also that mutations in BRCA1 may be a good predictive biomarker for guiding ICI therapies in HGSOC patients.</ns0:p><ns0:p>Moreover, an eight-feature signature based on CD8 cell radiomic imaging was developed and independently validated for predicting the response to anti-PD-1 and anti-PD-L1 treatments. This imaging predictor provides a promising way to predict the immunological phenotype of tumours and to infer clinical outcomes for cancer patients treated with anti-PD-1 and anti-PD-L1.</ns0:p></ns0:div>
<ns0:div><ns0:head>Pathways prediction</ns0:head><ns0:p>Some of the genetic drivers specific to each tumour are well known, as are certain pathways that influence the process of tumour development. Although the identification of pathway status is a complex issue, it holds a great deal of information for the diagnosis and treatment of patients. This is why researchers have addressed this problem using ML techniques. Several works, mainly based on expression data, have been identified in which they infer the status of different cancer driver pathways <ns0:ref type='bibr' target='#b113'>(Rykunov et al., 2016)</ns0:ref>, damaged pathways <ns0:ref type='bibr' target='#b63'>(Klein et al., 2017)</ns0:ref> and the level of apoptosis <ns0:ref type='bibr' target='#b115'>(Salvucci et al., 2017)</ns0:ref>. In <ns0:ref type='bibr' target='#b19'>(Chen et al., 2012)</ns0:ref>, RNASeq data and copy number data are used to detect pathways capable of differentiating expression patterns between different phenotypes. In the case of <ns0:ref type='bibr' target='#b102'>(Ou-Yang et al., 2017)</ns0:ref>, they developed a cross-platform method for the identification of new molecular pathways related to tumour types.</ns0:p></ns0:div>
<ns0:div><ns0:head>MSI status prediction</ns0:head><ns0:p>Microsatellite instability is the mutation predisposition of certain tumours due to defects in the DNA mismatch repair machinery. It is of great importance to identify MSI status in certain tumours, as it is a strong predictor and marker for diagnosis and treatment. In this review, two papers were identified that have addressed this problem with ML techniques. The first of these <ns0:ref type='bibr' target='#b134'>(Wang and Liang, 2018)</ns0:ref> classified the different MSI subtypes based on mutation annotation data. They used an SVM algorithm and obtained a total accuracy of 0.91 for the COAD, READ, STAD and UCEC cohorts. They used a total of 22 features for the classification, such as the count of SNPs, indels, total mutations, missense mutations or the ratio between mutations and SNPs. On the other hand, in <ns0:ref type='bibr' target='#b18'>(Chen et al., 2018c)</ns0:ref> they made a classification from the expression data. Using ML algorithms and FS techniques, they obtained a classifier capable of discerning the different subtypes.</ns0:p></ns0:div>
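<ns0:p>A hedged sketch of this kind of MSI classifier, an SVM over mutation-derived summary features in the spirit of <ns0:ref type='bibr' target='#b134'>(Wang and Liang, 2018)</ns0:ref>, is shown below. The feature names are illustrative examples of the 22 described features and the data are simulated.</ns0:p>

```python
# SVM over mutation-derived summary features for MSI-high vs MSS (simulated data).
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = pd.DataFrame({
    "n_snps": rng.poisson(80, 300),
    "n_indels": rng.poisson(15, 300),
    "n_total_mutations": rng.poisson(120, 300),
    "n_missense": rng.poisson(60, 300),
    "mutation_snp_ratio": rng.uniform(0.5, 2.0, 300),
})
y = rng.integers(0, 2, size=300)  # MSI-high vs MSS label

pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```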
<ns0:div><ns0:head>Subtypes prediction</ns0:head><ns0:p>Finally, another problem that has been addressed by researchers, and where ML techniques can contribute significantly, is the prediction of the different subtypes of the disease. It is interesting to recognise which omic data sets hold enough information to build a classification system robust enough to obtain the appropriate performance. As usual, RNASeq was the technique par excellence from which the data were obtained to train the models <ns0:ref type='bibr' target='#b144'>(Yang et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b47'>Graudenzi et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b43'>Gao et al., 2017)</ns0:ref>. In addition, the expression data were combined with other sets such as miRNA <ns0:ref type='bibr' target='#b139'>(Wilop et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b82'>Nair et al., 2015)</ns0:ref>, methylation <ns0:ref type='bibr' target='#b73'>(List et al., 2014)</ns0:ref> or miRNA and methylation <ns0:ref type='bibr' target='#b100'>(Nguyen et al., 2017)</ns0:ref>.</ns0:p><ns0:p>In addition to expression data, two papers have used exclusively image data to classify subtypes of the disease, firstly from MRI images <ns0:ref type='bibr' target='#b129'>(Sutton et al., 2017)</ns0:ref> and secondly with qCT-TA data <ns0:ref type='bibr' target='#b65'>(Kocak et al., 2018)</ns0:ref>. Other works, for example, used mutation data <ns0:ref type='bibr' target='#b133'>(Vural et al., 2016)</ns0:ref> and miRNA data <ns0:ref type='bibr' target='#b81'>(Muhamed Ali et al., 2018)</ns0:ref>.</ns0:p><ns0:p>It is logical to think that the ML algorithms now attempt to analyse the most studied problems to determine whether they can reach the same conclusions as conventional statistical approaches. In general, ML approaches analyse the importance of each of the variables in the dataset without making any a priori assumptions, so the generalisation of the model does not have to be based on inherent biological knowledge of the data. Although there are ML approaches that base the selection of genes from each data platform on certain pathways of interest <ns0:ref type='bibr' target='#b118'>(Seoane et al., 2013)</ns0:ref>, this field is still open for new approaches.</ns0:p><ns0:p>One study observed that the ML algorithms reached similar conclusions and also provided a certain degree of diversity in the results <ns0:ref type='bibr' target='#b72'>(Liñares Blanco et al., 2019)</ns0:ref>. This outcome aids the examination of new omics variables that might be of interest to study the development of cancer. Cancer is a multifactorial and complex disease, so it makes sense that the analysis should consider the differences that characterise the patients as a whole and not individually.</ns0:p></ns0:div>
<ns0:div><ns0:head>A deeper examination of the BRCA cohort</ns0:head><ns0:p>The TCGA consortium jointly analysed genomic DNA copy number arrays, DNA methylation, exome sequencing, mRNA arrays, miRNA sequencing and reverse-phase protein arrays <ns0:ref type='bibr' target='#b84'>(Network et al., 2012b)</ns0:ref>. In this study, they demonstrated the existence of four main classes of breast cancer by combining data from five platforms; there was great heterogeneity. Mutations in only three genes (TP53, PIK3CA and GATA3) occurred in more than 10% of all the samples. In addition, they identified two new subgroups defined by protein expression, produced primarily by the tumour microenvironment. Besides, the comparison of basal-type breast tumours with high-grade serous ovarian tumours showed a myriad of molecular similarities, a finding that indicates a related aetiology and similar therapeutic opportunities.</ns0:p><ns0:p>In one study <ns0:ref type='bibr' target='#b25'>(Ciriello et al., 2015)</ns0:ref>, the authors discovered that invasive lobular carcinoma (ILC) is a clinically and molecularly distinct disease. In this case, patients with ILC show CDH1 and PTEN loss, AKT activation and mutations in TBX3 and FOXA1. The proliferation and expression of genes related to the immune system defined three ILC subtypes.</ns0:p><ns0:p>The findings made by TCGA are leading the way in the search for new treatment and diagnostic opportunities for patients, in this case, with breast cancer. Although the work of the TCGA has been exhaustive, the possibilities offered by giving free access to its data are enormous. For this reason, many researchers have taken these data as a reference and have reported results of great interest to the community.</ns0:p><ns0:p>We identified several publications that utilised ML to analyse TCGA BRCA data. There are published works using miRNA data <ns0:ref type='bibr' target='#b123'>(Sherafatian, 2018)</ns0:ref>, methylation data <ns0:ref type='bibr' target='#b48'>(Hao et al., 2017)</ns0:ref>, expression data <ns0:ref type='bibr' target='#b138'>(Wen et al., 2018)</ns0:ref>, integrative analysis of expression and methylation data <ns0:ref type='bibr' target='#b12'>(Cappelli et al., 2018)</ns0:ref> and even expression data from isomiRs <ns0:ref type='bibr' target='#b71'>(Liao et al., 2018)</ns0:ref>. These works achieved prominent outcomes, notably the ability to infer that classification problems for diagnosis (healthy or diseased patients) are problems that the ML algorithms solve quite easily, even with different types of data.</ns0:p><ns0:p>Several papers have been published to address patient stratification. For example, to classify the subtypes of PR, ER and HER2 with miRNA data <ns0:ref type='bibr' target='#b123'>(Sherafatian, 2018;</ns0:ref><ns0:ref type='bibr' target='#b71'>Liao et al., 2018)</ns0:ref>, the status of the basal subtype through the analysis of images with deep learning algorithms <ns0:ref type='bibr' target='#b24'>(Chidester et al., 2018)</ns0:ref> and the different subtypes of BRCA by the expression of molecular pathways <ns0:ref type='bibr' target='#b47'>(Graudenzi et al., 2017)</ns0:ref>, mutation data <ns0:ref type='bibr' target='#b133'>(Vural et al., 2016)</ns0:ref> or even the integration of expression and methylation data <ns0:ref type='bibr' target='#b73'>(List et al., 2014)</ns0:ref>. Cancer subtypes can also be studied by unsupervised learning techniques and the integration of different data (expression, methylation, miRNA and CNV) <ns0:ref type='bibr' target='#b100'>(Nguyen et al., 2017)</ns0:ref>.</ns0:p><ns0:p>Finally, other works have studied the interaction between miRNA and mRNA <ns0:ref type='bibr' target='#b66'>(Koo et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b46'>Ghoshal et al., 2018)</ns0:ref>, the identification of altered pathways by mRNA expression data <ns0:ref type='bibr' target='#b63'>(Klein et al., 2017)</ns0:ref> or by integrating expression and mutation data <ns0:ref type='bibr' target='#b113'>(Rykunov et al., 2016)</ns0:ref>, the response to drugs in different cell lines <ns0:ref type='bibr' target='#b29'>(Daemen et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b45'>Geeleher et al., 2017)</ns0:ref> and the identification of variants by means of genomic data <ns0:ref type='bibr' target='#b32'>(Dong et al., 2016)</ns0:ref> and by means of images with artificial vision techniques <ns0:ref type='bibr' target='#b129'>(Sutton et al., 2017)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>Many studies on cancer have been performed in recent years with ML using molecular data. These studies have mainly addressed diagnosis, prognosis or patient stratification. More recently, there have been promising results in drug response or genetic interactions. In this review, we investigated and identified the relevant works that have used TCGA data through algorithms or analysis pipelines based on ML.</ns0:p><ns0:p>ML techniques can extract the underlying knowledge from a set of data, so it is relevant to understand the appropriateness of the data. In other words, these techniques must be used with certain precautions. Indeed, researchers should be aware that the conclusions they obtain may be biased due to poor data selection or analytical methodology. Among the different learning techniques, supervised learning has analysed the most problems using TCGA data. This endeavour has emphasised the use of genetic expression data through different variants of the SVM algorithm. There are still countless opportunities and possibilities for the exploitation of TCGA data with ML. ML techniques can reach conclusions that are similar to conventional approaches and can also provide a degree of variability that is extremely useful when searching for novel predictors.</ns0:p><ns0:p>It is clear that we are still at an early stage in the analysis of this pathology, and it is necessary to develop and use more complex algorithms. For example, kernel-based models can integrate different datasets in the same process. The integration of data in the analysis of complex and multifactorial diseases continues to be a challenge for which it is necessary to invest even more time and money in finding better algorithms. As discussed above, the quantity of existing data will not stop growing, and all of these data derive from the same biological sample. Thus, it is expected that the connection between omics platforms can improve the performance of the models. It is still necessary to take a step forward in the development of multidimensional ML models for cancer research.</ns0:p><ns0:p>Complex problems, such as the prediction of different cell statuses (methylation, apoptosis or mutation), are already being tackled with promising results. We and others hope that the links between biological information extracted from the same patient will be further explored in order to elucidate the origin of the disease by ML techniques. Currently, the focus is on certain types and subtypes of cancer (e.g. BRCA, LUAD or OV), usually due to the number of people afflicted with them and the importance attributed by society. It is also necessary to increase investment in the generation of data related to relatively minor or especially aggressive cancer types in order to provide the algorithms with sufficient information in their learning phase and to avoid biases in their learning.</ns0:p></ns0:div>
<ns0:div><ns0:p>In this work, we exhaustively reviewed studies that have used ML techniques for the analysis of different types of cancer using TCGA data. In our opinion, the era of individual analysis has passed, and we are entering the era of data integration studies: at the clinical-genomic level, as well as in medical imaging and in evolution analysis by means of time series. We are working on the development of complex data integration algorithms in different fields, one of which is artificial intelligence. There are currently ML models that are demonstrating great effectiveness and are gaining followers. These methods include the aforementioned deep learning techniques, but research is required to render the results understandable and to explain why a certain prediction is made, especially from a clinical point of view. The great challenge of the integration techniques is the incessant increase in the number of dimensions and the heterogeneity of the data sets generated from the same patient/biological process <ns0:ref type='bibr' target='#b67'>(Kristensen et al., 2014)</ns0:ref>.</ns0:p><ns0:p>Finally, we hope that this review will serve as a starting point for researchers in bioinformatics and computer science who are interested in studying cancer, as well as for those researchers who are more focused on the use of ML techniques and want to explore the potential of their algorithms with TCGA data. More research and the development of new algorithms are required to overcome the disease.</ns0:p></ns0:div>
<ns0:div><ns0:head>ABBREVIATIONS</ns0:head></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Quantification of the number of samples in the TCGA repository, classified by type of tumour and type of biotechnological analysis. Clin = Clinical; SNP6 = SNP6 CopyNum; DNAseq = LowPass DNASeq CopyNum; Mutat = Mutation Annotation File; Met = Methylation; rawMut = rawMutation Annotation File; Prot = Reverse Phase Protein Array</ns0:figDesc><ns0:graphic coords='3,141.73,67.33,413.58,220.41' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. a) Number of papers that used each type of algorithm, and b) relations between omics data used in each work.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>In addition, transfer learning was born: an attempt to reuse the representation learned for one problem to solve another. Deep learning techniques are on the rise in cancer research, namely for object detection and image classification. Initiatives such as TCGA offer the possibility of training deep learning models by making a large quantity of biomedical images available for research. Specifically, TCGA provides two types of images: tissue slide and digital imaging and communications in medicine (DICOM) images. DICOM images, such as X-rays or computed tomography (CT) scans, are used to extract quantitative characteristics from the images. Algorithms are trained to identify those characteristics. Histopathological images are used for direct image processing.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Number of papers published with each of the TCGA cohorts. Upset plot showing the number of works published with each tumor type and their combinations.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The proportion of published works with ML techniques according to the type of biological problem.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>(OV). TCGA currently presents data from a total of 38 different cohorts. Four of them (COADREAD, GBMLGG, KIPAN and STES) are not original; they are combinations of other cohorts. Among the remaining 34 cancer cohorts are tumours of different tissue types, as can be seen in Table 2. To date, TCGA has characterised and published about 33 different types of tumours in leading international journals. Table 2 provides greater depth for each of the publications that TCGA has made in each recruited cohort.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Enumeration of the different cohorts presented by the TCGA repository, classified according to the tissue of origin of the tumour. In addition, the original paper published by the TCGA consortium is cited.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>After the review carried out in this work, several studies were identified that were able to model this problem. Most of them are based on RNASeq data, with</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "
8 April 2021
Dear Editor and Reviewers,
Below you will find our point-by-point response to the reviewers' comments, indicating the changes we have made in the manuscript. If you believe that any comment or modification has not been correctly or fully addressed, please let us know and we will try to rectify it as soon as possible.
We hope you will agree with these corrections and propose this work for publication.
Finally, we would like to thank the Editor and Reviewers for their comments and suggestions that have helped to improve the quality of the manuscript.
Yours sincerely,
On behalf of the authors,
Carlos Fernandez-Lozano, PhD
Department of Computer Science and Information Technologies
Assistant Professor - Universidade da Coruña
Affiliated Researcher - Centre for Information and Communications Technology Research
Phone number: +34 881 01 6013
ORCID-ID: 0000-0003-0413-5677
Reviewer reports:
Reviewer 1 (Anonymous)
Basic reporting
The authors present a review to reflect the state of the art in machine learning particularly on The Cancer Genome Atlas Program data.
First, some of the work of the TCGA consortium itself is described to give the reader a good foundation. However, the main goal of this work is to identify and discuss those works that have used the TCGA data to train different ML approaches. The authors classify them according to three main criteria: the type of tumor, the type of algorithm, and the predicted biological problem. One of the conclusions drawn in this work shows a high density of studies based on two main algorithms: Random Forest and Support Vector Machines and naturally an increase is described from the use of deep artificial neural networks. An increase of integrative models of multi-omic data analysis is also presented. Biological problems were classified into five types: Prognosis prediction, tumor subtypes, microsatellite instabilities, immunological aspects, and specific signaling pathways.
A clear trend was found in the prediction of these conditions according to tumor type. This is why a greater number of papers have focused on the BRCA cohort, while specific papers for survival, for example, have focused on the GBM cohort, due to its large number of events.
Any work dedicated to fighting cancer is important. This work is a very good contribution and very useful for the international research community and it is very well written, well motivated and good to read. For all these reasons, this reviewer endorses this work, recommends acceptance, and makes some recommendations below to help further improve the work:
1) For the beginner in this field, a list of all abbreviations used would be very helpful, e.g. MSI = Microsatellite Instable is not mentioned anywhere and newcomers should be able to find their way quickly.
We thank the Reviewer for this appreciation. A new section with abbreviations has been added to the manuscript, and all abbreviations are now spelled out at first use.
2) Figure 2, the descriptions are very hard to read - maybe the image can be optimized
3) Figure 2 b - is practically illegible
4) Figure 4 also very hard to read
We thank the Reviewer for this appreciation. Figures 2, 3 and 4 were revised to make them more readable.
5) In the summary section or before, a bit of an outlook on future important research topics should be given, e.g. comprehensibility, interpretability to further explore causal relationships which is eminently important in cancer research. Here, however, it is totally important to point out that in the medical field always several different components contribute to a result - but this is often negated in current machine learning, consequently, it would be good to say so, and to point to a brand new current work that deals exactly with this matter [x].
[x] Holzinger, A., Malle, B., Saranti, A. & Pfeifer, B. 2021. Towards Multi-Modal Causability with Graph Neural Networks enabling Information Fusion for explainable AI. Information Fusion, 71, (7), 28-37, doi:10.1016/j.inffus.2021.01.008
We thank the Reviewer for this appreciation. We have reviewed this work and indeed it fits very well, so we have added it to our manuscript accordingly.
6) Readers within the subsection 'A general perspective of unsupervised learning with TCGA data' may also be interested in this brand new work [y]:
[y] Jean-Quartier, C., et al. A. 2021. Mutation-based clustering and classification analysis reveals distinctive age groups and age-related biomarkers for glioma. BMC medical informatics and decision making, 21, (77), 1-14, doi:10.1186/s12911-021-01420-1
Thank you for pointing us to this publication. We have reviewed this work and indeed it fits very well in the section you indicate. We have added it to the manuscript.
Experimental design
nothing to add - very good
Validity of the findings
nothing to add - very good
Comments for the Author
please see comments above.
Thank you for your time and comments, much appreciated.
Reviewer 2 (Anonymous)
Basic reporting
No comment
Experimental design
No comment
Validity of the findings
No comment
Comments for the Author
Summary of Paper:
In this paper, the authors provide a comprehensive review on literature using machine learning techniques for the analysis of different types of cancer using TCGA data. Starting with explaining the methodologies used in this survey, the authors present the main results obtained by the TCGA consortium followed by reviewing the capabilities of machine learning algorithms to solve biological problems. Overall, the paper is well written. Below are my minor concerns on the paper.
Minor Concerns:
• On line 102, 'Machine learning as a source o new knowledge' -> 'Machine learning as a source of new knowledge.'
• On line 107, ‘To his aim …’ -> ‘To this aim’.
Thank you very much for reporting these typos. We have modified them in the manuscript.
• It is advisable to write the article inclusion/exclusion criteria on lines 127-136 more clearly. For example, I am assuming that the last point ‘Article/conference manuscripts using machine learning marginally or without solid biological conclusions’ should be the exclusion criteria.
We thank the Reviewer for this appreciation. We modified the manuscript accordingly.
• ‘neuron networks’ on lines 241 and line 244 should be replaced by ‘neural networks’.
Thank you very much for reporting these typos. We have modified them in the manuscript.
• Consider rephrasing the sentence ‘In this Figure 4 is interesting to observe how the different problems addressed are distributed differently in the types of tumour.’ on lines 391-392.
We thank the Reviewer for this appreciation. We have considered your comment and have reconstructed the sentence. We now believe it is better worded and can be better understood.
“Figure 4 shows the distribution of the published papers according to the different types of tumors and the type of biological problem. The different biological problems show a different distribution according to tumor type.”
• ‘Immnulogical phenotype prediction’ -> ‘Immunological phenotype prediction’ on line 441
• ‘immunologicc phenotype’ -> ‘immunologic phenotype’ on line 455.
Thank you very much for reporting these typos. We have modified them in the manuscript.
Thank you for your time and comments, much appreciated.
" | Here is a paper. Please give your review comments after reading it. |
143 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Mass spectrometry imaging (MSI) enables the unbiased characterization of surfaces with respect to their chemical composition. In biological MSI, zones with differential mass profiles hint towards localized physiological processes, such as the tissue-specific accumulation of secondary metabolites, or diseases, such as cancer. Thus, the efficient discovery of 'regions of interest' (ROI) is of utmost importance in MSI. However, the discovery of ROIs is often hampered by high background noise and artifact signals. Especially in ambient ionization MSI, unmasking biologically relevant information from crude data sets is challenging. Therefore, we implemented a Threshold Intensity Quantization (TrIQ) algorithm for augmenting the contrast in MSI data visualizations. The simple algorithm reduces the impact of extreme values ('outliers') and rescales the dynamic range of mass signals. We provide an R script for post-processing MSI data in the imzML community format (https://bitbucket.org/lababi/msi.r) and implemented the TrIQ in our open-source imaging software RmsiGUI (https://bitbucket.org/lababi/rmsigui/). Applying these programs to different biological MSI data sets demonstrated the universal applicability of TrIQ for improving the contrast in MSI data visualization. We show that TrIQ improves the subsequent detection of ROIs by segmentation. In addition, the adjustment of the dynamic signal intensity range makes MSI data sets comparable.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Mass spectrometry imaging (MSI) datasets contain spatially resolved spectral information.</ns0:p><ns0:p>The parallel detection of multiple compounds with high sensitivity established mass spectrometry (MS) as the first-choice tool for exploratory studies, particularly in combination with data mining methods <ns0:ref type='bibr' target='#b17'>(López-Fernández et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b44'>Winkler, 2015)</ns0:ref>. Numerous MSI technologies have been reported; the most commonly used MSI platform is based on matrix-assisted laser desorption/ionization (MALDI), suitable for a wide range of molecules <ns0:ref type='bibr' target='#b29'>(Rae Buchberger et al., 2018)</ns0:ref>. However, conventional MSI usually requires significant sample preparation and physical conditions, which are incompatible with life. Therefore, there is a keen interest in developing ambient ionization MSI (AIMSI) methods because they enable the direct analysis of delicate materials and biological tissues with no or minimal sample preparation <ns0:ref type='bibr' target='#b16'>(Lu et al., 2018)</ns0:ref>.</ns0:p><ns0:p>Unlike digital image acquisition equipment, where the luminous intensity is digitalized, MSI signal intensities must be converted to discrete values by a process termed quantization. Each discrete intensity level is known as a gray level; intensity-quantized images are called grayscale images. Zones in MSI spectra with distinct intensities indicate localized biochemical activity, such as the biosynthesis of natural products or physiological processes. Such regions of interest (ROI) can be identified by visual inspection or automated segmentation <ns0:ref type='bibr' target='#b14'>(Gormanns et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b3'>Bemis et al., 2015)</ns0:ref>.</ns0:p><ns0:p>Human vision is limited to the perception of only 700 to 900 shades of gray <ns0:ref type='bibr' target='#b15'>(Kimpe and Tuytschaever, 2007)</ns0:ref>. With the mapping of gray levels to colors, the visualization of features in an image can be improved. However, color perception is a subjective experience affected by illumination and the individual response of the rod and cone photoreceptors located in our eyes.</ns0:p><ns0:p>Thus, the color schemes used significantly affect the human perception of scientific data <ns0:ref type='bibr'>(Rogowitz et al., 1996;</ns0:ref><ns0:ref type='bibr' target='#b28'>Race and Bunch, 2015)</ns0:ref>. The frequently used rainbow color map generates colorful images that accentuate differences in signal intensities. However, the resulting images do not comply with 'perceptual ordering,' i.e., the viewers of a rainbow-colored MSI visualization cannot intuitively order the different colors according to the signal intensities. Thus, rainbow color schemes are confusing and might even actively mislead viewers <ns0:ref type='bibr' target='#b4'>(Borland and Taylor Ii, 2007)</ns0:ref>. The online tool hclwizard (http://hclwizard.org) helps in the generation of custom color maps, which follow the hue-chroma-luminance (HCL) concept and are suitable for different types of scientific data visualizations <ns0:ref type='bibr' target='#b46'>(Zeileis et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b38'>Stauffer et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b9'>Gamboa-Becerra et al., 2015)</ns0:ref>. 
The current gold standard for plotting scientific data is the color map Viridis, which provides linear perception and considers viewers with color vision deficiencies (CVD) <ns0:ref type='bibr' target='#b21'>(Nuñez et al., 2018)</ns0:ref>.</ns0:p><ns0:p>Another critical aspect of image processing is the adaptation of quantitative data to the human vision, i.e., an adjustment of physical data to the biological receiver <ns0:ref type='bibr' target='#b39'>(Stockham, 1972)</ns0:ref>.</ns0:p><ns0:p>Histogram equalization is widely used for enhancing the contrast in images and therefore supporting the recognition of patterns. The application of histogram equalization is simple and computationally fast. In the global histogram equalization (GHE), the entire image data are used to remap the representation levels (Abdullah-Al-Wadud et al., 2007). But the original intensity levels of the pixels are lost, and the fidelity of the data visualization is infringed. Thus, we introduce the use of Threshold Intensity Quantization (TrIQ) for the processing of conventional and ambient ionization mass spectrometry imaging (MSI/AIMSI) datasets.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>MATERIALS AND METHODS</ns0:head></ns0:div>
<ns0:div><ns0:head n='2.1'>Data sets and formats</ns0:head><ns0:p>We analyzed publicly available mass spectrometry imaging data sets from different MSI acquisition techniques, lateral resolutions, and sample types. The datasets used in this work are listed in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. The original DESI data set consists of four samples per image, which we separated for further processing.</ns0:p><ns0:p>All datasets comply with the imzML data format community standard <ns0:ref type='bibr' target='#b37'>(Schramm et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b36'>Römpp et al., 2011)</ns0:ref>, implemented in many proprietary and open source programs for MSI data processing <ns0:ref type='bibr' target='#b43'>(Weiskirchen et al., 2019)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>Threshold Intensity Quantization (TrIQ) Algorithm</ns0:head><ns0:p>An image is a function 𝑓 (𝑥, 𝑦) that assigns an intensity level to each point 𝑥, 𝑦 in a two-dimensional space. For visualizing 𝑓 (𝑥, 𝑦) on a computer screen or printer, the image must be digitized for both intensity and spatial coordinates. As MSI is a scanning technique, the spatial coordinates 𝑥</ns0:p></ns0:div>
<ns0:div><ns0:p>and 𝑦 are already discrete values related to the lateral resolution of the scanning device. The intensity values provided by the MS ion detector are analog quantities that must be transformed into discrete ones. Quantization is a process for mapping a range of analog intensity values to a single discrete value, known as a gray level. Zero-memory quantization is a widely used method.</ns0:p><ns0:p>The zero-memory quantizer computes equally spaced intensity bins of width 𝑤:</ns0:p><ns0:formula xml:id='formula_0'>𝑤 = [max(𝑓 ) − min(𝑓 )] / 𝑛<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where 𝑛 represents the number of discrete values, usually 256; the min(𝑓 ) and max(𝑓 ) operators provide the minimum and maximum intensity values. Quantization is based on a comparison with the transition levels 𝑡 𝑘 :</ns0:p><ns0:formula xml:id='formula_1'>𝑡 𝑘 = 𝑤 + min(𝑓 ), 2𝑤 + min(𝑓 ), … , 𝑛𝑤 + min(𝑓 )<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>Finally, the discrete mapped value 𝑄 is obtained:</ns0:p><ns0:formula xml:id='formula_2'>𝑄(𝑓 (𝑥, 𝑦)) = 0, if 𝑓 (𝑥, 𝑦) ≤ 𝑡 1 ; 𝑘, if 𝑡 𝑘 < 𝑓 (𝑥, 𝑦) ≤ 𝑡 𝑘+1<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>(AI)MSI methods often produce outliers, i.e., infrequent extreme intensity values, which drastically reduce image contrast. The Threshold Intensity Quantization, or TrIQ, addresses this issue by setting a new upper limit 𝑇; intensities above this threshold are grouped within the highest bin. The computation of 𝑇 involves the cumulative distribution function (CDF) 𝑝(𝑘), defined as</ns0:p><ns0:formula xml:id='formula_3'>𝑞 ≈ 𝑝(𝑘) = (1/𝑁) ∑ 𝑖=1…𝑘 ℎ(𝑖)<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>where ℎ(𝑖) stands for the frequency of bin 𝑖 within the image histogram ℎ, and 𝑁 is the image's pixel count. Given a target probability 𝑞, it is possible to find the bin 𝑘 whose CDF value most closely resembles 𝑞. Then, the upper limit of bin 𝑘 in ℎ is used as the threshold value 𝑇. The bin width is recalculated as</ns0:p><ns0:formula xml:id='formula_4'>𝑤 = [𝑇 − min(𝑓 )] / (𝑛 − 1)<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>and the new transition levels can be defined as:</ns0:p><ns0:formula xml:id='formula_4b'>𝑡 𝑘 = 𝑤 + min(𝑓 ), 2𝑤 + min(𝑓 ), … , (𝑛 − 1)𝑤 + min(𝑓 )<ns0:label>(6)</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:p>Therefore, the 𝑄 mapping will be</ns0:p><ns0:formula xml:id='formula_5'>𝑄(𝑓 (𝑥, 𝑦)) = 0, if 𝑓 (𝑥, 𝑦) ≤ 𝑡 1 ; 𝑘, if 𝑡 𝑘 < 𝑓 (𝑥, 𝑦) ≤ 𝑡 𝑘+1 ; 𝑛 − 1, if 𝑓 (𝑥, 𝑦) > 𝑡 𝑛−1<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>with 𝑘 running from 1 to 𝑛 − 1. From equation (7) it follows that a higher number of histogram bins 𝑘 leads to a better approximation of 𝑞. Default values for 𝑘 and 𝑞 in RmsiGUI are 100 and 98%, respectively.</ns0:p><ns0:p>Using the default values of the TrIQ, 98% (=𝑞) of the image's total intensity distribution is visualized. Pixel intensities above the calculated threshold value 𝑇 are limited to the maximum value. Therefore, the rescaled 100 (=𝑘) bins visualize the dataset's intensity levels with more detail.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3'>Implementation</ns0:head><ns0:p>Several programs and workflows for MSI data analysis employ the statistical language R (R Core Team, 2018), such as MSI.R <ns0:ref type='bibr' target='#b9'>(Gamboa-Becerra et al., 2015)</ns0:ref>, Cardinal <ns0:ref type='bibr' target='#b3'>(Bemis et al., 2015)</ns0:ref>, and the Galaxy MSI module <ns0:ref type='bibr' target='#b8'>(Föll et al., 2019)</ns0:ref>. The Otsu segmentation method <ns0:ref type='bibr' target='#b24'>(Otsu, 1979)</ns0:ref> tested in this work comes with the R package EBImage <ns0:ref type='bibr' target='#b26'>(Pau et al., 2010)</ns0:ref>. Recently, we published an R-based platform for MSI data processing with a graphical user interface, RmsiGUI, which provides modules for the control of an open hardware imaging robot (Open LabBot), the processing of raw data, and the analysis of MSI data <ns0:ref type='bibr' target='#b31'>(Rosas-Román et al., 2020)</ns0:ref>. We integrated the TrIQ algorithm into RmsiGUI and provide the R code snippets for facilitating its adoption into other programs. The source code is freely available from the project repository https://bitbucket.org/lababi/rmsigui/.</ns0:p><ns0:p>We use the viridis color map, which is optimized for human perception and people with color vision deficiencies <ns0:ref type='bibr' target='#b21'>(Nuñez et al., 2018</ns0:ref><ns0:ref type='bibr' target='#b10'>, Garnier et al. (2018)</ns0:ref>). Reading and processing MSI data in imzML format are done with the MALDIquantForeign and MALDIquant libraries <ns0:ref type='bibr' target='#b12'>(Gibb and Strimmer, 2012;</ns0:ref><ns0:ref type='bibr' target='#b11'>Gibb and Franceschi, 2019)</ns0:ref>.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref> shows the graphical user interface of RmsiGUI with the TrIQ option selector.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>RESULTS AND DISCUSSION</ns0:head><ns0:p>The contrast of digital images can be enhanced with additional operations, such as global or local histogram equalization algorithms; however, such image processing tools do not preserve the original gray level scale's linearity. In contrast, our Threshold Intensity Quantization (TrIQ)</ns0:p><ns0:p>approach finds an intensity threshold 𝑇 for saturating the images' last gray level. Importantly, the linearity of the experimentally determined intensity scale is preserved.</ns0:p><ns0:p>In the next sections, we demonstrate the application of the TrIQ for the processing of mass spectrometry imaging (MSI) datasets. The term raw image is used in this paper for denoting images rendered with the default quantization method of R.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Contrast optimization</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref> shows the mass spectrometry image of a human colorectal adenocarcinoma sample, acquired with DESI and 100 𝜇m spatial resolution <ns0:ref type='bibr'>(Oetjen et al., 2015)</ns0:ref>. The imaged signal of 885.55 m/z corresponds to de-protonated phosphatidylinositol (18:0/20:4), [C 47 H 83 O 13 P-H] − <ns0:ref type='bibr' target='#b41'>(Tillner et al., 2017)</ns0:ref>.</ns0:p><ns0:p>The direct plotting of the extracted m/z slice results in the image shown in Figure 2a). The image contains pixels with intensity values of up to 70,280 arbitrary units. Such extreme and infrequent intensity values are called outliers and drastically reduce image contrast. The histogram below reveals the reason for the low contrast of the image: most of the pixels fall into the first four bins after the default R quantization.</ns0:p><ns0:p>A typical data transformation for imaging is the use of a logarithmic intensity scale. Figure 2b) shows the image after applying the natural logarithm to the MSI signal intensities and default quantization. The contrast is improved. However, further operations would be necessary, such as the subtraction of the background level. Besides, the interpretation of the non-linear color scale is not intuitive.</ns0:p><ns0:p>The conventional sequence for improving the contrast of an MSI image is applying the zero-memory quantization of MSI data and a transformation function on the quantized pixels. Figure 2c) shows the result of this process. Although improved contrast is gained with linear equalization, the original intensity levels of the pixels are lost, whereas TrIQ (Figure 2d) preserves the linearity of the measured intensity scale.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Background optimization</ns0:head><ns0:p>Various causes can result in background noise, such as sample metabolites with low vapor pressure and matrix compounds. Figure 3a) shows the rendering of signals from an Arabidopsis thaliana leaf, analyzed by LAESI MSI with a lateral resolution of 200 𝜇m (Zheng et al., 2020). The images correspond to the putative negative ions of 4-hydroxymethyl-3-methoxyphenoxyacetic acid ([C 10 H 12 O 5 -3H] − , 209.0 m/z), 4-methylsulfonylbutyl glucosinolate ([C 12 H 23 NO 10 S 3 -H] − , 436.0 m/z), and indol-3-ylmethyl glucosinolate ([C 16 H 20 N 2 O 9 S 2 -H] − , 447.1 m/z) (Wu et al., 2018).</ns0:p><ns0:p>Removing outliers with TrIQ and reducing gray levels lead to improved image brightness (see Figures 3a) and b)). Nevertheless, background noise is also enhanced, and the sample shape is not well defined (see ion 209 m/z in Figure 3b)). There are two possibilities for background correction with TrIQ. The first option is reducing the number of gray levels, thus grouping a wider range of values within a single bin. Reducing the gray levels from 32 to 9 produced an almost perfectly uniform background and a well-defined sample shape, as shown in Figure 3c). The second method finds a new black level threshold that substitutes the operator min(𝑓 ) in equation 5. As the color bars for this approach indicate, the black level thresholds depend on the individual image data (see Figure 3d)). Both methods efficiently diminish the impact of background noise. But whereas reducing the gray levels is the simpler approach, defining a new black level threshold maintains the color depth.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3'>Normalization for comparable mass spectrometry images</ns0:head><ns0:p>Comparing mass spectrometry images (MSI) is a challenge because standard quantization procedures create images with distinct intensity and color scales, even if they were measured under the same experimental conditions.</ns0:p><ns0:p>Global TrIQ finds the highest 𝑇 among a given MSI set. This threshold is used for computing the transition levels and mapping MSI intensities to discrete values on every image within the set.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_8'>4</ns0:ref> shows human colorectal adenocarcinoma images. The samples come from the same tissue, cut into slices with a thickness of 10 𝜇m <ns0:ref type='bibr'>(Oetjen et al., 2015)</ns0:ref>. All images visualize the abundance of the ion 885.55 m/z. The plotting of the raw data resulted in images of low contrast, which are difficult to interpret and to compare (see Figure <ns0:ref type='figure' target='#fig_8'>4a</ns0:ref>). Using the Global TrIQ algorithm, a maximum threshold 𝑇 of 4,970 was calculated and applied to all twelve images; the accumulated probability was set to 0.91. The resulting color scale is the same for all images (see Figure <ns0:ref type='figure' target='#fig_8'>4b</ns0:ref>). Thus, the relative abundance and distribution of the ion in multiple images can be evaluated at a single glance.</ns0:p><ns0:p>Figure 5 provides another example of global TrIQ. The image compares the 62.1, 84.1, and 306.1 m/z ions of chili (Capsicum annuum) slices sampled at 1 mm lateral resolution with low-temperature plasma (LTP) MSI (Maldonado-Torres et al., 2014). LTP MSI detects small, volatile compounds. Therefore, the images resulting from this ambient ionization technique are noisy. TrIQ with 𝑃 = 0.98 and five gray levels improves the contrast. The remaining noise is efficiently removed by an additional median filter (<ns0:ref type='bibr'>et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b18'>Maldonado-Torres et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b9'>Gamboa-Becerra et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b5'>Cervantes-Hernández et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Both examples demonstrate the usefulness of TrIQ for normalizing MSI data for image comparison.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.4'>Segmentation and visualization of regions of interest (ROI)</ns0:head><ns0:p>Finding regions of interest (ROI) in MSI datasets is essential for biomarker discovery and physiological studies. High-contrast and low-noise images favor automated segmentation. Figure <ns0:ref type='figure' target='#fig_10'>6</ns0:ref> compares binary masks obtained using the standard Otsu algorithm and TrIQ. The input image was zero-memory quantized with 256 bins before applying the Otsu algorithm. The TrIQ binary masks were obtained by setting all non-zero values of the median-filtered images of Figure <ns0:ref type='figure' target='#fig_7'>5</ns0:ref> to TRUE.</ns0:p><ns0:p>TrIQ-median filtered images show more extensive and homogeneous regions compared to the Otsu masks and fit well with the anatomical fruit sections of the optical image (see Figure <ns0:ref type='figure' target='#fig_10'>6</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:p>Figure <ns0:ref type='figure' target='#fig_11'>7</ns0:ref> shows the Global TrIQ segmentation of mouse urinary bladder data, scanned with AP-MALDI MSI at a lateral resolution of 10 μm <ns0:ref type='bibr' target='#b32'>(Römpp et al., 2010</ns0:ref><ns0:ref type='bibr' target='#b33'>, 2014)</ns0:ref>. The distribution of 741.53 m/z, identified as a sphingomyelin [C 39 H 79 N 2 O 6 P + K] + ion, is related to muscle tissue.</ns0:p><ns0:p>743.54 m/z is associated with the lamina propria structure, while 798.54 m/z is mainly found in the urothelium <ns0:ref type='bibr' target='#b32'>(Römpp et al., 2010)</ns0:ref>. Global TrIQ was applied with P = 0.95 and 25 gray levels.</ns0:p><ns0:p>TrIQ and median filtering allow a clear discrimination between muscle tissue and lamina propria using the marker ions 741.53 and 743.54 m/z. For separating the urothelium structure from the muscular tissue, the binary image for 798.54 m/z was calculated by zeroing gray levels below 9.</ns0:p><ns0:p>In contrast, if the Otsu method is applied to the 798.54 m/z ion, the urothelium region is isolated automatically. This result is expected, since the Otsu method assumes an image histogram distribution with a deep, sharp valley between two peaks. Figure <ns0:ref type='figure' target='#fig_12'>8</ns0:ref> shows an overlay of the TrIQ-processed ion images, correctly representing the anatomical structures of the mouse urinary bladder.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.5'>Computational performance of the algorithm</ns0:head></ns0:div>
<ns0:div><ns0:p>For estimating the computational performance of the TrIQ algorithm for large images, we built synthetic slices by sequentially doubling the DESI data. The time measurement was repeated twenty-five times to account for variations caused by operating system processes. Figure <ns0:ref type='figure' target='#fig_13'>9</ns0:ref> shows the results from running the R script Timing.R (provided as supplemental code) on a standard Linux laptop (Intel(R) Core(TM) i7-7700HQ CPU at 2.80 GHz, 16 GB RAM, Peppermint OS 10). On average, more than 1,000 pixels were processed per millisecond. In the tested range, i.e., up to >500,000 pixels, the execution time was proportional to the image's size. Thus, the TrIQ algorithm is computationally efficient and scalable.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>CONCLUSIONS</ns0:head><ns0:p>Threshold Intensity Quantization (TrIQ) improves the visualization of mass spectrometry imaging (MSI) data by augmenting the contrast and homogenizing the background. Contrary to histogram equalization algorithms, TrIQ preserves the linearity of measured ion intensities. The processing with TrIQ facilitates the recognition of regions of interest (ROI) in MSI data sets, either by visual inspection or by automated segmentation algorithms, supporting the interpretation of MSI data in biology and medicine.</ns0:p><ns0:p>As with any data filtering method, TrIQ could remove or alter valuable information. Therefore, it is advisable to compare raw and processed images and to carefully adjust the target probability and the number of gray levels. MSI technology itself introduces further sources of error. Fixation agents, solvents, and matrix compounds can lead to background signals. To some extent, such off-sample ions can be removed manually, or using MSI software <ns0:ref type='bibr' target='#b25'>(Ovchinnikova et al., 2020)</ns0:ref>. Technical variations caused by the sample topology or unstable ionization add additional inaccuracies <ns0:ref type='bibr' target='#b2'>(Bartels et al., 2017)</ns0:ref>. Further, the ion count is not strictly proportional to the abundance of a molecule but depends on the local sample structure and composition, and on the desorption/ionization principle. Thus, complementary MSI, optical and histological methods should be used on the same sample <ns0:ref type='bibr' target='#b40'>(Swales et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Applying TrIQ to a set of images equalizes their intensity scales and makes them comparable. We demonstrated the implementation of the TrIQ algorithm in R to process MSI data in the community format imzML. The algorithm is computationally fast, only requires basic operations, and thus can be quickly adapted to any programming language. TrIQ can be applied to improve any scientific data plotting with extreme values, while respecting the original intensity levels of the raw data.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>CODE AVAILABILITY</ns0:head><ns0:p>RmsiGUI is freely available from https://bitbucket.org/lababi/rmsigui/. We released the TrIQ R scripts and the R package RmsiGUI under the terms of the GNU General Public License, GPL V3 (http://gplv3.fsf.org/).</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>ACKNOWLEDGMENTS</ns0:head></ns0:div>
<ns0:div><ns0:head n='5'>CODE AVAILABILITY</ns0:head><ns0:p>RmsiGUI is freely available from https://bitbucket.org/lababi/rmsigui/. We released TrIQ Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science 6 ACKNOWLEDGMENTS</ns0:note></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Histogram equalization is widely used for enhancing the contrast in images and therefore supporting the recognition of patterns. The application of histogram equalization is simple and 2/19 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53565:1:0:NEW 3 Feb 2021) Manuscript to be reviewed Computer Science computationally fast. In the global histogram equalization (GHE), the entire image data are used to remap the representation levels (Abdullah-Al-Wadud et al., 2007). But the original intensity levels of the pixels are lost, and the fidelity of the data visualization is infringed. Thus, we introduce the use of Threshold Intensity Quantization (TrIQ) for the processing of conventional and ambient ionization mass spectrometry imaging (MSI/AIMSI) datasets.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Implementation of the Threshold Intensity Quantization (TrIQ) in the graphical user interface of RmsiGUI (dataset and image from Oetjen et al. (2015)).</ns0:figDesc><ns0:graphic coords='7,141.73,220.69,453.60,304.80' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Contrast enhancement by Threshold Intensity Quantization (TrIQ). The mass trace 885.55 m/z of DESI MSI data of a human colorectal adenocarcinoma sample was visualized with different quantization methods. a) Raw data, rendered with 256 gray levels. b) Plotting after applying the natural logarithm to the intensity values. c) Image contrast improvement after zero-memory quantization and linear histogram equalization. d) Image contrast improvement using TrIQ with 95 percent and 32 gray levels. Histograms a, b and d were computed with 32 bins; histogram c has 256 bins. e) Histological staining of a tissue slice (dataset and image e) from Oetjen et al. (2015)).</ns0:figDesc><ns0:graphic coords='8,141.73,63.78,453.60,157.44' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Background optimization with TrIQ. Three mass traces of LAESI MSI data from an Arabidopsis thaliana leaf are plotted. a) Raw data plotting with 256 gray levels. b) TrIQ with 32 gray levels. c) TrIQ with 9 gray levels. The background uniformity is improved by reducing the intensity and gray levels. d) TrIQ with 32 gray levels and black level adjustment eliminates background noise without reducing color depth (dataset from Zheng et al. (2020)).</ns0:figDesc><ns0:graphic coords='9,141.73,63.78,453.52,250.17' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Global TrIQ applied to the ion 885.55 m/z of human colorectal adenocarcinoma DESI MSI slices, with P = 0.91 and 32 gray levels. Compared to the raw data visualization (a), the contrast is drastically enhanced by applying the TrIQ algorithm. The normalization also allows a direct comparison of the images (dataset from Oetjen et al. (2015)).</ns0:figDesc><ns0:graphic coords='11,178.43,131.12,340.19,448.06' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. LTP MSI of a chili Capsicum annuum fruit with 1 mm lateral resolution. TrIQ with P = 0.98 and 5 gray levels improves the contrast; an additional median filter removes technical noise (dataset from Maldonado-Torres et al. (2014, 2017)).</ns0:figDesc><ns0:graphic coords='12,141.73,63.78,453.61,251.39' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Image segmentation for Capsicum annuum obtained with Otsu and TrIQ. For images with intensity outliers and noisy regions, TrIQ combined with median filtering gives larger and more uniform regions compared to the standard Otsu algorithm (dataset from Maldonado-Torres et al. (2014, 2017)).</ns0:figDesc><ns0:graphic coords='13,141.73,216.07,453.60,278.16' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. AP-MALDI MSI of a mouse urinary bladder imaged with a lateral resolution of 10 μm. TrIQ was applied with P = 0.95 and 25 gray levels. Binary images serve for defining regions of interest (ROI) and segmentation: 741.53 m/z -muscle tissue, 743.54 m/z -lamina propria structure, 798.54 m/z -urothelium (dataset from Römpp et al. (2010, 2014)).</ns0:figDesc><ns0:graphic coords='14,141.73,63.78,453.60,246.96' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Overlay image, representing the anatomical structures in the mouse urinary bladder. Blue: 741.53 m/z - muscle tissue; red: 743.54 m/z - lamina propria; yellow: 798.54 m/z - urothelium (dataset from Römpp et al. (2010, 2014)).</ns0:figDesc><ns0:graphic coords='15,141.73,92.22,453.60,187.92' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Execution speed of the TrIQ algorithm, implemented in R, using a standard laptop (experimental and synthetic data). About 1,000 pixels were processed per millisecond. The TrIQ algorithm scales linearly.</ns0:figDesc><ns0:graphic coords='15,141.73,408.74,453.60,227.28' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Mass spectrometry imaging data sets. AP-MALDI - atmospheric pressure matrix-assisted laser desorption/ionization; DESI - desorption electrospray ionization; LAESI - laser ablation/electrospray ionization; LTP - low-temperature plasma; Res. - lateral resolution; Tol. - mass tolerance. References: Oetjen et al. (2015), Zheng et al. (2020), Maldonado-Torres et al. (2014); Maldonado-Torres et al. (2017), Römpp et al. (2010); Römpp et al. (2014).</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Organism, tissue, ref. / Method / Res. [μm] / Tol. [m/z] / Size [pix.]</ns0:head><ns0:label /><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Human, colorectal cancer (Oetjen et al., 2015)</ns0:cell><ns0:cell>DESI (-)</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>± 0.3</ns0:cell><ns0:cell>67 × 64</ns0:cell></ns0:row><ns0:row><ns0:cell>Arabidopsis thaliana, leaf (Zheng et al., 2020)</ns0:cell><ns0:cell>LAESI (-)</ns0:cell><ns0:cell>200</ns0:cell><ns0:cell>± 0.3</ns0:cell><ns0:cell>46 × 26</ns0:cell></ns0:row><ns0:row><ns0:cell>Chili, fruit (Maldonado-Torres et al., 2014, 2017)</ns0:cell><ns0:cell>LTP (+)</ns0:cell><ns0:cell>1000</ns0:cell><ns0:cell>± 0.3</ns0:cell><ns0:cell>85 × 50</ns0:cell></ns0:row><ns0:row><ns0:cell>Mouse, urinary bladder (Römpp et al., 2010, 2014)</ns0:cell><ns0:cell>AP-MALDI (+)</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>± 0.1</ns0:cell><ns0:cell>260 × 134</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Prof. Dr. rer. nat. Robert Winkler, CINVESTAV Unidad Irapuato
Km. 9.6 Libramiento Norte Carr. Irapuato-León, 36824 Irapuato Gto., México
Robert.Winkler@cinvestav.mx, Tel.: +52-462-6239-635
PeerJ Computer Science
Reply to comments
Contrast optimization of mass spectrometry imaging (MSI) data visualization
by threshold intensity quantization (TrIQ)
Irapuato, 3rd of February 2021
Dear Editor and dear Peer Reviewers,
Thank you very much for your feedback which helped to improve our manuscript.
Following, we provide a point-by-point reply to all comments:
Reviewer 1
Basic reporting
No remarks on the basic reporting, only a suggestion for figure 9.
Figure 9:
It is unfortunate that the majority of the information related to (LAESI, LTP, DESI, MALDI)
shown in this graph is crowded in the figure, making it difficult to interpret this
information. Perhaps it would help to adapt the scale to improve readability of this figure?
Authors: We appreciate your feedback and modified figure 9. In addition to the
overview including the synthetic data, the experimental data are now shown enlarged in an
additional graph.
Reviewer 1:
A small correction:
Line 232: Figure 7 -> Figure 8
Authors: Thanks. We corrected the figure reference.
Experimental design
Figure 2 and lines 165 - 166 ‘The mass image processed with the TrIQ algorithm
reassembles best the anatomical details which are recognizable in the optical image’
The contrast enhancement difference between using the natural logarithmic versus a
linear equalizing approach comes out clearly. It could be due to the quality of the images
but currently it is not easy to see where TriQ improves over the Linear Equalizing
approach. Which anatomical details are you referring to? It would be good to clarify or
highlight the obtained improvements and in particular the anatomical details you are
referring to in the images.
Authors: Indeed, the images obtained with zero-memory quantization and following
linear equalization and TrIQ look similar (fig. 2c and d). However, the underlying raw data
are irreversibly truncated in the linear equalization, whereas TrIQ represents the original
intensity level. We pointed this out in lines 151- and in the conclusions: "Contrary to
histogram equalization algorithms, TrIQ preserves the linearity of measured ion
intensities.” As you can see in the below histograms and in the color level scales, the TrIQ
algorithm provides finer transitions, which is e.g. visible in the central structures of the
images.
Reviewer 1:
Figure 3 – background optimization
How does the TrIQ approach compare to the state-of-the-art for background optimization
in addition to the raw results shown? It would also be good to add the histology section
for comparison as well.
Authors: Thanks for your recommendation; we added the optical picture of the leaf.
Reviewer 1:
Figure 4 – lines 193 – 198 Global TriQ
How do the results shown in figure 4 compare to the slices without global TriQ being
applied?
Authors: Thanks for this question. Based on your comment we added the raw data
plotting of Figure 4, and the respective explanations in the figure caption and the
manuscript text. The direct comparison of raw and TrIQ processed images clearly
demonstrates the usefulness of TrIQ.
Validity of the findings
I like the approach of the global TriQ to improve comparability of ion images in MSI data,
which is an important problem to solve. However, I feel the appropriate controls are
missing making it difficult to properly assess the validity of the findings.
Authors: Calibration, quantification, etc. still represent major problems in MSI. Thus, we
compared the outcome of the TrIQ with the results of other commonly used data
processing algorithms. Even for well-established techniques such as MALDI-ToF MSI, it is
known that the mass signals do not exactly reflect the true distribution of compounds,
due to matrix effects, ion suppression etc. Thus, testing the validity of results is only
possible by using complementary MSI and biochemical methods. However, such studies
are beyond the processing and visualization of MSI data. For this reason, we added the
recommendation of using complementary methods in the Conclusions: 254-
Reviewer: Ahmed Moussa
Basic reporting
The manuscript describes the implementation of a Threshold Intensity Quantization
algorithm for augmenting the contrast in Mass Spectrometry Imaging (MSI) data
visualizations.
The authors' method provides an improvement of MSI data through increasing the
contrast and homogenizing the background.
The language used throughout the manuscript is clear.
Authors:
Thanks a lot for your positive feedback and the time for evaluating our paper.
Experimental design
The method was implemented as an open-source software RmsiGUI with an R script for
post-processing.
They validated the developed method using different datasets acquired from different
techniques. The results show an improvement in the contrast and in the detection of
‘region of interest’.
I compliment the authors on their vast data set used to validate the method. However,
the mathematical part in section 2.2 needs to be improved by providing more details.
Authors:
We added an additional explanation of the algorithm to make its function clearer:
“Using the default values of the TrIQ, 98% (=q) of the image's total intensity are
visualized. Pixel intensities above the calculated threshold value T are limited to the
maximum value. Therefore, the rescaled 100 (=k) bins visualize the dataset's intensity
levels with more detail.”
Reviewer Ahmed Moussa:
The authors claimed that their algorithm is computationally fast (line 266) but no strong
information is provided to justify this. Please provide quantifiable evidence (running
time, memory expenses, …) to justify this, since figure 9 is not clear and does not provide
sufficient details.
Authors: We improved figure 9. Now the linear scalability of the TrIQ algorithm is clear;
an important characteristic for the increasing data load coming from MSI experiments.
About 1000 pixels were processed per millisecond on a standard laptop (see section
‘Computational performance of the algorithm’ for details), and the tested synthetic
datasets were larger than typical experimental MSI data. Thus, we can state the
computational efficiency and practical usability of the TrIQ algorithm.
Validity of the findings
The research question is well defined.
The authors stated that several programs for MSI data analysis exist and employ the
statistical language R but did not compare their developed algorithm with these existing
ones. Authors are suggested to include one or more of these existing methods as a
comparison to prove that their method is working fine.
Authors are also suggested to provide proper applications of these tools, as the novelty and
the impact are not well assessed.
Authors: In Figure 2, we compare the TrIQ with common image quantization methods.
The result obtained with TrIQ is similar to zero-memory quantization and linear
equalization (fig. 2 c), but the TrIQ has higher fidelity with the raw data. We mention this
advantage of the TrIQ algorithm in the conclusions, where we also claim different
applications (“interpretation of MSI data in biology and medicine”, “..makes [set of
images] comparable”, “can be applied to improve any scientific data plotting with
extreme values”). Although we believe that the TrIQ algorithm is an important
contribution to mass spectrometry imaging data analysis, we don’t want to oversell it;
therefore, we suggest not to add special applications (e.g. cancer biomarker discovery).
Comments for the author
The introduction shows the context but needs more details. I suggest that you improve
the description by some up-to-date references.
Authors: To the best of our knowledge, we cite the relevant state-of-the-art literature about
concepts in MSI data visualization, and current limitations – the knowledge gap we want
to close with our paper. Concerning MSI software: This is a wide field. We cite a review
article about MSI programs that we published in 2019 (Weiskirchen, R., Weiskirchen, S., Kim, P.,
and Winkler, R. (2019). Software solutions for evaluation and visualization of laser
ablation inductively coupled plasma mass spectrometry imaging (LA-ICP-MSI) data: a
short overview. Journal of Cheminformatics, 11(1):16.), for readers who are interested in
MSI programs.
Reviewer Ahmed Moussa:
The figures are relevant and well labeled; however, some are not of high quality (Figure 1
and Figure 9).
Authors: Some of the figures lost resolution when compressing the submission PDF. For
publication, the high-resolution images will be used. The screenshot (Figure 1) was
created with the highest possible screen resolution; Figure 9 was modified.
We hope that you are satisfied with our modifications of the manuscript and
consider its publication in PeerJ Computer Science.
In case of any additional questions, please do not hesitate to contact us.
Yours sincerely,
Robert Winkler
" | Here is a paper. Please give your review comments after reading it. |
144 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Mass spectrometry imaging (MSI) enables the unbiased characterization of surfaces with respect to their chemical composition. In biological MSI, zones with differential mass profiles hint towards localized physiological processes, such as the tissue-specific accumulation of secondary metabolites, or diseases, such as cancer. Thus, the efficient discovery of 'regions of interest' (ROI) is of utmost importance in MSI. However, the discovery of ROIs is often hampered by high background noise and artifact signals. Especially in ambient ionization MSI, unmasking biologically relevant information from crude data sets is challenging. Therefore, we implemented a Threshold Intensity Quantization (TrIQ) algorithm for augmenting the contrast in MSI data visualizations. The simple algorithm reduces the impact of extreme values ('outliers') and rescales the dynamic range of mass signals. We provide an R script for post-processing MSI data in the imzML community format (https://bitbucket.org/lababi/msi.r) and implemented the TrIQ in our open-source imaging software RmsiGUI (https://bitbucket.org/lababi/rmsigui/). Applying these programs to different biological MSI data sets demonstrated the universal applicability of TrIQ for improving the contrast in MSI data visualization. We show that TrIQ improves the subsequent detection of ROIs by segmentation. In addition, the adjustment of the dynamic signal intensity range makes MSI data sets comparable.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Mass spectrometry imaging (MSI) datasets contain spatially resolved spectral information. The parallel detection of multiple compounds with high sensitivity established mass spectrometry (MS) as the first-choice tool for exploratory studies, particularly in combination with data mining methods <ns0:ref type='bibr' target='#b17'>(López-Fernández et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b44'>Winkler, 2015)</ns0:ref>. Numerous MSI technologies have been reported; the most commonly used MSI platform is based on matrix-assisted laser desorption/ionization (MALDI), suitable for a wide range of molecules <ns0:ref type='bibr' target='#b29'>(Rae Buchberger et al., 2018)</ns0:ref>. However, conventional MSI usually requires significant sample preparation and physical conditions that are incompatible with life. Therefore, there is a keen interest in developing ambient ionization MSI (AIMSI) methods, because they enable the direct analysis of delicate materials and biological tissues with no or minimal sample preparation <ns0:ref type='bibr' target='#b16'>(Lu et al., 2018)</ns0:ref>.</ns0:p><ns0:p>Unlike digital image acquisition equipment, where the luminous intensity is digitalized, MSI signal intensities must be converted to discrete values by a process termed quantization. Each discrete intensity level is known as a gray level; intensity-quantized images are called grayscale images. Zones in MSI spectra with distinct intensities indicate localized biochemical activity, such as the biosynthesis of natural products or physiological processes. Such regions of interest (ROI) can be identified by visual inspection or automated segmentation <ns0:ref type='bibr' target='#b14'>(Gormanns et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b3'>Bemis et al., 2015)</ns0:ref>.</ns0:p><ns0:p>Human vision is limited to the perception of only 700 to 900 shades of gray <ns0:ref type='bibr' target='#b15'>(Kimpe and Tuytschaever, 2007)</ns0:ref>. With the mapping of gray levels to colors, the visualization of features in an image can be improved. However, color perception is a subjective experience affected by illumination and the individual response of the rod and cone photoreceptors located in our eyes. Thus, the color schemes used significantly affect the human perception of scientific data <ns0:ref type='bibr'>(Rogowitz et al., 1996;</ns0:ref><ns0:ref type='bibr' target='#b28'>Race and Bunch, 2015</ns0:ref>). The frequently used rainbow color map generates colorful images that accentuate differences in signal intensities. However, the resulting images do not comply with 'perceptual ordering,' i.e., the viewers of a rainbow-colored MSI visualization cannot intuitively rank the different colors according to the signal intensities. Thus, rainbow color schemes are confusing and might even actively mislead viewers <ns0:ref type='bibr' target='#b4'>(Borland and Taylor Ii, 2007)</ns0:ref>. The online tool hclwizard (http://hclwizard.org) helps in the generation of custom color maps, which follow the hue-chroma-luminance (HCL) concept and are suitable for different types of scientific data visualizations <ns0:ref type='bibr' target='#b46'>(Zeileis et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b38'>Stauffer et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b9'>Gamboa-Becerra et al., 2015)</ns0:ref>.
The current gold standard for plotting scientific data is the color map Viridis, which provides linear perception and considers viewers with color vision deficiencies (CVD) <ns0:ref type='bibr' target='#b21'>(Nuñez et al., 2018)</ns0:ref>.</ns0:p><ns0:p>Another critical aspect of image processing is the adaptation of quantitative data to the human vision, i.e., an adjustment of physical data to the biological receiver <ns0:ref type='bibr' target='#b39'>(Stockham, 1972)</ns0:ref>.</ns0:p><ns0:p>Histogram equalization is widely used for enhancing the contrast in images and therefore supports the recognition of patterns. The application of histogram equalization is simple and computationally fast. In the global histogram equalization (GHE), the entire image data are used to remap the representation levels (Abdullah-Al-Wadud et al., 2007). But the original intensity levels of the pixels are lost, and the fidelity of the data visualization is infringed. Thus, we introduce the use of Threshold Intensity Quantization (TrIQ) for the processing of conventional and ambient ionization mass spectrometry imaging (MSI/AIMSI) datasets.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>MATERIALS AND METHODS</ns0:head></ns0:div>
<ns0:div><ns0:head n='2.1'>Data sets and formats</ns0:head><ns0:p>We analyzed publicly available mass spectrometry imaging data sets from different MSI acquisition techniques, lateral resolutions, and sample types. The datasets used in this work are listed in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. The original DESI data set consists of four samples per image, which we separated for further processing.</ns0:p><ns0:p>All datasets comply with the imzML data format community standard <ns0:ref type='bibr' target='#b37'>(Schramm et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b36'>Römpp et al., 2011)</ns0:ref>, implemented in many proprietary and open source programs for MSI data processing <ns0:ref type='bibr' target='#b43'>(Weiskirchen et al., 2019)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>Threshold Intensity Quantization (TrIQ) Algorithm</ns0:head><ns0:p>An image is a function $f(x, y)$ that assigns an intensity level to each point $x, y$ in a two-dimensional space. For visualizing $f(x, y)$ on a computer screen or printer, the image must be digitized for both intensity and spatial coordinates. As MSI is a scanning technique, the spatial coordinates $x$ and $y$ are already discrete values related to the lateral resolution of the scanning device. The intensity values provided by the MS ion detector are analog quantities that must be transformed into discrete ones. Quantization is a process for mapping a range of analog intensity values to a single discrete value, known as a gray level. Zero-memory is a widely used quantization method. The zero-memory quantizer computes equally spaced intensity bins of width $w$:</ns0:p><ns0:formula xml:id='formula_0a'>w = \left[ \frac{\max(f) - \min(f)}{n} \right]<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where $n$ represents the number of discrete values, usually 256; the $\min(f)$ and $\max(f)$ operators provide the minimum and maximum intensity values. Quantization is based on a comparison with the transition levels $t_k$:</ns0:p><ns0:formula xml:id='formula_0b'>t_k = w + \min(f),\; 2w + \min(f),\; \ldots,\; nw + \min(f)<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>Finally, the discrete mapped value $Q$ is obtained:</ns0:p><ns0:formula xml:id='formula_1'>Q(f(x, y)) = \begin{cases} 0, &amp; f(x, y) \le t_1 \\ k, &amp; t_k < f(x, y) \le t_{k+1} \end{cases}<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>(AI)MSI methods often produce outliers, i.e., infrequent extreme intensity values, which drastically reduce image contrast. The Threshold Intensity Quantization, or TrIQ, addresses this issue by setting a new upper limit $T$; intensities above this threshold are grouped into the highest bin. The computation of $T$ involves the cumulative distribution function (CDF) $p(k)$, defined as</ns0:p><ns0:formula xml:id='formula_2'>q \approx p(k) = \sum_{i=1}^{k} \frac{h(i)}{N}<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>$h(i)$ stands for the frequency of bin $i$ within an image histogram, and $N$ is the image's pixel count. Given a target probability $q$, it is possible to find the bin $k$ whose CDF most closely approximates $q$. Then, the upper limit of bin $k$ in $h$ is used as the threshold value $T$. The new bin width can be defined as</ns0:p><ns0:formula xml:id='formula_3'>w = \left[ \frac{T - \min(f)}{n - 1} \right]<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>and the new transition levels as</ns0:p><ns0:formula xml:id='formula_3b'>t_k = w + \min(f),\; 2w + \min(f),\; \ldots,\; (n-1)w + \min(f)<ns0:label>(6)</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:p>Therefore, the $Q$ mapping will be</ns0:p><ns0:formula xml:id='formula_4'>Q(f(x, y)) = \begin{cases} 0, &amp; f(x, y) \le t_1 \\ k, &amp; t_k < f(x, y) \le t_{k+1} \\ n - 1, &amp; f(x, y) > t_{n-1} \end{cases}<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>with $k$ running from 1 to $n - 1$. From equation (7) it follows that a higher $k$ leads to a better approximation of $q$. Default values for $k$ and $q$ in RmsiGUI are 100 and 98%, respectively. Using the default values of the TrIQ, 98% (=$q$) of the image's total intensity is visualized. Pixel intensities above the calculated threshold value $T$ are limited to a maximum value. Therefore, the rescaled 100 (=$k$) bins visualize the dataset's intensity levels with more detail.</ns0:p></ns0:div>
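For readers who want to prototype the method outside of R, the following is a minimal Python/NumPy sketch of the TrIQ mapping of equations (4)–(7); the function and parameter names are ours, not identifiers from the RmsiGUI source, and the epsilon guard for constant images is our own addition.

```python
import numpy as np

def triq(img, n_levels=100, q=0.98, hist_bins=256):
    """Threshold Intensity Quantization: map intensities to n_levels gray levels."""
    img = np.asarray(img, dtype=float)
    counts, edges = np.histogram(img, bins=hist_bins)
    cdf = np.cumsum(counts) / img.size            # p(k), equation (4)
    k = int(np.searchsorted(cdf, q))              # first bin whose CDF reaches q
    T = edges[min(k + 1, hist_bins)]              # upper limit of bin k = threshold T
    w = (T - img.min()) / (n_levels - 1)          # bin width, equation (5)
    w = max(w, np.finfo(float).eps)               # guard against constant images
    levels = np.floor((img - img.min()) / w)      # compare to transition levels, eq. (6)
    return np.clip(levels, 0, n_levels - 1).astype(int), T   # saturation, equation (7)
```

All intensities above T land in the highest bin, which is exactly the saturation behavior described above.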
<ns0:div><ns0:head n='2.3'>Implementation</ns0:head><ns0:p>Several programs and workflows for MSI data analysis employ the statistical language R (R Core Team, 2018), such as MSI.R <ns0:ref type='bibr' target='#b9'>(Gamboa-Becerra et al., 2015)</ns0:ref>, Cardinal <ns0:ref type='bibr' target='#b3'>(Bemis et al., 2015)</ns0:ref>, and the Galaxy MSI module <ns0:ref type='bibr' target='#b8'>(Föll et al., 2019)</ns0:ref>. The Otsu segmentation method <ns0:ref type='bibr' target='#b24'>(Otsu, 1979)</ns0:ref> tested in this work comes with the R package EBImage <ns0:ref type='bibr' target='#b26'>(Pau et al., 2010)</ns0:ref>. Recently, we published an R-based platform for MSI data processing with a graphical user interface, RmsiGUI, which provides modules for the control of an open hardware imaging robot (Open LabBot), the processing of raw data, and the analysis of MSI data <ns0:ref type='bibr' target='#b31'>(Rosas-Román et al., 2020)</ns0:ref>. We integrated the TrIQ algorithm into RmsiGUI and provide the R code snippets for facilitating its adoption into other programs. The source code is freely available from the project repository https://bitbucket.org/lababi/rmsigui/.</ns0:p><ns0:p>We use the viridis color map, which is optimized for human perception and people with color vision deficiencies <ns0:ref type='bibr' target='#b21'>(Nuñez et al., 2018</ns0:ref><ns0:ref type='bibr' target='#b10'>, Garnier et al. (2018)</ns0:ref>). Reading and processing MSI data in imzML format are done with the MALDIquantForeign and MALDIquant libraries <ns0:ref type='bibr' target='#b12'>(Gibb and Strimmer, 2012;</ns0:ref><ns0:ref type='bibr' target='#b11'>Gibb and Franceschi, 2019)</ns0:ref>.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref> shows the graphical user interface of RmsiGUI with the TrIQ option selector.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>RESULTS AND DISCUSSION</ns0:head><ns0:p>The contrast of digital images can be enhanced with additional operations, such as global or local histogram equalization algorithms; however, such image processing tools do not preserve the original gray level scale's linearity. In contrast, our Threshold Intensity Quantization (TrIQ)</ns0:p><ns0:p>approach finds an intensity threshold 𝑇 for saturating the images' last gray level. Importantly, the linearity of the experimentally determined intensity scale is preserved.</ns0:p><ns0:p>In the next sections, we demonstrate the application of the TrIQ for the processing of mass spectrometry imaging (MSI) datasets. The term raw image is used in this paper for denoting images rendered with the default quantization method of R.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Contrast optimization</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref> shows the mass spectrometry image of a human colorectal adenocarcinoma sample, acquired with DESI and 100 μm spatial resolution <ns0:ref type='bibr'>(Oetjen et al., 2015)</ns0:ref>. The imaged signal of 885.55 m/z corresponds to de-protonated phosphatidylinositol (18:0/20:4), [C47H83O13P-H]− <ns0:ref type='bibr' target='#b41'>(Tillner et al., 2017)</ns0:ref>.</ns0:p><ns0:p>The direct plotting of the extracted m/z slice results in the image shown in Figure <ns0:ref type='figure' target='#fig_3'>2a</ns0:ref>). The image contains pixels with intensity values of up to 70,280 arbitrary units. Such extreme and infrequent intensity values are called outliers and drastically reduce image contrast. The histogram below reveals the reason for the low contrast of the image: most of the pixels fall into the first four bins after the default R quantization.</ns0:p><ns0:p>A typical data transformation for imaging is the use of a logarithmic intensity scale. Figure <ns0:ref type='figure' target='#fig_3'>2b</ns0:ref>) shows the image after applying the natural logarithm to the MSI signal intensities and default quantization. The contrast is improved. However, further operations would be necessary, such as the subtraction of the background level. Besides, the interpretation of the non-linear color scale is not intuitive.</ns0:p><ns0:p>The conventional sequence for improving the contrast of an MSI image is applying the zero-memory quantization of MSI data and a transformation function on the quantized pixels. Figure <ns0:ref type='figure' target='#fig_3'>2c</ns0:ref>) shows the result of this process. Although improved contrast is gained with linear equalization, the fidelity to the raw data is lower than with TrIQ.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Background optimization</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_5'>3a</ns0:ref>) shows the rendering of signals from an Arabidopsis thaliana leaf, analyzed by LAESI MSI with a lateral resolution of 200 μm (Zheng et al., 2020). The images correspond to the putative negative ions of 4-hydroxymethyl-3-methoxyphenoxyacetic acid ([C10H12O5-3H]−, 209.0 m/z), 4-methylsulfonylbutyl glucosinolate ([C12H23NO10S3-H]−, 436.0 m/z), and indol-3-ylmethyl glucosinolate ([C16H20N2O9S2-H]−, 447.1 m/z) <ns0:ref type='bibr' target='#b45'>(Wu et al., 2018)</ns0:ref>.</ns0:p><ns0:p>Various causes can result in background noise, such as sample metabolites with low vapor pressure and matrix compounds. Removing outliers with TrIQ and reducing gray levels leads to improved image brightness (see Figures <ns0:ref type='figure' target='#fig_5'>3a) and b</ns0:ref>)). Nevertheless, background noise is also enhanced, and the sample shape is not well defined (see ion 209 m/z in Figure <ns0:ref type='figure' target='#fig_5'>3b</ns0:ref>)). There are two possibilities for background correction with TrIQ. The first option is reducing the number of gray levels, thus grouping a wider range of values within a single bin. Reducing the gray levels from 32 to 9 produced an almost perfectly uniform background and a well-defined sample shape, as shown in Figure <ns0:ref type='figure' target='#fig_5'>3c</ns0:ref>). The second method finds a new black level threshold that substitutes the operator $\min(f)$ in equation 5. As the color bars for this approach indicate, the black level thresholds depend on the individual image data (see Figure <ns0:ref type='figure' target='#fig_5'>3d</ns0:ref>)). Both methods efficiently diminish the impact of background noise. But whereas reducing the gray levels is the simpler approach, defining a new black level threshold maintains the color depth.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3'>Normalization for comparable mass spectrometry images</ns0:head><ns0:p>Comparing mass spectrometry images (MSI) is a challenge because standard quantization procedures create images with distinct intensity and color scales, even if they were measured under the same experimental conditions.</ns0:p><ns0:p>Global TrIQ finds the highest $T$ among a given MSI set. This threshold is used for computing the transition levels and mapping MSI intensities to discrete values on every image within the set.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_8'>4</ns0:ref> shows human colorectal adenocarcinoma images. The samples come from the same tissue, cut into slices with a thickness of 10 μm <ns0:ref type='bibr'>(Oetjen et al., 2015)</ns0:ref>. All images visualize the abundance of the ion 885.55 m/z. The plotting of the raw data resulted in images of low contrast, which are difficult to interpret and to compare (see Figure <ns0:ref type='figure' target='#fig_8'>4a</ns0:ref>). Using the Global TrIQ algorithm, a maximum threshold $T$ of 4,970 was calculated and applied for all twelve images; the accumulated probability was set to 0.91. The resulting color scale is the same for all images (see Figure <ns0:ref type='figure' target='#fig_8'>4b</ns0:ref>). Thus, the relative abundance and distribution of the ion in multiple images can be evaluated at a single glance.</ns0:p><ns0:p>Figure 5 provides another example of global TrIQ. The image compares the 62.1, 84.1, and 306.1 m/z ions of chili (Capsicum annuum) slices sampled at 1 mm lateral resolution with low-temperature plasma (LTP) MSI (Maldonado-Torres et al., 2014). LTP MSI detects small, volatile compounds. Therefore, the images resulting from this ambient ionization technique are noisy. TrIQ with $P$ = 0.98 and five gray levels improves the contrast. The remaining noise is efficiently removed by median filtering (<ns0:ref type='bibr'>et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b18'>Maldonado-Torres et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b9'>Gamboa-Becerra et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b5'>Cervantes-Hernández et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Both examples demonstrate the usefulness of TrIQ to normalize MSI data for image comparison.</ns0:p></ns0:div>
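A sketch of the Global TrIQ normalization follows, reusing the hypothetical triq() helper from the earlier sketch; taking the set-wide minimum as a common black level is our simplifying assumption, not a detail stated in the paper.

```python
import numpy as np

def global_triq(images, n_levels=32, q=0.91):
    # Per the text above: the global threshold is the highest per-image T in the
    # set, so all images share one intensity-to-gray-level mapping and color bar.
    T = max(triq(im, n_levels, q)[1] for im in images)
    lo = min(float(np.min(im)) for im in images)          # common black level (assumption)
    w = max((T - lo) / (n_levels - 1), np.finfo(float).eps)
    return [np.clip(np.floor((np.asarray(im, float) - lo) / w),
                    0, n_levels - 1).astype(int)
            for im in images]
```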
<ns0:div><ns0:head n='3.4'>Segmentation and visualization of regions of interest (ROI)</ns0:head><ns0:p>Finding regions of interest (ROI) in MSI datasets is essential for biomarker discovery and physiological studies. High-contrast and low-noise images favor the automated segmentation.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_11'>7</ns0:ref> shows the Global TrIQ segmentation of mouse urinary bladder data, scanned with AP-MALDI MSI at a lateral resolution of 10 μm <ns0:ref type='bibr' target='#b32'>(Römpp et al., 2010</ns0:ref><ns0:ref type='bibr' target='#b33'>, 2014)</ns0:ref>. The distribution of 741.53 m/z, identified as a sphingomyelin [C39H79N2O6P + K]+ ion, is related to muscle tissue.</ns0:p><ns0:p>743.54 m/z is associated with the lamina propria structure, while 798.54 m/z is mainly found in the urothelium <ns0:ref type='bibr' target='#b32'>(Römpp et al., 2010)</ns0:ref>. Global TrIQ was applied with P = 0.95 and 25 gray levels.</ns0:p><ns0:p>TrIQ and median filtering allow clear discrimination between muscle tissue and lamina propria using the marker ions 741.53 and 743.54 m/z. For separating the urothelium structure from the muscular tissue, the binary image for 798.54 m/z was calculated by zeroing gray levels below 9.</ns0:p><ns0:p>In contrast, if the Otsu method is applied to the 798.54 m/z ion, the urothelium region is isolated automatically. This result is expected since the Otsu method assumes an image histogram distribution with a deep sharp valley between two peaks. Figure <ns0:ref type='figure' target='#fig_12'>8</ns0:ref> shows an overlay of the TrIQ processed ion images, correctly representing the anatomical structures of the mouse urinary bladder.</ns0:p></ns0:div>
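The binary-image step used above for the 798.54 m/z channel (zeroing gray levels below 9, then median filtering) could be sketched as follows; the helper name and the SciPy-based median filter are our choices, not the paper's R implementation.

```python
import numpy as np
from scipy.ndimage import median_filter

def roi_mask(quantized, min_level=9, size=3):
    # Gray levels below min_level are zeroed; the median filter removes
    # isolated speckle pixels, leaving a clean binary region of interest.
    mask = (np.asarray(quantized) >= min_level).astype(np.uint8)
    return median_filter(mask, size=size).astype(bool)

# An overlay such as Figure 8 can then be composed channel-wise, e.g. by writing the
# muscle mask into the blue channel and the lamina propria mask into the red channel.
```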
<ns0:div><ns0:head n='3.5'>Computational performance of the algorithm</ns0:head><ns0:p>For estimating the computational performance of the TrIQ algorithm for large images, we built synthetic slices by sequentially doubling the DESI data. Time calculation was executed twenty-five times to account for variations in system processes of the operating system. Figure <ns0:ref type='figure' target='#fig_13'>9</ns0:ref> demonstrates the results from running the R script Timing.R (provided as supplemental code) on a standard Linux laptop (Intel(R) Core(TM) i7-7700HQ CPU with 2.80GHz, 16 Gb RAM, Peppermint OS 10). On average, more than 1,000 pixels were processed per millisecond. In the tested range, i.e., up to >500,000 pixels, the execution speed was proportional to the image's size. Thus, the TrIQ algorithm is computationally efficient and scalable.</ns0:p></ns0:div>
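The benchmark protocol (sequential doubling of a DESI-sized slice, 25 repetitions per size) can be reproduced in Python along these lines, assuming the triq() sketch from above; the gamma-distributed synthetic intensities are only a stand-in for real MSI data.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
img = rng.gamma(2.0, 100.0, size=(67, 64))      # DESI-sized starting slice
for _ in range(5):
    img = np.vstack([img, img])                 # sequentially double the data
    runs = []
    for _ in range(25):                         # 25 repetitions per size
        t0 = time.perf_counter()
        triq(img, n_levels=100, q=0.98)
        runs.append(time.perf_counter() - t0)
    print(f"{img.size} px: {img.size / (min(runs) * 1000):.0f} px/ms")
```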
<ns0:div><ns0:head n='4'>CONCLUSIONS</ns0:head><ns0:p>Threshold Intensity Quantization (TrIQ) improves the visualization of mass spectrometry imaging (MSI) data by augmenting the contrast and homogenizing the background. Contrary to histogram equalization algorithms, TrIQ preserves the linearity of measured ion intensities. The processing with TrIQ facilitates the recognition of regions of interest (ROI) in MSI data sets, either by visual inspection or by automated segmentation algorithms, supporting the interpretation of MSI data in biology and medicine.</ns0:p><ns0:p>As with any data filtering method, TrIQ could remove or alter valuable information. Therefore, it is recommendable to compare raw and processed images and carefully adjust the target probability and the number of gray levels. MSI technology itself causes further sources of errors. Fixation agents, solvents, and matrix compounds can lead to background signals. To some extent, such off-sample ions can be removed manually, or using MSI software <ns0:ref type='bibr' target='#b25'>(Ovchinnikova et al., 2020)</ns0:ref>. Technical variations caused by the sample topology or unstable ionization add additional inaccuracies <ns0:ref type='bibr' target='#b2'>(Bartels et al., 2017)</ns0:ref>. Further, the ion count is not strictly proportional to the abundance of a molecule but depends on the local sample structure and composition, and the desorption/ionization principle. Thus, complementary MSI, optical and histological methods should be used on the same sample <ns0:ref type='bibr' target='#b40'>(Swales et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Applying TrIQ to a set of images equalizes their intensity scales and makes them comparable. We demonstrated the implementation of the TrIQ algorithms in R to process MSI data in the community format imzML. The algorithm is computationally fast and only requires basic operations, and thus can be quickly adapted to any programming language. TrIQ can be applied to improve any scientific data plotting with extreme values, respecting the original intensity levels of raw data.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>CODE AVAILABILITY</ns0:head><ns0:p>RmsiGUI is freely available from https://bitbucket.org/lababi/rmsigui/. We released the TrIQ R scripts and the R package RmsiGUI under the terms of the GNU General Public License, GPL V3 (http://gplv3.fsf.org/).</ns0:p></ns0:div><ns0:div><ns0:head n='6'>ACKNOWLEDGMENTS</ns0:head></ns0:div>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Implementation of the Threshold Intensity Quantization (TrIQ) in the graphical user interface of RmsiGUI (dataset and image from Oetjen et al. (2015)).</ns0:figDesc><ns0:graphic coords='7,141.73,220.69,453.60,304.80' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Contrast enhancement by Threshold Intensity Quantization (TrIQ). The mass trace 885.55 m/z of DESI MSI data of a human colorectal adenocarcinoma sample was visualized with different quantization methods. a) Raw data, rendered with 256 gray levels. b) Plotting after applying the natural logarithm to the intensity values. c) Image contrast improvement after zero-memory quantization and linear histogram equalization. d) Image contrast improvement using TrIQ with 95 percent and 32 gray levels. Histograms a, b and d were computed with 32 bins; histogram c has 256 bins. e) Histological staining of a tissue slice (dataset and image e) from Oetjen et al. (2015)).</ns0:figDesc><ns0:graphic coords='8,141.73,63.78,453.60,157.44' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Background optimization with TrIQ. Three mass traces of LAESI MSI data from a Arabidopsis thaliana leaf are plotted. a) Raw data plotting with 256 gray levels b) TrIQ with 32 gray levels, c) TrIQ with 9 gray levels. The background uniformity is improved by reducing the intensity and gray levels. d) TrIQ with 32 gray levels and black level adjustment eliminates background noise without reducing color depth (dataset from Zheng et al. (2020)).</ns0:figDesc><ns0:graphic coords='9,141.73,63.78,453.52,250.17' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Global TrIQ applied to the ion 885.55 m/z of human colorectal adenocarcinoma DESI MSI slices, with P = 0.91 and 32 gray levels. Compared to the raw data visualization (a), the contrast is drastically enhanced by applying the TrIQ algorithm. The normalization also allows a direct comparison of the images (dataset from Oetjen et al. (2015)).</ns0:figDesc><ns0:graphic coords='11,178.43,131.12,340.19,448.06' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. LTP MSI of a chili Capsicum annuum fruit with 1 mm lateral resolution. TrIQ with P = 0.98 and 5 gray levels improves the contrast; an additional median filter removes technical noise (dataset from Maldonado-Torres et al. (2014, 2017)).</ns0:figDesc><ns0:graphic coords='12,141.73,63.78,453.61,251.39' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Image segmentation for Capsicum annuum obtained with Otsu and TrIQ. For images with intensity outliers and noisy regions, TrIQ combined with median filtering gives larger and more uniform regions compared to the standard Otsu algorithm (dataset from Maldonado-Torres et al. (2014, 2017)).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. AP-MALDI MSI of a mouse urinary bladder imaged with a lateral resolution of 10 μm. TrIQ was applied with P = 0.95 and 25 gray levels. Binary images serve for defining regions of interest (ROI) and segmentation: 741.53 m/z -muscle tissue, 743.54 m/z -lamina propria structure, 798.54 m/z -urothelium (dataset from Römpp et al. (2010, 2014)).</ns0:figDesc><ns0:graphic coords='14,141.73,63.78,453.60,246.96' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Overlay image, representing the anatomical structures in mouse urinary bladder. Blue: 741.53 m/z -muscle tissue, red: 743.54 m/z -lamina propria , yellow: 798.54 m/zurothelium (dataset from Römpp et al. (2010, 2014)).</ns0:figDesc><ns0:graphic coords='15,141.73,92.22,453.60,187.92' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Execution speed of the TrIQ algorithm, implemented in R, using a standard laptop (experimental and synthetic data). About 1,000 pixels were processed per millisecond. The TrIQ algorithm scales linearly.</ns0:figDesc><ns0:graphic coords='15,141.73,408.74,453.60,227.28' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='13,141.73,216.07,453.60,278.16' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Mass spectrometry imaging data sets. AP-MALDI -Atmospheric pressure matrixassisted laser desorption/ionization, DESI -Desorption electrospray ionization, LAESI -Laser ablation/ electrospray ionization, LTP -Low-temperature plasma, Res. -lateral resolution, Tol.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>-mass tolerance. References: Oetjen et al. (2015), Zheng et al. (2020), Maldonado-Torres et al.</ns0:cell></ns0:row><ns0:row><ns0:cell>(2014);Maldonado-Torres et al. (2017), Römpp et al. (2010); Römpp et al. (2014).</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Organism, tissue, ref. Method Res. [μm] Tol. [m/z] Size [pix.]</ns0:head><ns0:label /><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Human, colorectal cancer(Oetjen et al.,</ns0:cell><ns0:cell>DESI (-)</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>± 0.3</ns0:cell><ns0:cell>67 × 64</ns0:cell></ns0:row><ns0:row><ns0:cell>2015)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Arabidopsis thaliana, leaf(Zheng et al., 2020)</ns0:cell><ns0:cell>LAESI (-)</ns0:cell><ns0:cell>200</ns0:cell><ns0:cell>± 0.3</ns0:cell><ns0:cell>46 × 26</ns0:cell></ns0:row><ns0:row><ns0:cell>Chili, fruit(Maldonado-Torres</ns0:cell><ns0:cell>LTP (+)</ns0:cell><ns0:cell>1000</ns0:cell><ns0:cell>± 0.3</ns0:cell><ns0:cell>85 × 50</ns0:cell></ns0:row><ns0:row><ns0:cell>et al., 2014, 2017)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Mouse, urinary bladder(Römpp et al.,</ns0:cell><ns0:cell>AP-MALDI (+)</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>± 0.1</ns0:cell><ns0:cell>260 × 134</ns0:cell></ns0:row><ns0:row><ns0:cell>2010, 2014)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Prof. Dr. rer. nat. Robert Winkler, CINVESTAV Unidad Irapuato
Km. 9.6 Libramiento Norte Carr. Irapuato-León, 36824 Irapuato Gto., México
Robert.Winkler@cinvestav.mx, Tel.: +52-462-6239-635
PeerJ Computer Science
Reply to comments on Revision 1
Contrast optimization of mass spectrometry imaging (MSI) data visualization
by threshold intensity quantization (TrIQ)
Irapuato, 26th of March 2021
Dear Editor and dear Peer Reviewers,
Thank you very much for your feedback which helped to improve our manuscript.
Following, we provide a point-by-point reply to all comments:
Reviewer 1
Basic reporting
No further comments.
Experimental design
No further comments.
Validity of the findings
No further comments.
Comments for the author
Nice work!
Authors:
Thank you very much for endorsing our manuscript!
Reviewer: Ahmed Moussa
Basic reporting
The manuscript describes the implementation of a Threshold Intensity Quantization
algorithm for augmenting the contrast in Mass Spectrometry Imaging (MSI) data
visualizations.
The authors' method improves MSI data visualization by increasing the contrast and
homogenizing the background.
The method was implemented as an open-source software RmsiGUI with an R script for
post-processing.
They validated the developed method using different datasets acquired with different
techniques.
The results show an improvement in the contrast and in the detection of ‘regions of
interest’.
The language used throughout the manuscript is clear.
Authors:
Thanks a lot for your positive feedback and the time for evaluating our paper.
Experimental design
The introduction shows the context but needs more details. I suggest that you improve
the
description by some up-to-date references.
Authors: To the best of our knowledge, we cite the relevant state-of-the-art literature about
concepts in MSI data visualization, and current limitations – the knowledge gap we want
to close with our paper. Concerning MSI software: this is a wide field. We cite a review
article about MSI programs that we published in 2019 (Weiskirchen, R., Weiskirchen, S., Kim, P.,
and Winkler, R. (2019). Software solutions for evaluation and visualization of laser
ablation inductively coupled plasma mass spectrometry imaging (LA-ICP-MSI) data: a
short overview. Journal of Cheminformatics, 11(1):16.), for readers who are interested in
MSI programs.
Reviewer Ahmed Moussa:
The figures are relevant and well labeled; however, some are not of high quality (Figure 1
and Figure 9).
Authors: Some of the figures lost resolution when the submission PDF was compressed. For
publication, the high-resolution images will be used. The screenshot (Figure 1) was
created with the highest possible screen resolution; Figure 9 was modified.
Reviewer Ahmed Moussa:
The research question is well defined.
I compliment the authors on their vast data set used to validate the method. However,
the mathematical part in section 2.2 needs to be improved by providing more details.
Authors:
We added an additional explanation of the algorithm to make its function clearer:
“Using the default values of the TrIQ, 98% (=q) of the image's total intensity is
visualized. Pixel intensities above the calculated threshold value T are limited to a
maximum value. Therefore, the rescaled 100 (=k) bins visualize the dataset's intensity
levels with more detail.”
Reviewer Ahmed Moussa:
The authors claimed that their algorithm is computationally fast (line 266) but no strong
information is provided to justify this. Please provide a quantifiable evidence (running
time, memory expenses, …) to justify this, since figure 9 is not clear and does not provide
sufficient details.
Authors:
We improved figure 9. Now the linear scalability of the TrIQ algorithm is clear, an
important characteristic for the increasing data load coming from MSI experiments. About
1000 pixels were processed per millisecond on a standard laptop (see section
‘Computational performance of the algorithm’ for details), and the tested synthetic
datasets were larger than typical experimental MSI data. Thus, we can state the
computational efficiency and practical usability of the TrIQ algorithm.
Reviewer Ahmed Moussa:
The authors stated that several programs for MSI data analysis exist and employ the
statistical language R, but they did not compare their developed algorithm with these
existing ones.
Validity of the findings
The authors are suggested to include one or more of these existing methods as a
comparison to prove that their method works well. The authors are also suggested to
provide proper applications of these tools, as the novelty and the impact are not well
assessed.
Authors: In figure 2, we compare the TrIQ with common image quantization methods.
The result obtained with TrIQ is similar to zero-memory quantization and linear
equalization (fig. 2 c), but the TrIQ has higher fidelity with the raw data. We mention this
advantage of the TrIQ algorithm in the conclusions, where we also claim different
applications (“interpretation of MSI data in biology and medicine”, “..makes [set of
images] comparable”, “can be applied to improve any scientific data plotting with
extreme values”). Although we believe that the TrIQ algorithm is an important
contribution to mass spectrometry imaging data analysis, we don’t want to oversell it;
therefore, we suggest not to add special applications (e.g. cancer biomarker discovery).
We hope that you are satisfied with our modifications of the manuscript and
consider its publication in PeerJ Computer Science.
In case of any additional questions, please do not hesitate to contact us.
Yours sincerely,
Robert Winkler
" | Here is a paper. Please give your review comments after reading it. |
145 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Traffic congestion and the rise in the number of vehicles have become a grievous issue that receives attention worldwide. One problem with traffic management is that the timer of a traffic light is not dynamic, so drivers must wait even when there are no or only a few vehicles on a roadway, which causes unnecessary waiting time, fuel consumption, and pollution. Prior work on smart traffic management systems repurposes the Internet of Things, time series forecasting, and digital image processing. Instead, we frame a real-time traffic light optimization algorithm that uses machine learning and deep learning techniques to predict the optimal time required by the vehicles to clear the lane. This paper concentrates on a two-step approach. The first step is to obtain the count of each independent category of vehicle. For this, we employ the You Only Look Once version 4 (YOLOv4) object detection technique. In the second step, we adopt an ensemble technique called eXtreme Gradient Boosting (XGBoost) for predicting the optimal time of the green light window. We further compared our approach with other implemented versions of YOLOv4, and we also evaluated different prediction algorithms. The experimental analysis shows that YOLOv4 with the XGBoost algorithm displayed the best result, with a balance of accuracy and inference time. Our solution reduces the average waiting time by 32.3 percent under usual traffic on a roadway.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Traffic management and pollution due to traffic are significant issues in many metropolitan areas of India. According to the Environmental Statistics 2019 <ns0:ref type='bibr' target='#b25'>(IndianMinistryOfStatistics, 2019)</ns0:ref>, transportation is the third largest cause of air pollution. The Government of India has begun to promote public transport and to increase the taxes on vehicles, but this has not had a significant impact. For traffic management, different methods were proposed using the Internet of Things <ns0:ref type='bibr' target='#b24'>Greengard (2015)</ns0:ref>, time series forecasting <ns0:ref type='bibr' target='#b26'>(Jenkins, 1994)</ns0:ref>, and digital image processing <ns0:ref type='bibr' target='#b35'>(Rafael, 1992</ns0:ref>), but they were not very cost-effective, fast, or accurate, and did not work in real-time. To overcome these issues, we present a method using a Convolutional Neural Network <ns0:ref type='bibr' target='#b28'>(KrizhevskyA, 2012)</ns0:ref>. Our approach takes less inference time, is affordable since it mainly depends on the algorithm, is accurate, and works on a CPU (Central Processing Unit) with minimal processing power. The advancements in effective Computer Vision algorithms <ns0:ref type='bibr' target='#b39'>(Szeliski, 2010)</ns0:ref> allow computers to perform the task without any sensors or human intervention and to convey real-time scene information. Hence, this paper aims to provide an Artificial Intelligence based <ns0:ref type='bibr' target='#b30'>(Long et al., 2020)</ns0:ref> solution to reduce air pollution and unwanted waiting time by detecting the traffic in the frame and predicting the optimal time required to clear the lane.</ns0:p></ns0:div>
<ns0:div><ns0:p>Generally, at an intersection, the timers of the traffic lights are fixed, so even when there are no or only a few vehicles, the waiting time remains long. For instance, consider a traffic flow towards the south during the peak office hours, but decidedly less traffic flow in any other direction; even then, the timer assigned to each lane is the same. Hence, unnecessary congestion and waiting time arise. The Government of India has already installed CCTV (Closed-Circuit Television) cameras at crossroads and has started electronic-format paperwork, commonly called e-challan, with good impact, but the government has not yet implemented a system for dynamic traffic light control. Also, there are certain areas in India where the traffic police direct traffic by hand, which is very exhausting. Hence, we present a solution to change the timer of the green light window based on the kinds of vehicles present on the lane. The state-of-the-art You Only Look Once (YOLOv4) <ns0:ref type='bibr' target='#b8'>Bochkovskiy et al. (2020)</ns0:ref> algorithm is proposed for detecting and counting the independent categories of vehicles. The powerful ensemble technique eXtreme Gradient Boosting (XGBoost) <ns0:ref type='bibr' target='#b15'>(Chen and Guestrin, 2016</ns0:ref>) is used to predict the optimal time for the green light window. We used our custom collected dataset for training the prediction model and the Microsoft Common Objects in Context (MS COCO) <ns0:ref type='bibr' target='#b29'>(Lin et al., 2014)</ns0:ref> dataset for detecting the different classes of vehicles. After fine-tuning the prediction and detection models, our system reduced the average waiting time in regular traffic by 32.3%.</ns0:p><ns0:p>The remainder of the paper is organized as follows. Section 2 describes the related work on smart traffic management systems. Section 3 discusses the materials and methodologies followed to build the architecture. In Section 4, the results and discussion are presented. The conclusion and future work are described in Section 5.</ns0:p></ns0:div>
<ns0:div><ns0:head>LITERATURE REVIEW</ns0:head><ns0:p>Different approaches have been proposed for smart traffic management systems. The most common are IoT-based approaches, time-series based approaches, and computer vision-based approaches.</ns0:p><ns0:p>IoT Based Approach: <ns0:ref type='bibr' target='#b27'>(Kalaiselvi et al., 2017)</ns0:ref> proposed the Light Fidelity (LiFi) technology, which transmits signals to the traffic control system indicating the presence of an emergency vehicle, and the traffic control system turns the light green accordingly. <ns0:ref type='bibr' target='#b2'>(Akhil and Parvatha, 2017)</ns0:ref> proposed dynamic traffic control based on Sound Navigation and Ranging (SONAR). In their proposed system, the density of vehicles was measured with an array of ultrasonic sensors placed in such a way that they continuously scan the incoming vehicles. The timers of the green and red lights were derived from the SONAR readings and the number of sensors placed.</ns0:p><ns0:p>However, since a single metropolitan area has numerous intersections, this approach is not very cost-effective; because the sensors cannot be repurposed, surveillance would require additional sensors, making the system even more costly to implement, whereas our approach is both cost-efficient and repurposable.</ns0:p></ns0:div>
<ns0:div><ns0:head>Time-Series Forecasting</ns0:head><ns0:p>Other approaches to smart traffic management predict the flow of traffic based on time series forecasting. <ns0:ref type='bibr' target='#b33'>(Natafgi et al., 2018)</ns0:ref> proposed a solution to traffic congestion that adapts the system dynamically to the variation of traffic using reinforcement learning. Multiple agents were assigned at a crossroad, and each agent learned from the reward and penalty of how correct its action was. The parameters used to train the agents were queue length and delay. <ns0:ref type='bibr' target='#b6'>(Ata et al., 2019)</ns0:ref> proposed a smart road traffic congestion control model using an artificial backpropagation neural network, predicting the delay based on time, traffic speed, traffic flow, humidity, wind speed, and air temperature. <ns0:ref type='bibr'>(Chen et al., 2016)</ns0:ref> optimized the backpropagation neural network with a Genetic Algorithm. The parameters used for the fitness function were the number of vehicles passing the green light and the average arrival of vehicles at the red light.</ns0:p><ns0:p>The problem with historical data is that it is not efficient for real-time operation, whereas our system takes all the data observed from the real-time frame, and only then is the prediction made to determine the optimal timer for the green light window.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Vision and Digital Image Processing Based Approach</ns0:head><ns0:p>In the digital image processing based approach, foreground-background subtraction against a reference image is applied, and the blob of each object is obtained after feature extraction; the traffic light is then controlled by counting the detected blobs <ns0:ref type='bibr' target='#b17'>(Choudekar et al., 2011)</ns0:ref>. The problem with this classic image processing approach is that it is a local, non-generalized solution that is not robust to varying light conditions, whereas our model is robust to varying and low light conditions. This approach is also biased towards dense traffic and does not consider lanes with fewer vehicles. In another proposed method, after detecting vehicles with an object detector, the coordinates of the last vehicle in the image are passed to an object tracker, which tracks the last vehicle until it crosses the border <ns0:ref type='bibr' target='#b5'>(Asha and Narasimhadhan, 2018)</ns0:ref>. This method is also a local, non-generalized solution, since it is challenging to determine the last vehicle, and if that vehicle somehow stops before crossing, the whole system fails. In contrast, our model does not continuously track or detect the vehicles on the lane; it is only activated, and processes the frame, during the last five seconds of the timer, so no external lag or extra processing time is incurred. Also, instead of prioritizing a single lane, we perform the green light prediction for every lane. <ns0:ref type='bibr' target='#b13'>(Castaño et al., 2017)</ns0:ref> and <ns0:ref type='bibr' target='#b14'>(Castaño et al., 2018)</ns0:ref> introduced methods for obstacle detection, where objects are recognized for collision avoidance using a Support Vector Machine, a Multi-Layer Perceptron, and a Self-Organizing Map. They also proposed a 3D LiDAR technique for collision detection and minimized the 3D LiDAR cost using reinforcement learning, but it remains extremely costly to fit a LiDAR to every car. For object recognition they used a Support Vector Machine (SVM) classifier. The problem with SVM is that the complexity ramps up as the number of training samples increases, which requires more computation <ns0:ref type='bibr' target='#b12'>(Caruana and Niculescu-Mizil, 2006</ns0:ref>). In the worst-case scenario of SVM, we may end up with so many support vectors that the model overfits. The Self-Organizing Map is an unsupervised approach, but to predict the time we need to classify each detected vehicle as a Bus, Truck, Bike, or Car and allocate time accordingly; hence, the Self-Organizing Map was not taken into consideration.</ns0:p><ns0:p>CNNs are more advanced neural networks than the Multi-Layer Perceptron, as they use filters and kernels to extract important features from the image. Hence, CNN-based object detection is better, more accurate, and requires less computation. Also, this paper is focused on reducing unnecessary waiting time; there is no emphasis on collision avoidance.</ns0:p></ns0:div>
<ns0:div><ns0:head>Overview of Object detection Algorithms</ns0:head><ns0:p>Before the advancement of deep Convolutional Neural Networks, image processing-based approaches were used for counting objects. The most popular methods were the Scale-Invariant Feature Transform (SIFT) <ns0:ref type='bibr' target='#b32'>(Lowe, 2004)</ns0:ref> and the Histogram of Oriented Gradients (HOG) method <ns0:ref type='bibr' target='#b18'>(Dalal and Triggs, 2005)</ns0:ref>. After that, Convolutional Neural Networks <ns0:ref type='bibr' target='#b28'>(KrizhevskyA, 2012)</ns0:ref> emerged, and a sliding-window approach was used to count the objects. The Region-based Convolutional Neural Network (R-CNN) <ns0:ref type='bibr' target='#b23'>(Girshick et al., 2014)</ns0:ref> modified that approach by proposing 'region proposals': a subset of the image is obtained, and the object is then classified using a Convolutional Neural Network <ns0:ref type='bibr' target='#b21'>(Fukushima et al., 1983)</ns0:ref>. Hence, R-CNN is a two-stage detector; because of this, it takes more time for inference, and the frames-per-second (FPS) rate drops.</ns0:p><ns0:p>You Only Look Once (YOLO) <ns0:ref type='bibr' target='#b37'>(Redmon and Farhadi, 2018</ns0:ref>) is a one-stage detector in which the image is divided into an SxS grid; each grid cell is a classifier that predicts bounding boxes and confidence scores, after which the bounding boxes are tuned appropriately. Fast R-CNN <ns0:ref type='bibr' target='#b22'>(Girshick, 2015)</ns0:ref>, a variant of R-CNN, achieved an mAP of 70% at 0.5 FPS, while the YOLO-based model achieved 63.4% mAP at 155 FPS on the Pascal VOC 2007 dataset <ns0:ref type='bibr' target='#b19'>(Everingham, 2012)</ns0:ref>. The mAP value is the mean of the average precision, which combines precision and recall and is given as $\frac{\sum_r P@r}{R}$ <ns0:ref type='bibr' target='#b41'>(Zhang and Zhang, 2009)</ns0:ref>. So, R-CNN models were more accurate than YOLO; YOLO also suffers more in localizing small objects. Hence, advancements were made in YOLOv2 <ns0:ref type='bibr' target='#b36'>(Redmon and Farhadi, 2017)</ns0:ref> and YOLOv3 <ns0:ref type='bibr' target='#b37'>(Redmon and Farhadi, 2018)</ns0:ref>.</ns0:p><ns0:p>YOLOv2 employs Batch Normalization, which increases the accuracy by 2%. The concept of anchor box prediction was introduced, in which a bounding box has a certain height-width ratio, and the dimensions of the bounding box are predicted using K-Means clustering. YOLOv2 achieved an mAP of 76.8% at 67 FPS, while Faster R-CNN achieved only 73.2% mAP at 7 FPS over the Pascal VOC 2007 dataset <ns0:ref type='bibr' target='#b20'>(Everingham et al., 2010)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:p>YOLOv3 improved the loss function, given in Equation (1). It also introduced different layers for different sizes of objects, which made the YOLO architecture more efficient. In Equation (1), $\mathbb{1}_{ij}^{obj}$ is 1 if the box and cell match and 0 otherwise, while $\mathbb{1}_{ij}^{noobj}$ is 1 if the box and cell do not match and 0 otherwise. Using the bounding box coordinate regression, the bounding box score prediction, and the class score prediction, the whole loss function of YOLO is formulated:</ns0:p><ns0:formula xml:id='formula_0'>\lambda_{coord} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right] + \lambda_{coord} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \left[ \left(\sqrt{w_i} - \sqrt{\hat{w}_i}\right)^2 + \left(\sqrt{h_i} - \sqrt{\hat{h}_i}\right)^2 \right] + \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \left(C_i - \hat{C}_i\right)^2 + \lambda_{noobj} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{noobj} \left(C_i - \hat{C}_i\right)^2 + \sum_{i=0}^{S^2} \mathbb{1}_{i}^{obj} \sum_{c \in classes} \left(p_i(c) - \hat{p}_i(c)\right)^2<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>With the introduction of YOLOv4, it achieves an mAP value of 43% at 43 FPS, while Faster R-CNN reaches 39.8% mAP at 9.4 FPS on a Titan X Pascal GPU. Hence, YOLO-based object detection is more appropriate than the other object detection architectures.</ns0:p><ns0:p>In our paper, we compared the different novel YOLO architectures and different implementations of YOLO object detectors at AP_50 and AP_M, along with the inference time. AP_50 is the average precision at an IoU of 50, while AP_M is the average precision for medium-sized objects. In this paper, a YOLOv4-based architecture is proposed for counting the different types of vehicles for the smart traffic management system.</ns0:p></ns0:div>
<ns0:div><ns0:head>Overview of Machine Learning Prediction Models</ns0:head><ns0:p>For regression-based prediction models there are two families of methods: polynomial curve-based approaches and decision tree-based approaches. For the polynomial curve-based approach, the Elastic Net <ns0:ref type='bibr' target='#b42'>(Zou and Hastie, 2008)</ns0:ref> and the polynomial kernel of the Support Vector Machine regressor <ns0:ref type='bibr' target='#b38'>(Scholkopf, 1998)</ns0:ref> were used, while for the decision tree-based approach, the Random Forest <ns0:ref type='bibr' target='#b10'>(Breiman, 1996)</ns0:ref> and eXtreme Gradient Boosting <ns0:ref type='bibr' target='#b15'>(Chen and Guestrin, 2016)</ns0:ref> were used. From the analysis, the tree-based approach fits our dataset best.</ns0:p><ns0:p>The Elastic Net is a combination of the Lasso and Ridge regularization methods. It uses polynomial features for regression, trains only the essential features of the model, and penalizes the other features. Equation (2) represents the polynomial regression equation, where $x$ represents the input features and the $b$ terms are the coefficients. The Elastic Net cost function, which is optimized with methods such as Gradient Descent, is given in Equation (3), where $y_i$ is the actual value, $\beta' x_i$ is the predicted value, and $\lambda_1$ and $\lambda_2$ are the regularization constants.</ns0:p><ns0:formula xml:id='formula_1'>y = b_0 + b_1 x_1 + b_2 x_1^2 + \ldots + b_n x_1^n<ns0:label>(2)</ns0:label> \operatorname{argmin}_{\beta} \sum_i \left(y_i - \beta' x_i\right)^2 + \lambda_1 \sum_{k=1} |\beta_k| + \lambda_2 \sum_{k=1} \beta_k^2<ns0:label>(3)</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:p>In the Support Vector Machine, the polynomial kernel is used to find the best-fit hyperplane representing the similarity of vectors, and the maximum margin reduces the error rate. Support Vector Regression tries to fit the error rate within a threshold ε. It then finds the hyperplane function by optimizing the primal problem <ns0:ref type='bibr' target='#b40'>(Vapnik, 1995)</ns0:ref>. The boundary lines are formed by the support vectors, data points that lie close to the boundary at a distance ε from the hyperplane. The kernel equations for the Support Vector Machine are shown in Equations (4) and (5):</ns0:p><ns0:formula xml:id='formula_2'>y = \sum_{i=1}^{N} (\alpha_i - \alpha_i^*) \langle \varphi(x_i), \varphi(x) \rangle + b<ns0:label>(4)</ns0:label> y = \sum_{i=1}^{N} (\alpha_i - \alpha_i^*) K(x_i, x) + b<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>Random Forest is a bagging ensemble technique, while the gradient boosting tree is a boosting ensemble technique. In bagging, the output is predicted by considering the values of all the decision trees. The splitting of a tree is decided based on information gain, which depends entirely on the degree of uncertainty. The Gini index and entropy are the two measures of the degree of uncertainty, shown in Equations (6) and (7), where $p_i$ represents the probability values:</ns0:p><ns0:formula xml:id='formula_3'>Entropy = \sum_{i=1}^{C} -p_i \log_2(p_i)<ns0:label>(6)</ns0:label> GiniIndex = 1 - \sum_{i=1}^{C} p_i^2<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>In boosting, the weights are adjusted by considering the prediction of the previous decision tree. In both methods, the residual error $\bar{Y} = Y - F_0(x)$ is calculated, where $F_0(x)$ is the initial prediction. This residual error is added to the initial value taken during the first step, and the residual is fine-tuned until the predicted value gets close to the ground truth. eXtreme Gradient Boosting was proposed as a boosting algorithm with better speed and performance. The built-in cross-validation ability, efficient handling of missing data, regularization for avoiding over-fitting, cache awareness, tree pruning, and parallelized tree building are the common advantages of the XGBoost algorithm, which make XGBoost more powerful. The objective function of XGBoost is shown in Equation (8) and is solved using a second-order Taylor polynomial approximation.</ns0:p><ns0:formula xml:id='formula_4'>L^{(t)} = \sum_{i=1}^{n} l\left(y_i, \hat{y}_i^{(t-1)} + f_t(x_i)\right) + \Omega(f_t)<ns0:label>(8)</ns0:label></ns0:formula></ns0:div>
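To make the model comparison concrete, a sketch of the 10-fold cross-validation protocol with scikit-learn and the xgboost package follows; the synthetic X and y merely stand in for the authors' custom dataset, and the hyperparameters shown are illustrative defaults.

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

# X: one row per observation with per-class vehicle counts and a rain flag;
# y: observed seconds needed to clear the lane (synthetic placeholders).
rng = np.random.default_rng(0)
X = rng.integers(0, 20, size=(200, 5)).astype(float)
y = 2.0 * X[:, 0] + 4.0 * X[:, 1] + rng.random(200)

models = {
    "elastic_net": make_pipeline(PolynomialFeatures(degree=2), ElasticNet()),
    "svr_poly": SVR(kernel="poly", degree=2),
    "random_forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "xgboost": XGBRegressor(n_estimators=100),
}
for name, model in models.items():
    mae = -cross_val_score(model, X, y, cv=10,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name}: MAE = {mae:.2f}")
```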
<ns0:div><ns0:head>MATERIALS AND METHODOLOGIES Proposed Methodology for Smart Traffic Management System</ns0:head><ns0:p>In this section, we describe our method for the Smart Traffic Management System. Our proposed method is simple, robust, accurate, and applicable to round-abouts, cross-roads, and fly-overs alike. Figure <ns0:ref type='formula'>1</ns0:ref> explains the architecture of the proposed method. The architecture is divided into two phases: the first performs object detection, and the second performs a Machine Learning-based regression task that predicts the optimal time for the green light window. Hence, two different datasets were used, one for the computer vision task and one for the machine learning task.</ns0:p><ns0:p>The first step is to fetch the IP address of each lane's camera. After that, the user selects the Region Of Interest (ROI) for each lane manually, once. For selecting the ROI, we proposed the use of Perspective Transformation <ns0:ref type='bibr'>(Shakunaga and Kaneko, 1989)</ns0:ref>, since a perspective-transformed ROI captures the region of a lane most reliably.</ns0:p><ns0:p>After that, the frame is processed by the state-of-the-art YOLOv4 object detector to count the number of cars, buses, trucks, and bikes present on the lane.</ns0:p></ns0:div>
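A minimal OpenCV sketch of the perspective-transformation ROI step; the corner coordinates and file names are placeholder assumptions:

```python
# Sketch: rectify a manually selected lane ROI with a perspective transform.
# The four corner points and file names below are placeholder assumptions.
import cv2
import numpy as np

frame = cv2.imread("lane_frame.jpg")            # hypothetical CCTV frame
src = np.float32([[420, 310], [910, 305], [1290, 700], [60, 705]])  # lane corners
dst_w, dst_h = 416, 416                         # matches the network resolution
dst = np.float32([[0, 0], [dst_w, 0], [dst_w, dst_h], [0, dst_h]])

M = cv2.getPerspectiveTransform(src, dst)       # 3x3 homography
roi = cv2.warpPerspective(frame, M, (dst_w, dst_h))
cv2.imwrite("lane_roi.jpg", roi)                # fed to the YOLOv4 detector
```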
<ns0:div><ns0:formula xml:id='formula_5'>TIMER ← 0
Loop:
    LANE ← (LANE + 1) mod N
    VEHICLES ← YOLOv4.Detect(frame)
    foreach obj in VEHICLES do
        COUNT_VEHICLES[obj] ← obj.value
    RAIN ← 0
    isPrecipitation ← WEATHER_API(LAT, LON)
    if isPrecipitation then RAIN ← 1
    PREDICTED_TIME ← XGBoost.Predict(COUNT_VEHICLES, RAIN)
    while TIMER ≠ 0 do WAIT
    TIMER ← PREDICTED_TIME
    GREEN_SIGNAL ← TRUE
    if TIMER is 5 then YELLOW_SIGNAL ← TRUE</ns0:formula><ns0:p>The main modules of our Algorithm are 1) obtaining the density of the vehicles using the YOLOv4 Object Detection Model and 2) predicting the optimal time for the green light window using the eXtreme Gradient Boost Prediction Model, both of which are briefly described in the subsequent sections.</ns0:p></ns0:div>
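The pseudocode above can be rendered as a small Python loop. This is a hedged sketch: the camera, weather API, and signal-controller interfaces are hypothetical stubs that the paper does not specify:

```python
# Sketch of the signal-cycle loop in Algorithm 1. All I/O hooks
# (camera, weather, controller) are hypothetical stubs.
import time

N_LANES = 4
YELLOW_AT = 5  # seconds before switchover, per the paper

def run_cycle(camera, weather, controller, detect_counts, predict_time):
    lane = 0
    while True:
        lane = (lane + 1) % N_LANES
        frame = camera.grab(lane)                  # CCTV frame for this lane
        counts = detect_counts(frame)              # YOLOv4 vehicle counts
        rain = 1 if weather.is_precipitating() else 0
        green_s = predict_time(counts, rain)       # XGBoost regressor output
        controller.set_green(lane, seconds=green_s)
        time.sleep(max(green_s - YELLOW_AT, 0))
        controller.set_yellow(lane)                # last 5 s: next lane is processed
        time.sleep(YELLOW_AT)
```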
<ns0:div><ns0:head>Object Detection</ns0:head><ns0:p>The object detection Algorithm determines the density of the vehicles on the lane. We proposed the use of OpenCV-based Leaky YOLOv4. The classes considered for the vehicle density are Car, Bus, Truck, and Bike. A fixed confidence threshold of 25 % (5 % for the Tiny model) and an NMS threshold of 50 % are configured as the object detection hyperparameters. The resulting vehicle counts are passed on to the time-prediction step.</ns0:p></ns0:div>
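A hedged sketch of this detection step with OpenCV's DNN module; the weight and config file names are assumptions, the thresholds follow the paper (0.25 confidence, 0.5 NMS), and 'bike' is mapped to the MS COCO motorbike class:

```python
# Sketch: count vehicles with OpenCV's DNN module and YOLOv4 weights.
import cv2
import numpy as np

VEHICLES = {2: "car", 3: "bike", 5: "bus", 7: "truck"}  # MS COCO class ids

net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
layers = net.getUnconnectedOutLayersNames()

def count_vehicles(frame, conf_thr=0.25, nms_thr=0.50):
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True)
    net.setInput(blob)
    boxes, scores, class_ids = [], [], []
    for out in net.forward(layers):
        for det in out:
            cls = int(np.argmax(det[5:]))
            conf = float(det[5 + cls])
            if cls in VEHICLES and conf >= conf_thr:
                cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                scores.append(conf)
                class_ids.append(cls)
    keep = cv2.dnn.NMSBoxes(boxes, scores, conf_thr, nms_thr)
    counts = {name: 0 for name in VEHICLES.values()}
    for i in np.array(keep).flatten():
        counts[VEHICLES[class_ids[int(i)]]] += 1
    return counts
```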
<ns0:div><ns0:head>Prediction Model</ns0:head><ns0:p>To predict the timer of the green light window, we trained polynomial curve-fitting and tree-based decision-making machine learning models. We first fit each model on the training data set. Then, based on the 10-fold cross-validation evaluation metrics, the hyperparameters of each machine learning model were fine-tuned. After that, an unbiased evaluation of the model was made on the unseen samples of the testing dataset.</ns0:p><ns0:p>The count of vehicles and the precipitation information from the Open Weather Map API <ns0:ref type='bibr'>(OlgaUkolova, 2017)</ns0:ref>, in one-hot-encoded form, are passed to the model for training, which finally predicts the optimal time required by the vehicles to clear the lane.</ns0:p><ns0:p>This predicted time is set on the green light window as soon as the timer of the previous lane completes.</ns0:p><ns0:p>When the last 5 seconds remain, the light of the current lane changes from green to yellow, and the next lane is processed by YOLOv4 and XGBoost without any delay.</ns0:p><ns0:p>This process is repeated for each lane. Hence, in our proposed Algorithm, the processing and prediction for each lane are performed only once per cycle.</ns0:p></ns0:div>
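A minimal sketch of the 10-fold cross-validation used for fine-tuning; the dataset file and the searched hyperparameter values are illustrative assumptions:

```python
# Sketch: 10-fold cross-validation used to fine-tune the regressor.
# The dataset file and parameter grid are illustrative assumptions.
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from xgboost import XGBRegressor

data = np.loadtxt("traffic_samples.csv", delimiter=",")
X, y = data[:, :4], data[:, 4]                 # counts + rain -> clearance time

cv = KFold(n_splits=10, shuffle=True, random_state=42)
for depth in (3, 5, 7):                        # hypothetical search over max_depth
    model = XGBRegressor(max_depth=depth, n_estimators=200, learning_rate=0.1)
    r2 = cross_val_score(model, X, y, cv=cv, scoring="r2").mean()
    print(f"max_depth={depth}: mean CV r^2={r2:.3f}")
```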
<ns0:div><ns0:head>Datasets</ns0:head><ns0:p>The proposed architecture encompasses two models: the Object Detection Model and the Prediction Model.</ns0:p><ns0:p>The different Object Detection Models were experimented with using the Microsoft Common Objects in Context (MS COCO) dataset <ns0:ref type='bibr' target='#b29'>(Lin et al., 2014)</ns0:ref>. This dataset is widely used for classification, detection, and segmentation tasks. It contains 91 different object types across around 328K images. For the analysis of the different Object Detection Algorithms, we utilized 80 classes, among which the Car, Bus, Truck, and Bike classes were segregated and passed to the next phase, the prediction task.</ns0:p><ns0:p>For the Prediction Models, we collected a dataset of 1128 sample points from different cross-roads in Vadodara City, Gujarat, India. Figure <ns0:ref type='figure' target='#fig_4'>2</ns0:ref> shows the six cross-roads of Vadodara City analyzed for prediction.</ns0:p><ns0:p>We randomly divided the total dataset into 90 % training and 10 % testing sets, giving 1015 sample data points for training and 113 for testing. For validation, we used the 10-fold cross-validation <ns0:ref type='bibr' target='#b11'>(Browne, 2000)</ns0:ref> technique, which further splits the training dataset into 10 equal validation blocks.</ns0:p></ns0:div>
<ns0:div><ns0:head>RESULTS AND DISCUSSION</ns0:head><ns0:p>Our Algorithm is a two-step approach. In the first step, the per-category vehicle counts are obtained, which serve as input to the next step for predicting the optimal time of the green light window.</ns0:p><ns0:p>Thus, the whole approach combines detection using the YOLOv4 (You Only Look Once version 4) object detection Algorithm with prediction using the XGBoost (eXtreme Gradient Boosting) Algorithm; hence it is an amalgamation of YOLOv4 and XGBoost for managing the traffic present at the lane.</ns0:p></ns0:div>
<ns0:div><ns0:head>Analysis of Object Detection Algorithm</ns0:head><ns0:p>Inference time is the crux of traffic management; hence, a model with low inference time and high accuracy is required. For that, we conducted a comparative performance study between different YOLOv4-based detectors to determine the best model on CPU, based on accuracy and inference time in a constrained environment: OpenCV DNN Leaky and Mish YOLOv4 <ns0:ref type='bibr' target='#b9'>(Bradski, 2000)</ns0:ref>, Open Neural Network Exchange (ONNX) YOLOv4 <ns0:ref type='bibr' target='#b7'>(Bai et al., 2019)</ns0:ref>, PP-YOLO, Darknet YOLOv4, and Darknet YOLOv4 Tiny.</ns0:p><ns0:p>The inference and mAP analysis are shown in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>; accuracy and time are determined on 1x single-core hyperthreaded Xeon processors @2.3GHz (1 core, 2 threads) with 12.6 GB RAM. This analysis concludes that lite models are remarkably fast, but their error rate is severe, which has a much higher impact when predicting the optimum time. Darknet YOLOv4 is excellent for detection, but its inference time is more than 5 seconds, which is infeasible. We tested PP-YOLO with traditional greedy NMS; it is accurate and faster than Darknet YOLOv4, but its inference time is still close to 5 seconds, so it also does not meet our requirements. The AP values of YOLOv4 ONNX and OpenCV DNN Mish are the same, but as OpenCV is highly optimized for CPU, the inference time of OpenCV DNN is lower than that of ONNX. The Mish Activation Function and the Leaky Activation Function are represented in Equations 9 and 10.</ns0:p><ns0:formula xml:id='formula_6'>f(x) = x \cdot \tanh(\mathrm{softplus}(x)) = x \cdot \tanh\left(\ln(1 + e^x)\right)<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>f(x) = \max(0.1x, x) \quad (10)</ns0:p><ns0:p>In YOLOv4, the combination of the Mish function with the Cross Stage Partial Network (CSPDarknet53) is used, which, although a bit costly, improves detection accuracy by a significant amount; in the case of vehicle detection, however, Mish and Leaky are very close, so we chose the OpenCV-based implementation of Leaky YOLOv4 <ns0:ref type='bibr' target='#b9'>(Bradski, 2000)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Analysis of Prediction Model</ns0:head><ns0:p>From the analysis of Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>, it is concluded that the dataset is better suited to decision tree-based models than to polynomial-based models. The eXtreme Gradient Boosting outperformed the others during training, validation, and testing.</ns0:p><ns0:p>The evaluation metrics used are the r-squared coefficient of determination and the mean squared error (MSE), shown in Equations 11, 12, and 13.</ns0:p><ns0:formula xml:id='formula_7'>r^2 = 1 - \frac{SS_{res}}{SS_{tot}} \quad (11) \qquad SS_{res} = \sum_i (y_i - f_i)^2 = \sum_i e_i^2 \;\text{ and }\; SS_{tot} = \sum_i (y_i - \bar{y})^2 \quad (12) \qquad MSE = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 \quad (13)</ns0:formula><ns0:p>For choosing the best parameters we adopted Bayesian parameter optimization from the hyperparameter framework 'Optuna' <ns0:ref type='bibr' target='#b4'>(Akiba et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Randomly selected data points from the testing dataset are analyzed in Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>. The counts of vehicles from OpenCV YOLOv4 Leaky, together with the precipitation information, are taken by the XGBoost prediction model, and the optimal time is predicted. This time is compared with the static time, and the reduced waiting time is calculated as represented in Equation <ns0:ref type='formula' target='#formula_8'>14</ns0:ref>. This comparison shows that our system can reduce the waiting time at the intersection and improve the traffic lights' efficiency. Our model predicts a minimum time of 15 seconds when there are no vehicles, even with rain.</ns0:p><ns0:formula xml:id='formula_8'>\frac{1}{N} \sum_{i=1}^{N} \frac{\text{Original time} - \text{Predicted time}}{\text{Original time}} \times 100\%<ns0:label>(14)</ns0:label></ns0:formula><ns0:p>Figure <ns0:ref type='figure'>5</ns0:ref> shows the frequency count of the predictions. The lowest allocated time is 15 seconds, when there are very few or no vehicles, and the maximum allocated time is 100 seconds. Figure <ns0:ref type='figure'>6</ns0:ref> shows how the time varies: at 0% to 25% vehicle density, the allocated time is between 15 and 40 seconds; at 25% to 50% density, between 25 and 60 seconds; at 50% to 75% density, between 40 and 90 seconds; and at 75% to 100% density, between 75 and 100 seconds.</ns0:p><ns0:p>Some undershoot and overshoot between the predicted time and the labeled time occur, but overall the difference between the labeled time and the model prediction is minor in all figures, indicating that the model fits the Gaussian curve well. Using this footage, the optimal time required by the vehicles present to clear the lane is estimated. Our approach is simple to understand and to implement. It is scalable, as it can handle traffic of any flow, and cost-efficient, as our architecture requires little maintenance and no sensors; it is based entirely on input frames from CCTV cameras.</ns0:p></ns0:div>
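The evaluation metrics of Equations 11-14 can be computed directly; in this hedged sketch the y_true, y_pred, and static values are illustrative assumptions taken in the spirit of Table 3:

```python
# Sketch: the evaluation metrics of Equations 11-14 computed with NumPy.
import numpy as np

y_true = np.array([48, 40, 35, 27], dtype=float)    # labeled clearance times
y_pred = np.array([46, 41, 35, 29], dtype=float)    # model predictions
static = np.full_like(y_true, 60.0)                 # fixed round-robin timer

ss_res = np.sum((y_true - y_pred) ** 2)             # Eq. 12
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                          # Eq. 11
mse = np.mean((y_true - y_pred) ** 2)               # Eq. 13
reduced = np.mean((static - y_pred) / static) * 100 # Eq. 14, in percent

print(f"r^2={r2:.3f}  MSE={mse:.3f}  reduced waiting time={reduced:.1f}%")
```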
Our approach is robust, as it works efficiently in the daytime as well as at night and in any season, and the software also provides a fallback mechanism: in case of failure, a notification is sent to the control room, and the traffic lights can be handled manually. We use real-time traffic data and give predictions based on per-category vehicle counts on the lane, unlike other proposed techniques.</ns0:p><ns0:p>Our proposed approach works with both low-light and blurred images. With this approach, unnecessary waiting time and fuel consumption are reduced, resulting in lower air pollution. As we are modernizing and renovating the conventional Traffic Management System methodology with Artificial Intelligence, it is considered a Smart Traffic Management System.</ns0:p><ns0:p>The future scope is to implement lane clearance for emergency vehicles, to obtain rain/no-rain information by classifying the frame itself, and to account for specific time events such as rush hours and peak hours.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>One of the most recent upgrades of YOLOv3 is the Paddle-Paddle YOLO (PP-YOLO) (Long et al., 2020), based on the PaddlePaddle Detection Framework presented by Baidu Inc. PP-YOLO uses ResNet50-vd as the Backbone, a Feature Pyramid Network (FPN) with DropBlock regularization as the Neck, and YOLOv3 as the Head. PP-YOLO is faster than YOLOv4 while improving the overall AP. The authors increased the mean Average Precision (mAP) by 1.5 % by introducing the Intersection over Union (IoU) Loss, IoU Aware, and Grid Sensitive modules. The use of Matrix-based Non-Max Suppression improves mAP by a further 0.6 %, and they improved mAP by another 0.3 % by introducing CoordConv and SPP.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Flowchart for the Smart Traffic Management System</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Cross Roads undertaken for Analytics from Vadodara City -(Map data ©2021 Google)</ns0:figDesc><ns0:graphic coords='9,178.44,63.78,340.05,283.47' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Different Object Detection Algorithm Analysis for Traffic at Cross-Road in Day Time A) OpenCV DNN Leaky YOLOv4 B) OpenCV DNN Mish YOLOv4 C) ONNX YOLOv4 D) PP-YOLO E) Darknet YOLOv4 F) Darknet YOLOv4 Tiny</ns0:figDesc><ns0:graphic coords='10,141.74,63.78,413.57,340.16' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Different Object Detection Algorithm Analysis for Traffic at Cross-Road in Night Time A) OpenCV DNN Leaky YOLOv4 B) OpenCV DNN Mish YOLOv4 C) ONNX YOLOv4 D) PP-YOLO E) Darknet YOLOv4 F) Darknet YOLOv4 Tiny</ns0:figDesc><ns0:graphic coords='12,141.74,63.78,413.58,340.16' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>Figure 7 shows how the predicted time is reduced relative to the static time. The static time differs with crossroad capacity, falling into three main categories of 60 seconds, 90 seconds, and 120 seconds; in all cases, the model-predicted time equals the static time when the road is at full capacity and is lower otherwise.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 5 .Figure 6 .</ns0:head><ns0:label>56</ns0:label><ns0:figDesc>Figure 5. Histogram of Predicted Time</ns0:figDesc><ns0:graphic coords='13,141.73,63.78,413.13,297.40' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Average value time</ns0:figDesc><ns0:graphic coords='14,206.79,63.78,283.46,198.42' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>YOLO Object Detection comparison between YOLOv4 ONNX, YOLOv4 Darknet, YOLOv4 Darknet Tiny, PP-YOLO, OpenCV Leaky YOLOv4, and OpenCV Mish YOLOv4. Inference time and accuracy are calculated in a fixed computational environment. (Row-to-value alignment restored from the accompanying text.)</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Architecture</ns0:cell><ns0:cell>Inference time in Daylight</ns0:cell><ns0:cell>Inference time in Evening (low light)</ns0:cell><ns0:cell>AP50 (416x416)</ns0:cell><ns0:cell>APM (416x416)</ns0:cell></ns0:row><ns0:row><ns0:cell>YOLOv4 ONNX</ns0:cell><ns0:cell>∼3.1327 sec</ns0:cell><ns0:cell>∼3.168 sec</ns0:cell><ns0:cell>63.3%</ns0:cell><ns0:cell>44.4% AP</ns0:cell></ns0:row><ns0:row><ns0:cell>YOLOv4 Darknet</ns0:cell><ns0:cell>∼8.865 sec</ns0:cell><ns0:cell>∼8.965 sec</ns0:cell><ns0:cell>63.3%</ns0:cell><ns0:cell>44.4% AP</ns0:cell></ns0:row><ns0:row><ns0:cell>YOLOv4 Darknet Tiny</ns0:cell><ns0:cell>∼0.44 sec</ns0:cell><ns0:cell>∼0.1 sec</ns0:cell><ns0:cell>40.2%</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>PP-YOLO</ns0:cell><ns0:cell>∼4.489 sec</ns0:cell><ns0:cell>∼4.468 sec</ns0:cell><ns0:cell>62.8%</ns0:cell><ns0:cell>45.2% AP</ns0:cell></ns0:row><ns0:row><ns0:cell>OpenCV Leaky YOLOv4</ns0:cell><ns0:cell>∼1.40821 sec</ns0:cell><ns0:cell>∼1.4109 sec</ns0:cell><ns0:cell>62.7%</ns0:cell><ns0:cell>43.7% AP</ns0:cell></ns0:row><ns0:row><ns0:cell>OpenCV Mish YOLOv4</ns0:cell><ns0:cell>∼1.6733 sec</ns0:cell><ns0:cell>∼1.679 sec</ns0:cell><ns0:cell>63.3%</ns0:cell><ns0:cell>44.4% AP</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Regression model comparison between Elastic Net, Support Vector Machine Regressor (SVR), Random Forest Regressor, and eXtreme Gradient Boosting tree (XGBoost GBT). The hyperparameters of each model are optimized, and the same training, validation, and testing sets are used for all models.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell>Hyper-parameters</ns0:cell><ns0:cell>Training set</ns0:cell><ns0:cell>Cross validation</ns0:cell><ns0:cell>Testing set</ns0:cell></ns0:row><ns0:row><ns0:cell>Elastic Net</ns0:cell><ns0:cell>Degree: 2; Interaction: True; Learning Rate: 0.05; L1 ratio: 0.5</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Waiting time reduced by XGBoost in comparison to static time. The parameters used for prediction are the numbers of cars, buses or trucks, and bikes, and precipitation.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>CAR</ns0:cell><ns0:cell>BUS and TRUCK</ns0:cell><ns0:cell>BIKE</ns0:cell><ns0:cell>RAIN</ns0:cell><ns0:cell>XGBoost predicted (in sec)</ns0:cell><ns0:cell>Static Time (in sec)</ns0:cell><ns0:cell>Reduced Waiting Time</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>YES</ns0:cell><ns0:cell>48</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>20%</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>NO</ns0:cell><ns0:cell>48</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>20%</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>YES</ns0:cell><ns0:cell>40</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>33%</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>YES</ns0:cell><ns0:cell>44</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>27%</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>YES</ns0:cell><ns0:cell>35</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>42%</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>NO</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>47%</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>YES</ns0:cell><ns0:cell>27</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>55%</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Date: 25 February 2021
Dr. Li Zhang
Academic Editor, Peerj
Sub: Submission of manuscript entitled “An amalgamation of YOLOv4 and XGBoost for nextgen smart traffic management system” by Pritul et al.
Dear Dr. Zhang,
Greetings!
I, on behalf of my co-authors, am pleased to submit a manuscript entitled “An amalgamation
of YOLOv4 and XGBoost for next-gen smart traffic management system” for your kind
consideration to be published in Peerj.
Our manuscript shows how waiting time can be reduced at cross-roads, which in turn
reduces air pollution. This paper emphasizes detecting the traffic in the frame and
predicting the optimal time required to clear the lane. Based on experiments, we
observe that waiting time is reduced by an average of 32.3%.
Our analysis will be of great interest to the readers of the Peerj especially those involved in
the Autonomous System.
We confirm that this manuscript has not been published elsewhere and is not under
consideration in whole or in part by another journal. All authors have approved the
manuscript and agree with submission to the Peerj.
All authors have substantially contributed to conducting the underlying research and wrote
this manuscript. Additionally, the authors declare that they have no conflicts of interest.
The authors hope to have a favourable response on this manuscript. We are happy to provide
any further details in support of this manuscript.
Sincerely,
Pritul Dave
Computer Science and Engineering Department,
Devang Patel Institute of Advance Technology and Research (DEPSTAR)
Charotar University of Science and Technology (CHARUSAT),
Email:pritul.dave@gmail.com;
Tel: +91 9904513475
Reviewer 1 (Anonymous)
Basic Reporting
The review of the state of the art is insufficiently addressed. Literature references should be
improved. See for example:
F. Castaño, G. Beruvides, R. E. Haber, and A. Artuñedo, “Obstacle recognition based on
machine learning for on-chip lidar sensors in a cyber-physical system,” Sensors (Switzerland),
vol. 17, no. 9, 2017, doi: 10.3390/s17092109.
F. Castaño, G. Beruvides, A. Villalonga, and R. E. Haber, “Self-tuning method for increased
obstacle detection reliability based on internet of things LiDAR sensor models,” Sensors
(Switzerland), vol. 18, no. 5, 2018, doi: 10.3390/s18051508.
The quality of figures should be improved. Font size is too small in some cases.
We have reviewed both suggested works and added them to the manuscript. We have also
increased the font size of the figures; the images are taken directly from CCTV.
Experimental design
Other machine learning methods should be explored. Why are not explored selfparameterization using gradient-free optimization methods? See for example:
R.-E. Precup and R.-C. David, Nature-Inspired Optimization Algorithms for Fuzzy Controlled
Servo Systems, Butterworth-Heinemann, Elsevier, Oxford, UK, 2019.
R. Haber et al., 'A Simple Multi-Objective Optimization Based on the Cross-Entropy Method,'
IEEE Access, vol. 5, pp. 22272-22281, 2017.
G. Wang and L. Guo, 'A novel hybrid bat algorithm with harmony search for global
numerical optimization,' Journal of Applied Mathematics, vol. 2013, 2013.
R. H. Guerra et al., “Digital Twin-Based Optimization for Ultraprecision Motion Systems with
Backlash and Friction,” IEEE Access, vol. 7, pp. 93462–93472, 2019, doi:
10.1109/ACCESS.2019.2928141.
We have explored gradient-free optimization methods and are using one such method
for hyper-parameter tuning. Specifically, we use a Bayesian-based probabilistic
method from the Optuna framework for parameter tuning.
Validity of the findings
Other performance indices and techniques should be included in the comparison to assess
the real contribution of the proposed approach.
Conclusions are adequate according to the study.
We have added some more analysis to depict and elucidate the proposed approach.
Reviewer 2 (Anonymous)
-This paper does not present a Professional English, since the authors must write the
acronyms correctly, and some acronyms do not have their meaning. Correct this error. Also,
the words 'Figure', 'Table', 'Equation', 'Algorithm', must be written with the first letter in
capital letters. Review and correct it in the article.
We have updated the English sentences as well as the acronyms, and we have corrected
the capitalization errors.
The authors have not placed some reference scientific literatures, so I suggest performing a
more in-depth investigation and adding these articles:
----- Zambrano-Martinez, J. L., Calafate, C. T., Soler, D., Lemus-Zúñiga, L. G., Cano, J. C.,
Manzoni, P., & Gayraud, T. (2019). A Centralized Route-Management Solution for
Autonomous Vehicles in Urban Areas. Electronics, 8 (7), 722.
--- Zhang, X .; Onieva, E .; Perallos, A .; Osaba, E .; Lee, V. Hierarchical fuzzy rule-based
system optimized with genetic algorithms for short term traffic congestion prediction.
Transp. Res. C Emerg. Technol. 2014, 43, 127–142
The structure of the article is well detailed and will match the results with the hypothesis
they present.
We have referred to these articles, updated the literature review accordingly, and
are focusing more on reducing the waiting time at cross-roads.
The results are a bit weak on the part of the authors, because the authors must include some
Figure with their respective explanation, when the scenario is not applied the prediction, and
when the scenario is applied the prediction, that is noticed when it has exceeded the
prediction.
We have added some more analysis and figures for explaining the results.
Experimental Design
1) The authors do not present which scenario is being studied. Furthermore, they do not
present the parameters for that study. Some Figure of the stage must be placed.
The parameters we are studying are the per-category density of vehicles and the
precipitation. We have added an algorithm and more figures to make this clearer.
2) Figure 2 can be transformed into an Algorithm, if it is done it would give more quality to
the paper.
We have transformed Figure 2 into algorithm.
3) Is there a figure that shows the prediction that the authors comment?
We are predicting the optimal time required to clear all the vehicles. Figure 1,
Table 3, and Figure 5 illustrate the prediction.
4) With which simulator has the study been performed?
We are not using any simulator. The study is done completely based on the real time frames
from CCTV images.
5) More graphics are needed for the reader to understand more about the work done.
We have updated with the figures and graphics.
6) How is the Smart Traffic Management System presented by the authors composed?
As we are modernizing and renovating the conventional Traffic Management System
methodology with Artificial Intelligence, it is considered a Smart Traffic
Management System. We have also updated the conclusion accordingly.
7) What are the routes that the vehicles have taken to predict traffic?
We have added the cross roads that we are analysing in Figure 2.
Validity of Findings: The authors do not present where the data comes from to perform the experiment, which
must be detailed and referenced.
For object detection we are using MS COCO Dataset. For Prediction Models, we collected
the dataset of 1128 sample points of different cross-roads which are from Vadodara City of
Gujarat, India.
The conclusions must be improved according to the new implementations suggested by the
reviewers that support the results.
We have updated and improved the conclusion.
" | Here is a paper. Please give your review comments after reading it. |
147 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Traffic congestion and the rise in the number of vehicles have become a grievous issue that is receiving attention worldwide. One of the issues with traffic management is that the traffic light's timer is not dynamic. As a result, one has to wait even if there are few or no vehicles on a roadway, causing unnecessary waiting time and fuel consumption and leading to pollution. Prior work on smart traffic management systems builds on the Internet of Things, Time Series Forecasting, and Digital Image Processing. Computer Vision-based smart traffic management is an emerging area of research. Therefore, a real-time traffic light optimization algorithm that uses Machine Learning and Deep Learning techniques to predict the optimal time required by the vehicles to clear the lane is presented. This paper concentrates on a two-step approach.</ns0:p><ns0:p>The first step is to obtain per-category vehicle counts.</ns0:p><ns0:p>For this, the You Only Look Once version 4 (YOLOv4) object detection technique is employed. In the second step, an ensemble technique named eXtreme Gradient Boosting (XGBoost) is implemented to predict the optimal time of the green light window.</ns0:p><ns0:p>Furthermore, different implemented versions of YOLO and different prediction algorithms are compared with the proposed approach. The experimental analysis signifies that YOLOv4 with the XGBoost algorithm produces the most precise outcome, with a balance of accuracy and inference time. The proposed approach reduces waiting time by an average of 32.3% under usual road traffic.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Traffic management and the pollution arising from traffic are significant issues in India's metropolitan areas, resulting in unnecessary waiting time and congestion. Transportation is the third predominant cause of air pollution, according to the official data of Environmental Statistics 2019 <ns0:ref type='bibr' target='#b30'>(IndianMinistryOfStatistics, 2019)</ns0:ref>. Although India's government began to encourage public transport and escalated private vehicle taxes, the effect is minimal. In recent times, different methods for the regulation and management of traffic have been proposed, including the Internet of Things <ns0:ref type='bibr' target='#b28'>(Greengard, 2015)</ns0:ref>, Time Series Forecasting <ns0:ref type='bibr' target='#b31'>(Jenkins, 1994)</ns0:ref>, and Digital Image Processing <ns0:ref type='bibr' target='#b41'>(Rafael, 1992)</ns0:ref>. Although these approaches achieve impressive results, they are still not cost-effective, fast, or accurate enough, and they remain far from satisfactory in real-time scenarios. To overcome this, a method using a Convolutional Neural Network (KrizhevskyA, 2012) is bestowed.</ns0:p><ns0:p>The advancement and development of effective algorithms in Computer Vision <ns0:ref type='bibr' target='#b46'>(Szeliski, 2010)</ns0:ref> enable computers to perform the task without sensors or human intervention and to convey real-time information. As a result, an Artificial Intelligence-based <ns0:ref type='bibr' target='#b36'>(Long et al., 2020)</ns0:ref> Computer Vision solution is proposed to detect the traffic density on the lanes and to anticipate the optimum time required to clear the traffic. Moreover, computer vision-based approaches provide a visual interpretation of the scenario, resulting in model transparency. The analyzed results validate that the proposed approach is faster at inference, cost-effective, accurate, and operates on a CPU (Central Processing Unit) with minimal processing power.</ns0:p><ns0:p>At an intersection, regardless of the number of vehicles present on each lane, the traffic lights' timer runs in a constant round-robin phase. For instance, during peak hours, the stream of traffic towards the south may be significant while there is decidedly less traffic flow in the other direction; nevertheless, the timer allocated to each lane remains constant, and as a repercussion, extraneous waiting time emerges. The Indian government has already established CCTV (Closed-Circuit Television) cameras at intersections and has begun using electronic-format paperwork, commonly known as e-challan, to beneficial effect, but dynamic traffic lights have not yet been implemented. Additionally, in certain areas, the traffic police regulate traffic through hand-held signaling, which is arduous and cumbersome. Therefore, a simple and effective solution is bestowed to change the timer of the green light window based on the different classes of vehicles present on the lanes. Moreover, precipitation details are taken into consideration, as they broadly affect the lane clearance time. The state-of-the-art You Only Look Once (YOLOv4) <ns0:ref type='bibr' target='#b9'>(Bochkovskiy et al., 2020)</ns0:ref> algorithm is employed for detecting and counting the different classes of vehicles, since it is a one-stage detector with higher accuracy and lower inference time.
The robust ensemble technique, the eXtreme Gradient Boosting Algorithm (XGBoost) <ns0:ref type='bibr' target='#b17'>(Chen and Guestrin, 2016)</ns0:ref>, is proposed for predicting the optimal time of the green light window, as it is fast, efficient, accurate, and resistant to overfitting. The prediction model is constructed by analyzing the traffic patterns of the city of Vadodara during rush hours. Furthermore, the Microsoft Common Objects in Context (MS COCO) <ns0:ref type='bibr' target='#b35'>(Lin et al., 2014)</ns0:ref> dataset is considered for detecting the different classes of vehicles. After constructing and fine-tuning the prediction and detection models, the experimental results show a reduction of 32.3% in the average waiting time of vehicles during intervals of regular traffic. Moreover, in this paper, different object detection algorithms and regression-based prediction models are explored, among which YOLOv4 (You Only Look Once version 4) <ns0:ref type='bibr' target='#b9'>(Bochkovskiy et al., 2020)</ns0:ref> and XGBoost (eXtreme Gradient Boosting) <ns0:ref type='bibr' target='#b17'>(Chen and Guestrin, 2016)</ns0:ref> outperformed in all the constrained scenarios.</ns0:p><ns0:p>The remainder of the paper is organized as follows: Section 2 presents the related work addressing Smart Traffic Management frameworks using different techniques. Section 3 describes the materials and the proposed methodology, along with the algorithm for building the system's architecture. Section 4 discusses the models' results and addresses the models' output. Section 5 concludes the paper by discussing possible future work for smart traffic management systems.</ns0:p></ns0:div>
<ns0:div><ns0:head>LITERATURE REVIEW</ns0:head><ns0:p>Preliminary approaches for smart traffic management systems are IoT-based approaches <ns0:ref type='bibr' target='#b28'>(Greengard, 2015)</ns0:ref>, Time-Series based approaches <ns0:ref type='bibr' target='#b31'>(Jenkins, 1994)</ns0:ref>, and Computer Vision-based approaches <ns0:ref type='bibr' target='#b41'>(Rafael, 1992)</ns0:ref>. Although these approaches yield plausible results, they are expensive, not repurposing, less trustworthy, and less interpretable. <ns0:ref type='bibr' target='#b32'>(Kalaiselvi et al. (2017)</ns0:ref>, <ns0:ref type='bibr' target='#b45'>Sharma et al. (2018)</ns0:ref>) have proposed a technology-driven solution, Light Fidelity (LiFi), that transmits signals to the traffic control system signaling an emergency vehicle's arrival. <ns0:ref type='bibr' target='#b2'>(Akhil and Parvatha, 2017)</ns0:ref> discussed the Sound Navigation and Ranging (SONAR) technique which measures the density of vehicles with an array of UltraSonic Sensors. The timer of the green light and red light is derived from SONAR readings. <ns0:ref type='bibr' target='#b13'>(Bui et al., 2017)</ns0:ref> proposed an IoT-based sensor network combined with the game theory. Every IoT-based entities such as vehicles, sensors, and traffic lights exchange the information in that proposed approach. The Cournot competition model for non-priority vehicles and the Stackelberg competition model for priority vehicles are adopted. Based on this game model, the required time to allocate for a traffic light is determined. <ns0:ref type='bibr' target='#b7'>(Atta et al., 2020)</ns0:ref> has discussed methodology using RFID sensor modules for sensing the density of vehicles and minimizing the congestion. For every incoming vehicle, the signal is sent to the RFID receiver, and accordingly, the count of vehicles is incremented.</ns0:p></ns0:div>
<ns0:div><ns0:head>IoT Based Approach</ns0:head></ns0:div>
<ns0:div><ns0:p>Following that, the fuzzy inference is drawn to predict the estimated time. <ns0:ref type='bibr' target='#b48'>(Zambrano-Martinez et al., 2019)</ns0:ref> proposed a load-balancing algorithm in which traffic is routed to the lane with less traffic. A route server is implemented to handle all of the city's traffic. It includes the SUMO and DFROUTER tools, which produce data from real-world traffic traces. The ABATIS connection interface connects two simulators, the SUMO traffic simulator and the OMNET++ network simulator. Furthermore, the results are validated by injecting 34,065 vehicles. An 8% increase in travel time and a 16% improvement under heavy loads are observed in their proposed approach.</ns0:p></ns0:div>
<ns0:div><ns0:head>Time-Series Forecasting</ns0:head><ns0:p>The time-series methods are based on historical data. <ns0:ref type='bibr' target='#b39'>(Natafgi et al., 2018)</ns0:ref> proposed an adaptive reinforcement learning approach. In which the multiple agents are assigned to a crossroad and learn optimal time and distance to travel based on reward and penalty of their actions. <ns0:ref type='bibr' target='#b6'>(Ata et al., 2019)</ns0:ref> proposed Artificial Backpropagation Neural Network for the Smart Road Traffic Congestion, which predicts time delay based on traffic speed, humidity, wind speed, and air temperature. Furthermore, <ns0:ref type='bibr'>(Chen et al., 2016)</ns0:ref> refined this method using Genetic Algorithms. The number of vehicles heading towards the green light and the vehicles halted at the red light are used as parameters. However, historical data is less accurate when compared with real-time scenarios and can yield erroneous results.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Vision and Machine Learning Based Approach</ns0:head><ns0:p>In the Digital Image Processing-based approach, the image is subtracted using foreground-background subtraction against a reference image, and the blob of the object is obtained. Thereafter, the traffic light is controlled by counting the number of detected blobs <ns0:ref type='bibr' target='#b19'>(Choudekar et al. (2011</ns0:ref><ns0:ref type='bibr' target='#b24'>), Frank et al. (2019)</ns0:ref>). However, conventional image processing approaches are not very robust to changing light conditions and become skewed under dense traffic. Other proposed methods involve detecting and tracking the rearmost vehicle in the frame <ns0:ref type='bibr' target='#b4'>(Asha and Narasimhadhan, 2018)</ns0:ref>; however, since determining the last vehicle is difficult, this approach is limited to a local solution. <ns0:ref type='bibr' target='#b15'>(Castaño et al., 2017)</ns0:ref> and <ns0:ref type='bibr' target='#b16'>(Castaño et al., 2018)</ns0:ref> introduced an obstacle detection method where the object is recognized using a Support Vector Machine. Collision avoidance is accomplished using a Multi-Layer Perceptron and a Self-Organizing Map. In addition, for collision detection, the 3D LiDAR technique is presented, with its error minimized using Reinforcement Learning. To recognize objects, the Support Vector Machine (SVM) classifier is used; nonetheless, the complexity of SVM escalates as the training sample size increases, necessitating further computation and resulting in overfitting <ns0:ref type='bibr' target='#b14'>(Caruana and Niculescu-Mizil, 2006)</ns0:ref>. Convolutional Neural Networks (CNN), by contrast, are more advanced than Multi-Layer Perceptrons because filters and kernels extract only the essential features from the image. <ns0:ref type='bibr' target='#b29'>(Harrou et al., 2020)</ns0:ref> proposed a piecewise linear traffic model (PWSL) and a Kalman filter as a virtual sensor for estimating traffic densities; the residuals between the actual and virtual sensors are fed as input to an unsupervised K-nearest neighbor (KNN) algorithm. Temporal clustering of optical flow features based on Temporal Unknown Incremental Clustering is proposed by <ns0:ref type='bibr' target='#b34'>(Kumaran et al., 2019)</ns0:ref>. In that approach, moving clusters of objects are tracked based on the region of interest, and these moving clusters determine the density of vehicles on the lane. The signal time is estimated based on the type of vehicle rather than the number of vehicles, and a Throughput and Average Waiting Time Optimization Model based on Gaussian regression is trained on the departure and arrival rates of the different clusters. <ns0:ref type='bibr' target='#b38'>(Lv et al., 2014)</ns0:ref> predicted the density of traffic on different roads: 15,000 data points were gathered and analyzed from different sensors, and a stacked autoencoder, consisting of multiple autoencoders stacked together followed by a logistic regression layer at the top, is used to make the prediction; its results are compared with different neural networks. This paper focuses on efficiently reducing excessive waiting time in real-time scenarios by using computer vision-based object detection and machine learning-based prediction models. As a result, two distinct studies are discussed in the subsequent sections.</ns0:p></ns0:div> <ns0:div><ns0:head>Overview of Object Detection Algorithms</ns0:head><ns0:p>Prior to the advance of deep Convolutional Neural Networks (KrizhevskyA, 2012), Image Processing-based methods for object detection were used. The most widely used methods were the Scale-Invariant Feature Transform (SIFT) <ns0:ref type='bibr' target='#b37'>(Lowe, 2004)</ns0:ref> and the Histogram of Oriented Gradients (HOG) method <ns0:ref type='bibr' target='#b20'>(Dalal and Triggs, 2005)</ns0:ref>. Thereafter, a sliding window approach was introduced for object detection. The Region-based Convolutional Neural Network (R-CNN) <ns0:ref type='bibr' target='#b27'>(Girshick et al., 2014)</ns0:ref> is based on 'Region Proposals', which consist of obtaining a subset of the image and then classifying the object using a Convolutional Neural Network <ns0:ref type='bibr' target='#b25'>(Fukushima et al., 1983)</ns0:ref>. Hence, R-CNN is a two-stage detector, and it requires more time for inference.</ns0:p>
<ns0:div><ns0:head>Overview Object detection Algorithms</ns0:head><ns0:p>Prior to advancing the deep Convolutional Neural Network (KrizhevskyA, 2012), Image Processing-based methods for object detections were used. The most widely used methods were Scale-Invariant Feature Transformation (SIFT) <ns0:ref type='bibr' target='#b37'>(Lowe, 2004)</ns0:ref>, and the Histogram of Oriented Gradients (HOG) method <ns0:ref type='bibr' target='#b20'>(Dalal and Triggs, 2005)</ns0:ref>. Thereafter, a sliding window approach is introduced for object detection. Region-based Convolutional Neural Network (R-CNN) <ns0:ref type='bibr' target='#b27'>(Girshick et al., 2014)</ns0:ref> <ns0:ref type='table' target='#tab_4'>2020:10:54950:3:0:NEW 8 May 2021)</ns0:ref> Manuscript to be reviewed Computer Science the 'Region Proposals,' which consist of obtaining a subset of the image and then classifying the object using Convolutional Neural Network <ns0:ref type='bibr' target='#b25'>(Fukushima et al., 1983)</ns0:ref>. Hence R-CNN is a two-stage detector, and it requires more time for the inference.</ns0:p><ns0:p>You Only Look Once (YOLO) <ns0:ref type='bibr' target='#b43'>(Redmon and Farhadi, 2018</ns0:ref>) is a one-stage detector in which the image is divided into the SxS grids, each of which acts as a classifier cell predicting the bounding box and confidence ratio. The Fast R-CNN <ns0:ref type='bibr' target='#b26'>(Girshick, 2015)</ns0:ref>, a variant of R-CNN, has an mAP of 70% with 0.5 FPS, while the YOLO-based model has 63.4% mAP with 155 FPS on Pascal VOC 2007 dataset <ns0:ref type='bibr' target='#b21'>(Everingham, 2012)</ns0:ref>. The mAP value is the mean of average precision, which combines the value of precision and recall values given as ∑ r P@r R <ns0:ref type='bibr' target='#b49'>(Zhang and Zhang, 2009)</ns0:ref>.</ns0:p><ns0:p>YOLOv2 <ns0:ref type='bibr' target='#b42'>(Redmon and Farhadi, 2017</ns0:ref>) employs the Batch Normalization, which improved accuracy by 2%. Furthermore, the principle of anchor box prediction is introduced. The bounding box's dimensions are predicted using KMeans Clustering. The mAP value of YOLOv2 is 76.8%, which is 3.6% higher than Faster RCNN over Pascal VOC 2007 dataset <ns0:ref type='bibr' target='#b23'>(Everingham et al., 2010)</ns0:ref>.</ns0:p><ns0:p>YOLOv3 improved the loss function, which is exhibited in Equation ( <ns0:ref type='formula'>1</ns0:ref>). The whole loss function of YOLOv3 is formulated on regression loss, confidence loss, and classification loss. Moreover, the concept of different layers for the different sizes of the object is introduced. In Equation, 1 1i j obj states that the output value will be 1 if the box and cell value matches; otherwise, it will be 0. When their is no entity 1i j noobj reverses the output value.</ns0:p><ns0:formula xml:id='formula_0'>λ coord S 2 ∑ i=0 B ∑ j=0 1i j obj (xi − xi) 2 + (yi − ŷi ) 2 +λ coord S 2 ∑ i=0 B ∑ j=0 1i j obj √ wi − √ ŵi 2 + √ hi − ĥi 2 + S 2 ∑ i=0 B ∑ j=0 1i j obj Ci − Ĉi 2 +λ noobj S 2 ∑ i=0 B ∑ j=0 1i j noobj Ci − Ĉi 2 + S 2 ∑ i=0 1i obj ∑ c ∈ dasses (p i (c) − pi (c)) 2 (1)</ns0:formula><ns0:p>The YOLOv4 achieves an mAP value of 43% with 43 FPS, whereas FasterRCNN achieves 39.8% mAP with 9.4 FPS over Titan X Pascal GPU.</ns0:p><ns0:p>The Paddle-Paddle YOLO (PP-YOLO) <ns0:ref type='bibr' target='#b36'>(Long et al., 2020)</ns0:ref> In this paper, for the Smart Traffic Management System, the YOLOv4 based architecture is employed to detect the vehicles.</ns0:p></ns0:div>
<ns0:div><ns0:head>Overview of Machine Learning Prediction Models</ns0:head><ns0:p>Machine learning-based regression models are categorized into two types: the polynomial curve-based approach and the decision tree-based approach.</ns0:p><ns0:p>The Elastic Net <ns0:ref type='bibr' target='#b50'>(Zou and Hastie, 2008)</ns0:ref> and the Support Vector Machine regressor <ns0:ref type='bibr' target='#b44'>(Scholkopf, 1998)</ns0:ref> are based on the polynomial approach, while the Random Forest <ns0:ref type='bibr' target='#b11'>(Breiman, 1996)</ns0:ref> and eXtreme Gradient Boosting <ns0:ref type='bibr' target='#b17'>(Chen and Guestrin, 2016)</ns0:ref> are based on the decision tree approach.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The Elastic Net is based on the Lasso and Ridge regularization methods. The regularization allows training only the essential features of the model while penalizing the rest, preventing overfitting.</ns0:p><ns0:p>Equation <ns0:ref type='formula'>2</ns0:ref> represents the polynomial regression, in which the corresponding constants b are multiplied with each polynomial input x. These values are optimized using a Gradient Descent optimizer. Equation <ns0:ref type='formula'>3</ns0:ref> represents the Gradient Descent cost function, where y_i is the actual value, β'x_i is the predicted value, and λ_1 and λ_2 are the regularization constants.</ns0:p><ns0:formula xml:id='formula_1'>y = b_0 + b_1 x_1 + b_2 x_1^2 + \dots + b_n x_1^n \quad (2) \qquad \hat{\beta} = \operatorname{argmin}_{\beta} \sum_i \left(y_i - \beta' x_i\right)^2 + \lambda_1 \sum_{k=1} |\beta_k| + \lambda_2 \sum_{k=1} \beta_k^2 \quad (3)</ns0:formula><ns0:p>In the Support Vector Machine, the polynomial kernel computes the optimal-fit hyperplane by keeping the maximum margin classifier's error rate within a threshold ε. Thereafter, the hyperplane function is computed by optimizing the primal problem <ns0:ref type='bibr' target='#b47'>(Vapnik, 1995)</ns0:ref>. The support vectors are the data points that lie near the boundary of the hyperplane, at distance ε from it. Equations (<ns0:ref type='formula'>4</ns0:ref>) and (<ns0:ref type='formula' target='#formula_2'>5</ns0:ref>) represent the kernel equations of the Support Vector Machine.</ns0:p><ns0:formula xml:id='formula_2'>y = \sum_{i=1}^{N} (\alpha_i - \alpha_i^*) \cdot \langle \varphi(x_i), \varphi(x) \rangle + b \quad (4) \qquad y = \sum_{i=1}^{N} (\alpha_i - \alpha_i^*) \cdot K(x_i, x) + b \quad (5)</ns0:formula><ns0:p>The eXtreme Gradient Boosting (XGBoost) is an advanced Gradient Boosting Tree Algorithm. The built-in cross-validation capability, efficient handling of missing data, regularization to avoid overfitting, cache awareness, tree pruning, and parallelized tree building are all features that contribute to XGBoost's robustness. Equation <ns0:ref type='formula' target='#formula_4'>8</ns0:ref> shows the objective function of XGBoost, which is solved using a second-order Taylor polynomial approximation.</ns0:p><ns0:formula xml:id='formula_3'>Entropy = \sum_{i=1}^{C} -p_i \log_2(p_i) \quad (6) \qquad GiniIndex = 1 - \sum_{i=1}^{C} p_i^2 \quad (7)</ns0:formula><ns0:formula xml:id='formula_4'>L^{(t)} = \sum_{i=1}^{n} l\left(y_i, \hat{y}_i^{(t-1)} + f_t(x_i)\right) + \Omega(f_t)<ns0:label>(8)</ns0:label></ns0:formula></ns0:div>
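A brief sketch of the impurity measures in Equations 6 and 7; the node class probabilities are illustrative assumptions:

```python
# Sketch: the impurity measures of Equations 6 and 7 for a class distribution.
import numpy as np

def entropy(p):
    p = p[p > 0]                         # avoid log2(0)
    return -np.sum(p * np.log2(p))       # Eq. 6

def gini(p):
    return 1.0 - np.sum(p ** 2)          # Eq. 7

p = np.array([0.5, 0.3, 0.2])            # class probabilities at a tree node
print(f"entropy={entropy(p):.3f}  gini={gini(p):.3f}")
```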
<ns0:div><ns0:head>MATERIALS AND METHODOLOGIES Proposed Methodology for Smart Traffic Management System</ns0:head><ns0:p>This section discusses the proposed methodology for the Smart Traffic Management System. The proposed method is simple, robust, accurate, and applicable to traffic circles, cross-roads, and fly-overs.</ns0:p><ns0:p>The XGBoost predicts the optimal lane clearance time for the green light window once the timer assigned to the previous lane is completed.</ns0:p><ns0:p>The entire process works seamlessly without creating any buffer zone in processing time: the processing of the next lane is completed during the last 5 seconds, while the yellow light is on. As a result, lane processing and prediction are performed once per complete cycle. The flow of the whole system is depicted in Algorithm 1.</ns0:p><ns0:p>Our algorithm's main modules are 1) obtaining the vehicles' density using the YOLOv4 Object Detection Model and 2) predicting the optimal time for the green light window using the eXtreme Gradient Boost Prediction Model, which are briefly described in the subsequent sections.</ns0:p></ns0:div>
<ns0:div><ns0:head>Object Detection</ns0:head><ns0:p>The OpenCV-based Leaky YOLOv4 is proposed for analyzing the density of vehicles present on the lane. The classes taken into consideration for vehicle density are Car, Heavy Loaded Vehicles, and Bike. A fixed confidence threshold of 25% (5% for the Tiny model) and an NMS threshold of 50% are configured as the object-detection hyper-parameters. The resulting vehicle counts are then passed to the time-prediction step.</ns0:p></ns0:div>
<ns0:div><ns0:head>Prediction Model</ns0:head><ns0:p>Polynomial curve-fitting and tree-based regression models are proposed to predict the green light window timer.</ns0:p><ns0:p>This process is repeated for each lane. Therefore, in our proposed algorithm, the processing of lanes and the predictions are performed only once per cycle.</ns0:p></ns0:div>
<ns0:div><ns0:head>Datasets</ns0:head><ns0:p>The proposed architecture comprises two models, the Object Detection Model and the Prediction Model; hence, two distinct datasets are used.</ns0:p><ns0:p>1) The Microsoft Common Objects in Context (MS COCO) dataset <ns0:ref type='bibr' target='#b35'>(Lin et al., 2014)</ns0:ref> is considered for object detection. This dataset is often used for classification, detection, and segmentation tasks. It consists of 91 different object types across around 328K images. The Car, Bus, Truck, and Bike classes are segregated, and the vehicle counts are passed to the prediction model.</ns0:p><ns0:p>2) A dataset of 1128 sample points from six different crossroads in Vadodara City, Gujarat, India, is acquired for training the prediction model. These crossroads are illustrated in Figure <ns0:ref type='figure' target='#fig_5'>2</ns0:ref>. The total dataset is partitioned into 90% training and 10% testing sets, which comprise 1015 training samples and 113 testing samples. For validation, the 10-fold cross-validation <ns0:ref type='bibr' target='#b12'>(Browne, 2000)</ns0:ref> technique is utilized, which further splits the training dataset into 10 equal validation blocks.</ns0:p></ns0:div>
<ns0:div><ns0:head>RESULTS AND DISCUSSION</ns0:head><ns0:p>The proposed Algorithm is a two-step approach. In the first step, the per-category vehicle counts are obtained, which serve as input to the next step for predicting the optimal time of the green light window. The whole approach thus combines detection using the YOLOv4 (You Only Look Once version 4) object detection Algorithm with prediction using the XGBoost (eXtreme Gradient Boosting) Algorithm; it is this amalgamation of YOLOv4 and XGBoost that regulates the traffic present at the lane.</ns0:p></ns0:div>
<ns0:div><ns0:head>Analysis of Object Detection Algorithm</ns0:head><ns0:p>The inference time is essential in traffic management; hence, it is reasonable to construct a model of better accuracy with lower inference time. A comparative performance analysis of different YOLO-based detectors is performed to evaluate accuracy and inference time on CPU in a constrained environment. Six different models are evaluated, including OpenCV DNN-based Leaky and Mish YOLOv4 <ns0:ref type='bibr' target='#b10'>(Bradski, 2000)</ns0:ref>, Open Neural Network Exchange (ONNX)-based YOLOv4 <ns0:ref type='bibr' target='#b8'>(Bai et al., 2019)</ns0:ref>, PP-YOLO <ns0:ref type='bibr' target='#b36'>(Long et al., 2020)</ns0:ref>, Darknet YOLOv4, and Darknet YOLOv4 lite <ns0:ref type='bibr' target='#b9'>(Bochkovskiy et al., 2020)</ns0:ref>, on the MS COCO dataset. In the experiment, a fixed confidence level of 5% is defined for the tiny model, and a threshold of 25% is defined for the remaining models. The NMS threshold is set to 50%, with an input image size of 1359x720x3 for day time and 1366x768x3 for evening time. The network resolution is set to 416x416x3 pixels. Figures <ns0:ref type='figure' target='#fig_7'>3 and 4</ns0:ref> depict the study of the different object detection algorithms during the day and at night under low-light conditions.</ns0:p><ns0:p>The inference and mAP analysis is described in Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref>. The accuracy and inference time of the object detection models are evaluated in a constrained environment with 1x single-core hyperthreaded Xeon processors @2.3GHz (1 core, 2 threads) and 12.6 GB RAM. From this analysis, it is concluded that lite models are remarkably fast, but their error rate is high, which has a significant impact on predicting the optimal time. Darknet YOLOv4 is highly accurate, but its inference time exceeds 5 seconds, making it infeasible. Although PP-YOLO using conventional greedy NMS is more accurate and faster than Darknet YOLOv4, its inference time is still close to 5 seconds. The AP values of YOLOv4 ONNX and OpenCV DNN Mish are the same; however, since OpenCV is substantially optimized for CPU, OpenCV DNN has a lower inference time than ONNX. Equations 9 and 10 represent the Mish Activation Function and the Leaky Activation Function, respectively.</ns0:p><ns0:formula xml:id='formula_5'>f(x) = x \cdot \tanh(\mathrm{softplus}(x)) = x \cdot \tanh\left(\ln(1 + e^x)\right)<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>f(x) = \max(0.1x, x) \quad (10)</ns0:p></ns0:div>
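A small NumPy sketch of the activation functions in Equations 9 and 10 (the input range is chosen for illustration):

```python
# Sketch: the activation functions of Equations 9 and 10 in NumPy.
import numpy as np

def mish(x):
    # f(x) = x * tanh(softplus(x)) = x * tanh(ln(1 + e^x))
    return x * np.tanh(np.log1p(np.exp(x)))

def leaky_relu(x, slope=0.1):
    # f(x) = max(slope * x, x)
    return np.maximum(slope * x, x)

x = np.linspace(-4, 4, 9)
print(mish(x))
print(leaky_relu(x))
```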
<ns0:div><ns0:formula xml:id='formula_6'>MSE = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2<ns0:label>(13)</ns0:label></ns0:formula><ns0:p>The randomly selected data points from the testing dataset are analyzed in Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref>. The XGBoost prediction model considers the number of vehicles and the precipitation details to determine the optimal time.</ns0:p></ns0:div>
<ns0:div><ns0:p>Thereafter, the predicted time is compared with the static time, and the reduced waiting time is calculated as depicted in Equation 14. The histogram along with the kernel density estimation (KDE) for the prediction model is depicted in Figure <ns0:ref type='figure' target='#fig_9'>5</ns0:ref>. The data distribution is based on the density of vehicles against the time predicted by the model. The probability density of continuous or non-parametric data variables is visualised using the kernel density estimate, calculated as the area between the density function (graph) and the x-axis in the given interval, as represented in Equation <ns0:ref type='formula'>15</ns0:ref>, where b is the bandwidth of the bin, K(.) is the chosen kernel weight function, and x_i is the observed data point. The KDE reflects how far an individual data point lies from the mean data point in the same bin. The plot clearly shows that the distribution is not skewed. The average predicted value under normal traffic is 46.834 seconds, while the median value is 45.01 seconds.</ns0:p></ns0:div>
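As a minimal sketch of the estimator behind Figure 5, the standard Gaussian-kernel form f(x) = 1/(n*b) * sum_i K((x - x_i)/b) can be computed with SciPy; the predicted-time values below are illustrative, not the paper's data.

```python
import numpy as np
from scipy.stats import gaussian_kde

predicted_times = np.array([48, 40, 44, 35, 32, 27, 46.8, 45.0, 60.1])  # seconds (toy values)
kde = gaussian_kde(predicted_times)        # bandwidth b chosen automatically (Scott's rule)
grid = np.linspace(predicted_times.min(), predicted_times.max(), 200)
density = kde(grid)                        # values of the estimated density f(x)
print(np.trapz(density, grid))             # area under the curve, close to 1
```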
<ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>In this work, a unified algorithm for the Smart Traffic Management System is introduced to address traffic congestion by predicting the optimal time of the green light window. The proposed approach replaces the conventional method of allocating the timer in a round-robin pattern with dynamic time allocation. The approach uses incoming frames from CCTV cameras; as a result, integrating it with traffic lights is simple and efficient. The proposed approach is scalable, as the system works appropriately in both high and low traffic, and it is cost-efficient, as it does not require much maintenance or any sensors; instead, it is based entirely on input frames from the CCTV cameras. Furthermore, the proposed approach is robust and versatile: it works efficiently during the day and at night under low street lighting, and under all weather conditions. The future scope is to implement lane clearance for emergency vehicles, to assess the weather by classifying the frame, and to take specific time events such as rush hours and peak hours into account.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>PP-YOLO <ns0:ref type='bibr' target='#b36'>(Long et al., 2020)</ns0:ref> is a recent advancement based on the PaddlePaddle detector framework. It employs ResNet50-vd as the backbone, a Feature Pyramid Network (FPN) with DropBlock regularization as the neck, and YOLOv3 as the head. The overall AP value is increased by 1.5% by introducing the Intersection over Union (IoU) loss and Grid Sensitive modules. Furthermore, matrix-based Non-Max Suppression and CoordConv increase the mAP value by 0.8%. The YOLO-based object detection framework outperforms others since it detects objects in a single shot. In this paper, different novel YOLO architectures and their implementations are compared. The metrics AP 50, the average precision at an IoU of 50, and AP M, the average precision for objects of medium size, are considered for comparison.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Random forest is a bagging technique based on decision trees, whereas the Gradient Boosting Tree (GBT) is an ensemble boosting technique. Bagging determines the output by aggregating the predictions of all the decision trees. The information gain, based on the Gini index or entropy, determines the split of a decision tree; the two measures are depicted in Equations 6 and 7. The boosting technique involves weight adjustment based on the previous decision tree's prediction. In both the bagging and boosting techniques the residual error, e.g. ȳ = y − F₀(x) for an initial model F₀, is determined; this residual is added to the initial value and fine-tuned until the prediction approaches the ground-truth values. The eXtreme Gradient Boosting (XGBoost) is, in turn, a regularized and computationally optimized implementation of gradient boosting.</ns0:figDesc></ns0:figure>
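A minimal sketch of the two impurity measures (Equations 6 and 7) that drive the decision-tree splits discussed above, and of the information gain of a candidate split; the toy labels are illustrative.

```python
import numpy as np

def gini(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)            # Gini index

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))          # Shannon entropy

def information_gain(parent, left, right, impurity=entropy):
    n = len(parent)
    weighted = (len(left) / n) * impurity(left) + (len(right) / n) * impurity(right)
    return impurity(parent) - weighted      # impurity reduction of the split

y = np.array([0, 0, 1, 1, 1, 0])
print(information_gain(y, y[:3], y[3:]))          # gain under entropy
print(information_gain(y, y[:3], y[3:], gini))    # same split under the Gini index
```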
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Flowchart for the Smart Traffic Management System</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>The methodology followed for training the model is to split the dataset into a training part and a testing part. Furthermore, the 10-fold cross-validation technique <ns0:ref type='bibr' target='#b12'>(Browne, 2000)</ns0:ref> is applied, which divides the training data into a training set and a validation set. The model is first trained on the training data and then fine-tuned on the validation data. Finally, the model's unbiased evaluation is performed on the unseen testing dataset.
Algorithm: Green Light Prediction
  input : set of IP addresses (N); set of regions of interest (N)
  output: predicted time of the green signal
  begin
    LANE <- 0; TIMER <- 0
    loop
      LANE <- (LANE + 1) mod N
      VEHICLES <- YOLOv4.Detect(frame)
      foreach object obj in VEHICLES do
        COUNT_VEHICLES[obj] <- obj.value
      RAIN <- 0
      isPrecipitation <- WEATHER_API(LAT, LON)
      if isPrecipitation then RAIN <- 1
      PREDICTED_TIME <- XGBoost.Predict(COUNT_VEHICLES)
      while TIMER not 0 do WAIT
      TIMER <- PREDICTED_TIME
      GREEN_SIGNAL <- TRUE
      if TIMER is 5 then YELLOW_SIGNAL <- TRUE
  end
Along with the count of vehicles, the precipitation details are fetched from the Open Weather Map API <ns0:ref type='bibr' target='#b40'>(OlgaUkolova, 2017)</ns0:ref> in one-hot encoded form and conveyed to the model. The predicted time is set for the green light window once the previous lane's timer has completed. When the last 5 seconds remain, the light of the current lane switches from green to yellow, and the next lane's processing is performed.</ns0:figDesc></ns0:figure>
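A minimal Python sketch of the round-robin control loop above. The detector, regressor, weather call, and signal interface are stand-ins for the real components, not APIs from this paper.

```python
import time

def control_loop(camera_streams, detector, regressor, weather_api, signals, lat, lon):
    n = len(camera_streams)
    lane, timer = 0, 0
    while True:
        lane = (lane + 1) % n
        frame = camera_streams[lane].read()            # CCTV frame for this lane (stand-in)
        counts = detector.detect(frame)                # e.g. {'car': 4, 'bus': 1, 'bike': 2}
        rain = 1 if weather_api(lat, lon) else 0       # one-hot precipitation flag
        predicted = regressor.predict([counts.get("car", 0),
                                       counts.get("bus", 0),
                                       counts.get("bike", 0),
                                       rain])
        while timer > 0:                               # wait out the previous lane's window
            time.sleep(1)
            timer -= 1
            if timer == 5:
                signals.set_yellow((lane - 1) % n)     # last 5 s: green -> yellow
        timer = int(predicted)
        signals.set_green(lane)                        # start this lane's green window
```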
<ns0:figure xml:id='fig_5'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Cross-Roads undertaken for analytics from Vadodara City -(Map data ©2021 Google)</ns0:figDesc><ns0:graphic coords='9,178.44,63.78,340.05,283.47' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Different object detection algorithm analysis for traffic at cross-road in the day time A) OpenCV DNN Leaky YOLOv4 B) OpenCV DNN Mish YOLOv4 C) ONNX YOLOv4 D) PP-YOLO E) Darknet YOLOv4 F) Darknet YOLOv4 Tiny</ns0:figDesc><ns0:graphic coords='10,141.74,63.78,413.57,340.16' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Different object detection algorithm analysis for traffic at cross-road in the night time A) OpenCV DNN Leaky YOLOv4 B) OpenCV DNN Mish YOLOv4 C) ONNX YOLOv4 D) PP-YOLO E) Darknet YOLOv4 F) Darknet YOLOv4 Tiny</ns0:figDesc><ns0:graphic coords='11,141.74,63.78,413.58,311.80' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>The reduced average waiting time is 32.3% in comparison with the static time allocated to the usual road traffic, based on the test dataset. Hence the proposed methodology reduces the intersection's waiting time and improves the traffic lights' efficiency.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Histogram along with Kernel Density Estimation for the predictive model</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Average predicted time versus labelled time over the training data: A) 25% of vehicles B) 50% of vehicles C) 75% of vehicles D) 100% of vehicles</ns0:figDesc><ns0:graphic coords='14,141.73,63.78,413.57,267.52' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Comparison of the model-predicted time with the static allocated time over the Gaussian probability distribution.</ns0:figDesc><ns0:graphic coords='15,206.79,63.78,283.46,198.42' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Analysis of Prediction Model</ns0:head><ns0:label /><ns0:figDesc>Elastic Net, Support Vector Regressor, Random Forest Regressor, and Extreme Gradient Boosting algorithms are analyzed to determine the prediction model. Elastic Net and Support Vector Regressors are polynomial-based regressors, while Random Forest and Extreme Gradient Boosting are decision-tree-based regressors. Table 2 details the fine-tuned hyper-parameters and the regression analysis of each model across the training, validation, and test datasets. The Bayesian parameter optimization technique from the hyperparameter framework 'Optuna' is used for parameter optimization and function subset selection <ns0:ref type='bibr' target='#b3'>(Akiba et al., 2019)</ns0:ref>. The r-squared coefficient of determination and the mean squared error (MSE) are used as evaluation metrics, depicted in Equations 11, 12, and 13. Based on the findings, the decision-tree-based models fit better than the polynomial-based models. Furthermore, the analysis shows that Elastic Net performed poorly, with an r2 score of 0.638 and an MSE of 21.79 over unseen samples, whereas Extreme Gradient Boosting outperformed the others, with promising results of an r2 score of 0.92 and an MSE of 5.53.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>r^2 = 1 − ss_res / ss_tot, where ss_res is the sum of squares of the residual error and ss_tot is the total sum of squares</ns0:cell><ns0:cell>(11)</ns0:cell></ns0:row></ns0:table></ns0:figure>
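A minimal sketch of the Bayesian hyper-parameter search with Optuna for the XGBoost regressor, scored by MSE under 10-fold cross-validation and evaluated with r2/MSE as above; the data-loading helper and the searched parameter ranges are assumptions.

```python
import optuna
import xgboost as xgb
from sklearn.model_selection import cross_val_score
from sklearn.metrics import r2_score, mean_squared_error

X_train, y_train, X_test, y_test = load_traffic_dataset()  # hypothetical helper

def objective(trial):
    model = xgb.XGBRegressor(
        n_estimators=trial.suggest_int("n_estimators", 50, 500),
        max_depth=trial.suggest_int("max_depth", 2, 10),
        learning_rate=trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
    )
    # cross_val_score returns negative MSE; flip the sign to minimize MSE
    return -cross_val_score(model, X_train, y_train, cv=10,
                            scoring="neg_mean_squared_error").mean()

study = optuna.create_study(direction="minimize")   # Bayesian (TPE) search by default
study.optimize(objective, n_trials=50)

best = xgb.XGBRegressor(**study.best_params).fit(X_train, y_train)
pred = best.predict(X_test)
print("r2:", r2_score(y_test, pred), "MSE:", mean_squared_error(y_test, pred))
```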
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>YOLO object detection comparison between YOLOv4 ONNX, YOLOv4 Darknet, YOLOv4 Darknet Tiny, PP-YOLO, OpenCV Leaky YOLOv4, and OpenCV Mish YOLOv4. The inference time and accuracy are calculated using a fixed computational environment.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Architecture</ns0:cell><ns0:cell>Inference time in day light</ns0:cell><ns0:cell>Inference time in evening (low light)</ns0:cell><ns0:cell>AP50 (416x416)</ns0:cell><ns0:cell>APM (416x416)</ns0:cell></ns0:row><ns0:row><ns0:cell>YOLOv4 ONNX</ns0:cell><ns0:cell>∼3.1327 sec</ns0:cell><ns0:cell>∼3.168 sec</ns0:cell><ns0:cell>63.3%</ns0:cell><ns0:cell>44.4% AP</ns0:cell></ns0:row><ns0:row><ns0:cell>YOLOv4 Darknet</ns0:cell><ns0:cell>∼8.865 sec</ns0:cell><ns0:cell>∼8.965 sec</ns0:cell><ns0:cell>63.3%</ns0:cell><ns0:cell>44.4% AP</ns0:cell></ns0:row><ns0:row><ns0:cell>YOLOv4 Darknet Tiny</ns0:cell><ns0:cell>∼0.44 sec</ns0:cell><ns0:cell>∼0.1 sec</ns0:cell><ns0:cell>40.2%</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>PP-YOLO</ns0:cell><ns0:cell>∼4.489 sec</ns0:cell><ns0:cell>∼4.468 sec</ns0:cell><ns0:cell>62.8%</ns0:cell><ns0:cell>45.2% AP</ns0:cell></ns0:row><ns0:row><ns0:cell>OpenCV Leaky YOLOv4</ns0:cell><ns0:cell>∼1.40821 sec</ns0:cell><ns0:cell>∼1.4109 sec</ns0:cell><ns0:cell>62.7%</ns0:cell><ns0:cell>43.7% AP</ns0:cell></ns0:row><ns0:row><ns0:cell>OpenCV Mish YOLOv4</ns0:cell><ns0:cell>∼1.6733 sec</ns0:cell><ns0:cell>∼1.679 sec</ns0:cell><ns0:cell>63.3%</ns0:cell><ns0:cell>44.4% AP</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Regression model comparison between Elastic Net, Support Vector Regressor (SVR), Random Forest Regressor, and the Extreme Gradient Boosting tree (XGBoost GBT). The hyper-parameters of each model are optimized, and results are reported for the training, validation, and testing sets.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell>Hyper-parameters</ns0:cell><ns0:cell>Training set</ns0:cell><ns0:cell>Cross validation</ns0:cell><ns0:cell>Testing set</ns0:cell></ns0:row><ns0:row><ns0:cell>Elastic Net</ns0:cell><ns0:cell>Degree: 2; Interaction: True; Learning Rate: 0.05; L1 ratio: 0.5</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Waiting time reduced by XGBoost in comparison to the static time. The parameters used for prediction are the number of cars, buses or trucks, bikes, and precipitation. These values indicate a symmetric distribution of the predicted time with respect to the traffic density. The minimum predicted time is 14.98 seconds and the maximum is 100.03 seconds. The average difference between the labelled time and the model prediction is minimal, indicating that the model fits the Gaussian probability density curve well. The comparison with the actual allocated static time over the Gaussian probability distribution is presented in Figure 7, which demonstrates that the proposed solution effectively decreases waiting time compared with the static allocated time.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>CAR</ns0:cell><ns0:cell>BUS and TRUCK</ns0:cell><ns0:cell>BIKE</ns0:cell><ns0:cell>RAIN</ns0:cell><ns0:cell>XGBoost predicted (in sec)</ns0:cell><ns0:cell>Static Time (in sec)</ns0:cell><ns0:cell>Reduced Waiting Time</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>YES</ns0:cell><ns0:cell>48</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>20%</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>NO</ns0:cell><ns0:cell>48</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>20%</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>YES</ns0:cell><ns0:cell>40</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>33%</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>YES</ns0:cell><ns0:cell>44</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>27%</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>YES</ns0:cell><ns0:cell>35</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>42%</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>NO</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>47%</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>YES</ns0:cell><ns0:cell>27</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>55%</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Date: 8 May 2021
Dr. Li Zhang
Academic Editor, PeerJ
Sub: Submission of tracked changes manuscript entitled “An amalgamation of YOLOv4 and
XGBoost for next-gen smart traffic management system” by Pritul et al.
Dear Dr. Zhang,
I, on behalf of my co-authors, am pleased to submit the revised manuscript entitled “An
amalgamation of YOLOv4 and XGBoost for next-gen smart traffic management system”. The
comments and suggestions gave us the opportunity to improve the paper. We have addressed
all of the concerns raised in the revised manuscript and acted on the comments as per the
reviewers' suggestions.
Sincerely,
Pritul Dave
Computer Science and Engineering Department,
Devang Patel Institute of Advance Technology and Research (DEPSTAR)
Charotar University of Science and Technology (CHARUSAT),
Email:pritul.dave@gmail.com;
Tel: +91 9904513475
Reviewer 1 (Anonymous)
No Changes
Reviewer 2 (Anonymous)
Basic Reporting
-I suggest adding more references such as:
--Zambrano-Martinez, J. L., Calafate, C. T., Soler, D., Lemus-Zúñiga, L. G., Cano, J. C.,
Manzoni, P., & Gayraud, T. (2019). A centralized route-management solution for
autonomous vehicles in urban areas. Electronics, 8(7), 722.
--Lv, Y.; Duan, Y.; Kang, W.; Li, Z.; Wang, F.Y. Traffic flow prediction with big data: A deep
learning approach.
IEEE Trans. Intell. Transp. 2014, 16, 865–873
We are thankful to the reviewer for the suggestion of adding more references. We have
gone through the provided references and improved the literature review. The “IoT Based
Approach” and “Computer Vision and Machine Learning Based Approach” sections were
improved based on the suggestions.
Experimental design
--There are no Figures of which are the polynomial curve for the predictive model.
We appreciate your insightful suggestion. We have improved Figure 5, in which we added
the polynomial curve as a kernel density estimation of the predicted time. Furthermore,
Figures 6 and 7 also depict the Gaussian probability density curve of the predictive
model.
" | Here is a paper. Please give your review comments after reading it. |
148 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Scaling up psychotherapy services such as addiction counseling is a critical societal need. One challenge is ensuring the quality of therapy, due to the heavy cost of manual observational assessment. This work proposes a speech technology based system to automate the assessment of therapist empathy, a key therapy quality index, from audio recordings of the psychotherapy interactions. We design a speech processing system that includes voice activity detection and diarization modules, and an automatic speech recognizer plus a speaker role matching module to extract the therapist's language cues. We employ Maximum Entropy models, Maximum Likelihood language models, and a Lattice Rescoring method to characterize high vs. low empathic language. We estimate therapy-session level empathy codes using utterance level evidence obtained from these models. Our experiments showed that the fully automated system achieved a correlation of 0.643 between expert annotated empathy codes and machine-derived estimations, and an accuracy of 81% in classifying high vs. low empathy, in comparison to a 0.721 correlation and 86% accuracy in the oracle setting using manual transcripts. The results show that the system provides useful information that can contribute to automatic quality assurance and therapist training.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1.'>Introduction</ns0:head><ns0:p>Addiction counseling is a type of psychotherapy, where the therapist aims to support changing the patient's addictive behavior through face-to-face conversational interaction. Mental health care toward drug and alcohol abuse is essential to society. A national survey in the United States by the Substance Abuse and Mental Health Services Administration <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref> showed that there were 23.9 million illicit drug users in 2012. However, only 2.5 million persons received treatment at a specialty facility <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. Beyond the gap between the addiction counseling provided and what is needed, it is also challenging to evaluate millions of counseling cases regarding the quality of the therapy and the competence of the therapists.</ns0:p><ns0:p>Unlike pharmaceuticals, whose quality can be assessed during design and manufacturing, psychotherapy is essentially an interaction where multimodal communicative behaviors are the means of treatment; hence the quality is at best unknown until after the interaction takes place. Traditional approaches for evaluating the quality of therapy and therapist performance rely on manual observational coding of the therapist-patient interaction, e.g., reviewing tape recordings and annotating them with performance scores. This kind of coding often takes more than five times real time, including initial human coder training and reinforcement <ns0:ref type='bibr' target='#b3'>[2]</ns0:ref>. The lack of human and time resources prohibits the evaluation of psychotherapy at large scale; moreover, it limits deeper understanding of how therapy works due to the small number of cases evaluated. Similar issues exist in many human-centered application fields such as education and customer service.</ns0:p><ns0:p>In this work, we propose computational methods for evaluating therapist performance based on their behaviors. We focus on one type of addiction counseling called Motivational Interviewing (MI), which helps people to resolve ambivalence and emphasizes the intrinsic motivation of changing addictive behaviors <ns0:ref type='bibr' target='#b5'>[3]</ns0:ref>. MI has been proved effective in various clinical trials, and theories about its mechanisms have been developed <ns0:ref type='bibr' target='#b6'>[4]</ns0:ref>. Notably, therapist empathy is considered essential to the quality of care in a range of health care interactions including MI, where it holds a prominent function. Empathy mainly encompasses two aspects in the MI scenario: the therapist's internalization of a patient's thoughts and feelings, i.e., taking the perspective of the patient; and the therapist's response with the sensitivity and care appropriate to the suffering of the patient, i.e., feeling for the patient <ns0:ref type='bibr'>[5]</ns0:ref>. Empathy is an evolutionarily acquired basic human ability and a core factor in human interactions. Physiological mechanisms on single-cell and neural-system levels lend support to the cognitive and social constructs of empathy <ns0:ref type='bibr' target='#b8'>[6,</ns0:ref><ns0:ref type='bibr' target='#b9'>7,</ns0:ref><ns0:ref type='bibr' target='#b10'>8]</ns0:ref>. Higher ratings of therapist empathy are associated with treatment retention and positive clinical outcomes <ns0:ref type='bibr' target='#b11'>[9,</ns0:ref><ns0:ref type='bibr' target='#b6'>4,</ns0:ref><ns0:ref type='bibr'>10]</ns0:ref>. Therefore, we choose to computationally quantify empathy.</ns0:p><ns0:p>The study of the techniques that support the measurement, analysis, and modeling of human behavior signals is referred to as Behavioral Signal Processing (BSP) <ns0:ref type='bibr' target='#b13'>[11]</ns0:ref>. The primary goal of BSP is to inform human assessment and decision making. Other examples of BSP applications include the use of acoustic, lexical, and head motion models to infer expert assessments of married couples' communicative behavioral characteristics in dyadic conversations <ns0:ref type='bibr'>[12,</ns0:ref><ns0:ref type='bibr' target='#b16'>13,</ns0:ref><ns0:ref type='bibr' target='#b17'>14]</ns0:ref>, and the use of vocal prosody and facial expressions in understanding behavioral characteristics in Autism Spectrum Disorders <ns0:ref type='bibr' target='#b18'>[15,</ns0:ref><ns0:ref type='bibr' target='#b19'>16,</ns0:ref><ns0:ref type='bibr'>17,</ns0:ref><ns0:ref type='bibr' target='#b21'>18]</ns0:ref>. Closely related to BSP, Social Signal Processing studies the modeling, analysis, and synthesis of human social behavior through multimodal signal processing <ns0:ref type='bibr' target='#b22'>[19]</ns0:ref>. Computational models of empathy essentially explore the relation between low level behavior signals and high level human judgments, with the guidance of domain theories and data from real applications. Our previous work on empathy modeling has used lexical, vocal similarity, and prosodic cues. We have shown that lexical features derived from empathic vs. generic language models are correlated with expert annotated therapist empathy ratings <ns0:ref type='bibr' target='#b24'>[20]</ns0:ref>. The speech processing pipeline segments the audio and produces the participants' spoken language as output. For therapist empathy modeling in this paper, we focus on the spoken language of the therapist only. We propose three methods for empathy level estimation based on language models representing high vs. low empathy, including using the Maximum Entropy model, the Maximum likelihood based model trained with human-generated transcripts, and a Maximum likelihood approach based on direct ASR lattice rescoring.</ns0:p><ns0:p>Given access to a collection of relatively large, well annotated databases of MI transcripts, we train various models for each processing step, and evaluate the performance of the intermediate steps as well as the final empathy estimation accuracies of the different models. We employ the VAD system developed by Van Segbroeck et al. <ns0:ref type='bibr'>[26]</ns0:ref>. The system extracts four types of speech features: (i) spectral shape, (ii) spectro-temporal modulations, (iii) periodicity structure due to the presence of pitch harmonics, and (iv) the long-term spectral variability profile. In the next stage, these features are normalized in variance, and a three-layer neural network is trained on the concatenation of these feature streams.</ns0:p></ns0:div>
Therefore, we choose to computationally quantify empathy.</ns0:p><ns0:p>The study of the techniques that support the measurement, analysis, and modeling of human behavior signals is referred to as Behavioral Signal Processing (BSP) <ns0:ref type='bibr' target='#b13'>[11]</ns0:ref>. The primary goal of BSP is to inform human assessment and decision making. Other examples of BSP applications include the use of 45 acoustic, lexical, and head motion models to infer expert assessments of married couples' communicative behavioral characteristics in dyadic conversations <ns0:ref type='bibr'>[12,</ns0:ref><ns0:ref type='bibr' target='#b16'>13,</ns0:ref><ns0:ref type='bibr' target='#b17'>14]</ns0:ref>, and the use of vocal prosody and facial expressions in understanding behavioral characteristics in Autism Spectrum Disorders <ns0:ref type='bibr' target='#b18'>[15,</ns0:ref><ns0:ref type='bibr' target='#b19'>16,</ns0:ref><ns0:ref type='bibr'>17,</ns0:ref><ns0:ref type='bibr' target='#b21'>18]</ns0:ref>.</ns0:p><ns0:p>Closely related to BSP, Social Signal Processing studies modeling, analysis and 50 synthesis of human social behavior through multimodal signal processing <ns0:ref type='bibr' target='#b22'>[19]</ns0:ref>.</ns0:p><ns0:p>Computational models of empathy essentially explore the relation between low level behavior signals and high level human judgments, with the guidance of domain theories and data from real applications. Our previous work on empathy modeling has used lexical, vocal similarity, and prosodic cues. We have 55 shown that lexical features derived from empathic vs. generic language models are correlated with expert annotated therapist empathy ratings <ns0:ref type='bibr' target='#b24'>[20]</ns0:ref>. We segmented spoken language as output. For therapist empathy modeling in this paper, we focus on the spoken language of the therapist only. We propose three 90 methods for empathy level estimation based on language models representing high vs. low empathy, including using the Maximum Entropy model, the Maximum likelihood based model trained with human-generated transcripts, and a Maximum likelihood approach based on direct ASR lattice rescoring.</ns0:p><ns0:p>Given the access to a collection of relatively large size, well annotated databases 95 of MI transcripts, we train various models for each processing step, and evaluate the performance of intermediate steps as well as the final empathy estimation accuracies by different models. We employ the VAD system developed by Van Segbroeck et al. <ns0:ref type='bibr'>[26]</ns0:ref>. The 110 system extracts four types of speech features: (i) spectral shape, (ii) spectrotemporal modulations, (iii) periodicity structure due to the presence of pitch harmonics, and (iv) the long-term spectral variability profile. In the next stage, these features are normalized in variance; and a three-layer neural network is trained on the concatenation of these feature streams.</ns0:p></ns0:div>
<ns0:div><ns0:head>115</ns0:head><ns0:p>The neural network outputs the voicing probability for each audio frame, which requires binarization to determine the segmentation points. We use an adaptive threshold on the voicing probability to constrain the maximum length of speech segments. This binarization threshold increases from 0.5, until that all segments are shorter than an upper bound of segment length (e.g., 60s). Spoken 120 segment longer than that is infrequent in the target dyadic interactions, and not memory efficient to process in speech recognition. We merge neighboring segments on condition that the gap between them is shorter than a lower bound (e.g., 0.1s) and the combined segment does not exceed the upper bound of segment length (e.g., 60s). After the merging we drop segments that are too 125 short (e.g., less than 1s).</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2.'>Speaker diarization</ns0:head><ns0:p>Speaker diarization is a technique that provides segmentation of the audio with information about 'who spoke when'. Separating the speakers facilitates speaker adaptation in ASR, and identification of speaker roles (patient, thera-130 pist in our application). We assume the number of speakers is known a priori in the application -two speakers in addiction counseling. Therefore, the diarization process mainly includes a segmentation step (dividing speech to speaker We employ two diarization methods as follows, and both of them take VAD results and Mel-Frequency Cepstrum Coefficient (MFCC) features as inputs.</ns0:p><ns0:p>The first method uses Generalized Likelihood Ratio (GLR) based speaker segmentation, and agglomerative speaker clustering as implemented in <ns0:ref type='bibr' target='#b32'>[27]</ns0:ref>. The second method adopts GLR speaker segmentation and Riemannian manifold 140 method for speaker clustering, as implemented in <ns0:ref type='bibr'>[28]</ns0:ref>. This method slices each GLR derived segment into short-time segments (e.g., 1s), so as to increase the number of samples in the manifold space for more robust clustering (see <ns0:ref type='bibr'>[28]</ns0:ref> for more detail).</ns0:p><ns0:p>After obtaining the diarization results we compute session-level heuristics 145 for outlier detection: e.g., (i) percentage of speaking time by each speaker, (ii) longest duration of a single speaker's turn. These statistics can be checked against their expected values; and we define an outlier as a value that is more than three times of standard deviation away from the mean. For example, a 95%/5% split of speaking time in the two clusters may be a result of clustering 150 speech vs. silence due to imperfect VAD. We use the heuristics and a rule based scheme to integrate the results from different diarization methods as described further in Sec. 5.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3.'>ASR</ns0:head><ns0:p>We decided to train an ASR using speech recordings from in-domain data 155 corpora that were collected in real psychotherapy settings. These recordings may best match the acoustic conditions (possibly noisy and heterogeneous) in the target application. In this work, a large vocabulary, continuous speech recognizer (LVCSR) is implemented using the Kaldi library <ns0:ref type='bibr'>[29]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Feature:</ns0:head><ns0:p>The input audio format is 16kHz single channel far-field recording.</ns0:p></ns0:div>
<ns0:div><ns0:head>160</ns0:head><ns0:p>The acoustic features are standard MFCCs including ∆ and ∆∆ features.</ns0:p><ns0:p>Dictionary: We combine the lexicon in Switchboard <ns0:ref type='bibr' target='#b37'>[30]</ns0:ref> and WSJ <ns0:ref type='bibr'>[31]</ns0:ref> corpora, and manually add high frequency domain-specific words collected from </ns0:p></ns0:div>
<ns0:div><ns0:head n='2.4.'>Speaker role matching</ns0:head><ns0:p>The therapist and patient play distinct roles in psychotherapy interaction; knowing the speaker role hence is useful for modeling therapist empathy. The Manuscript to be reviewed Computer Science language use. For example, a therapist may use more questions than the patient.</ns0:p><ns0:p>We expect a lower perplexity when the language content of the audio segment 195 matches the LM of the speaker role, and vice versa. In the following we describe the role-matching procedure in detail. 0. Input: training transcripts with speaker-role annotated, two sets of ASR decoded utterances U 1 and U 2 for diarized speakers S 1 and S 2 .</ns0:p><ns0:p>1. Train role-specific language models for (T)herapist and (P)atient sepa-200 rately, using corresponding training transcripts, e.g., trigram LMs with Kneser-Ney smoothing, using SRILM <ns0:ref type='bibr'>[32]</ns0:ref>.</ns0:p><ns0:p>2. Mix the final LM used in ASR to the role-specific LMs by a small weight (e.g., 0.1), for vocabulary consistency and robustness.</ns0:p><ns0:p>3. Compute ppl 1,T and ppl 1,P as the perplexities for U 1 over the two role-205 specific LMs. Similarly get ppl 2,T and ppl 2,P for U 2 .</ns0:p><ns0:p>4. Three cases: (i) (1) holds -we match S 1 to therapist and S 2 to patient;</ns0:p><ns0:p>(ii) (2) holds -we match S 1 to patient and S 2 to therapist; (iii) in all other conditions, we take both S 1 and S 2 as therapist.</ns0:p><ns0:formula xml:id='formula_0'>ppl 1,T ≤ ppl 1,P & ppl 2,P ≤ ppl 2,T<ns0:label>(1)</ns0:label></ns0:formula><ns0:formula xml:id='formula_1'>ppl 1,P < ppl 1,T & ppl 2,T < ppl 2,P<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>5. Outliers: When the diarization module outputs highly biased results in 210 speaking time for two speakers, the comparison of perplexities is not meaningful. If the total word count in U 1 is more than 10 times of that in U 2 , we match S 1 to therapist; and vice versa.</ns0:p><ns0:p>6. Output: U 1 and U 2 matched to speaker roles.</ns0:p><ns0:p>When there is not a clear role match, e.g., in step 4, case III and step 5, we Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>we tend to oversample therapist language to augment captured information, and trade-off with the noise brought from patient language.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.'>Therapist empathy models using language cues</ns0:head><ns0:p>We employ manually transcribed therapist language in MI sessions with high 220 vs. low empathy ratings to train separate language models representing high vs.</ns0:p><ns0:p>low empathy, and test the models on clean or ASR decoded noisy text. One approach is based on Maximum Likelihood N-gram Language Models (LMs) of high vs. low empathy respectively, previously employed in <ns0:ref type='bibr' target='#b24'>[20]</ns0:ref> and in <ns0:ref type='bibr' target='#b16'>[13]</ns0:ref> for a similar problem; we adopt this method for its simplicity and effectiveness.</ns0:p></ns0:div>
<ns0:div><ns0:head>225</ns0:head><ns0:p>Additional modeling approaches may be complementary to increase the accuracy of empathy prediction; for this reason we adopt a widely applied method -Maximum Entropy model, which has shown good performance in a variety of natural language processing tasks. Moreover, in order to improve the test performance on ASR decoded text, it is possible to evaluate an ensemble of 230 noisy text hypotheses through rescoring the decoding lattice with high vs. low empathy LMs. In this way empathy relevant words in the decoding hypotheses gain more weights so that they become stronger features. Without rescoring, it is likely that these words do not contribute to the modeling due to their absence in the best paths of lattices. In this work, we employ the above three approaches 235 and their fusion in the system.</ns0:p><ns0:p>For each session, we first infer therapist empathy at the utterance level, then integrate the local evidence toward session level empathy estimation. We discuss more about the modeling strategies in Sec. 7.1. The details of the proposed methods are described as follows. </ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1.'>Maximum Entropy model</ns0:head><ns0:p>Maximum Entropy (MaxEnt) model is a type of exponential model that is widely used in natural language processing tasks, and achieves good performance in these tasks <ns0:ref type='bibr' target='#b40'>[33,</ns0:ref><ns0:ref type='bibr'>34]</ns0:ref>. We train a two-class (high vs. low empathy) MaxEnt model on utterance level data using the MaxEnt toolkit in <ns0:ref type='bibr' target='#b42'>[35]</ns0:ref>. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Let high and low empathy classes be denoted H and L respectively, and Y ∈ {H, L} be the class label. Let u ∈ U be an utterance in the set of therapist utterances. We use n-grams (n = 1, 2, 3) as features for the feature function</ns0:p><ns0:formula xml:id='formula_2'>f j n (u, Y )</ns0:formula><ns0:p>, where j is an index of the n-gram. We define f j n (u, Y ) as the count of the j-th n-gram type that appears in u if Y u = Y , otherwise 0. <ns0:ref type='formula' target='#formula_3'>3</ns0:ref>), where we denote the weight and partition function as λ j n and Z(u), respectively. In the training phase, λ j n is determined through the L-BFGS algorithm <ns0:ref type='bibr' target='#b43'>[36]</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_3'>P n (Y |u) = 1 Z(u) exp   j λ j n f j n (u, Y )  <ns0:label>(3)</ns0:label></ns0:formula><ns0:p>Based on the trained MaxEnt model, averaging utterance level evidences 255 P n (H|u) gives the session level empathy score α n , as shown in <ns0:ref type='bibr' target='#b6'>(4)</ns0:ref>, where U T is the set of K therapist utterances.</ns0:p><ns0:formula xml:id='formula_4'>α n (U T ) = 1 K K i=1 P n (H|u i ), U T = {u 1 , u 2 , • • • , u K }, n = 1, 2, 3. (4)</ns0:formula></ns0:div>
<ns0:div><ns0:head n='3.2.'>Maximum likelihood based model</ns0:head><ns0:p>Maximum Likelihood language models (LM) based on N-grams can provide the likelihood of an utterance conditioned on a specific style of language, e.g., 260 P (u|H) as the likelihood of utterance u in the empathic style. Following the Bayesian relationship, the posterior probability P (H|u) is formulated by the likelihoods as in (5), where we assume equal prior probabilities P (H) = P (L).</ns0:p><ns0:p>P (H|u) = P (u|H)P (H) P (u|H)P (H) + P (u|L)P (L) = P (u|H) P (u|H) + P (u|L)</ns0:p><ns0:p>(5)</ns0:p><ns0:p>We train the high empathy LM (LM H ) and low empathy LM (LM L ) using manually transcribed therapist language in high empathic and low empathic Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>LM (e.g., the final LM in ASR) to LM H and LM L with a small weight (e.g., 0.1).</ns0:p><ns0:p>Let us denote the mixed LMs as LM </ns0:p><ns0:p>We compute session level empathy score β n as the average of utterance level evidences as shown in <ns0:ref type='bibr' target='#b9'>(7)</ns0:ref>, where U T is the same as in (4).</ns0:p><ns0:formula xml:id='formula_6'>β n (U T ) = 1 K K i=1 P n (H|u i ) (7)</ns0:formula></ns0:div>
<ns0:div><ns0:head n='3.3.'>Maximum likelihood rescoring on ASR decoded lattices 275</ns0:head><ns0:p>Instead of evaluating a single utterance as the best path in ASR decoding, we can evaluate multiple paths at once by rescoring the ASR lattice. The score (in likelihood sense) rises for the path of an highly empathic utterance when evaluated on the empathy LM, while it drops on the low empathy LM. We hypothesize that rescoring the lattice would re-rank the paths so that empathy-related words <ns0:ref type='formula'>8</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_7'>S H (L) = exp 1 R R r=1 s H (r) exp 1 R R r=1 s H (r) + exp 1 R R r=1 s L (r) (8)</ns0:formula><ns0:p>5. Compute the session level empathy score γ as in <ns0:ref type='bibr' target='#b11'>(9)</ns0:ref>, where U T is the set of K lattices of therapist utterances. Note that the Lattice Rescoring method is a natural extension of the Maximum Likelihood LM method in Sec. 3.2. When the score s H (r) denotes loglikelihood and R = 1, (8) becomes equivalent to <ns0:ref type='bibr' target='#b8'>(6)</ns0:ref>. In that case S H (L) represents a similar meaning to P (H|L). The lattice is a more compact way of representing the hypothesized utterances since there is no need to write out the Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_8'>γ(U T ) = 1 K K i=1 S H (L i )<ns0:label>(9</ns0:label></ns0:formula><ns0:p>Computer Science paths explicitly. It also allows more efficient averaging of the evidence from the top hypotheses.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.'>Data corpora</ns0:head><ns0:p>This section introduces the three data corpora used in the study.</ns0:p><ns0:p>• 'TOPICS' corpus -153 audio-recorded MI sessions randomly selected 305 from 899 sessions in five psychotherapy studies <ns0:ref type='bibr' target='#b44'>[37,</ns0:ref><ns0:ref type='bibr' target='#b45'>38,</ns0:ref><ns0:ref type='bibr' target='#b48'>39,</ns0:ref><ns0:ref type='bibr' target='#b49'>40,</ns0:ref><ns0:ref type='bibr'>41</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1.'>Empathy annotation in CTT corpus</ns0:head><ns0:p>Three coders reviewed the 826 audio recordings of the entire CTT corpus, and annotated therapist empathy using a specially designed coding system - </ns0:p></ns0:div>
<ns0:div><ns0:head n='5.'>System implementation</ns0:head><ns0:p>In this section, we describe the system implementation in more detail. Table. 3 summarizes the usage of data corpora in various modeling and application steps. Manuscript to be reviewed Role matching: We use the TOPICS corpus to train role-specific LMs for the therapist and patient. We also mix the final LM in ASR with the role-specific LMs for robustness.</ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div>
<ns0:div><ns0:head>Empathy modeling:</ns0:head><ns0:p>We conduct empathy analysis on the CTT corpus.</ns0:p></ns0:div>
<ns0:div><ns0:head>385</ns0:head><ns0:p>Due to data sparsity, we carry out a leave-one-therapist-out cross-validation on CTT corpus, i.e., we use data involving all-but-one therapist's sessions in the corpus to train high vs. low empathy models, and test on that held-out therapist. For the lattice LM rescoring method in Sec. 3.3, we employ the top 100 paths (R = 100). Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>for empathy analysis, in order to learn the mapping between empathy scores and codes, we conduct an internal cross-validation on the training set in each 395 round. For a single empathy score, we use linear regression and threshold search (minimizing classification error) for the mapping to the empathy code and the high or low class, respectively. For multiple empathy scores, we use support vector regression and linear support vector machine for the two mapping tasks, respectively. 400 6. Experiment and results</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.1.'>Experiment setting</ns0:head><ns0:p>We examine the effectiveness of the system by setting up the experiments in three conditions for comparison.</ns0:p><ns0:p>• ORA-T -Empathy modeling on manual transcriptions of therapist lan-405 guage (i.e., using ORAcle Text).</ns0:p><ns0:p>• ORA-D -ASR decoding of therapist language with manual labels of speech segmentation and speaker roles (i.e., using ORAcle Diarization and role labels), followed by empathy modeling on the decoded therapist language.</ns0:p></ns0:div>
<ns0:div><ns0:head>410</ns0:head><ns0:p>• AUTO -Fully automatic system that takes audio recording as input, carries out all the processing steps in Sec. 2 and empathy modeling in Sec. 3.</ns0:p><ns0:p>We setup three evaluation metrics regarding the performance of empathy code estimation: Pearson's correlation ρ, Root Mean Squared Error (RMSE) σ 415 between expert annotated empathy codes and system estimations, and accuracy Acc of session-wise high vs. low empathy classification.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.2.'>ASR system performance</ns0:head><ns0:p>Table. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science within turns. Therefore inherent errors exist in the reference data, but we believe they should not affect the conclusions significantly due to the relatively low ratio of such events. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science sessions. For the latter, there might be a side effect that is influencing the 465 performance -lattice path re-ranking may pick up words in patient language that are relevant to empathy, such that the noise (i.e., patient language mixed in) is also 'colored' and no longer neutral to empathy modeling. Since the SP sessions have similar story setup (hence shared vocabulary) but not for the RP sessions, such effect may be less for RP sessions. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:p>Secondly, while the noisy text in the ORA-D case attenuates the representation of empathy, this effect is less critical for binary classification, since classification only concerns the polarity of high vs. low empathy rather than the actual degree. In Table 9, ORA-D has a slightly higher σ than ORA-T. This shows that ORA-D does not exceed ORA-T in the estimation of the empathy code values, possibly lending support to the decrease in estimation accuracy caused by the noisy text in the ORA-D case.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>noisy text in the ORA-D case attenuates the representation of empathy, such effect is less critical for binary classification since it only concerns the polarity of high vs. low empathy rather than the actual degree. In Table <ns0:ref type='table'>.</ns0:ref> 9 ORA-D has slightly higher σ than ORA-T. This shows that ORA-D does not exceed 500 ORA-T in the estimation of empathy code values, possibly lending support to the decrease of estimation accuracy by the noisy text in the ORA-D case.</ns0:p></ns0:div>
<ns0:div><ns0:head n='7.'>Discussion</ns0:head></ns0:div>
<ns0:div><ns0:head n='7.1.'>Empathy modeling strategies</ns0:head><ns0:p>In this section we will discuss more about empathy and modeling strate-505 gies. Empathy is not an individual property but exhibited during interactions.</ns0:p><ns0:p>More specifically, empathy is expressed and perceived in a cycle <ns0:ref type='bibr' target='#b56'>[45]</ns0:ref>: (i) patient expression of experience, (ii) therapist empathy resonation, (iii) therapist expression of empathy, and (iv) patient perception of empathy. The real empathy construct is in (ii), while we rely on (iii) to approximate the perception 510 of empathy by human coders. This suggests one should model the therapist and patient jointly, as we have shown using the acoustic and prosodic cues for empathy modeling in <ns0:ref type='bibr' target='#b25'>[21,</ns0:ref><ns0:ref type='bibr' target='#b26'>22]</ns0:ref>.</ns0:p><ns0:p>However, joint modeling in the lexical domain may be very difficult, since patient language is unconstrained and highly variable, which leads to data spar-515 sity. Therapist language, as in (iii) above encodes empathy expression and hence provides the main source of information. Can et al. <ns0:ref type='bibr' target='#b58'>[46]</ns0:ref> proposed an approach to automatically identify a particular type of therapist talk style named reflection, which is closely linked to empathy. It showed that N-gram features of therapist language contributed much more than those of patient language. Therefore in 520 this initial work we focused on the modeling of therapist language, while in the future plan to investigate effective ways of incorporating patient language.</ns0:p><ns0:p>Human annotation of empathy in this work is a session level assessment, where coders evaluate the therapist's overall empathy level as a gestalt. In a long session of psychotherapy, the perceived therapist empathy may not be uniform Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>across time, i.e., there may be influential events or even contradicting evidence.</ns0:p><ns0:p>Human coders are able to integrate such evidence toward an overall assessment.</ns0:p><ns0:p>In our work, since we do not have utterance level labels, in the training phase we treat all utterances in high vs. low empathy sessions as representing high vs. low empathy, respectively. We expect the model to overcome this since 530 the N-grams manifesting high empathy may occur more often in high empathy sessions. In the testing phase, we found that scoring therapist language by utterances (and taking the average) exceeded directly scoring the complete set of therapist language. This demonstrates that the proposed methods are able to capture empathy on utterance level. We see that the ratio of human agreement to the averaged code is around 90% on the CTT corpus. This suggests that human judgment of empathy is not always consistent, and the manual assessment of therapist may not be perfect.</ns0:p></ns0:div>
<ns0:div><ns0:head>545</ns0:head><ns0:p>However, human agreement is still higher than that between the average code and automatic estimation (results in Manuscript to be reviewed</ns0:p><ns0:p>Computer Science computational assessment as an objective reference may be useful for studying the subjective process of human judgment of empathy. Manuscript to be reviewed are often questioning or instructing the patient. This is consistent with the concept of empathy as 'trying on the feeling' or 'taking the perspective' of others.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head n='7.4.'>Robustness of empathy modeling methods</ns0:head><ns0:p>In this section we demonstrate the robustness of the Lattice Rescoring method Manuscript to be reviewed</ns0:p><ns0:p>Computer Science In practice, if the empathy LM is rich enough, one can also decode the utterance directly using the high/low empathy LMs instead of rescoring the lattice.</ns0:p></ns0:div>
<ns0:div><ns0:head n='8.'>Conclusion</ns0:head><ns0:p>In this paper we have proposed a prototype of a fully automatic system to 595 rate therapist empathy from language use in addiction counseling. We constructed speech processing modules that include VAD, diarization, and a large vocabulary continuous speech recognizer customized to the topic domain. We employed role-specific language models to identify therapist's language. We applied MaxEnt, Maximum Likelihood LM, and Lattice Rescoring methods to 600 estimate therapist empathy codes in MI sessions, based on lexical cues of the therapist's language. In the end, we composed these elements and implemented For evaluation, we estimated empathy using manual transcripts, ASR decoding using manual segmentation, and fully automated ASR decoding. Exper-605 imental results showed that the fully automatic system achieved a correlation of 0.643 between human annotation and machine estimation of empathy codes, as well as an accuracy of 81% in classifying high vs. low empathy scores. Using manual transcripts we achieve a better performance of 0.721 and 86% in correlation and classification accuracy, respectively. The experimental results</ns0:p></ns0:div>
<ns0:div><ns0:head>610</ns0:head><ns0:p>show the effectiveness of the system in therapist empathy estimation. We also observed that the performance of the three modeling methods are comparable in general, while the robustness varies for different methods and conditions.</ns0:p><ns0:p>In the future, we would like to improve the underlying techniques for speech processing and speech transcription, such as implementing more accurate VAD, 615 diarization with overlapped speech detection, and a more robust ASR system.</ns0:p><ns0:p>We would also like to acquire more and better training data such as by using close talking microphones in collections. The use of close-talking microphones may fundamentally improve the accuracy of speaker diarization. As a result acoustic and prosodic cues may be integrated into the system, which relies on 620 robust speaker identification. The system may be augmented by incorporating other behavioral modalities such as gestures and facial expressions from the visual channel. A joint modeling of these dynamic behavioral cues may provide a more accurate quantification of therapist's empathy characteristics.</ns0:p><ns0:p>Appendix A. Note on data sharing 625 Restrictions would apply to release the data corpora we used in the experiments for two reasons. First, our work is a secondary study analyzing data of archived recordings of counseling sessions, which cannot be fully anonymized.</ns0:p><ns0:p>Thus the data cannot be released to the public. The only exception is the 'General psychotherapy corpus' as a collection of psychotherapy transcripts. We Manuscript to be reviewed Computer Science seling and Psychotherapy Transcripts, Client Narratives, and Reference Works (http://alexanderstreet.com/products/counseling-and-psychotherapytranscripts-series). Second, all of the available original audio recordings were from third parties. The primary authors were not responsible for the col-635 lection of the original data, which was pulled from 6 different clinical trials. We list these specific studies and PI information as the following.</ns0:p><ns0:p>• Alcohol Research Collaborative: Peer Programs; <ns0:ref type='bibr' target='#b45'>[38]</ns0:ref>: Christine M. Lee; leecm@uw.edu</ns0:p><ns0:p>• Event Specific Prevention: Spring Break; [41]; Christine M. Lee; 640 leecm@uw.edu</ns0:p><ns0:p>• Event Specific Prevention: Twenty First Birthday; <ns0:ref type='bibr' target='#b48'>[39]</ns0:ref>; Clayton Neighbors; cneighbors@uh.edu</ns0:p><ns0:p>• Brief Intervention for Problem Drug Use and Abuse in Primary Care; <ns0:ref type='bibr' target='#b59'>[47]</ns0:ref>; Peter Roy-Byrne; roybyrne@u.washington.edu</ns0:p></ns0:div>
<ns0:div><ns0:head>645</ns0:head><ns0:p>• Indicated Marijuana Prevention for Frequently Using College Students. <ns0:ref type='bibr' target='#b49'>[40]</ns0:ref>; Christine M. Lee; leecm@uw.edu</ns0:p><ns0:p>• Context Tailored Training (CTT). <ns0:ref type='bibr' target='#b54'>[43]</ns0:ref>; John Baer; jsbaer@uw.edu We would like to point out that despite the constrains on data, the methods proposed in this work and the system we described are generally applicable 650 to empathy estimation in Motivational Interviewing. We expect the results to be reproducible on audio data that are in similar nature to the data in our experiments.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>25 performance</ns0:head><ns0:label>25</ns0:label><ns0:figDesc>based on their behaviors. We focus on one type of addiction coun-2 PeerJ Comput. Sci. reviewing PDF | (CS-2015:10:7361:1:0:NEW 4 Mar 2016)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1: Overview of modules in the system, including VAD, diarization, ASR, speaker role matching, and therapist language modeling for empathy prediction.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>LM$'_H$ and LM$'_L$. For the inference of $P(H \mid u)$, we first compute the log-likelihoods $l_n(u \mid H)$ and $l_n(u \mid L)$ by applying LM$'_H$ and LM$'_L$, where $n = 1, 2, 3$ are the utilized N-gram orders. Then $P_n(H \mid u)$ is obtained as in (6): $P_n(H \mid u) = \dfrac{e^{l_n(u \mid H)}}{e^{l_n(u \mid H)} + e^{l_n(u \mid L)}}$</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>may be picked up, which improves the robustness of empathy modeling when the decoding is noisy (more discussion in Sec. 7.4). An illustration of the lattice paths re-ranking effect is shown in Fig. 2. 0. Input: ASR decoded lattice $L$, high and low empathy LMs LM$'_H$, LM$'_L$ as described in Sec. 3. 1. Update the LM scores in $L$ by applying LM$'_H$ and LM$'_L$ as trigram LMs; denote the results as $L_H$ and $L_L$, respectively. 2. Rank the paths in $L_H$ and $L_L$ according to the weighted sum of AM and LM scores.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>3. List the final scores of the $R$-best paths in $L_H$ and $L_L$ as $s_H(r)$ and $s_L(r)$ in the log field, $1 \le r \le R$, respectively. 4. Compute the utterance-level empathy score $S_H(L)$ as in (</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 2:</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2: Illustration of lattice rescoring by high/low empathy LMs: the middle column represents the ASR decoded lattice, where each row denotes a path in the lattice, ranked by their scores. Each path is color-coded to show the empathy degree. High/low empathy LM rescoring produces two new lattices on the left and right, respectively. The paths are re-ranked based on their new scores.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head /><ns0:label /><ns0:figDesc>The recording format and transcription scheme are the same as in the TOPICS corpus. Each session is about 20 min. All research procedures for this study were reviewed and approved by Institutional Review Boards at the University of Washington (IRB 36949) and the University of Utah (IRB 00058732). During the original trials all participants provided written consent. The UW IRB approved all consent procedures. The details about the corpus sizes are listed in Table 1.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head /><ns0:label /><ns0:figDesc>the 'Motivational Interviewing Treatment Integrity' (MITI) manual <ns0:ref type='bibr' target='#b55'>[44]</ns0:ref>. The empathy code values are discrete from 1 to 7, with 7 being high empathy and 1 being low empathy. 182 sessions were coded twice by the same or different coders, while no session was coded three times. The first and second empathy codes of the sessions that were coded twice had a correlation of 0.87. The Intra-Class Correlation (ICC) is 0.67±0.16 for inter-coder reliability, and 0.79±0.13 for intra-coder reliability. These statistics support coder reliability in the annotation. We use the mean value of the empathy codes if a session was coded twice. In the original study, three psychology researchers acted as Standardized Patients (SP), whose behaviors were regulated for therapist training and evaluation purposes. For example, SP sessions had pre-scripted situations. Sessions involving an SP or a Real Patient (RP) were about equally frequent in the entire corpus. The 200 sessions used in this study are selected from the two extremes of empathy codes, which may represent empathy more prominently. The class of low empathy sessions has a range of code values from 1 to 4, with a mean value of 2.16±0.55, while that for the high empathy class is 4.5 to 7, with a mean of 5.90±0.58.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_18'><ns0:head>3.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Run ASR using $D_2$-derived segmentation, obtain new VAD information according to the alignment in the decoding, and disregard the decoded words.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_19'><ns0:head>4.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Result $D_3$: based on the new VAD information, apply the method in [28] again, with a scheme of slicing speech regions into 1-minute short segments. 5. Result $D_4$: if $D_3$ is an outlier that is detected using the heuristics in Sec. 2.2, and $D_2$ or $D_1$ is not an outlier, then take $D_2$ or $D_1$ in turn as $D_4$; otherwise take $D_3$ as $D_4$. Such an integration scheme is informed by the performance on the training corpus. ASR: We train the AM and the initial LM using the TOPICS corpus. We employ the General Psychotherapy corpus as a large in-domain data set and mix it in the LM for robustness. Perplexity decreases on the heldout data after the mixing. The Deep Neural Network model is trained following the 'train_tanh.sh' script in the Kaldi library. The ASR is used in finding more accurate VAD results as mentioned above. In addition, we apply the ASR to the CTT corpus under two conditions: (i) assuming accurate VAD and diarization conditions by utilizing the manually labeled timing and speaker information; (ii) using the automatically derived diarization results to segment the audio.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_20'><ns0:head /><ns0:label /><ns0:figDesc>Empathy model fusion: The three methods in Sec. 3 and different choices of n-gram order n may provide complementary cues about empathy. This motivates us to set up a fusion module. Since we need to carry out cross-validation</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_21'><ns0:head /><ns0:label /><ns0:figDesc>Table 4 reports false alarm, miss, speaker error rate (for diarization only), and total error rate for the VAD and diarization modules. These results are the averages of session-wise values. We can see that ASR-derived VAD information dramatically improves the diarization results in $D_4$ compared to $D_2$, which is based on the initial VAD.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_22'><ns0:head>6.3. Empathy code estimation performance</ns0:head><ns0:label /><ns0:figDesc>Table 6 shows the results of empathy code estimation using the fusion of empathy scores $\alpha_n$, $n = 1, 2, 3$, which are derived by the MaxEnt model and n-gram features in Sec. 3.1. We compare the performance in the ORA-T, ORA-D, and AUTO cases, for SP, RP and all sessions separately. Note that due to data sparsity, we conduct leave-one-therapist-out cross-validation on all sessions, and report the performance separately for SP and RP data. The correlation $\rho$ is in the range 0 to 1; the RMSE $\sigma$ is in the space of empathy codes (1 to 7); and the classification accuracy Acc is in percentage.</ns0:figDesc></ns0:figure>
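For reference, the three measures named here follow their standard definitions, which the fragment does not spell out: $\rho$ is the Pearson correlation between estimated and annotated codes, Acc is the fraction of sessions correctly classified as high vs. low empathy, and the RMSE over $N$ test sessions with estimates $\hat{y}_i$ and annotated codes $y_i$ is

$$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(\hat{y}_i - y_i)^2}$$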
<ns0:figure xml:id='fig_23'><ns0:head>Table 8:</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Empathy code estimation performance using the lattice LM rescoring method. Table 9 shows the results by the fusion of the empathy scores including $\alpha_n$, $\beta_n$, and $\gamma$, $n = 1, 2, 3$. The best overall results are achieved by such fusion, except Acc in the AUTO case. The fully automatic system achieves higher than 80% accuracy in classifying high vs. low empathy, and a correlation of 0.643 in</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_24'><ns0:head /><ns0:label /><ns0:figDesc>likely to be influenced by random effects. Moreover, as discussed above, though</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_26'><ns0:head>7.2. Inter-human-coder agreement</ns0:head><ns0:label /><ns0:figDesc>62 out of 200 sessions in the CTT corpus were coded by two human coders. We binarize their coding with a threshold of 4.5. If the two coders annotated empathy codes in the same class, we consider it as coder agreement. If they annotated the opposite, one (and only one) of them would have a disagreement with the class of the averaged code value. In Table 10 we list the counts of coder disagreement.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_27'><ns0:head>7.3.</ns0:head><ns0:label /><ns0:figDesc>Intuition about the discriminative power of lexical cues</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_28'><ns0:head /><ns0:label /><ns0:figDesc>LM$'_H$ and LM$'_L$ as $l_n(w \mid H)$ and $l_n(w \mid L)$, respectively. Let $\mathrm{cnt}(w)$ be the count of $w$ in the CTT corpus. We define the discriminative power $\delta$ of $w$ as in (10): $\delta(w) = (l_n(w \mid H) - l_n(w \mid L)) \cdot \mathrm{cnt}(w)$ (10)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_29'><ns0:head /><ns0:label /><ns0:figDesc>Fig. 3 illustrates the results. The upper left panel shows the corresponding WER by the sampled paths from lattice $L$. The upper right and lower left/right panels show the performances by the three methods regarding $\rho$, $\sigma$, and Acc,</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_30'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3: Comparison of robustness by MaxEnt, Maximum Likelihood LM, and Lattice Rescoring methods</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_31'><ns0:head /><ns0:label /><ns0:figDesc>the complete system.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_32'><ns0:head /><ns0:label /><ns0:figDesc>obtained this data via library subscription from Alexander Street Press, Coun-</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>], including intervention of college student drinking and marijuana use, as well as clinical mental health care for drug use. Audio data are available as single-channel far-field recordings in 16 bit quantization, 16 kHz sample rate. Audio quality of the recordings varies significantly as they were collected in various real clinical settings. The selected sessions were manually transcribed with annotations of speaker, start-end time of each turn, overlapped speech, repetition, filler words, incomplete words, laughter, sigh, and other nonverbal vocalizations. Session length ranges from 20 min to 1 hour. • 'General Psychotherapy' corpus: transcripts of 1200 psychotherapy sessions in MI and a variety of other treatment types [42]. Audio data are not available. • 'CTT' corpus: 200 audio-recorded MI sessions selected from 826 sessions in a therapist training study (namely Context Tailored Training) <ns0:ref type='bibr' target='#b54'>[43]</ns0:ref>.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Details about the data corpora employed, including counts of session, talk turn, and word token, and also total time duration.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Corpus</ns0:cell><ns0:cell cols='4'>No. sessions No. talk turns No. word tokens Duration</ns0:cell></ns0:row><ns0:row><ns0:cell>TOPICS</ns0:cell><ns0:cell>153</ns0:cell><ns0:cell>3.69 × 10 4</ns0:cell><ns0:cell>1.12 × 10 6</ns0:cell><ns0:cell>104.2 hr</ns0:cell></ns0:row><ns0:row><ns0:cell>Gen. Psyc.</ns0:cell><ns0:cell>1200</ns0:cell><ns0:cell>3.01 × 10 5</ns0:cell><ns0:cell>6.55 × 10 6</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>CTT</ns0:cell><ns0:cell>200</ns0:cell><ns0:cell>2.40 × 10 4</ns0:cell><ns0:cell>6.24 × 10 5</ns0:cell><ns0:cell>68.6 hr</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2:</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Counts of SP, RP, high and low empathy sessions in the CTT corpus. Table 2 shows the counts of high vs. low empathy and SP vs. RP sessions. Moreover, the selected sessions are diverse in the therapists involved: there are 133 unique therapists, and any therapist has no more than three sessions.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3:</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Summary of data corpora usage in the training and test phases, across the VAD, diarization, ASR-AM, ASR-LM, role matching and empathy modules, for the TOPICS, Gen. Psyc. and CTT corpora.</ns0:figDesc><ns0:table /><ns0:note>VAD: We construct the VAD training and development sets by sampling from the TOPICS corpus. The total lengths of the two sets are 5.2h and 2.6h, respectively. We expect a wider coverage of heterogeneous audio conditions would increase the robustness of the VAD. We train the neural network as described in Sec. 2.1, and tune the parameters on the development set. We apply VAD on the CTT corpus. Diarization: We run diarization on the CTT corpus as below. 1. Result $D_1$: apply the agglomerative clustering methods in [27]. 2. Result $D_2$: apply the Riemannian clustering method in [28].</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4:</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Session-wise average performance of VAD and diarization modules.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Results</ns0:cell><ns0:cell>False Alarm (%)</ns0:cell><ns0:cell>Miss (%)</ns0:cell><ns0:cell>Speaker error (%)</ns0:cell><ns0:cell>Total error (%)</ns0:cell></ns0:row><ns0:row><ns0:cell>VAD</ns0:cell><ns0:cell>5.8</ns0:cell><ns0:cell>6.8</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>12.6</ns0:cell></ns0:row><ns0:row><ns0:cell>$D_2$</ns0:cell><ns0:cell>6.9</ns0:cell><ns0:cell>8.7</ns0:cell><ns0:cell>13.7</ns0:cell><ns0:cell>29.3</ns0:cell></ns0:row><ns0:row><ns0:cell>$D_4$</ns0:cell><ns0:cell>4.2</ns0:cell><ns0:cell>6.7</ns0:cell><ns0:cell>7.3</ns0:cell><ns0:cell>18.1</ns0:cell></ns0:row></ns0:table><ns0:note>Table 5 reports averaged ASR performance in terms of substitution, deletion, insertion, and total Word Error Rate (WER) for the ORA-D and AUTO cases. We can see that in the AUTO case there is a slight increase in WER, which might be a result of VAD and diarization errors, as well as the influence on speaker adaptation effectiveness. Using clean transcripts we were able to identify speaker roles for all sessions. For the AUTO case, due to diarization and ASR errors, we found a match of speaker roles in 154 sessions (78%), but failed in 46 sessions.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 5:</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Overall ASR performance for ORA-D and AUTO cases.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Cases</ns0:cell><ns0:cell>Substitution (%)</ns0:cell><ns0:cell>Deletion (%)</ns0:cell><ns0:cell>Insertion (%)</ns0:cell><ns0:cell>WER (%)</ns0:cell></ns0:row><ns0:row><ns0:cell>ORA-D</ns0:cell><ns0:cell>27.1</ns0:cell><ns0:cell>11.5</ns0:cell><ns0:cell>4.6</ns0:cell><ns0:cell>43.1</ns0:cell></ns0:row><ns0:row><ns0:cell>AUTO</ns0:cell><ns0:cell>27.9</ns0:cell><ns0:cell>12.2</ns0:cell><ns0:cell>4.5</ns0:cell><ns0:cell>44.6</ns0:cell></ns0:row></ns0:table><ns0:note>There are two notes about the speech processing results. First, due to the large variability of audio conditions in different sessions, the averaged results are affected by the very challenging cases. For example, session-level ASR WER is in the range of 19.3% to 91.6%, with a median WER of 39.9% and a standard deviation of 16.0%. Second, the evaluation of VAD and diarization is based on speaking-turn level annotations, which ignore gaps, backchannels, and overlapped regions</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 6:</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Empathy code estimation performance using the MaxEnt model. Similarly, Table 7 shows the results by the fusion of empathy scores $\beta_n$, $n = 1, 2, 3$, derived by the n-gram LMs in Sec. 3.2. From the results in Table 6 and Table 7 we can see that the MaxEnt method and the Maximum Likelihood LM method are comparable in performance. The MaxEnt method suffers more from noisy data in the RP sessions than the Maximum Likelihood LM method, as its performance decreases more in the AUTO case for RP, while it is more effective in cleaner conditions like the ORA-D case. As a type of discriminative model, the MaxEnt model may overfit more than the Maximum Likelihood LM under sparse training data. Thus the influence of noisy input is also heavier for the MaxEnt model.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='3'>SP</ns0:cell><ns0:cell cols='3'>RP</ns0:cell><ns0:cell cols='3'>All sessions</ns0:cell></ns0:row><ns0:row><ns0:cell>Cases</ns0:cell><ns0:cell>ρ</ns0:cell><ns0:cell>σ</ns0:cell><ns0:cell>Acc</ns0:cell><ns0:cell>ρ</ns0:cell><ns0:cell>σ</ns0:cell><ns0:cell>Acc</ns0:cell><ns0:cell>ρ</ns0:cell><ns0:cell>σ</ns0:cell><ns0:cell>Acc</ns0:cell></ns0:row><ns0:row><ns0:cell>ORA-T</ns0:cell><ns0:cell>0.747</ns0:cell><ns0:cell>1.27</ns0:cell><ns0:cell>87.9</ns0:cell><ns0:cell>0.653</ns0:cell><ns0:cell>1.49</ns0:cell><ns0:cell>80.3</ns0:cell><ns0:cell>0.707</ns0:cell><ns0:cell>1.36</ns0:cell><ns0:cell>85.0</ns0:cell></ns0:row><ns0:row><ns0:cell>ORA-D</ns0:cell><ns0:cell>0.699</ns0:cell><ns0:cell>1.38</ns0:cell><ns0:cell>85.5</ns0:cell><ns0:cell>0.651</ns0:cell><ns0:cell>1.51</ns0:cell><ns0:cell>84.2</ns0:cell><ns0:cell>0.678</ns0:cell><ns0:cell>1.43</ns0:cell><ns0:cell>85.0</ns0:cell></ns0:row><ns0:row><ns0:cell>AUTO</ns0:cell><ns0:cell>0.693</ns0:cell><ns0:cell>1.48</ns0:cell><ns0:cell>87.1</ns0:cell><ns0:cell>0.452</ns0:cell><ns0:cell>1.73</ns0:cell><ns0:cell>64.5</ns0:cell><ns0:cell>0.611</ns0:cell><ns0:cell>1.58</ns0:cell><ns0:cell>78.5</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 7:</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Empathy code estimation performance using the Maximum Likelihood LM. Table 8 shows the results using the empathy score $\gamma$ that is derived by the lattice LM rescoring method in Sec. 3.3, for the ORA-D and AUTO cases that involve ASR decoding. Here we set the count of paths $R$ for score averaging to 100. The Lattice Rescoring method performs comparably well in the ORA-D case. It performs well in the AUTO case for RP sessions, but suffers in SP</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='3'>SP</ns0:cell><ns0:cell cols='3'>RP</ns0:cell><ns0:cell cols='3'>All sessions</ns0:cell></ns0:row><ns0:row><ns0:cell>Cases</ns0:cell><ns0:cell>ρ</ns0:cell><ns0:cell>σ</ns0:cell><ns0:cell>Acc</ns0:cell><ns0:cell>ρ</ns0:cell><ns0:cell>σ</ns0:cell><ns0:cell>Acc</ns0:cell><ns0:cell>ρ</ns0:cell><ns0:cell>σ</ns0:cell><ns0:cell>Acc</ns0:cell></ns0:row><ns0:row><ns0:cell>ORA-T</ns0:cell><ns0:cell>0.749</ns0:cell><ns0:cell>1.27</ns0:cell><ns0:cell>89.5</ns0:cell><ns0:cell>0.632</ns0:cell><ns0:cell>1.51</ns0:cell><ns0:cell>77.6</ns0:cell><ns0:cell>0.706</ns0:cell><ns0:cell>1.37</ns0:cell><ns0:cell>85.0</ns0:cell></ns0:row><ns0:row><ns0:cell>ORA-D</ns0:cell><ns0:cell>0.699</ns0:cell><ns0:cell>1.39</ns0:cell><ns0:cell>86.3</ns0:cell><ns0:cell>0.581</ns0:cell><ns0:cell>1.62</ns0:cell><ns0:cell>71.1</ns0:cell><ns0:cell>0.654</ns0:cell><ns0:cell>1.48</ns0:cell><ns0:cell>80.5</ns0:cell></ns0:row><ns0:row><ns0:cell>AUTO</ns0:cell><ns0:cell>0.693</ns0:cell><ns0:cell>1.51</ns0:cell><ns0:cell>87.1</ns0:cell><ns0:cell>0.510</ns0:cell><ns0:cell>1.72</ns0:cell><ns0:cell>73.7</ns0:cell><ns0:cell>0.628</ns0:cell><ns0:cell>1.59</ns0:cell><ns0:cell>82.0</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 9:</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Empathy code estimation performance by the fusion of the MaxEnt, Maximum Likelihood LM, and Lattice LM rescoring (for ORA-D and AUTO cases) methods.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='3'>SP</ns0:cell><ns0:cell cols='3'>RP</ns0:cell><ns0:cell cols='3'>All sessions</ns0:cell></ns0:row><ns0:row><ns0:cell>Cases</ns0:cell><ns0:cell>ρ</ns0:cell><ns0:cell>σ</ns0:cell><ns0:cell>Acc</ns0:cell><ns0:cell>ρ</ns0:cell><ns0:cell>σ</ns0:cell><ns0:cell>Acc</ns0:cell><ns0:cell>ρ</ns0:cell><ns0:cell>σ</ns0:cell><ns0:cell>Acc</ns0:cell></ns0:row><ns0:row><ns0:cell>ORA-T</ns0:cell><ns0:cell>0.758</ns0:cell><ns0:cell>1.24</ns0:cell><ns0:cell>90.3</ns0:cell><ns0:cell>0.667</ns0:cell><ns0:cell>1.45</ns0:cell><ns0:cell>79.0</ns0:cell><ns0:cell>0.721</ns0:cell><ns0:cell>1.32</ns0:cell><ns0:cell>86.0</ns0:cell></ns0:row><ns0:row><ns0:cell>ORA-D</ns0:cell><ns0:cell>0.717</ns0:cell><ns0:cell>1.33</ns0:cell><ns0:cell>87.9</ns0:cell><ns0:cell>0.674</ns0:cell><ns0:cell>1.46</ns0:cell><ns0:cell>86.8</ns0:cell><ns0:cell>0.695</ns0:cell><ns0:cell>1.38</ns0:cell><ns0:cell>87.5</ns0:cell></ns0:row><ns0:row><ns0:cell>AUTO</ns0:cell><ns0:cell>0.702</ns0:cell><ns0:cell>1.43</ns0:cell><ns0:cell>87.1</ns0:cell><ns0:cell>0.534</ns0:cell><ns0:cell>1.67</ns0:cell><ns0:cell>71.1</ns0:cell><ns0:cell>0.643</ns0:cell><ns0:cell>1.53</ns0:cell><ns0:cell>81.0</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 10 :</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Count of human coder disagreement on high vs. low empathy coding</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Coders</ns0:cell><ns0:cell>I</ns0:cell><ns0:cell>II</ns0:cell><ns0:cell>III</ns0:cell><ns0:cell>Total</ns0:cell></ns0:row><ns0:row><ns0:cell>Annotated sessions</ns0:cell><ns0:cell>43</ns0:cell><ns0:cell>47</ns0:cell><ns0:cell>34</ns0:cell><ns0:cell>124 = 62 × 2</ns0:cell></ns0:row><ns0:row><ns0:cell>Disagreement</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>12</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>Agreement Ratio (%) 90.7 93.6 85.3</ns0:cell><ns0:cell>90.3</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head /><ns0:label /><ns0:figDesc>Table 9). In the future, we would like to investigate if computational methods can match human accuracy. Moreover, the</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 11 :</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Bigrams associated with high and low empathy behaviors</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>High empathy</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>Low empathy</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>sounds like</ns0:cell><ns0:cell>it sounds</ns0:cell><ns0:cell>kind of</ns0:cell><ns0:cell>okay so</ns0:cell><ns0:cell>do you</ns0:cell><ns0:cell>in the</ns0:cell></ns0:row><ns0:row><ns0:cell>that you</ns0:cell><ns0:cell>p s</ns0:cell><ns0:cell>you were</ns0:cell><ns0:cell>have to</ns0:cell><ns0:cell>your children</ns0:cell><ns0:cell>have you</ns0:cell></ns0:row><ns0:row><ns0:cell>i think</ns0:cell><ns0:cell>you think</ns0:cell><ns0:cell>you know</ns0:cell><ns0:cell>some of</ns0:cell><ns0:cell>in your</ns0:cell><ns0:cell>would you</ns0:cell></ns0:row><ns0:row><ns0:cell>so you</ns0:cell><ns0:cell>a lot</ns0:cell><ns0:cell>want to</ns0:cell><ns0:cell>at the</ns0:cell><ns0:cell>let me</ns0:cell><ns0:cell>give you</ns0:cell></ns0:row><ns0:row><ns0:cell>to do</ns0:cell><ns0:cell>sort of</ns0:cell><ns0:cell>you've been</ns0:cell><ns0:cell>you need</ns0:cell><ns0:cell>during the</ns0:cell><ns0:cell>would be</ns0:cell></ns0:row><ns0:row><ns0:cell>yeah and</ns0:cell><ns0:cell>talk about</ns0:cell><ns0:cell>if you</ns0:cell><ns0:cell>in a</ns0:cell><ns0:cell>part of</ns0:cell><ns0:cell>you ever</ns0:cell></ns0:row><ns0:row><ns0:cell>it was</ns0:cell><ns0:cell>i'm hearing</ns0:cell><ns0:cell>look at</ns0:cell><ns0:cell>have a</ns0:cell><ns0:cell>you to</ns0:cell><ns0:cell>take care</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 12:</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Trigrams associated with high and low empathy behaviors. We analyze the discriminative power of N-grams to provide some intuition on what the model captures regarding empathy. We train LM$'_H$ and LM$'_L$ similarly to Sec. 3.2 on the CTT corpus. Let us denote n-gram terms as $w$, the log-likelihood derived from LM</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>High empathy</ns0:cell><ns0:cell cols='2'>Low empathy</ns0:cell></ns0:row><ns0:row><ns0:cell>it sounds like</ns0:cell><ns0:cell>a lot of</ns0:cell><ns0:cell>during the past</ns0:cell><ns0:cell>please answer the</ns0:cell></ns0:row><ns0:row><ns0:cell>do you think</ns0:cell><ns0:cell>you think about</ns0:cell><ns0:cell>using card a</ns0:cell><ns0:cell>you need to</ns0:cell></ns0:row><ns0:row><ns0:cell>you think you</ns0:cell><ns0:cell>you think that</ns0:cell><ns0:cell>past twelve months</ns0:cell><ns0:cell>clean and sober</ns0:cell></ns0:row><ns0:row><ns0:cell>sounds like you</ns0:cell><ns0:cell>a little bit</ns0:cell><ns0:cell>do you have</ns0:cell><ns0:cell>have you ever</ns0:cell></ns0:row><ns0:row><ns0:cell>that sounds like</ns0:cell><ns0:cell>brought you here</ns0:cell><ns0:cell>some of the</ns0:cell><ns0:cell>to help you</ns0:cell></ns0:row><ns0:row><ns0:cell>sounds like it's</ns0:cell><ns0:cell>sounds like you're</ns0:cell><ns0:cell>little bit about</ns0:cell><ns0:cell>mm hmm so</ns0:cell></ns0:row><ns0:row><ns0:cell>p s is</ns0:cell><ns0:cell>you've got a</ns0:cell><ns0:cell>the past ninety</ns0:cell><ns0:cell>in your life</ns0:cell></ns0:row><ns0:row><ns0:cell>what i'm hearing</ns0:cell><ns0:cell>and i think</ns0:cell><ns0:cell>first of all</ns0:cell><ns0:cell>next questions using</ns0:cell></ns0:row><ns0:row><ns0:cell>one of the</ns0:cell><ns0:cell>if you were</ns0:cell><ns0:cell>you know what</ns0:cell><ns0:cell>you have to</ns0:cell></ns0:row><ns0:row><ns0:cell>so you feel</ns0:cell><ns0:cell>it would be</ns0:cell><ns0:cell>the past twelve</ns0:cell><ns0:cell>school or training</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_13'><ns0:head /><ns0:label /><ns0:figDesc>Table 11 and Table 12 show the bigrams and trigrams with extreme $\delta$ values, i.e., phrases strongly indicating high/low empathy. We see that high-empathy phrases often express reflective listening to the patient, while low-empathy phrases</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "Response to reviewer's comments
Reviewer 1
1. The authors mention that they propose three quantification models, and then propose to fuse the results of
these models. It would be helpful to briefly summarize why the authors chose this approach. Do these models
have characteristics that lead one to suggest a priori that this is the best approach to take? Have others employed
them for similar tasks? The authors explain this a bit later, but only in the context of a detailed description of the
methods.
RESPONSE: Thanks for the valuable suggestion. Previous work has used Maximum Likelihood language models
for human behavior modeling, which we adopt in this work. In addition, multiple diverse modeling approaches may
provide complementary information, contributing together to a more accurate system level fusion. Therefore we
have also employed the Maximum Entropy model for its good performance in solving natural language processing
problems, and the lattice rescoring method to better process the noisy output of ASR. We have added a
paragraph summarizing the motivation of choosing the approaches at the beginning of Section 3.
2. Also the correspondence between the narrative and the diagram in Figure 1 could be made clearer, since the
diagram says “N-gram language model” and the narrative says “Maximum likelihood model training with human
generated transcripts”. I gather these refer to the same thing.
RESPONSE: Thanks for pointing this out. We have edited Figure 1 to be consistent with the narrative. We have
further checked the whole manuscript for the consistency on this matter.
3. I would like to see some clarification of the following points:
How does the diarization algorithm perform in the case where speakers overlap or interrupt each other? The
authors note that this is an infrequent occurrence.
RESPONSE: Recordings of psychotherapy sessions used in this work are only available in far-field, single
microphone format. Overlapped speech detection with a single microphone is still an unsolved problem in
research. Time boundaries of overlapped regions are not available from the manual transcription of the recordings
either. Therefore it is beyond the scope of this work to detect overlapped speech or even evaluate the outcome of
such detection. Nevertheless, due to the relatively low ratio of such events, the system is still able to derive useful
output. In the Conclusion section, we have noted the need to improve the diarization module to consider
overlapped speech detection, as well as to collect data using close-talking microphones to address the problem of
diarization for overlapped speech.
4. Was it necessary to train an acoustic model specially for this purpose? How much performance improvement
does this offer over off-the-shelf acoustic models?
RESPONSE: We decided to train the ASR using speech recordings from in-domain data corpora that were
collected in real counseling settings. These data may best match the acoustic conditions (possibly noisy and
heterogeneous) in the addiction counseling scenario. We have added a note in Section 2.3 regarding this point.
5. In the final manuscript, make sure that all acronyms (e.g., MFCCs, OOV, SRILM) are explained or are obvious
in context.
RESPONSE: We have checked the manuscript for the use of acronyms and made a few edits. Here, SRILM is the
name of the toolkit and does not stand for a phrase.
6. Typos:
p. 3: “clinical trails”
p. 5: “experiment results” > “experimental results”
p. 9: “highly biased result” > “highly biased results”
p. 11: “while drops” > “while it drops”
p. 13: “pathes” > “paths”
p. 14: “a SP” > “an SP”
RESPONSE: Thanks for pointing out these typos. We have corrected these typos in the manuscript.
Reviewer 2
1. The paper is readable but the English has to be improved. In particular using the first person style “We......”
should be removed. There are also frequent grammatical errors, too many to list. The whole style in which the
paper is written should be revised. Please improve English.
RESPONSE: We have carefully edited the entire manuscript, constrained the use of first person style sentences,
and corrected several grammatical errors to improve the writing quality.
2. From the engineering point of view, the paper does not introduce any novel technologies. Existing methods
were put together for a specific application. However, this particular application (automatic rating) could be of
interest to many readers.
RESPONSE: We agree with the reviewer that the system relies on a number of well-known speech and language
processing techniques. We plan to continue working on the various modules and to develop novel signal
processing approaches to further improve the performance. Beyond the individual modules, the automatic system
addressing therapist empathy modeling is a non-trivial integration of various techniques to tackle a complex
problem in a real application. As the reviewer mentioned, we think the proposed system may contribute to the
application of automatic rating of therapists, and be of interest to readers in both the engineering and psychology
fields.
Reviewer 3
1. To further improve clarity of the article, I feel that the authors need enrich most of captions in both figures and
tables. For example, Figure 2 is quite complicate and its existing caption is too brief to properly guide the readers
to fully understand this figure.
RESPONSE: Thanks for the suggestion. We have elaborated the captions of most figures and tables in the
manuscript. For example, in Figure 2 we have added a more detailed explanation of each part in the plot as an
extended caption.
2. Around the line 60, the authors mentioned that they already “quantified prosodic features of the therapist and
patient, and … “. I am wondering why the authors only limited their experiments on analyzing ASR outputs rather
than considering prosodic cues (could be in a very simple format) given their previous research findings.
RESPONSE: We agree with the reviewer that prosodic cues are useful information in modeling empathy. Our
previous work analyzed prosodic cues based on manual annotation of speakers and time marks. Since the
automatic speaker diarization is not fully robust, the extracted prosodic cues may not be aligned with the true
speakers, so their effectiveness may be limited. Since the main focus of this work is the automatic system, we
decided to leave prosodic cues for future integration. This may require updating the VAD and diarization modules
in the system, or using close-talking microphones in data collection. In the Conclusion section, we have
elaborated the discussion of future work regarding the incorporation of prosodic cues.
3. Regarding 2.4 speaker role matching, I feel that many possible useful cues could be used beyond the current
LM only approach. For example, it is possible that a therapist always initializes the dialog and he or she tends to
use shorter time compared to a patient during the entire dialog.
RESPONSE: Thanks for the insights. We have considered multiple cues for role matching in the development.
However, intuition is not always correct in practice. For example, it may be true that the therapist always initiates
the interaction, but often they could actually turn on the recording after their opening sentence, or they could have
turned it on before the session, and the patient could be the first to greet the therapist in the recording. Likewise,
therapists are supposed to speak less and invite speech from the patient, but there are cases where the therapist
is less competitive, or the patient is not cooperative. Diarization errors may also limit the reliability of speaking
time estimates. Therefore, in this work we have developed an LM-based approach that relies on minimal
assumptions about the data collection procedure. We have clarified this in Section 2.4.
In addition, we found a typo in the experimental result of role matching in Section 6.2. The actual ratio of
successfully matched sessions is 78% instead of 75.5%. This was due to reporting an older version of the result.
We have double-checked that the rest of the results are reported faithfully.
4. Regarding section 3, the motivations of using both n-gram LM and MaxEnt model were not clearly introduced.
Reader need know why these methods were considered to be useful.
RESPONSE: This point is also raised by Reviewer 1. We have added a paragraph regarding the motivation to
use multiple methods at the beginning of Section 3.
5. Regarding Table 4, on which levels, the VAD and diarization, were evaluated. This seems not very clear from
reading the paper.
RESPONSE: Thanks for pointing this out. We have clarified in the caption of Table 4 that the results are averages
of session-wise performances.
6. Around the line 415, session level ASR WER has a large variation range (from about 20% to 90%). Since the
article focused on predicting empathy based on lexical cues, the accuracy of ASR impacts the prediction accuracy
very much. In this sense, I am wondering whether the authors should focus their study on the sessions with good
enough ASR WER. It is hard to convince the readers that ASR result with a WER about 0.4 or 0.5 still could be
processed by the proposed empathy prediction method.
RESPONSE: We agree with the reviewer that ASR performance is critical to the system. However, it is not a hard
barrier for empathy analysis. ASR errors are likely independent of whether empathy is high or low. Noise in the
automatic transcripts is probably not biased towards higher or lower empathy in general. Even though the domain
information in the observation may be attenuated, the polarity of high and low empathy remains unchanged.
Nevertheless, this does not mean that ASR errors have no effect on the system. We found
that the dynamic range of the predicted empathy scores is smaller and more centered in the ORA-D and AUTO
cases, showing a reduced discriminative power. We have added the above discussion in Section 6.3.
7. One issue I spotted is from Table 9. For RP, the Acc in the ORA-T case was worse than in the ORA-D case
(79.0% < 86.8%). This seems to be against the intuition. Why could the result from transcriptions (WER in such
case is close to 0.0) be worse than the result from an ASR output with an averaged WER of about 0.4?
RESPONSE: There are a couple of clarifications for this seemingly counter-intuitive result. Firstly, due to the small
sample size of RP sessions, the difference in prediction accuracies is not statistically significant at the 5% level.
This means the comparison is likely to be influenced by random effects. More importantly, as discussed in the
previous item, though noisy text in the ORA-D case attenuates the representation of empathy, such an effect is
less critical for binary classification, since it only concerns the polarity of high vs. low empathy rather than the
actual degree. In Table 9, ORA-D has a slightly higher RMSE than ORA-T. This shows at least that the ORA-D
case does not exceed the ORA-T case in terms of actual empathy code estimation, lending some support to the
view that the noisy text decreases estimation accuracy in the ORA-D case. We have added the discussion on this
issue in Section 6.3.
" | Here is a paper. Please give your review comments after reading it. |
149 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Compi is an application framework to develop end-user, pipeline-based applications with a primary emphasis on: (i) user interface generation, by automatically generating a command-line interface based on the pipeline-specific parameter definitions; (ii) application packaging, with compi-dk, which is a version-control-friendly tool to package the pipeline application and its dependencies into a Docker image; and (iii) application distribution, provided through a public repository of Compi pipelines, named Compi Hub, which allows users to discover, browse and reuse them easily. By addressing these three aspects, Compi goes beyond traditional workflow engines, having been specially designed for researchers who want to take advantage of common workflow engine features (such as automatic job scheduling or logging, among others) while keeping the simplicity and readability of shell scripts without the need to learn a new programming language. Here we discuss the design of various pipelines developed with Compi to describe its main functionalities, as well as to highlight the similarities and differences with comparable tools that are available. An open-source distribution under the Apache 2.0 License is available from GitHub (https://github.com/sing-group/compi).</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Bioinformatics units routinely deal with massive data analyses, which require combining multiple sequential or parallel steps using specific software tools <ns0:ref type='bibr' target='#b15'>(Perkel, 2019)</ns0:ref>. Many of these computational pipelines are published regularly in the form of protocols, best practices or even fully runnable pipelines. They implement all the required steps and dependencies in order to ensure the reproducibility of the analyses and facilitate job automation <ns0:ref type='bibr' target='#b6'>(Grüning et al., 2018)</ns0:ref>. Thus, scientific computational pipelines must provide three key features: reproducibility, portability and scalability. Container technologies such as Docker or Singularity are the most widely used tools to ensure that pipelines run in stable environments (i.e. always using the exact same versions of pipeline dependencies) and make it easy to run the pipeline on multiple hardware platforms (e.g. workstations or cloud infrastructures), enforcing reproducibility and portability. Moreover, scalable pipelines must support running on HPC (High Performance Computing) resources using cluster management and job scheduling systems such as Slurm or SGE. For the above reasons, a wide variety of workflow management systems have been released in recent years that address these issues in different ways. Tools with graphical user interfaces, such as Galaxy <ns0:ref type='bibr' target='#b0'>(Afgan et al., 2018)</ns0:ref>, are designed for scientists with little or no programming experience, although such tools can be difficult to set up and configure. Furthermore, command-line based applications such as Nextflow <ns0:ref type='bibr' target='#b1'>(Di Tommaso et al., 2017)</ns0:ref>, Snakemake <ns0:ref type='bibr' target='#b7'>(Köster & Rahmann, 2012)</ns0:ref>, or SciPipe <ns0:ref type='bibr' target='#b8'>(Lampa et al., 2019)</ns0:ref>, provide feature-rich workflow engines oriented to bioinformaticians with medium-to-high programming skills. The Common Workflow Language (CWL; https://www.commonwl.org/) definition represents another alternative, since it defines a specification and offers a reference implementation, but does not provide a complete framework <ns0:ref type='bibr' target='#b9'>(Leipzig, 2017)</ns0:ref>. Other frameworks, like Galaxy or Taverna, have made significant progress in supporting the execution of workflows defined in CWL, and other tools allow exporting their workflows to CWL (e.g. Snakemake) or importing them from CWL. Despite the existence of such remarkable workflow management systems, scientists with basic scripting skills (e.g. able to create shell scripts invoking command-line tools) but lacking advanced programming skills (e.g. knowledge of programming languages such as Python or Go) are usually overwhelmed by the high complexity of these systems and may be discouraged from using them or creating their own workflows. In this sense, tools such as Bpipe <ns0:ref type='bibr' target='#b16'>(Sadedin, Pope & Oshlack, 2012)</ns0:ref> help to assemble shell scripts into workflows to aid in job automation, logging and reproducibility.</ns0:p><ns0:p>Compi is specially designed for researchers who want to take advantage of common workflow engine features (e.g. automatic job scheduling, restart from points of failure, etc.) while maintaining the simplicity and readability of shell scripts, without the need to learn a new programming language.
Nevertheless, Compi also incorporates several features that meet the needs of the most advanced users, such as support for multiple programming languages or advanced management and control of workflow execution. In this sense, Compi is more than a workflow engine: it was created as an application framework for developing pipeline-based end-user applications by providing: (i) automatic user interface generation, which produces a classical command-line interface (CLI) for the entire pipeline based on its parameter specifications; (ii) application packaging, which provides a version-control-friendly mechanism to package the pipeline application and its dependencies into a Docker image; and (iii) application distribution, supported by Compi Hub <ns0:ref type='bibr' target='#b14'>(Nogueira-Rodríguez et al., 2021)</ns0:ref>, a public repository where researchers can easily and freely publish their Compi pipelines and related documentation, making them available to other researchers. Compi has been adopted by our research group to create pipelines for multiple research projects where other systems were not suitable (e.g. the creation of complex pipelines for phylogenomics or the training of deep learning models for image classification). Likewise, pipelines developed with Compi in collaboration with other research groups have already been published. These are the cases of Metatax, a pipeline to analyze biological samples based on 16S rRNA gene sequencing <ns0:ref type='bibr'>(Graña-Castro et al., 2020a,b)</ns0:ref>; FastScreen, a pipeline for inferring positive selection in large viral datasets <ns0:ref type='bibr' target='#b11'>(López-Fernández et al., 2020)</ns0:ref>; and GenomeFastScreen, an extension of the FastScreen pipeline <ns0:ref type='bibr'>(López-Fernández et al., 2021)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head><ns0:p>This section describes the most relevant technical details regarding the implementation of Compi. An open-source distribution under the Apache 2.0 License is available from GitHub (https://github.com/sing-group/compi). Documentation and installers are available from https://www.sing-group.org/compi.</ns0:p></ns0:div>
<ns0:div><ns0:head>XML for pipeline definition</ns0:head><ns0:p>Compi pipelines are defined in a single XML (eXtensible Markup Language) file that includes (i) tasks and the dependencies between them, (ii) pipeline input parameters that are forwarded to tasks, and (iii) metadata, including parameter and task descriptions, which are useful for the automatic generation of the user interface, as well as for pipeline documentation. We have chosen XML instead of JSON, YAML, or a custom DSL (Domain Specific Language) in order to reconcile several requirements simultaneously. First, we wanted a high degree of interoperability. XML, JSON, and YAML can be easily generated and parsed in almost any programming language, although YAML can pose portability issues in some cases, as YAML parsers in different languages can produce different results. In contrast, a DSL <ns0:ref type='bibr' target='#b1'>(Di Tommaso et al., 2017)</ns0:ref> is less interoperable, being difficult to produce or consume from languages other than the one on which the DSL is based. Second, XML is appropriate for dealing with long chunks of text, such as the embedded source code of pipeline tasks. Since Compi is language agnostic, these tasks can be defined in any programming language. Thanks to XML CDATA (character data) blocks, which allow declaring a section of the XML that should not be parsed as XML, it is possible to include source code without any alteration. Embedding task code in JSON is virtually infeasible, as tabs and line breaks, which are key characters in languages like Python, must be escaped in JSON files. In YAML, however, it is easier thanks to its multiline literals. In DSLs it depends on whether the programming language the DSL is based on allows multiline strings to be declared easily, as Python does. Third, XML is easy to validate syntactically and semantically through schemas, which are also available for JSON and YAML; DSLs take advantage of the parser of the language the DSL is based on. Fourth comes security: since Compi is designed to run pipelines defined by third-party programmers, YAML is less secure in this regard, as runnable code could be embedded in fields that were not intended for this purpose (see https://www.arp242.net/yaml-config.html). Fifth comes readability: YAML is the clear winner in this regard, as readability is the key feature of this format. Although XML is less readable than YAML, the comparison with JSON and DSLs is a more subjective matter. Based on these five requirements, the choice was between XML and YAML: while XML is more secure and portable, YAML is more readable. Finally, we selected XML, prioritizing its security, portability and popularity. Also, while writing XML files can be verbose, YAML is syntactically aware of whitespace, which is a welcome feature in the Python community but still debated outside of it.</ns0:p></ns0:div>
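To make the CDATA point concrete, the following is a minimal sketch of a task embedding unescaped Bash code; the task id, parameter and script are illustrative and not taken from an actual pipeline:

```xml
<task id="count-lines" params="input">
  <![CDATA[
    # Everything inside a CDATA block is passed through verbatim:
    # quotes, tabs, line breaks and characters such as < or & need
    # no XML escaping.
    if [ -f "$input" ]; then
      wc -l < "$input"
    fi
  ]]>
</task>
```

The same script embedded in JSON would require escaping every line break and quote, which is precisely the drawback discussed above.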
<ns0:div><ns0:head>The workflow execution engine</ns0:head><ns0:p>The workflow execution engine of Compi is implemented purely in Java and is responsible for multi-thread task scheduling, monitoring, and standard error and output logging of tasks. First, it computes the DAG (Directed Acyclic Graph) of the task dependencies, since a task may depend on a set of tasks that must be run before the given task. Right after starting, or whenever a task finishes its execution, the engine reacts and all the tasks that can now be run, i.e. the tasks whose dependencies are complete, are sent to a worker thread pool that has a parameterizable size. When there are no more tasks to run, the pipeline execution ends. Every time a task is about to run and there is a free thread in the pool, a new subprocess is spawned by invoking the system Bash interpreter to execute the task script. Pipeline parameters are passed to the task script through environment variables, which is a robust, standard mechanism with two main benefits. On the one hand, environment variables are easily accessed via '$variable_name' or more complex expressions, so pipeline parameters are available directly within the task script. On the other hand, this allows Compi to pass parameters to scripts written in languages other than Bash, because virtually all programming languages give access to environment variables. Languages other than Bash are executed through task interpreters. Any task can have a task interpreter defined in the pipeline specification, which is an intermediate, user-defined Bash script intended to take the task script as input and call an interpreter of a different programming language. Moreover, it is possible to define task runners, which are the same concept as interpreters, but with a different purpose. They are not defined within the pipeline specification, but rather at execution time, and are intended to tailor task execution to specific computing resources, without modifying the pipeline itself. In this way, the workflow definition is decoupled from the workflow execution, making it possible to change the way tasks are executed without modifying the workflow XML file. For instance, if the tasks must be run in a cluster environment such as SGE or Slurm, computations must be initiated via a submission command (qsub in SGE or srun in Slurm). Task runners intercept task execution by replacing the default Bash invocation with queue submissions.</ns0:p></ns0:div>
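As an illustration of this decoupling, a runner that diverts selected tasks to an SGE queue could look roughly like the sketch below. Note that this is only a sketch: the runners file schema, the 'tasks' attribute and the $task_code variable are assumptions made for illustration (the exact syntax is defined in the Compi documentation), and the qsub options would need tuning for a concrete cluster:

```xml
<runners>
  <!-- Hypothetical runner: instead of executing the listed tasks with the
       local Bash interpreter, submit them to SGE and wait for completion -->
  <runner tasks="align-reads,call-variants">
    qsub -sync y -cwd -N compi-task "$task_code"
  </runner>
</runners>
```

Tasks not matched by any runner keep the default local Bash execution, so the same pipeline XML runs unchanged on a workstation or on a cluster.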
<ns0:div><ns0:head>Compi project architecture</ns0:head><ns0:p>Compi comprises three main modules. The most important is the core module, which contains the workflow execution engine, with its main data structures. On top of it, there are two additional modules. On the one hand, the cli (command-line interface) module contains the command-line user interface for running pipelines, which generates a specific pipeline application tailored for pipeline tasks and parameters. On the other hand, the dk (Development Kit) module allows to create a portable application in the form of a Docker image and publishing the pipelines at Compi Hub.</ns0:p></ns0:div>
<ns0:div><ns0:head>Compi Hub</ns0:head><ns0:p>As discussed above, Compi Hub is a public repository where Compi pipelines can be published. The Compi Hub front-end was implemented using the Angular v7 web application framework, while the back-end was implemented using TypeScript and offers a RESTful API that supports all the functionality of the front-end. This REST API is also used by the compi-dk tool to allow pipeline developers to publish their pipelines from the command line. The Compi Hub back-end runs in a Node.js server and uses a MongoDB database to store the data.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>Compi can be seen as an ecosystem comprising: (i) compi, the workflow engine with a command-line user interface to control the pipeline execution, (ii) compi-dk, a command-line tool to assist in the development and packaging of Compi-based applications, and (iii) Compi Hub, a public repository of Compi pipelines that allows users to easily discover, navigate and reuse them <ns0:ref type='bibr' target='#b14'>(Nogueira-Rodríguez et al., 2021)</ns0:ref>. Figure <ns0:ref type='figure'>1</ns0:ref> illustrates the development and deployment lifecycle of Compi-based applications. A Compi-based application is a CLI application in which a user can run a pipeline in whole or in part by providing only the parameters of each task. This CLI displays all parameter names and descriptions following standard CLI conventions to help users use the pipeline application more easily. Compi helps pipeline developers by automatically determining the parameters required by a pipeline. This implies, on the one hand, that comprehensive help on how to use the application is generated for the user and, on the other hand, that the application parameters are automatically parsed when the pipeline is executed. This is complemented by several built-in features added to the CLI application, such as advanced task execution control, which allows running only specific tasks of a pipeline, or per-task log management support. Therefore, building a CLI application from a Compi pipeline is a straightforward process that allows Compi users to focus on pipeline development. The 'Pipeline design' section explains how Compi users can design their pipelines. Once the pipeline is developed, it can take two complementary paths: it can be executed directly with compi (top right of Figure <ns0:ref type='figure'>1</ns0:ref>) or it can be packaged as a portable end-user CLI application in a Docker image using compi-dk (application packaging area in Figure <ns0:ref type='figure'>1</ns0:ref>), which can then be executed via Docker. Pipeline execution and dependency management are explained in the 'Pipeline execution and dependency management' section, while the application packaging in Docker images is explained in the 'Reproducible Application Packaging with Docker' section. When an end-user pipeline application is ready to be published, Compi provides pipeline developers with a distribution platform called Compi Hub, where they can share different versions of their pipelines along with usage documentation, example datasets, parameter values, as well as links of interest (e.g. Docker Hub, GitHub). Community users can then browse Compi Hub for pipelines and explore the helpful documentation generated by the authors of each pipeline, as well as other information automatically generated by the platform, such as an interactive DAG representation of the pipeline. This topic is covered in the 'Pipeline distribution via Compi Hub' section. Finally, it is important to note that compi-dk projects are specifically designed to be compatible with version control systems, as the required configuration and pipeline files are text files. Dependency management via a Dockerfile is key to achieving this, complemented by the Compi-specific dependency management performed by compi-dk (e.g. the compi executable added to the Docker image of the pipeline). Therefore, a compi-dk project only requires that compi, compi-dk and Docker are installed in order to be built or run.
With this setup, pipeline developers can use version control systems to keep their pipelines safe and to version their code.</ns0:p></ns0:div>
<ns0:div><ns0:head>Pipeline design</ns0:head><ns0:p>As explained previously, Compi pipelines are defined in an XML document that includes user parameters, tasks, and metadata. Figure <ns0:ref type='figure'>2</ns0:ref> shows an example of a minimal pipeline. This sample pipeline defines two parameters ('name' and 'output'), described in the corresponding 'params' section, and two tasks ('greetings' and 'bye'), described in the 'tasks' section. The parameter descriptions are used to automatically generate both the user interface and the documentation of the pipeline. In the same way and for the same purpose, tasks are described in the 'metadata' section, which allows the pipeline developer to describe them in a human-readable manner. Tasks are defined as 'task' elements within the 'tasks' section. The main components of a task are its source code, its parameters, and its dependencies. The source code is placed inside the 'task' element, the parameters used by the task are listed in its 'params' attribute, and the tasks that the current task depends on are listed in its 'after' attribute. For instance, in Figure <ns0:ref type='figure'>2</ns0:ref>, the 'bye' task uses the 'name' and 'output' parameters and depends on the 'greetings' task. Parameter values are passed to the tasks as environment variables, a standard mechanism that any programming language can access. In this way, the Compi workflow execution engine does not need to process the task code in any way (e.g. to perform variable substitution), guaranteeing that any programming language can be used to define the task code. Task code is written in Bash by default, although other scripting languages (e.g. Python, R, AWK, etc.) can be used under a suitable interpreter through an 'interpreter' attribute. For example, in Figure <ns0:ref type='figure'>2</ns0:ref>, the 'bye' task is written in Perl and the 'interpreter' attribute indicates how to invoke the Perl interpreter to run the task code. A special type of task is the parallel iterative task (or loop task), defined via 'foreach' elements, which spawns multiple parallel processes that run the same code on a collection of items. This collection is provided by a user-specified source of items, which can be a comma-separated list of values, a range of numbers, the files in a specific directory, a parameter whose value is a comma-separated list, or even a custom command whose output lines are taken as items. Each item is available to the task code as an environment variable. When all spawned processes have finished, the foreach task ends as well and subsequent dependent tasks can run. Nevertheless, there are scenarios in which a pipeline developer defines multiple consecutive foreach tasks intended to iterate over the same collection of independent items. In these scenarios, when one iteration of a loop has finished, the corresponding iteration of the next loop could start without waiting for the entire previous loop to finish. Compi supports this type of interaction between foreach tasks by simply adding the '*' prefix to the task name when declaring the dependency in the 'after' attribute. For example, in a scenario where a pipeline must process a set of samples (e.g. 'case-1', 'case-2', 'control-1', 'control-2') by performing two consecutive operations, preprocess and analyze, the dependency of the 'analyze' task on the 'preprocess' task can be prefixed with '*' (i.e. after='*preprocess') to tell Compi that iterations of the 'analyze' loop can start as soon as the corresponding iterations of the 'preprocess' loop have finished. Figure <ns0:ref type='figure'>3</ns0:ref> shows an example of this, where two foreach tasks, 'preprocess' and 'analyze', iterate over the same set of items ('samples'). The 'analyze' task depends on the 'preprocess' task, but at an iteration level (after='*preprocess'). Without the '*' prefix, the 'analyze' task would only start when the whole 'preprocess' task has finished.</ns0:p></ns0:div>
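To make this description concrete, the following is a minimal sketch of a pipeline document using the element and attribute names described above ('params', 'tasks', 'task', 'after'). The root element name, the 'id' attribute, and the exact 'param' syntax are assumptions on our part; the examples in Figures 2 and 3 remain the authoritative reference.

  <pipeline>
    <params>
      <param name="name" shortName="n">Name to greet</param>
      <param name="output" shortName="o">Path of the output file</param>
    </params>
    <tasks>
      <!-- task code runs as Bash by default; parameters arrive as environment variables -->
      <task id="greetings" params="name output">
        echo "Hello, ${name}!" > "${output}"
      </task>
      <task id="bye" after="greetings" params="name output">
        echo "Bye, ${name}!" >> "${output}"
      </task>
    </tasks>
  </pipeline>

An iteration-level dependency between two foreach loops would be declared analogously, e.g. a foreach task with after='*preprocess'; the item-source attributes are omitted here because their names are not given in the text.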
<ns0:div><ns0:head>Pipeline execution and dependency management</ns0:head><ns0:p>One of the main features of Compi is the creation of a classic CLI for the entire pipeline, based on its parameter specifications, to help users run the pipeline. This CLI is displayed when a user executes 'compi run -p pipeline.xml --help', and it describes the Compi execution parameters as well as the specification of each pipeline task. Supplementary File 1 shows this CLI for the RNA-Seq Compi pipeline (https://www.sing-group.org/compihub/explore/5d09fb2a1713f3002fde86e2). As Figure <ns0:ref type='figure'>4</ns0:ref> illustrates, the execution of Compi pipelines can be controlled using multiple parameters that fall into three main categories: pipeline inputs (i.e. the pipeline definition and its input parameters), logging, and execution control. In Figure <ns0:ref type='figure'>4A</ns0:ref>, the 'compi run' command receives the pipeline definition file explicitly, while the '--params' option indicates that the input parameters must be read from the 'compi.params' file. In contrast, in Figure <ns0:ref type='figure'>4B</ns0:ref> the pipeline definition file is omitted and compi assumes that the pipeline definition must be read from a file named 'pipeline.xml' located in the current working directory. In this case, pipeline parameters are passed on the command line after all the 'compi run' parameters, separated by the '--' delimiter. Regarding logging options, both examples include the '--logs' option to specify a directory where the standard (stdout) and error (stderr) outputs of each task are saved, along with the specific parameter values used in each execution. They are saved in three different files named with the corresponding task name as prefix (e.g. 'task-name.out.log', 'task-name.err.log', and 'task-name.params'). To avoid unnecessary file creation, Compi does not save task outputs unless the '--logs' option is used. However, it is possible to specify which tasks should be logged using the '--log-only-task' or '--no-log-task' parameters. In addition to the task-specific logs, Compi displays its own log messages during pipeline execution. These messages can be disabled by including '--quiet', as shown in Figure <ns0:ref type='figure'>4B</ns0:ref>. In contrast, in the example given in Figure <ns0:ref type='figure'>4A</ns0:ref>, the '--show-std-outs' option forces Compi to forward each task log to the corresponding Compi output, which is very useful for debugging purposes during pipeline development. The third group of options allows controlling the execution of the pipeline. For instance, the '--num-tasks' parameter used in Figure <ns0:ref type='figure'>4A</ns0:ref> sets the maximum number of tasks that can run in parallel. It is important to note that this is not necessarily equivalent to the number of threads the pipeline will use, as some tasks may spawn parallel processes themselves. The '--abort-if-warnings' option, also used in the example in Figure <ns0:ref type='figure'>4A</ns0:ref>, tells Compi to abort the pipeline execution if there are warnings in the pipeline validation. This is a useful and recommended option for pipeline testing during development, as it avoids the undesired effects that may arise from ignoring such warnings.</ns0:p>
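The following shell sketch summarizes these options; file paths and task names are placeholders, while all flags are taken from the description above:

  # Show the auto-generated CLI, including per-task help
  compi run -p pipeline.xml --help

  # Run the pipeline reading input parameters from a file, with per-task logs
  compi run -p pipeline.xml --params compi.params --logs ./logs \
    --num-tasks 4 --abort-if-warnings

  # Run a sub-pipeline; pipeline parameters follow the '--' delimiter
  compi run --from task-1 --until task-10 --quiet -- --name World --output out.txt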
<ns0:p>A typical scenario that causes such a warning is when the name of a pipeline parameter is found inside the task code, but the task does not have access to it because it is neither a global parameter nor defined in the set of task parameters. One of the most notable features of the Compi workflow execution engine is that it allows fine-grained control over the execution of the pipeline tasks. While many workflow engines only allow launching the entire pipeline or partially relaunching a pipeline from a point of failure (which can also be done in Compi using the 'resume' command), Compi also allows launching sub-pipelines using modifiers such as '--from', '--after', '--until' or '--before'. In the example shown in Figure <ns0:ref type='figure'>4B</ns0:ref>, a combination of the '--from' and '--until' modifiers is used, resulting in the execution of all tasks in the path between 'task-1' and 'task-10', including 'task-1' and 'task-10'. If '--after' and '--before' are used instead, then 'task-1' and 'task-10' are not executed. A fifth modifier is '--single-task', which executes only the specified task and is not compatible with the other four modifiers. The Compi documentation (https://www.sing-group.org/compi/docs/running_pipelines.html#examples) includes several examples that illustrate how each of these options works. Compi runs each task as a local command by default (Figure <ns0:ref type='figure' target='#fig_5'>5A</ns0:ref>). This means that if a task invokes a certain tool (e.g. a ClustalOmega alignment running 'clustalo -i /path/to/input.fasta -o /path/to/output.fasta'), this tool must be available either through the path environment variable or via the absolute path to its binary executable. Since dependency management is always cumbersome, a special effort has been made to offer developers and users several alternatives to deal with this problem, which are explained below. One way to address this issue is by means of a file with a custom XML runner definition, as done in the example shown in Figure <ns0:ref type='figure'>4A</ns0:ref> with '--runners-config pipeline-runners.xml'. Individual runners are defined through a 'runner' element within a runners file, where the 'task' attribute specifies the list of tasks that the runner must execute. In this way, when a task identifier is assigned to a runner, Compi asks the runner to run the corresponding task code (instead of running it as a local command). Using pipeline runners to handle dependencies allows Docker images to take responsibility for them (Figure <ns0:ref type='figure' target='#fig_5'>5D</ns0:ref>). For instance, Figure <ns0:ref type='figure'>6A</ns0:ref> shows a pipeline task named 'align' that uses a tool (defined by the pipeline parameter 'clustalomega') that receives a file as input and produces an output. The runner defined in Figure <ns0:ref type='figure'>6B</ns0:ref> for the same task runs the specified task code (available in the environment variable 'task_code') using a Docker image. The runner here is almost a generic Docker runner, and its key points are the following:</ns0:p><ns0:p>First, the creation of a variable ('$envs') with the list of parameters that must be passed as environment variables to the Docker container.</ns0:p><ns0:p>Second, running the Docker image with the list of environment variables and mounting the directory where the command has its input and output files ('workingDir' in this example).</ns0:p>
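A condensed sketch of such a runner is shown below; the image name is an assumption, and the one-liner that collects environment variable names is illustrative, so the runner in Figure 6B remains the reference:

  <runner task="align">
    # build '-e VAR' flags so the task parameters reach the container as environment variables
    envs=$(env | cut -f1 -d'=' | sed 's/^/-e /')
    # mount the working directory and run the task code inside the image
    docker run --rm $envs -v "$workingDir":"$workingDir" myorg/clustal-omega bash -c "$task_code"
  </runner>

Note that 'docker run -e VAR' without a value forwards the variable from the current environment, which is why listing only the names is sufficient.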
<ns0:p>Such a Docker runner makes it possible to follow an image-per-task execution pattern, where each task is executed using a different container image <ns0:ref type='bibr'>(Spjuth et al., 2018)</ns0:ref>. An example of this execution pattern can be found in the GenomeFastScreen pipeline (https://sing-group.org/compihub/explore/5e2eaacce1138700316488c1), although in this case the 'docker run' commands are included in each task rather than provided in a runners file, for the sake of simplicity. Following this image-per-task execution pattern, it is possible for a pipeline to use different versions of the same software, or two tools that require different versions of some dependencies. In addition, custom runners can also be used to submit pipeline tasks to a job scheduler such as SGE, Torque or SLURM on supercomputers or computer clusters (Figure <ns0:ref type='figure' target='#fig_5'>5B</ns0:ref>). For instance, Figure <ns0:ref type='figure'>7</ns0:ref> shows a generic Slurm runner. Some srun parameters may need to be adjusted for each specific cluster, and the '--export' parameter must be used to export all environment variables to the process to be executed, as the task parameters are declared as environment variables. In the same way, when using Compi runners, it is also possible to combine the containerized execution of Compi tasks with Docker in a clustered environment, using a container orchestration system such as Kubernetes (Figure <ns0:ref type='figure' target='#fig_5'>5E</ns0:ref>). Another way to achieve dependency management is to build a monolithic Docker image containing Compi, the pipeline itself and all its dependencies (Figure <ns0:ref type='figure' target='#fig_5'>5C</ns0:ref>). This topic is explained in the 'Reproducible Application Packaging with Docker' section. Finally, dependency management can also be delegated to external systems. For example, Compi allows the seamless use of Conda/Bioconda packages: each task can use them simply by activating and deactivating the corresponding Conda environments before and after executing the specific commands. The Metatax pipeline (https://www.sing-group.org/compihub/explore/5d807e5590f1ec002fc6dd83) illustrates this. For instance, Figure <ns0:ref type='figure'>8</ns0:ref> shows the execution of a script (whose name is defined by the 'validate_mapping_file' parameter, highlighted in bold) from the Qiime Bioconda package.</ns0:p></ns0:div>
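Two short sketches illustrate these remaining alternatives. First, a rough version of the generic Slurm runner of Figure 7; exact srun options vary per cluster, and the element and attribute names follow the runner description above:

  <runner task="preprocess, analyze">
    srun --export=ALL bash -c "$task_code"
  </runner>

Second, a sketch of the Conda-based approach of Figure 8; the environment name and the script options are assumptions, while the tool name is held in a pipeline parameter as described above:

  # inside a task's Bash code
  source activate qiime1
  $validate_mapping_file -m mapping.txt -o validation/
  source deactivate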
<ns0:div><ns0:head>Reproducible Application Packaging with Docker</ns0:head><ns0:p>Compi enables the creation of portable end-user CLI applications for pipelines that can be distributed as Docker images. As noted in the previous section, this is another way to deal with dependency management, as such Docker images contain all the dependencies required by the pipeline. Pipelines distributed in this way follow an image-per-pipeline execution pattern in which all tasks are executed using the same container image <ns0:ref type='bibr'>(Spjuth et al., 2018)</ns0:ref> (Figure <ns0:ref type='figure' target='#fig_5'>5C</ns0:ref>), and can even be run using Docker-compatible container technologies such as Singularity. The compi-dk command-line tool is provided to assist in the development and packaging of Compi-based applications into Docker images. Pipeline development starts with the creation of a new compi-dk project via the 'compi-dk new-project' command, which creates a project directory and initializes two template files: pipeline.xml and Dockerfile. After this, the definition of the pipeline can start by modifying the pipeline.xml template, followed by local testing (using the compi command). The pipeline can also be tested by building a Docker image (using compi-dk) and running the containerized pipeline. In the latter case, when the 'compi-dk build' command is executed on the project directory, a Docker image for the pipeline is created. This image contains the compi executable and the specific pipeline.xml file, along with the pipeline dependencies as defined in the Dockerfile. Figure <ns0:ref type='figure'>9</ns0:ref> shows the Dockerfile of the MINC Computer Vision pipeline for image classification based on Deep Learning (https://www.sing-group.org/compihub/explore/5d08a9e41713f3002fde86d5). The Dockerfile skeleton was automatically generated by compi-dk, and it was only necessary to add the 'RUN' commands that install the 'gluoncv' and 'gnuplot' dependencies. In this regard, it is important to note that special attention was paid to the ability to derive a pipeline application image from any pre-existing Docker image of interest (e.g. images with bioinformatics packages). For instance, the RNA-Seq Compi pipeline discussed above was created using the DEWE <ns0:ref type='bibr' target='#b10'>(López-Fernández et al., 2019)</ns0:ref> Docker image as a base image, and the MINC Computer Vision pipeline was created using one of the Apache MXNet Docker images as a base image. When working with a compi-dk project, it is also possible to create a Docker image for a pipeline that follows the image-per-task execution pattern. This is the case of the GenomeFastScreen pipeline (https://sing-group.org/compihub/explore/5e2eaacce1138700316488c1), in which, as explained above, a 'docker run' command is included within each task instead of managing the execution of Docker through an external runner file. In any case, most of the tasks within this pipeline run under external Docker images, thus following the image-per-task execution pattern. As the GenomeFastScreen pipeline itself is also distributed as a Docker image, it must be able to run Docker images as well (please refer to the pipeline documentation for details on how to do this).</ns0:p></ns0:div>
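A minimal sketch of this packaging workflow follows; the 'compi-dk new-project' and 'compi-dk build' commands are described above, while the image name and the final 'docker run' invocation are assumptions, since the entrypoint details of the generated image are not given here:

  compi-dk new-project               # creates the project directory with pipeline.xml and Dockerfile templates
  # ... edit pipeline.xml; add RUN lines for dependencies to the Dockerfile ...
  compi run -p pipeline.xml --help   # test the pipeline locally
  compi-dk build                     # build the Docker image on the project directory
  docker run --rm my-pipeline-image --help   # run the containerized pipeline (invocation assumed)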
<ns0:div><ns0:head>Pipeline distribution via Compi Hub</ns0:head><ns0:p>Once the development of a pipeline is completed, it can be released through Compi Hub to increase its visibility and to benefit from the Compi Hub features. Pipelines can be registered using the Compi Hub web interface (https://www.sing-group.org/compihub) or the 'compi-dk hub-push' command. Compi Hub can store several versions of a pipeline, each of them associated with the pipeline.xml file where that specific pipeline version is defined. In addition, since Compi Hub stores neither the full source code (e.g. scripts included in the pipeline.xml as source files) nor the Docker images themselves, pipeline publishers are encouraged to: (i) publish the source code (i.e. the compi-dk project) in public repositories such as GitHub or GitLab, to allow users to re-build the project locally at any time; and (ii) push the corresponding Docker image to the Docker Hub registry, so that users can pull the image and follow the instructions to run the pipeline application. The Compi Hub website lists publicly available pipelines and gives access to all of them. When a pipeline is selected, the main pipeline information is displayed, including its title, description and creation date, as well as links to external repositories on GitHub or Docker Hub. In addition, for each pipeline version, Compi Hub displays the following information:</ns0:p><ns0:p>Overview: this section is headed by the pipeline DAG, generated in the backend using the 'compi export-graph' command. Since it is an interactive graph, visitors can use it to navigate to each task description. Figure <ns0:ref type='figure'>10</ns0:ref> shows the Metatax pipeline DAG (https://www.sing-group.org/compihub/explore/5d807e5590f1ec002fc6dd83). The DAG is followed by two tables, one containing the pipeline tasks and their associated descriptions, and a second one containing the global parameters of the pipeline. Finally, this section encloses one table for each task with its description and specific parameters. It is important to note that all this information is automatically generated from the pipeline XML.</ns0:p><ns0:p>Readme: this section shows the content of the README.md file when it is present in the compi-dk project. This file should be used to provide a comprehensive description of the pipeline, as well as instructions on how to use it.</ns0:p><ns0:p>Dependencies: this section shows the content of the DEPENDENCIES.md file when it is present in the compi-dk project. We recommend that pipeline developers include this file with a human-readable description of the pipeline dependencies and the specific versions used to develop the pipeline.</ns0:p><ns0:p>License: this section shows the content of the LICENSE file when it is present in the compi-dk project.</ns0:p>
<ns0:p>We encourage pipeline developers to include this file in their compi-dk projects so that the terms of use of the pipeline are clear.</ns0:p><ns0:p>Dataset: this section contains a list of datasets that can be used to test the different versions of the pipeline. It is shown when a pipeline publisher associates a test dataset with the displayed pipeline. In addition to the instructions given in the Readme section, we also recommend that pipeline publishers provide test datasets to help users test the pipelines themselves.</ns0:p><ns0:p>Runners: this section displays a list of example runner configurations when they are present in the compi-dk project. Runner configurations must be stored as XML files within the 'runners-example' directory of the project.</ns0:p><ns0:p>Params: this section shows a list of example parameter configurations when they are present in the compi-dk project. Parameter configurations must be stored as plain-text key-value files in the 'params-example' directory of the project.</ns0:p><ns0:p>As can be seen, Compi Hub was not designed to be merely a pipeline repository. We encourage developers to accompany each pipeline with all the information necessary to ensure its portability and reproducibility by other researchers.</ns0:p></ns0:div>
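Putting the above together, a compi-dk project ready for Compi Hub would be laid out roughly as follows; the layout is inferred from the file and directory names mentioned in this section:

  my-pipeline/
  ├── pipeline.xml        # pipeline definition (params, tasks, metadata)
  ├── Dockerfile          # image recipe including the pipeline dependencies
  ├── README.md           # usage documentation shown in the Readme section
  ├── DEPENDENCIES.md     # human-readable dependency list and versions
  ├── LICENSE             # terms of use shown in the License section
  ├── runners-example/    # example runner configurations (XML files)
  └── params-example/     # example parameter files (plain-text key-value)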
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Workflow management systems play a key role in the development of data science processing pipelines in multiple fields, such as bioinformatics or machine learning, among others. There are multiple solutions and approaches for developing flexible, portable, usable, maintainable and reproducible analysis pipelines in an easy way. With Compi, we progressed from a state-of-the-art workflow management system to an entire application framework, focusing on the transition from pipeline development to end users and the community. In this sense, we have put special emphasis on providing pipelines with an advanced and automatically generated CLI, aware of the pipeline structure, parameters and task descriptions, to facilitate adoption by final users. Moreover, Compi aids pipeline developers in creating all-in-one distributable Docker images, as well as in sharing each pipeline through an online, automatically documented hub where it is available to the community. This combination with Docker has the added benefit that Compi projects can be built into runnable CLI applications using only text files (i.e. a 'pipeline.xml' file, a Dockerfile, and other project files), allowing pipeline developers to use version control systems to keep track of their development. Compi pipelines can be executed in multiple computing layouts without any modification, from running natively on a single machine to a high-performance, fully containerized cluster environment, tailored to the needs of the end user. At the design level, we have prioritized the use of well-known standards, such as XML, together with low intrusiveness and language agnosticism, targeting a broader user community. In this sense, we avoid defining pipelines via DSLs based on a specific programming language, forcing a particular scripting language for the task code, or coupling Compi to specific dependency management systems, such as Python (Conda), R (CRAN) or Java (Maven). Unlike Snakemake, the Nextflow and SciPipe tools allow dynamic scheduling, that is, the ability to change the pipeline structure dynamically in order to schedule a different number of tasks based on the results of a previous step or any other parameter. Compi allows dynamic scheduling via (i) the 'if' attribute of tasks, which executes a command just before the task is about to run, allowing the task to be skipped dynamically, and (ii) foreach loops, which can take their iteration values from the output of a command executed just before the foreach loop is about to run, allowing, for instance, more parallel iterations depending on the number of files generated by a previous task. Compi, Snakemake, and Nextflow are language independent, allowing external scripts written in any programming language to be invoked. Similar to Compi runners, Nextflow defines executors, which are the components that determine where a pipeline process runs and how its execution is supervised. It provides multiple built-in executors to manage execution on SGE, SLURM, Kubernetes, and many others. In SciPipe, this can be achieved by using the Prepend field when defining processes, similarly to how Compi runners work. Regarding containerization, as explained above, when pipeline tasks need to run in isolated containers, Compi users must include the corresponding command (e.g. 'docker run') in the task code or in a runner that tackles the execution of such tasks.</ns0:p>
<ns0:p>Additionally, pipeline developers can create a Docker image for the entire pipeline so that all tasks are executed in the same container. In both cases, the developer must mount the paths to the input and output files of each task when using Docker. In this sense, Nextflow provides built-in support for running individual tasks or complete pipelines using Docker, Singularity or Podman images. In the case of Docker, Nextflow is able to mount the input and output paths automatically, since it is aware of the files needed by tasks. Similarly, Snakemake has built-in support for executing complete pipelines on Docker images and individual rules in isolated Conda environments. The SciPipe documentation does not provide information on how to containerize pipelines. Logging is another important feature of workflow management systems, allowing pipeline users to see how an execution went and to determine the causes of errors if necessary. SciPipe has been designed with special care given to logging and to collecting metadata about each executed task. Following a data-centric audit logging approach, SciPipe generates a JSON file for each output file that contains the full trace of the tasks that were executed to generate it. Nextflow provides a log command that returns useful information about a specific pipeline execution, and incorporates a '--with-report' option in the run command that instructs Nextflow to generate an HTML execution report including many useful metrics about a specific workflow execution. It also supports a '--with-trace' option in the run command that generates an execution trace file containing useful information about each process executed as part of the pipeline (e.g. submission time, start time, completion time, CPU, and memory usage). As previously stated, the 'compi run' command provides the '--logs' option to specify a directory where the stdout and stderr outputs and the specific parameter values of each task execution are saved in separate files prefixed with the corresponding task name. Nextflow has nf-core, an environment with a dual purpose: to provide an online repository of Nextflow pipelines and to provide a command-line tool for interacting with the repository and managing the execution of the hosted pipelines. Similarly, the Compi Hub repository allows users to discover and explore pipelines, and the compi-dk tool allows pushing pipelines to the hub via the command line. Snakemake does not provide a similar public repository, although a dedicated GitHub project exists (https://github.com/snakemake-workflows/docs). After these considerations, and given the choices available, we believe that Compi may be a reasonable choice for researchers with CLI skills looking to create medium-complexity pipelines without requiring them to learn new programming languages (e.g. the Nextflow DSL, or Go for SciPipe pipeline development). In this way, researchers would benefit from the common features of the workflow management engine that Compi offers without the need for much training. In addition, thanks to the auto-generated CLI, Compi would be the most suitable solution for pipelines meant to be used by researchers as end-user applications. As the existing pipeline examples presented in the previous section demonstrate, pipeline-based applications developed in this way can easily be distributed as Docker images.</ns0:p>
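To make the dynamic scheduling mechanism described earlier in this section concrete, the following sketch shows a conditional task and a foreach fed by a command. The 'of'/'in'/'as' attribute names and the condition syntax are assumptions on our part, since the text only states that tasks have an 'if' attribute and that foreach items may come from a command's output; 'build-index' is a placeholder tool:

  <!-- the 'if' command runs just before the task; the task is skipped if it fails -->
  <task id="index" if="[ ! -f genome.idx ]" params="genome">
    build-index "$genome" > genome.idx
  </task>

  <!-- one parallel iteration per output line of the command, at an iteration level -->
  <foreach id="compress" of="command" in="ls results/*.txt" as="file" after="*analyze">
    gzip "$file"
  </foreach>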
<ns0:p>Finally, to illustrate how Compi pipelines are created and the main differences with respect to other workflow management systems, we have implemented a Nextflow example pipeline in Compi (Supplementary File 2). This simple example, the description given in the previous section, and the public pipelines available on Compi Hub will allow potential users to determine when Compi may be the most suitable option.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>Compi is an application framework for developing pipeline-based end-user applications in bioinformatics and data science. Two Compi design principles are low intrusiveness and language agnosticism, with the goal of covering a wide variety of scenarios and providing maximum flexibility to pipeline developers. Notably, Compi pipelines can be executed in multiple computing layouts without the need to modify the pipeline definition, from running natively on a single machine to a fully containerized, high-performance cluster environment, tuned to the needs of the end user. To complement the Compi workflow execution engine, we have also created the Compi Development Kit (compi-dk) and Compi Hub. Thanks to the Compi Development Kit, pipelines can be packaged as self-contained Docker images that also include the pipeline dependencies, and can be shared publicly with minimal effort through Compi Hub. Future work includes, among other tasks, the following issues: (i) improving the metadata section of the pipelines to allow the inclusion of more information (e.g. licensing, attributions, or custom information); (ii) adding the possibility of creating stackable runners that give more flexibility and power to customize task execution; and (iii) enhancing the logging reports generated by Compi.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 2</ns0:head><ns0:p>A minimal Compi pipeline with two tasks. A 'params' section defines two parameters ('name' and 'output'), used in the two pipeline tasks, named 'greetings' and 'bye'. Both tasks print the value of the 'name' parameter to the path specified in the 'output' parameter. By default, task code is executed as Bash commands, as is the case for the 'greetings' task. The 'bye' task is written in Perl, and the 'interpreter' parameter of the task definition indicates how to invoke the Perl interpreter to run the task code.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 3</ns0:head><ns0:p>Example of a Compi pipeline using iterative foreach tasks. Note the '*' character when the dependency of the 'analyze' task on the 'preprocess' task is defined. In this way, the second foreach is bound to the first one, meaning that the two foreach tasks iterate over the same collection of elements and that Compi can start each iteration of the second foreach right after the corresponding iteration of the first foreach has finished.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 4</ns0:head><ns0:p>Examples of Compi parameters that control how pipelines are executed. Compi parameters belong to three main categories: pipeline inputs (i.e. the pipeline definition and its input parameters), logging, and execution control (i.e. specifying which tasks must be executed).</ns0:p></ns0:div>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5: Execution layouts supported by Compi, as referenced in the text: (A) tasks run as local commands; (B) tasks submitted to a job scheduler; (C) image-per-pipeline execution in a monolithic Docker image; (D) image-per-task execution via Docker runners; (E) containerized execution on a cluster orchestrated with Kubernetes.</ns0:figDesc></ns0:figure>
</ns0:body>
" | "Formal Response of its Authors to the Reviewers Comments on the paper
Compi: a framework for portable and reproducible pipelines
(Manuscript ID: 57469)
submitted to PeerJ - Computer Science on 27.01.2021
Editor Comments (Robert Winkler)
Both reviewers made a number of comments about the integration of Compi into the ecosystem of workflow management systems. Could you please compare an example workflow?
Authors response) We have included an example pipeline that allows Compi to be compared with Nextflow, showing how Compi pipelines are defined and their main advantages. This example is included at the end of the Discussion section and provided as Supplementary File 2.
Also, could you make reference to the CWL - Common Workflow Language?
Authors response) In this revised version of the manuscript we have also included a mention of CWL in the introductory section. It is accompanied by a link to the CWL website (https://www.commonwl.org/) and a reference.
Reviewer 1
Basic reporting
Just a personal comment related to the sentence beginning at line 216. There are many life science researchers out there who are circumstantial developers and who are shy about sharing their code, as they feel embarrassed about it. Therefore, most of those researchers are not used to version control systems yet.
Authors response) We agree with the reviewer, we have rephrased that entire paragraph.
First sentence in line 477 should be rewritten, so it is not misunderstood. It took me re-reading it at least twice, carefully, in order to get its meaning.
Authors response) We agree with the reviewer that this sentence was misleading. We have rewritten the paragraph to which that sentence belonged.
Experimental design
Line 485. I cannot agree with sentence 'According to its documentation, Snakemake only allows the execution of external scripts written in Python, R, R Markdown, and Julia'. If you have a look at https://snakemake.readthedocs.io/en/stable/snakefiles/rules.html and at example at https://snakemake.readthedocs.io/en/stable/tutorial/basics.html#step-1-mapping-reads , you can see there that it supports 'shell' command line run. Snakemake supports rules where some python code can be embedded through 'run' keyword, similar to Nextflow functionality, where groovy code can be run using 'exec' keyword https://www.nextflow.io/docs/latest/process.html#native-execution.
Authors response) We agree with the reviewer and that is why we have changed that paragraph completely to the following: “Compi, Snakemake, and Nextflow are language independent, allowing external scripts written in any programming language to be invoked.”.
Comments for the author
I have read the manuscript, and also the related documentation, scattered among the repository and the different web pages. I have both comments, issues, questions and suggestions:
Authors response) We appreciate the in-depth and detailed review carried out by the reviewer.
Line 209 of the manuscript: I guess there is a typo, as 'Compo Hub' should be 'Compi Hub'.
Authors response) We thank the reviewer for noting it, we have already corrected it.
Related to Compi Hub and its links in the manuscript, I guess I have found a bug in Compi Hub behaviour (2021-02-22, Firefox browser). I tried both links at line 269 ( https://www.sing-group.org/compihub/explore/5d09fb2a1713f3002fde86e2 ), at line 357 ( https://www.sing-group.org/compihub/explore/5d807e5590f1ec002fc6dd83 ) and at line 390 ( https://www.sing-group.org/compihub/explore/5e2eaacce1138700316488c1 ), but they did not work. I went to Compi Hub in order to find the referred RNA-Seq, Metatax and GenomeFastScreen pipelines, and I found then at 'compi-rnaseq-pipeline', 'metatax' and 'pss-genome-fs' with the very same links. Clicking there it worked, but copying and pasting the links in a sibling tab or window did not work. The same happens to https://www.sing-group.org/compihub/explore , which is the URL https://www.sing-group.org/compihub/ redirects to.
Authors response) It was due to a bug in the Compi Hub deployment. We have already corrected it.
I have perceived that the different sites and documentation are a bit isolated among themselves.
* I have missed both in http://sing-group.org/compi/docs/ and https://github.com/sing-group/compi/blob/master/README.md a reference to the download area at http://sing-group.org/compi/#downloads , as well as installation instructions (at least, the easy ones).
Authors response) We have added a new section called 'installation' within the Introduction section on the website (http://sing-group.org/compi/docs/introduction.html#installation). We have also added these instructions to the README.
* I have also missed in http://sing-group.org/compi/docs/ a reference to https://github.com/sing-group/compi/ .
Authors response) We have added the requested reference to the Introduction section, within the subsection 'What is Compi ?'.
* Also, I have missed in https://github.com/sing-group/compi/ an INSTALL.md or similar explaining the steps to build compi on different platforms or architectures (which could be needed by macOS users, for instance). This comment is related to sustainability of a workflow written in Compi.
Authors response) Compi is implemented in Java and, therefore, its code is cross-platform. However, it requires the envsubst program to substitute environment variable values. For this reason, we have created self-contained Linux builds that include the Java (JRE) and envsubst binaries. As previously mentioned, we have added a section titled 'Install Compi and compi-dk' to the README.md file in the GitHub repository. Within this section, we have added a 'Build from source' subsection that explains the steps and requirements needed to build Compi.
* Releases at https://github.com/sing-group/compi/releases could be enriched with the installers you are already providing at http://sing-group.org/compi/#downloads.
Authors response) We have included the installers in the latest release of Compi (v1.4.0) at GitHub and will continue to include them in future releases.
I have been studying the custom runners behaviour, and they are a very interesting feature, albeit quite synchronous. For instance, Slurm runner example https://www.sing-group.org/compi/docs/custom_runners.html#generic-slurm-runner is using the synchronous executor srun. Scenarios which require implementing custom runners which involve asynchronous checks to some resource are harder to implement. Could you include in the online documentation an advanced example of 'polling' in a custom runner?
Authors response) Following the reviewer’s suggestion, we have added two new generic runners to the online documentation (https://www.sing-group.org/compi/docs/custom_runners.html#examples-of-useful-runners). The first one is the “Generic SSH runner”, which allows executing a given task in a remote host through SSH. To do so, the task parameters (available as environment variables) must be saved into an “environment file” that is copied using scp before running the task. Then, the second runner is a “Generic AWS runner” that starts an Amazon AWS instance (if not running yet) and waits until it is available. Once the instance is available, the task is executed through SSH as in the previous runner.
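For reference, a condensed sketch of the SSH runner pattern described above follows; the host, paths, and the environment-file construction are illustrative and simplified (real runner code must quote values safely), and the full, robust versions are in the online documentation:

  <runner task="align">
    # save the task parameters as an environment file (simplified)
    env | sed 's/^/export /' > /tmp/task_env.sh
    scp /tmp/task_env.sh user@remote:/tmp/task_env.sh
    # the environment file also carries $task_code, so it can be evaluated remotely
    ssh user@remote 'source /tmp/task_env.sh && eval "$task_code"'
  </runner>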
- As it is distilled from line 429 to line 438, and from line 444 to line 446, good (scientific) software practices should be encouraged. So, I propose that the 'compi-dk new-project' command should create initial LICENSE.md, README.md and DEPENDENCIES.md files, as well as a 'runners-example' directory with a couple of the more common custom runners, and that 'compi-dk build' should check their existence (and give a warning when they are not found).
Authors response) We thank the reviewer for this suggestion regarding the “compi-dk new-project” command, which is now implemented in the new Compi release. Regarding the “compi-dk build” command, we have decided not to include such validation there, since “compi-dk hub-push” (the command that publishes a pipeline at Compi Hub) already performs this check. We think this is better than emitting warnings in the “compi-dk build” command, because such files are not required for building the Docker image and some developers may not need them (e.g. for a local pipeline not intended to be shared at Compi Hub).
I have also several questions and suggestions to the authors, related to future developments of compi and Compi Hub:
- Have you thought on an expandable workflow metadata system for Compi workflows? Currently it is a key / string value system bound to tasks, so it is not possible to have metadata attached to the whole workflow, its inputs and outputs.
- I have a question which might be related to previous suggestion. How the authors of a workflow can embed the details of its licence and attributions? Currently that information must be either it is derived from the git repository of the workflow or it should be embedded as a comment in the workflow, as metadata system does not support it.
Authors response) We agree with the reviewer. Indeed, it is in our plans to extend the Compi schema to include more metadata fields and improve the documentation.
Do you have sub-workflow support in Compi's roadmap? My question is more related to common sub-workflow patterns reuse than composing a meta-workflow.
Authors response) We agree that it could be very useful to implement some kind of reusability. We have considered adding include-like instructions to reuse specific parts (tasks and/or parameter definitions) of external workflows that might be available via URLs or files. However, we have technical doubts about how to finally implement it, and since we have not found this requirement essential to develop our pipelines, we have not implemented it.
Execution provenance. A way to report what happened in an execution would be generating a parse-able report with all the parameters, both explicit and implicitly set, along with additional information and metrics of the workflow execution and the custom runners.
Authors response) We agree that this feature could be very useful, especially for debugging the execution of pipelines and we are planning it in future developments. Currently, the Compi standard output displays the execution parameters and the times when the task starts and ends. Also, every standard output/error of all tasks is saved in log files. In addition, the new version of Compi (v1.4.0) also generates a “.params” file that contains the parameters of the task where all variables were resolved. In this sense, it is possible to listen to Compi core execution events via a Java interface (ExecutionHandler), which could allow creating Compi monitoring systems.
Should workflow semantic versioning be only recommended or enforced? My question is two-fold, both philosophical and related to that the there is no restriction on the version string at XML Schema definition.
Authors response) The version string is mandatory, but it does not follow semantic versioning. We have not added any restriction on the version in the XML schema because we do not implement any functionality that parses the version string; it is only informative. For example, it is displayed at Compi Hub, allowing users to select which version must be shown. Since enforcing semantic versioning at this point would be incompatible with previously published versions, we believe that it should remain a recommendation or a best practice.
Although custom runners are a very powerful feature, they have greater potential. So, I recommend having in the roadmap stackable custom runners, so stacks of custom runners would allow easy combinations of, for instance, Slurm+singularity or SGE+conda environment activation.
Authors response) We really appreciate this suggestion. We had not thought of that, but it could be a really cool feature, especially to be able to reuse and combine generic runners. We are now planning to include it in future versions of Compi. We have also added this point to the conclusion section.
An additional suggestion about custom runners is allowing them to use task context annotations or variables. In that way, custom runners would be more reusable. For instance, a runner needing a conda environment precondition with an specific package installed could learn which conda package should be installed from that received metadata. Or using different containers for different tasks, based on an annotation provided by the task.
Authors response) Currently, runners receive the task code as well as all task variables, our impression is that this could be implemented using Compi in its current form.
Last suggestion involving custom runners is having in Compi Hub a way to upload custom runner templates, so researchers can reuse successful patterns.
Authors response) Compi Hub already allows users to upload custom runners as well as custom parameter files. In the bicycle case-study pipeline (https://www.sing-group.org/compihub/explore/5d07aa4faed515002b23d10b#runners), they are displayed in the “Runners” and “Params” tabs.
Reviewer 2
Basic reporting
This manuscript describes the features of Compi- an application framework to develop end-user, pipeline-based applications with emphasis on user interface generation, application packaging, with compi-dk, and application distribution provided through a public repository of Compi pipelines, named Compi Hub.
I find it quite interesting that the authors have developed an XML-based platform for creating workflows and I also had a look at the resources for Compi at (i) http://www.sing-group.org/compi/ and (ii) https://sing-group.org/compihub/explore and they seem to be pretty well-documented.
However I have some reservations, namely:
(i) The paper is written more like a user manual instead of an application paper - there are also some grammatical mistakes. e.g. line 117 has an incomplete sentence 'Fourth, for security reasons, since Compi is intended to run pipelines defined by third party programmers.' Similarly there are more.
Authors response) We have completely revised the manuscript text and made all the necessary corrections.
(ii) The authors have not mentioned CWL - Common Workflow Language, at all. I would like to draw their attention to the fact that CWL (https://github.com/common-workflow-language) is a very mature workflow language, being used by many bioinformatics communities. They can check https://www.sevenbridges.com/cwl-seven-bridges-platforms/
Authors response) We welcome this suggestion and agree that CWL represents a mature workflow language. Therefore, we have included a mention of CWL in the introduction section, along with a web link and a reference.
(iii) Given that they have compared Compi with Nextflow, I would expect that they take a simple workflow as an example and demonstrate how it is defined in both NextFlow and Compi, so that readers can see the advantages of Compi compared to NextFlow. While there already exist NextFlow and CWL as two major workflow languages, readers must be convinced why they should use Compi.
Authors response) Following the reviewer’s suggestion, we have added an example workflow (Supplementary File 2), which is referenced at the end of the Discussion. Our intention is not to compare directly with Nextflow, but to show the benefits and advantages of Compi to potential users. We hope that this simple example allows readers to get a better idea of the benefits Compi brings. Compi has similarities with Nextflow and other workflow engines but, as the reviewer has pointed out, our solution is based on XML and the pipelines are self-documenting, allowing the automatic generation of a command-line user interface. Likewise, Compi is agnostic regarding the programming language used to write the source code of the tasks, since tasks can be paired with interpreters for those languages.
" | Here is a paper. Please give your review comments after reading it. |
150 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>This study determines one of the most relevant quality factors of apps for people with disabilities utilizing the abductive approach to the generation of an explanatory theory.</ns0:p><ns0:p>First, the abductive approach was concerned with the results' description, established by the apps' quality assessment, using the Mobile App Rating Scale (MARS) tool. However, because of the restrictions of MARS outputs, the identification of critical quality factors could not be established, requiring the search for an answer for a new rule. Finally, the explanation of the case (the last component of the abductive approach) to test the rule's new hypothesis. This problem was solved by applying a new quantitative model, compounding data mining techniques, which identified MARS' most relevant quality items.</ns0:p><ns0:p>Hence, this research defines a much-needed theoretical and practical tool for academics and also practitioners. Academics can experiment utilizing the abduction reasoning procedure as an alternative to achieve positivism in research. This study is a first attempt to improve the MARS tool, aiming to provide specialists relevant data, reducing noise effects, accomplishing better predictive results to enhance their investigations.</ns0:p><ns0:p>Furthermore, it offers a concise quality assessment of disability-related apps.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>There are several definitions of theory. One, established by Sjøberg et al. <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>, depends on philosophical and practical issues and the field of study. However, Corley and Gioia <ns0:ref type='bibr' target='#b2'>[2]</ns0:ref> offer a more straightforward definition, a statement of theories and their interrelationships that shows how and why an exceptional event occurs. But Horvath's <ns0:ref type='bibr' target='#b3'>[3]</ns0:ref> theory explains the concepts and facts in a given context, matching ideas and events logically based on their meaning, which similarly indicates the limits of the theory, facilitates the applicability and permits the recognition of new hypotheses to cover a broader field. According to Wacker <ns0:ref type='bibr' target='#b4'>[4]</ns0:ref>, a theory contains four components: definitions, domain, relationships, and prediction. Aliseda <ns0:ref type='bibr' target='#b5'>[5]</ns0:ref> suggests that discovering an idea towards a new theory involves a complicated process starting with the initial conception through to an acceptable conclusion, thereby forming a new theory. Nevertheless, according to Philipsen <ns0:ref type='bibr' target='#b6'>[6]</ns0:ref>, knowledge production divides into three specific categories: discovery, problem/domain definition, initial concepts, and also the context of justification, theories testing, as well as hypotheses enhancement. Researchers understand reason more readily than discovery. The justification uses three interrelated types of reasoning. Ngwenyama <ns0:ref type='bibr' target='#b7'>[7]</ns0:ref> inferred that the deduction probably assists in suggesting logical implications of rules to develop experiments for observation as well as testing. Induction enables the scientist to deduce general rules from the monitoring of consistencies in phenomena behavior. Abduction is primarily an inference of an explanation of the views analyzed by <ns0:ref type='bibr' target='#b7'>[7]</ns0:ref>, <ns0:ref type='bibr' target='#b8'>[8]</ns0:ref>. Lastly, Kapitan <ns0:ref type='bibr' target='#b9'>[9]</ns0:ref> suggests abduction is the procedure of generating theories and also developing some of them; reduction extracts their testable effects while induction assesses them. The primordial feature of the rule is the variable individual measurable property of a process being observed. Feature selection helps understand data, which reduces calculus requirement and the effect of dimensionality while additionally improving the predictor performance. Consequently, the relevance of the attribute option is to select a subset of input variables that can describe data, minimizing noise or irrelevant variables, and yet offer more accurate predictive results <ns0:ref type='bibr' target='#b11'>[10]</ns0:ref>, <ns0:ref type='bibr' target='#b12'>[11]</ns0:ref>, <ns0:ref type='bibr' target='#b14'>[12]</ns0:ref>. An exploratory approach to related contributions shows little research has been conducted on the fore-mentioned topic. Thus, new experiences are required to amplify the application domain options and corroborating the relatively new abductive approach. The current study uses the abductive process to create theories to improve the formal application of the Mobile App Rating Scale (MARS) results <ns0:ref type='bibr' target='#b15'>[13]</ns0:ref>, the tool used to evaluate apps quality for people with disabilities. 
The research estimates the evaluation of the tool's external consistency because the MARS tool results are unable to be used for identifying relevant and unique variables that represent the quality factors. The objective of this work is to simplify the MARS tool to increase its performance without losing the quality of the evaluation. As far as we know, this study is the first attempt to improve the MARS tool. The principal contribution is to provide specialists relevant data, reducing noise effects, and accomplishing better predictive results to enhance their investigations. The structure of the current study is: Section 2 presents background and related work that includes the definitions of abductive reasoning and its applications; Section 3 contains the research approach, in three stages: the result, the rule, and the case; Section 4 involves discussions of the results obtained, while Section 5 presents various conclusions.</ns0:p></ns0:div>
<ns0:div><ns0:head>Background and related work</ns0:head><ns0:p>Philipsen <ns0:ref type='bibr' target='#b6'>[6]</ns0:ref> and Ngwenyama <ns0:ref type='bibr' target='#b7'>[7]</ns0:ref> establish the distinctions between deductive, inductive, and also abductive reasoning with the connections between the entities; rule, case, and result. These three forms of scientific thinking are presented in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>. The abductive reasoning process addresses the situation where the findings differ from the theory's anticipated result, which guides the research study. The starting point coincides with that of induction but is concerned with the search for an explanation of the results, which are complex to explain applying the initial guiding theory. The search for reason demands the need for a new hypothesis, leading to the specific investigated case <ns0:ref type='bibr' target='#b6'>[6]</ns0:ref>. Aliseda <ns0:ref type='bibr' target='#b5'>[5]</ns0:ref> assumes that abduction in the scientific sense refers to empirical progress, pragmatism, and epistemic change. O'Reilly <ns0:ref type='bibr' target='#b16'>[14]</ns0:ref> and <ns0:ref type='bibr' target='#b8'>[8]</ns0:ref> concluded that abduction is the only logical operation that permits new ideas. In testing theory, abduction develops phases of the knowledge-production process. New explanations will likely arise where there is a requirement to solve an anomaly and discover new methods of explaining the particular empirical phenomenon <ns0:ref type='bibr' target='#b6'>[6]</ns0:ref>. Philipsen et al. established the research gaps, and the results are considered vital factors for identifying inconsistencies. Aliseda <ns0:ref type='bibr' target='#b19'>[15]</ns0:ref> indicated logical abduction is relevant regarding issues of scientific explanation. More recently, logical abduction found a place in computationally oriented theories of belief change in Artificially Intelligence. Olsen and Gjerding <ns0:ref type='bibr' target='#b20'>[16]</ns0:ref> investigated the notion of abduction related to and can be applied in a scientific research study. Furthermore, it showed the most necessary treatments of abduction in modern times, and it tried to define various processing modalities, both as an autonomous research strategy and inference type, and in relation and contrast to induction and deduction. According to Zelechowska et al. <ns0:ref type='bibr' target='#b21'>[17]</ns0:ref>, abduction is a type of complex reasoning carried out to make sense of unusual or ambiguous phenomena or fill the gaps in our beliefs. Despite the ubiquity of abduction in professional and everyday problem-solving processes, little empirical research was dedicated to investigating this kind of reasoning. Most of them concentrated on products of abduction-abductive hypotheses. Rapanta <ns0:ref type='bibr' target='#b22'>[18]</ns0:ref> explored abductive reasoning as the most appropriate for students' arguments to emerge in a class discussion. Abductive reasoning embraces the concept of plausibility and defeasibility of both the premises and the conclusion. Mitchell <ns0:ref type='bibr' target='#b23'>[19]</ns0:ref> posits that pragmatism supports using various research techniques, which a continual cycle of inductive, deductive, and when proper, abductive reasoning creates practical knowledge and works as a rationale for a rigorous research study. 
Abductive reasoning was essential for explaining empirical phenomena relating to competition, primarily how top United Kingdom and German multinationals developed various outsourcing strategies. Moreover, applying different methods can lead to research and subsequent management choices that reflect the interplay of both the social and scientific elements of today's world. The work of Mitchell <ns0:ref type='bibr' target='#b23'>[19]</ns0:ref> is focused on outsourcing strategies; this is in contrast to our work, as we concentrate on app quality. Fariha and Meliou <ns0:ref type='bibr' target='#b24'>[20]</ns0:ref> present the idea of an abduction-ready database, which precomputes semantic features and related statistics, allowing semantic similarity-aware query intent discovery (SQUID) to achieve real-time performance. They also provide an extensive empirical assessment on three real-world datasets, consisting of user-intent case studies, demonstrating that SQUID is efficient and effective and outperforms machine learning techniques. In contrast to our research, there is no assessment of the quality of the apps used for data processing. Ganesan et al. <ns0:ref type='bibr' target='#b25'>[21]</ns0:ref> propose a probabilistic abductive reasoning method that enhances an existing rule-based Intrusion Detection System (IDS) to detect evolved attacks by predicting rule conditions that are likely to occur, and that can generate new Snort rules from a seed rule, reducing the burden on experts of constantly updating them. This contrasts with our study, which focuses on the feature assessment of apps. Bhagavatula et al. <ns0:ref type='bibr' target='#b26'>[22]</ns0:ref> present the first study to research the viability of language-based abductive reasoning. They conceptualize and introduce Abductive Natural Language Inference (ANLI), a novel task focused on abductive reasoning in narrative contexts, formulated as a multiple-choice question-answering problem. They additionally introduce Abductive Natural Language Generation (ANLG), a novel task that requires machines to generate plausible hypotheses for given observations. In our study, we focus only on optimizing MARS. A review of related work shows that the abductive process has been used in various forms and specialties related to Information Systems (IS) and Information Technology (IT). The abduction process has seen more theoretical than practical development regarding the integration of abduction and induction <ns0:ref type='bibr' target='#b8'>[8]</ns0:ref>. Other works consider abduction in digital interaction to be a research paradigm <ns0:ref type='bibr' target='#b28'>[23]</ns0:ref>, <ns0:ref type='bibr' target='#b29'>[24]</ns0:ref>. To address the problems of single-case research, an approach based on systematic combining within an abductive logic was implemented to improve theory development <ns0:ref type='bibr' target='#b30'>[25]</ns0:ref>. Flach and Kakas's <ns0:ref type='bibr' target='#b8'>[8]</ns0:ref> contributions to abductive reasoning include logic programming, machine learning, and artificial intelligence. Theory development in software engineering combines mainly inductive and abductive aspects and may start from both practical and theoretical perspectives; for example, in the related work, the abductive approach is applied to software requirements <ns0:ref type='bibr' target='#b31'>[26]</ns0:ref> and software testing <ns0:ref type='bibr' target='#b32'>[27]</ns0:ref>.
Osei-Bryson and Ngwenyama <ns0:ref type='bibr' target='#b33'>[28]</ns0:ref> explore and illustrate the use of IT in IS research to assist researchers in the testing and development of theories through mining techniques <ns0:ref type='bibr' target='#b34'>[29]</ns0:ref>, decision trees <ns0:ref type='bibr' target='#b35'>[30]</ns0:ref>, or logical foundations <ns0:ref type='bibr' target='#b7'>[7]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Research approach</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.1.'>The result: data collection and evaluation</ns0:head><ns0:p>This research study covers a group of apps focused on the needs of people with intellectual disabilities, which were assessed using a specialized evaluation tool. The data collection involves several components and processes, described below.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1.1.'>The MARS tool</ns0:head><ns0:p>As Holzinger et al. <ns0:ref type='bibr' target='#b36'>[31]</ns0:ref> determine, metric-based reference points are significant for quantifying software usability, particularly for specific end-user groups. Usability is one of the principal qualities, an indispensable feature of all software; it is even more crucial in apps created for a wide range of users. Additionally, the requirements of people with disabilities are often left out of the basic needs extraction procedure <ns0:ref type='bibr' target='#b41'>[32]</ns0:ref>. The fundamental elements of software-based clinical systems are software measurement, quality assurance, and end-user satisfaction <ns0:ref type='bibr' target='#b43'>[33]</ns0:ref>. MARS is rated as an outstanding tool for assessing the quality of mobile health apps, developed from a methodical literature search to determine app quality criteria <ns0:ref type='bibr' target='#b15'>[13]</ns0:ref>. MARS is a well-established tool worldwide: it has been consulted by 201,436 academics, cited by 526 researchers, and mentioned in 78 tweets. The MARS scale assesses app quality on four dimensions, graded on a Likert-type scale from '1. Inadequate' to '5. Excellent', using 18 questions and descriptors <ns0:ref type='bibr' target='#b15'>[13]</ns0:ref>:</ns0:p><ns0:p>Engagement: entertainment, interest, customization, interactivity, and target group.</ns0:p><ns0:p>Functionality: performance, ease of use, navigation, and gestural design.</ns0:p><ns0:p>Aesthetics: layout, graphics, and visual appeal.</ns0:p><ns0:p>Information: accuracy of app description, goals, quality of information, quantity of information, visual information, and credibility. Table <ns0:ref type='table' target='#tab_3'>1</ns0:ref> shows the most relevant studies that use MARS; the selected articles use MARS to evaluate different types of apps in the health field.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_3'>1</ns0:ref>: Related investigations.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1.2.'>Apps collection</ns0:head><ns0:p>To obtain accurate results, a relevant issue is determining the number of apps sampled and evaluated. An exploratory activity compiled data on web and mobile apps for people with disabilities published between 2000 and 2020. Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> presents the available data, showing the exponential growth in the number of apps over the period. Moreover, the size of the data universe suggests that a good selection strategy is a census of a specified domain, for specific users, downloaded within a short ('instant') period. The present research makes use of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses tool (PRISMA) <ns0:ref type='bibr' target='#b44'>[34]</ns0:ref> to select the appropriate apps for testing with the MARS tool. PRISMA <ns0:ref type='bibr' target='#b45'>[35]</ns0:ref> consists of a four-phase flowchart: identification, screening, eligibility, and inclusion. The apps were selected on different platforms. Table <ns0:ref type='table' target='#tab_0'>2</ns0:ref> illustrates the search and inclusion process conditions.</ns0:p></ns0:div>
<ns0:div><ns0:p>The researchers chose the apps across four platforms: desktop 22.83 %, web 33.45 %, Android 22.12 %, and iOS 21.59 %.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1.3.'>Apps evaluation</ns0:head><ns0:p>Three stakeholder groups assess apps for people with disabilities: health specialists, software specialists, and final users. The initial approach used a sample of four apps evaluated by teachers, software testers, and children with disabilities. A group of 10 teachers specialized in people with disabilities, 15 children with special educational needs or intellectual disabilities, and five software testers used MARS to evaluate the apps. The authors created a new Spanish version based on the MARS template; due to the questionnaire's size and complexity, it was necessary to adapt it for children with disabilities and test it. Although the results of the teachers' and children's evaluations of the apps were similar, the software testers' evaluation showed some discrepancies, as shown in Figure <ns0:ref type='figure' target='#fig_10'>3</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 3: Joint assessment of four apps for people with disabilities</ns0:head><ns0:p>The complete test series started from a total of 1125 apps; after a PRISMA screening process deleted duplicates and unavailable apps, 565 apps remained: 123 iOS, 125 Android, 190 web, and 127 Windows apps to be evaluated with MARS. Two independent software testers performed the evaluation. Table <ns0:ref type='table'>3</ns0:ref> shows the devices used to complete the assessment.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 3: Devices used for evaluation.</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref> contains data extracted from evaluated apps and displays a summary of 10 random apps: original MARS score, competitive classification group, and new MARS score. The competitive classification group is defined by a competitive neural network applied to MARS data. The new MARS score is the average of the most significant MARS items (X2, X5, X6, X8, X11, X15) given by a greedy stepwise algorithm. Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>: Data extracted from the apps evaluated.</ns0:p></ns0:div>
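To illustrate, the reduced score in Table 4 is simply the mean of the six retained items. The following sketch (Python; the hard-coded scores are the Blindfold sudoku column of Table 4, and all variable names are our own) recomputes both scores:

```python
# Recompute the original and reduced ("new") MARS quality scores for one app.
SELECTED_ITEMS = ["X2", "X5", "X6", "X8", "X11", "X15"]  # items kept by feature selection

# Item scores for "Blindfold sudoku" (Table 4).
blindfold_sudoku = {"X1": 3, "X2": 4, "X3": 3, "X4": 4, "X5": 4, "X6": 4,
                    "X7": 4, "X8": 5, "X9": 5, "X10": 4, "X11": 3, "X12": 3,
                    "X13": 4, "X14": 4, "X15": 5, "X16": 4, "X17": 5, "X18": 4}

original_score = sum(blindfold_sudoku.values()) / len(blindfold_sudoku)
new_score = sum(blindfold_sudoku[k] for k in SELECTED_ITEMS) / len(SELECTED_ITEMS)

print(round(original_score, 1), round(new_score, 1))  # 4.0 4.2, matching Table 4
```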
<ns0:div><ns0:head n='3.1.4.'>Interpretation of the results</ns0:head><ns0:p>Cronbach's α is the most widely used test of an instrument's internal consistency <ns0:ref type='bibr' target='#b47'>[36]</ns0:ref>. An appropriate reliability score is 0.7 or higher <ns0:ref type='bibr' target='#b48'>[37]</ns0:ref>. In this case, the value is 0.966; however, such a high value suggests data item duplications. Table <ns0:ref type='table' target='#tab_1'>5</ns0:ref> shows the data regression matrix corresponding to the categorical variables and a high linear correlation between some of them; this result can also be related to data item duplications. The gray cells mark variable pairs with higher correlations, considered here as values greater than 0.5.</ns0:p></ns0:div>
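For reference, Cronbach's α can be computed directly from the item-score matrix with the standard formula α = k/(k−1) · (1 − Σ var(item_i)/var(total)). A minimal sketch follows; the toy matrix is hypothetical (the study's actual matrix is 565 apps × 18 items):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical toy data: 5 apps rated on 4 Likert items.
scores = np.array([[4, 4, 5, 4],
                   [5, 5, 5, 4],
                   [3, 4, 3, 3],
                   [4, 5, 4, 4],
                   [5, 5, 5, 5]])
print(round(cronbach_alpha(scores), 3))
```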
<ns0:div><ns0:p>Figure <ns0:ref type='figure' target='#fig_2'>4</ns0:ref> shows the most significant descriptive statistical results: the non-linear distribution of the variables and their directly proportional relationships. The non-linear distributions are a consequence of the categorical variables of the Likert scale used in MARS. The positive proportional relationship between all variables shows that the feedback was 100 % positive, a strange result that would only be possible in non-existent open systems where the possible improvements are limitless. Summarizing the previous facts: 1) the high linear correlations suggest that some variables introduce duplications in the data; 2) the distributions of the categorical variable values are non-linear, so possible models for treating the data must support non-linear data; 3) from the perspective of the MARS results, the tool defines a quality value that is simply accepted and does not satisfactorily guide the process of measuring and interpreting app quality. Although MARS was systematically defined, it is insufficient for understanding the ratings. Therefore, theorizing and applying a technique to reduce the MARS factors is a feasible research objective in which to utilize abductive reasoning.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2.'>The rule: the new explanatory model</ns0:head><ns0:p>A variable is represented by a feature, a specific quantifiable property of the process being observed. Feature selection assists in the comprehension of data, reducing computational requirements, simplifying dimensionality, and improving predictor performance. The focus of feature selection is thus to choose a subset of input variables that can explain the data, limiting the impact of noise or superfluous variables while still providing improved predictive results <ns0:ref type='bibr' target='#b11'>[10]</ns0:ref>, <ns0:ref type='bibr' target='#b12'>[11]</ns0:ref>, <ns0:ref type='bibr' target='#b14'>[12]</ns0:ref>. Based on the availability of label information, feature selection techniques are classified into three groups: supervised, semi-supervised, and unsupervised methods <ns0:ref type='bibr' target='#b49'>[38]</ns0:ref>, <ns0:ref type='bibr' target='#b50'>[39]</ns0:ref>, <ns0:ref type='bibr' target='#b51'>[40]</ns0:ref>. Label information enables supervised feature selection algorithms to effectively select discriminative and pertinent features that distinguish samples from different classes. Feature selection is also classified into three techniques: filter, wrapper, and embedded methods <ns0:ref type='bibr' target='#b12'>[11]</ns0:ref>, <ns0:ref type='bibr' target='#b14'>[12]</ns0:ref>, <ns0:ref type='bibr' target='#b49'>[38]</ns0:ref>. Filter models are fast and straightforward; embedded methods tend toward performance optimization and handle high data volumes; wrapper methods strike a balance between the two. Wrapper methods treat a learning algorithm as a black box and use its prediction performance to evaluate the relative usefulness of subsets of variables. In other words, the feature selection algorithm applies a learning method (classifier) as a subroutine, with the computational load that comes from running a learning algorithm to assess each subset of features <ns0:ref type='bibr' target='#b52'>[41]</ns0:ref>, <ns0:ref type='bibr' target='#b53'>[42]</ns0:ref> (see Figure <ns0:ref type='figure' target='#fig_3'>5</ns0:ref>).</ns0:p></ns0:div>
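A minimal sketch of the wrapper idea described above, written with scikit-learn purely for illustration (the study itself uses Weka): the learner is treated as a black box, and each candidate subset is scored by its cross-validated accuracy.

```python
from itertools import combinations

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def wrapper_score(X: np.ndarray, y: np.ndarray, subset: tuple) -> float:
    """Score a candidate feature subset by the CV accuracy of a black-box learner."""
    clf = DecisionTreeClassifier(random_state=0)  # stand-in for any classifier
    return cross_val_score(clf, X[:, list(subset)], y, cv=5).mean()

def best_pair(X: np.ndarray, y: np.ndarray) -> tuple:
    """Exhaustive wrapper search over all 2-feature subsets (tiny example)."""
    return max(combinations(range(X.shape[1]), 2),
               key=lambda s: wrapper_score(X, y, s))
```

This is where the wrapper's computational load comes from: every candidate subset triggers a full train-and-evaluate cycle of the learner, which is why greedy search strategies are preferred over exhaustive enumeration.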
<ns0:div><ns0:p>Guyon analyzed the criteria and techniques used to select features: the objective function, feature construction, feature ranking, multivariate feature selection, efficient search methods, and feature validity assessment methods <ns0:ref type='bibr' target='#b11'>[10]</ns0:ref>. Common options for feature selection are the Wrapper Subset Evaluator, Correlation-based Feature Subset Selection (CFS), and Principal Components Analysis <ns0:ref type='bibr' target='#b55'>[43]</ns0:ref>, <ns0:ref type='bibr' target='#b57'>[44]</ns0:ref>, <ns0:ref type='bibr' target='#b58'>[45]</ns0:ref>. Options for search methods are Greedy Stepwise and Best First <ns0:ref type='bibr' target='#b12'>[11]</ns0:ref>, <ns0:ref type='bibr' target='#b60'>[46]</ns0:ref>. In this research, it is essential to recognize that the data variables are discrete and non-linear, which creates limitations; therefore, the feature selection options considered are CFS and the Wrapper Subset Evaluator. Wrapper subset evaluators are techniques based on Bayes, rules, functions, and trees, as studied in the works of Abusamra <ns0:ref type='bibr' target='#b55'>[43]</ns0:ref>, Kaur et al. <ns0:ref type='bibr' target='#b57'>[44]</ns0:ref>, and Karabulut et al. <ns0:ref type='bibr' target='#b58'>[45]</ns0:ref>. As Figure <ns0:ref type='figure' target='#fig_4'>6</ns0:ref> illustrates, in the current data collection the MARS score, being a mean value, is an apparent dependent variable that can be deleted without changing the data. In this case, an option is to use an unsupervised learning technique to re-classify the results. It is then possible to filter the relevance of the variables using a wrapper feature selection technique. Finally, the software quality elements are pinpointed and interpreted.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3.'>The case</ns0:head><ns0:p>To apply the new model to the apps, it is essential to use an unsupervised classifier, a supervised wrapper, and a search technique. The classifier implemented in this research is a self-organizing network with a competitive learning variant, able to detect consistencies and correlations in its input and adapt its future output <ns0:ref type='bibr' target='#b61'>[47]</ns0:ref>. In an initial assessment, using the Merit as an efficiency measure <ns0:ref type='bibr' target='#b62'>[48]</ns0:ref>, the optimum results were obtained with a Multilayer Perceptron as the wrapper method and greedy stepwise as the search technique. Merit is calculated as:</ns0:p><ns0:formula xml:id='formula_0'>M_S = \frac{k \, \overline{r_{cf}}}{\sqrt{k + k(k-1) \, \overline{r_{ff}}}}</ns0:formula><ns0:p>where M_S is the heuristic 'Merit' of a feature subset S containing k features, \overline{r_{cf}} is the mean feature-class correlation over f \in S, and \overline{r_{ff}} is the average feature-feature intercorrelation. The numerator indicates how predictive of the class the set of features is; the denominator indicates how much redundancy there is among them. Consequently, a higher value of M_S means better data classification.</ns0:p></ns0:div>
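A direct transcription of the Merit heuristic, assuming the feature-class correlations and the mean feature-feature intercorrelation have already been computed (the correlation values below are hypothetical):

```python
import numpy as np

def cfs_merit(r_cf: np.ndarray, r_ff_mean: float) -> float:
    """CFS merit: M_S = k * mean(r_cf) / sqrt(k + k*(k-1) * mean(r_ff))."""
    k = len(r_cf)
    return k * r_cf.mean() / np.sqrt(k + k * (k - 1) * r_ff_mean)

# Hypothetical correlations for a 6-feature subset.
feature_class_corr = np.array([0.55, 0.48, 0.52, 0.60, 0.45, 0.50])
print(round(cfs_merit(feature_class_corr, r_ff_mean=0.30), 3))
```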
<ns0:div><ns0:head n='3.3.1.'>Self-organizing neural network</ns0:head><ns0:p>The neurons of self-organizing maps learn to identify groups of comparable input vectors, so that neurons physically near each other in the neuron layer respond to similar input vectors <ns0:ref type='bibr' target='#b63'>[49]</ns0:ref>. Competitive learning models, a type of self-organizing map, are based on the Winner-Take-All principle: the winner is the neuron whose weight vector is closest to the current input vector <ns0:ref type='bibr' target='#b49'>[38]</ns0:ref>. The formula to find the winning neuron, i(t), is:</ns0:p><ns0:formula xml:id='formula_1'>i(t) = \arg\min_{\forall i} \left\| x(t) - w_i(t) \right\|</ns0:formula><ns0:p>where x(t) is the current input vector, w_i(t) is the weight vector of neuron i, and t is the iteration number. The weight vector of the winning neuron is iteratively modified, using a learning rate \eta (0 \le \eta \le 1), through <ns0:ref type='bibr' target='#b49'>[38]</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_2'>w_i(t+1) = w_i(t) + \eta \left[ x(t) - w_i(t) \right]</ns0:formula></ns0:div>
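A compact numpy sketch of this winner-take-all rule (the study used MATLAB's nntool; this re-implementation is only illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_competitive(X: np.ndarray, n_neurons: int,
                      eta: float = 0.1, epochs: int = 500) -> np.ndarray:
    """Winner-take-all competitive learning over the rows of X."""
    W = rng.uniform(X.min(), X.max(), size=(n_neurons, X.shape[1]))
    for _ in range(epochs):
        for x in rng.permutation(X):
            i = np.argmin(np.linalg.norm(x - W, axis=1))  # i(t) = argmin ||x - w_i||
            W[i] += eta * (x - W[i])                      # w_i <- w_i + eta * (x - w_i)
    return W

def classify(X: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Assign each input vector to its winning neuron (its classification group)."""
    return np.array([np.argmin(np.linalg.norm(x - W, axis=1)) for x in X])
```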
<ns0:div><ns0:head n='3.3.2.'>Greedy stepwise search method</ns0:head><ns0:p>The greedy stepwise method executes a greedy forward or backward search through the space of feature subsets; the process terminates when the addition/removal of any remaining feature causes a lower evaluation. The method, described by Arguello <ns0:ref type='bibr' target='#b64'>[50]</ns0:ref>, solves the following variance-based model:</ns0:p><ns0:formula>\max_{S} R^2(G,S) \quad s.t. \quad |S| = k, \qquad R^2(G,S) = \frac{Var(G) - Var\left(G - \sum_{i \in S} \alpha_i P_i\right)}{Var(G)}</ns0:formula><ns0:p>where k is the number of data sources to choose, the P_i are the data sources, G is the target data, and the \alpha_i are the regression coefficients from fitting G using the P_i's.</ns0:p></ns0:div>
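The following sketch implements the greedy forward variant of this model, fitting the α coefficients by least squares and stopping when no remaining feature improves R²; the function names are our own and the data are illustrative:

```python
import numpy as np

def r_squared(G: np.ndarray, P: np.ndarray, subset: list) -> float:
    """R^2(G,S) = (Var(G) - Var(G - sum_i alpha_i * P_i)) / Var(G)."""
    A = P[:, subset]
    alpha, *_ = np.linalg.lstsq(A, G, rcond=None)  # regression coefficients
    residual = G - A @ alpha
    return (G.var() - residual.var()) / G.var()

def greedy_forward(G: np.ndarray, P: np.ndarray, k: int) -> list:
    """Greedily add the feature with the largest R^2 gain, up to k features."""
    selected, best = [], 0.0
    while len(selected) < k:
        gains = {j: r_squared(G, P, selected + [j])
                 for j in range(P.shape[1]) if j not in selected}
        j_best, score = max(gains.items(), key=lambda kv: kv[1])
        if score <= best:      # a lesser (or equal) evaluation terminates the search
            break
        selected.append(j_best)
        best = score
    return selected
```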
<ns0:div><ns0:head n='3.3.3.'>Multilayer perceptron</ns0:head><ns0:p>Supervised classification builds a class model from a set of records containing class labels <ns0:ref type='bibr' target='#b65'>[51]</ns0:ref>. A multilayer perceptron (MLP), a class of feedforward artificial neural network, can categorize data that are not linearly separable.</ns0:p><ns0:p>An MLP contains a minimum of three layers of nodes: an input layer, a hidden layer, and an output layer. Apart from the input nodes, each node is a neuron that uses a non-linear activation function. The MLP applies a supervised learning method, backpropagation, for training; its multiple layers and non-linear activation differentiate it from a linear perceptron. The MLP is formalized by:</ns0:p><ns0:formula xml:id='formula_3'>y = h(A) = h(g(I)) = h\left(g\left(f\left(\sum X_{pi} \times W_{ji}\right)\right)\right)</ns0:formula><ns0:p>where X_{pi} is the input vector of dimension p, f is the input function, g is the activation function, h represents the training (output) function, and the weights W_{ji} are updated using a backpropagation process.</ns0:p></ns0:div>
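A single forward pass matching this formalization, with tanh assumed for the activation g and softmax for the output function h (both are our illustrative choices; training by backpropagation is omitted). The dimensions mirror the 18 MARS items and four quality classes:

```python
import numpy as np

def mlp_forward(x: np.ndarray, W1: np.ndarray, W2: np.ndarray) -> np.ndarray:
    """y = h(g(f(sum_i x_i * W_ji))): weighted sum f, activation g, output h."""
    hidden = np.tanh(W1 @ x)             # g(f(x)) for the hidden layer
    logits = W2 @ hidden                 # weighted sum into the output layer
    exp = np.exp(logits - logits.max())  # numerically stable softmax as h
    return exp / exp.sum()

# Hypothetical shapes: 18 MARS items in, 8 hidden units, 4 quality classes out.
rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.1, size=(8, 18))
W2 = rng.normal(scale=0.1, size=(4, 8))
print(mlp_forward(rng.uniform(1, 5, size=18), W1, W2))
```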
<ns0:div><ns0:head n='3.3.4.'>Model application and results</ns0:head><ns0:p>According to the process specified in Figure <ns0:ref type='figure' target='#fig_2'>4</ns0:ref>, the application's details and results are documented below. The MARS output data are categorized by the competitive model (see Figure <ns0:ref type='figure' target='#fig_7'>7</ns0:ref>). Four is the logical number of categories, coinciding with the number of MARS categories. Table <ns0:ref type='table' target='#tab_2'>6</ns0:ref> displays a summary of the classification results, the best of several attempts, improved by trial and error until the response was stable. Different combinations of the learning rate, initial weights, and iterations were examined; in addition, the Merit value was considered, as explained in Section 3.3. MATLAB's nntool, with 500 epochs and a learning rate of 0.1, was applied for data processing. The results show that the MARS score and the competitive classification are not necessarily similar (see Figure <ns0:ref type='figure' target='#fig_8'>8</ns0:ref> and Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>). This fact can be interpreted as MARS users understanding the questionnaire's items differently for each app, whereas the mean score assumes the same interpretation in all cases. The classified data are used to choose the relevant variables, with the greedy stepwise algorithm as the search method and J48, Weka's implementation of C4.5 (itself an extension of ID3), as the classifier. The combination was run as a wrapper method in Weka <ns0:ref type='bibr' target='#b66'>[52]</ns0:ref> <ns0:ref type='bibr' target='#b67'>[53]</ns0:ref>. The Merit value (M_S) of the feature selection <ns0:ref type='bibr' target='#b66'>[52]</ns0:ref> was 1.024. The algorithm selected the variables X2, X5, X6, X8, X11, and X15 as relevant for the study. Using the chosen variables, it is possible to recalculate the MARS quality score (see Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_9'>9</ns0:ref>). The new scores follow a tendency similar to the original ones. The results of the competitive classification can be useful for selecting apps of similar quality. Each app requires individual analysis; for example, the competitive classifications of Blindfold sudoku and Memora - classic (see Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>) are identical even though their original MARS scores differ. In another case, the competitive classification can differ although the original MARS scores are similar. The trained competitive model can also be used to classify a MARS evaluation of a new app and identify others of similar quality. Finally, the group of six selected variables proves the following:</ns0:p><ns0:p>1. The four MARS categories are maintained, but some subcategories are optional.</ns0:p><ns0:p>2. Interest and target group represent the engagement category.</ns0:p><ns0:p>3. Performance and navigation represent the functionality category.</ns0:p><ns0:p>4. Graphics represent the aesthetics category.</ns0:p><ns0:p>5. Quality of information represents the information category.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The use of the abductive process for theory generation is summarized in Figure <ns0:ref type='figure' target='#fig_11'>10</ns0:ref>. Initially, the result has an old hypothesis: app quality evaluation using the MARS tool facilitates the complete identification and interpretation of quality factors. In this phase, the MARS tool is used following the practices applied in relevant research and recommended by the tool's authors. After obtaining the MARS evaluation values, a disruption is identified in the use of the quality results <ns0:ref type='bibr' target='#b68'>[54]</ns0:ref>. Although the MARS tool was created using a systematic process, its application yields average values and a set of descriptive statistics that do not permit new explanations and interpretations of the apps' quality factors. As such, a new rule or model is necessary. The new hypothesis is the following: the results of app quality evaluation enable the selection of quality features using data mining techniques arranged in a new processing model. In this phase, the MARS tool results are processed using a new model to obtain the relevant quality factors. A case is defined and developed; that is, the new rule is applied to the original MARS evaluation results, generating further explanations and interpretations of the old results. The new findings carry useful evidence of their validity. As noted in the data collection, the data correspond to one evaluation per app, and the group belongs to a domain related to people with disabilities, running on similar technological platforms (mobile devices). These facts can be interpreted as the data capturing the generalized quality of apps in the specified domain for specific users. In parallel, the classification obtained from the competitive neural network makes it possible to identify the classification group and the apps of similar quality. The feature selection process identifies six relevant variables, which, according to Chandrashekar and Sahin <ns0:ref type='bibr' target='#b12'>[11]</ns0:ref>, assists with the interpretation of data, minimizes the effect of dimensionality, and increases predictor performance. Reducing dimensionality permits a better understanding of the quality factors. In this way, the components of a quality profile for apps for people with disabilities are settings, interactions, goals, and information. This profile can be used as a criterion for app quality improvement. In this research, reducing the computational requirement is not an objective, given the data size; however, the possibility of using the selected variables to construct a small questionnaire directed at final users and specialists is an essential output of the new explanation. The result constitutes an improvement in predictor performance, a significant outcome that would help react adequately to the dynamics of the current development context and the massive emergence of mobile apps.</ns0:p><ns0:p>In many data mining and machine learning applications where precise knowledge structures are acquired, the structural descriptions are as important as the ability to perform well on new examples <ns0:ref type='bibr' target='#b69'>[55]</ns0:ref>. Moreover, researchers regularly use data mining to extract knowledge, not only predictions <ns0:ref type='bibr' target='#b66'>[52]</ns0:ref>.
Both opinions support the idea of theory generation, in line with the assertions of Horváth <ns0:ref type='bibr' target='#b3'>[3]</ns0:ref> and Wacker <ns0:ref type='bibr' target='#b4'>[4]</ns0:ref> considered in the related work of this study. The contents of this study identify the collected data as MARS application results (what), describe (how) and explain (why) their significance based on a new quantitative model. Similarly, the study establishes the conditions for the new model (when and where). Therefore, according to Recker <ns0:ref type='bibr' target='#b70'>[56]</ns0:ref>, cited in the related work, the experience described in this study achieved the generation of an explanatory theory.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions and future work</ns0:head><ns0:p>The post-positivist philosophies of social science have identified the basic restrictions of the positivist behavioral approach to IS research and present new goals for the systematic development of scientific research practice. Therefore, further research is of the utmost importance <ns0:ref type='bibr' target='#b71'>[57]</ns0:ref>. The abductive approach has been used in IS domains, and the multiple options for using quantitative techniques are considered here and potentiate the results. The application is also related to app quality using data mining techniques and evidences a practical use case: an initial quantitative model is analyzed using other specialized quantitative models. The research community has generated several qualitative and quantitative models in varied domains, like MARS, so the general concepts of this work remain applicable: the process used constitutes an assessment of the proposed model's external consistency (rule); that is, according to Brown <ns0:ref type='bibr' target='#b72'>[58]</ns0:ref>, the outer reliability is enhanced/verified by inspecting statistical results regarding process replication. The main contribution of this work is decreasing the MARS from 18 to 6 items for evaluating apps; the selected attributes are the variables X2, X5, X6, X8, X11, and X15. This reduction in the number of variables reduces the time needed to evaluate the quality of an app, since fewer items are needed, without a decrease in the quality of the results. In the investigations mentioned, the evaluators were health specialists and the articles' authors; only in one previous study was the app evaluation carried out by final users, a group of cancer survivors. A research opportunity exists to expand the coverage of assessment, considering users with disabilities. In the present research, it is possible to stimulate suggestions for improvement and study the validity of generalizations, starting with machine learning for data mining. The experience results contribute to new quantitative possibilities, such as using other intelligent options and multivariate statistical techniques to identify factors of new domains, not necessarily including feature selection. Likewise, simulation models can be useful for experimenting with various scenarios and identifying transcendental quality factors. A pending research theme related to the experience presented here is the stakeholders' participation in the app evaluation process. Based on the preliminary evidence described, the results of this research are valuable. Additionally, comparative studies could be beneficial for final user and specialist involvement. With this research, academics can review a new experience using an alternative reasoning process to overcome IS research's positivism. For practitioners, the study contributes to the growth of current knowledge about app quality assessment related to people with disabilities.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2: Number of web and mobile apps published between 2000 and 2020.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: Scatter plot matrix for the first nine variables.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5: Wrapper method configuration. (Adapted from Bolón-Canedo et al. [42])</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6: The model to identify relevant quality factors.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7: Competitive neural network architecture used for data classification.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8: MARS evaluation values and Competitive Classification.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 9 :</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9: Old and new MARS scores of apps.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3: Joint assessment of four apps for people with disabilities.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 10 :</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10: Abductive reasoning process used in this research. The abductive reasoning process (left). The hypotheses and the application process of the new model (right).</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='19,42.52,178.87,525.00,233.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='20,42.52,178.87,525.00,306.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,178.87,525.00,294.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,178.87,525.00,196.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,178.87,525.00,213.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,178.87,525.00,217.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,178.87,525.00,321.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,178.87,525.00,321.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,199.12,525.00,372.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Search process.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Data regression matrix.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Summary of the cases classification.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Related investigations.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Reference</ns0:cell><ns0:cell>Summary</ns0:cell></ns0:row><ns0:row><ns0:cell>Masterson et al. [28]</ns0:cell><ns0:cell>Evaluated 34 apps with MARS related to heart failure symptom monitoring and self-care management. Reviewed by the article authors.</ns0:cell></ns0:row><ns0:row><ns0:cell>Mani et al. [29]</ns0:cell><ns0:cell>Assessed 23 iOS apps with MARS. The engagement category had the lowest score, highlighting the lack of attractiveness. Reviewed by the article authors.</ns0:cell></ns0:row><ns0:row><ns0:cell>Santo et al. [30]</ns0:cell><ns0:cell>A group of 272 medication reminder apps was classified. Only ten apps were evaluated with MARS. Reviewed by the article authors.</ns0:cell></ns0:row><ns0:row><ns0:cell>Tinschert et al. [31]</ns0:cell><ns0:cell>The study analyzed asthma apps with the potential to promote patients' self-management. Thirty-eight apps were evaluated. Reviewed by the article authors.</ns0:cell></ns0:row><ns0:row><ns0:cell>Sullivan et al. [32]</ns0:cell><ns0:cell>Describes features of 40 apps that collect personal data and dietary behavior. 20 travel apps and 20 dietary apps were assessed with MARS. Reviewed by the article authors.</ns0:cell></ns0:row><ns0:row><ns0:cell>Grainger et al. [33]</ns0:cell><ns0:cell>The study assessed features of apps that assist people to monitor rheumatoid arthritis disease activity. 11 Android and 16 iOS apps were evaluated through MARS. Two independent reviewers.</ns0:cell></ns0:row><ns0:row><ns0:cell>Wilson et al. [34]</ns0:cell><ns0:cell>The study established the quality and sharpness of 58 apps for drink driving prevention. Reviewers not specified.</ns0:cell></ns0:row><ns0:row><ns0:cell>Chavez et al. [35]</ns0:cell><ns0:cell>Using MARS, 89 apps were assessed for diabetes management to see if they have enough quality to complement clinical care. Reviewed by three people.</ns0:cell></ns0:row><ns0:row><ns0:cell>Yu et al. [36]</ns0:cell><ns0:cell>Twelve mHealth apps that give the user behavioral and cognitive skills to manage insomnia were evaluated with MARS. Reviewed by two authors of the article.</ns0:cell></ns0:row><ns0:row><ns0:cell>Reyes et al. [37]</ns0:cell><ns0:cell>Five iOS apps for self-managed balance rehabilitation for older adults were assessed with MARS. Reviewed by two authors of the article.</ns0:cell></ns0:row><ns0:row><ns0:cell>Kim et al. [38]</ns0:cell><ns0:cell>Characteristics of 23 potential drug-drug interaction apps were reviewed and evaluated with MARS. Reviewed by two testers per app.</ns0:cell></ns0:row><ns0:row><ns0:cell>Escoffery et al. [39]</ns0:cell><ns0:cell>Conducted a systematic review of apps related to epilepsy. Found and evaluated 20 apps with MARS focused on educating people about their condition. Reviewed by a research team.</ns0:cell></ns0:row><ns0:row><ns0:cell>Short et al. [40]</ns0:cell><ns0:cell>Ten people assessed 54 apps; the research contributes new insights about how to use mHealth apps to assist cancer survivors' physical exercise.</ns0:cell></ns0:row><ns0:row><ns0:cell>Tofighi et al. [41]</ns0:cell><ns0:cell>An interdisciplinary team of clinicians, behavioral informatics, and public health reviewers trained in substance use disorders conducted a descriptive analysis of 74 apps using MARS.</ns0:cell></ns0:row><ns0:row><ns0:cell>Davis and Ellis [42]</ns0:cell><ns0:cell>Participants were randomly assigned to interact with either the high behavior change technique app or the low behavior change technique app using an iPad. Participants then completed a MARS questionnaire.</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 2</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Data extracted from Apps evaluated.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Variables</ns0:cell><ns0:cell /><ns0:cell>Blindfold sudoku</ns0:cell><ns0:cell>Cross fingers</ns0:cell><ns0:cell>Dibuja el</ns0:cell><ns0:cell>abecedario</ns0:cell><ns0:cell>Diferencia</ns0:cell><ns0:cell>animales</ns0:cell><ns0:cell>Juego de</ns0:cell><ns0:cell>clasificación</ns0:cell><ns0:cell>Memora -classic</ns0:cell><ns0:cell>Memora 2</ns0:cell><ns0:cell>Oldschool blocks</ns0:cell><ns0:cell>Iwritemusic</ns0:cell><ns0:cell>Keezy classic</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>1. Entertainment</ns0:cell><ns0:cell>X1</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>2. Interest</ns0:cell><ns0:cell>X2</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>Engagement</ns0:cell><ns0:cell>3. Customization</ns0:cell><ns0:cell>X3</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell cols='2'>5</ns0:cell><ns0:cell cols='2'>5</ns0:cell><ns0:cell cols='2'>3</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>4. Interactivity</ns0:cell><ns0:cell>X4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell cols='2'>5</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>5. Target group</ns0:cell><ns0:cell>X5</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='2'>5</ns0:cell><ns0:cell cols='2'>5</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>6. Performance</ns0:cell><ns0:cell>X6</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell cols='2'>5</ns0:cell><ns0:cell cols='2'>5</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>Functionality</ns0:cell><ns0:cell>7. Ease of use 8. Navigation</ns0:cell><ns0:cell>X7 X8</ns0:cell><ns0:cell>4 5</ns0:cell><ns0:cell>5 5</ns0:cell><ns0:cell cols='2'>5 5</ns0:cell><ns0:cell cols='2'>5 5</ns0:cell><ns0:cell cols='2'>5 5</ns0:cell><ns0:cell>5 5</ns0:cell><ns0:cell>5 5</ns0:cell><ns0:cell>4 4</ns0:cell><ns0:cell>5 4</ns0:cell><ns0:cell>5 5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>9. Gestural design</ns0:cell><ns0:cell>X9</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='2'>5</ns0:cell><ns0:cell cols='2'>5</ns0:cell><ns0:cell cols='2'>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>10. 
Layout</ns0:cell><ns0:cell>X10</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell>Aesthetics</ns0:cell><ns0:cell>11. Graphics</ns0:cell><ns0:cell>X11</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='2'>5</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell cols='2'>3</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>12. Visual appeal</ns0:cell><ns0:cell>X12</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>13. Accuracy of app description</ns0:cell><ns0:cell>X13</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell cols='2'>5</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>14. Goals</ns0:cell><ns0:cell>X14</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell cols='2'>5</ns0:cell><ns0:cell cols='2'>5</ns0:cell><ns0:cell cols='2'>5</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>Information</ns0:cell><ns0:cell>15. Quality of information</ns0:cell><ns0:cell>X15</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>16. Quantity of information</ns0:cell><ns0:cell>X16</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell cols='2'>5</ns0:cell><ns0:cell cols='2'>5</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>17. Visual information X17</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell cols='2'>5</ns0:cell><ns0:cell cols='2'>5</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>18. 
Credibility</ns0:cell><ns0:cell>X18</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='2'>5</ns0:cell><ns0:cell cols='2'>5</ns0:cell><ns0:cell cols='2'>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Original MARS quality score</ns0:cell><ns0:cell /><ns0:cell cols='4'>4.0 4.7 4.7</ns0:cell><ns0:cell cols='2'>4.6</ns0:cell><ns0:cell cols='2'>4.2</ns0:cell><ns0:cell cols='4'>4.5 4.7 4.6 4.4 4.4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Competitive classification group</ns0:cell><ns0:cell /><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell cols='2'>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>New MARS quality score</ns0:cell><ns0:cell /><ns0:cell cols='4'>4.2 4.5 4.7</ns0:cell><ns0:cell cols='2'>4.5</ns0:cell><ns0:cell cols='2'>4</ns0:cell><ns0:cell cols='4'>4.8 4.5 4.5 4.2 4.7</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "
Departamento de Informática y Ciencias de la Computación
Escuela Politécnica Nacional
Quito, Pichincha, 17-01-2759, Ecuador
https://www.epn.edu.ec/
Tel.: +593-99-8981007
E-mail: andres.larco@epn.edu.ec
May 14th, 2021
We thank the reviewers for their valuable comments and have edited the manuscript to resolve their concerns. Additionally, thanks to the reviewers' useful comments, we have improved the manuscript's introduction and background and related work section.
We expect that the manuscript is now appropriate for publication in PeerJ Computer Science.
Prof. Andrés Larco
Associate Professor of Department of Informatics and Computer Science
On behalf of all authors.
Reviewer 1
Basic reporting
The authors have explained very well first 3 sections. There is need of addition in section 2 that how the proposed work is different with existing one.
Thank you very much for the comments. We added 8 current references in section two, Background and related work:
[15] A. Aliseda, The Logic of Abduction: An Introduction, in: Springer Handbook of Model-Based Science, 2017: p. 12. https://doi.org/10.1007/978-3-319-30526-4.
[16] J. Olsen, A. Gjerding, Modalities of Abduction: a Philosophy of Science-Based Investigation of Abduction, Hu Arenas. 2 (2019) 129–152. https://doi.org/10.1007/s42087-018-0044-4.
[17] D. Żelechowska, N. Żyluk, M. Urbański, Find Out A New Method to Study Abductive Reasoning in Empirical Research, International Journal of Qualitative Methods. 19 (2020) 160940692090967. https://doi.org/10.1177/1609406920909674.
[18] C. Rapanta, Teaching as Abductive Reasoning: The Role of Argumentation, IL. 38 (2018) 293–311. https://doi.org/10.22329/il.v38i2.4849.
[19] A. Mitchell, A Review of Mixed Methods, Pragmatism and Abduction Techniques, 16 (2018) 14.
[20] A. Fariha, A. Meliou, Example-Driven Query Intent Discovery: Abductive Reasoning using Semantic Similarity, ArXiv:1906.10322 [Cs]. (2019). http://arxiv.org/abs/1906.10322.
[21] A. Ganesan, P. Parameshwarappa, A. Peshave, Z. Chen, T. Oates, Extending Signature-based Intrusion Detection Systems With Bayesian Abductive Reasoning, ArXiv:1903.12101 [Cs]. (2019). http://arxiv.org/abs/1903.12101.
[22] C. Bhagavatula, R.L. Bras, C. Malaviya, K. Sakaguchi, A. Holtzman, H. Rashkin, D. Downey, S.W. Yih, Y. Choi, Abductive Commonsense Reasoning, ArXiv:1908.05739 [Cs]. (2020). http://arxiv.org/abs/1908.05739.
Experimental design
All the results are mentioned in an effective manner.
Thanks for the comment.
Validity of the findings
NIL
No Comments
Comments for the author
Acceptable
Thanks for the comment.
Reviewer 2
Basic reporting
This paper 'An experience selecting quality features of apps for people with disabilities using abductive approach to explanatory theory generation' is well organized and reasonable in structure. The comments are as follows:
Does the paper contribute to the body of knowledge?: Yes.
The paper proposes the first attempt to improve the MARS tool, aiming to provide specialists relevant data, reducing noise effects, accomplishing better predictive results to enhance their investigations.
this methodology allows the developers to use to analyze and develop software for the health informatics model and create a space in which software engineering and machine learning experts can work together on the machine learning model lifecycle.
Thanks for the comment.
Is the paper technically sound?: Yes. The paper is technically sound and is of very high quality. The various claims in the paper are quite well supported by the experiments and evaluate on a set of real data sets. the manuscript presents many FIGURE and many tables, this supports the researches idea and methodology. The results on real-world data sets are promising and motivate further investigations into the use of approximate inference in this context.
Thanks for the comment.
Is the subject matter presented in a comprehensive manner?: no. The presentation isn't comprehensive it needs to better organize and simplify the main objectives of the idea of the manuscript topic.
We greatly appreciate the comment. We have added the objective and principal contribution into the introduction:
“The objective of this work is to simplify the MARS tool to increase its performance without losing the quality of the evaluation. As far as we know, this study is the first attempt to improve the MARS tool. The principal contribution is to provide specialists relevant data, reducing noise effects, and accomplishing better predictive results to enhance their investigations”.
In addition, we have added a clarification in Conclusions:
“The main contribution of this work is to decrease from 18 to 6 items of the MARS to evaluate apps; those selected attribute the variables X2, X5, X6, X8, X11, X15 as relevant to the study. This reduction in the number of variables reduces the time needed to evaluate the quality of an app since fewer items are needed, but without a decrease in the quality of the results”.
Are the references provided applicable and sufficient?: Yes. Among all references, only poor references are in the past three years. In order to highlight the innovation of this work, it is better to cite other six up-to-date references to be applicable and sufficient enough to provide relevant materials about this novel approach software.
Thank you very much for the comments. We added 8 current references in section two, Background and related work.
Experimental design
This paper 'An experience selecting quality features of apps for people with disabilities using abductive approach to explanatory theory generation' is well organized and reasonable in structure. The comments are as follows:
Does the paper contribute to the body of knowledge?: Yes.
The paper proposes the first attempt
to improve the MARS tool, aiming to provide specialists relevant data, reducing noise
effects, accomplishing better predictive results to enhance their investigations.
this methodology allows the developers to use to analyze and develop software for the health informatics model and create a space in which software engineering and machine learning experts can work together on the machine learning model lifecycle.
Thanks for the comment.
Is the paper technically sound?: Yes. The paper is technically sound and is of very high quality. The various claims in the paper are quite well supported by the experiments and evaluate on a set of real data sets. the manuscript presents many FIGURE and many tables, this supports the researches idea and methodology. The results on real-world data sets are promising and motivate further investigations into the use of approximate inference in this context.
Thanks for the comment.
Is the subject matter presented in a comprehensive manner?: no. The presentation isn't comprehensive it needs to better organize and simplify the main objectives of the idea of the manuscript topic.
We greatly appreciate the comment. We have added the objective and principal contribution into the introduction. In addition, we have added a clarification in Conclusions.
Are the references provided applicable and sufficient?: Yes. Among all references, only poor references are in the past three years. In order to highlight the innovation of this work, it is better to cite other six up-to-date references to be applicable and sufficient enough to provide relevant materials about this novel approach software.
Thank you very much for the comments. We added 8 current references in section two, Background and related work.
Validity of the findings
The reviewer repeated under this heading the same comments and questions as under Experimental design; our responses above apply equally here.
Comments for the author
The reviewer's comments under this heading again repeated those given under Experimental design; please see our responses above.
Reviewer 3
Basic reporting
In this paper the authors present a study that determines the most relevant quality factors of apps for people with disabilities utilizing the abductive approach to the generation of an explanatory theory. The paper reads well and it is well organized. Very few recent references are included.
Thank you very much for the comments. We added 8 current references in section two, Background and related work:
References [15]–[22], as listed in full in our response above.
Experimental design
The contributions of the paper and the research questions should be clarified.
We greatly appreciate the comment. We have added the objective and principal contribution into the introduction:
“The objective of this work is to simplify the MARS tool to increase its performance without losing the quality of the evaluation. As far as we know, this study is the first attempt to improve the MARS tool. The principal contribution is to provide specialists relevant data, reducing noise effects, and accomplishing better predictive results to enhance their investigations”.
In addition, we have added a clarification in Conclusions:
“The main contribution of this work is to decrease from 18 to 6 items of the MARS to evaluate apps; those selected attribute the variables X2, X5, X6, X8, X11, X15 as relevant to the study. This reduction in the number of variables reduces the time needed to evaluate the quality of an app since fewer items are needed, but without a decrease in the quality of the results”.
Validity of the findings
The results are well presented and support the findings presented in the discussion and conclusions sections.
Thanks for the comment.
Comments for the author
Some aspects to improve:
- Introduction: clarify the contributions and research questions.
We greatly appreciate the comment. We have added the objective and principal contribution into the introduction. In addition, we have added a clarification in Conclusions.
- Related Work: Analyze the references more deeply and include more references of the latest 5 years.
Thank you very much for the comments. We added 8 current references in section two, Background and related work.
" | Here is a paper. Please give your review comments after reading it. |
151 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. Virtual reality is utilised in exergames to help patients with disabilities improve the movement of their limbs. Exergame settings, such as the game difficulty, play important roles in the rehabilitation outcome. Accordingly, suboptimal exergame settings may adversely affect the accuracy of the results obtained, so that the improvement in patients' movement performances falls below the desired expectations. In this paper, a recommender system is incorporated to suggest the most preferred movement setting for each patient, based on the movement history of the patient.</ns0:p><ns0:p>Method. The proposed recommender system (ReComS) suggests the most suitable settings necessary to optimally improve patients' rehabilitation performances. In the course of developing the recommender system, three methods are proposed and compared: ReComS (K-nearest neighbours and collaborative filtering algorithms), ReComS+ (k-means, K-nearest neighbours, and collaborative filtering algorithms) and ReComS++ (bacterial foraging optimisation, k-means, K-nearest neighbours, and collaborative filtering algorithms). The experimental datasets are collected using the Medical Interactive Recovery Assistant (MIRA) software platform. Result. Experimental results, validated by the patients' exergame performances, reveal that the ReComS++ approach predicts the best exergame settings for patients with 85.76% prediction accuracy.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Human disabilities can develop through cerebral palsy (CP), stroke, spinal cord injury (SCI), traumatic brain injury (TBI), and humerus fracture <ns0:ref type='bibr'>[1]</ns0:ref>. Rehabilitating these patients can be achieved using traditional and modern treatments. Virtual reality <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>, robotics <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref> [4], simulation, and exergames <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref> are commonly used in modern treatments. Exergames are quite promising as they furnish patients with new experiences while performing their daily rehabilitation exercises <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref>. Rehabilitation is traditionally based on the assessment requirements drawn through physiotherapy. Rehabilitation therapy and assessment are provided by rehabilitation centres, where patients train their disabled limbs through a series of pre-determined exercises. Such procedures help to improve limb movement and functionality. One of the recently developed systems for training the lower limbs is the robotic system. Currently, there are five types: foot-plate-based gait trainers, treadmill gait trainers, overground gait trainers, active foot orthoses, and stationary gait and ankle trainers <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>. These systems include passive robotic devices, which assist the patient in training the idle limb(s) of the lower body. Virtual reality therapy (VRT) systems use virtual reality as an assistive tool for the rehabilitation process. Exergames are VRT systems used by patients who suffer from movement disability in their idle limbs. Through several training procedures, exergames help patients to improve the physical movement of their muscles <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref> so that they can gradually move their idle parts. For example, one VRT system employs three exergames (the bike, the pedal boat, and the swimmer) and comprises a strengthening machine, a Kinect device, a large screen, and a computer <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref>. As for the spine, VRT is a non-invasive alternative with minimal negative or harmful effects <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref>. VRT systems for the lower limbs and spine, however, remain insufficient, as existing systems cater mainly to the upper limbs. Exergame therapy forms part of the rehabilitation approaches offered to patients in rehabilitation centres. Exergames refer to video games <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref> that encourage patients to continue their exercise without feeling bored. The therapy consists of an iteration of exercises that focus mainly on strengthening a part of the patient's body, such as the knee. For effective results, it is essential that the patient performs the right movement following the rules of each exergame; otherwise, the benefits may not be pronounced and the desired results will be less noticeable <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref>. Different devices provide the virtual environment based on the requirements of the VRT application.
These include a large monitor, virtual interface <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref>, Microsoft Kinect, Xbox <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref>, strengthening machine <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref>, a customised metal rig that holds a standard wheelchair <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref>, and robotic devices <ns0:ref type='bibr' target='#b14'>[14]</ns0:ref>.</ns0:p><ns0:p>The Medical Interactive Recovery Assistant (MIRA) platform is a new VRT application that presents a wide variety of games and movements for various rehabilitation needs. It consists of three parts: adapted movement-based interactive video games, the Kinect, and the leap motion sensors <ns0:ref type='bibr' target='#b15'>[15]</ns0:ref>. The Kinect sensor tracks motion and provides different interactions between the patient and the different types of exergames <ns0:ref type='bibr' target='#b16'>[16]</ns0:ref>. The leap motion sensor tracks hand movement in combination with flexion gauges placed in a glove <ns0:ref type='bibr' target='#b17'>[17]</ns0:ref>. In other words, the exergames in MIRA are created specifically to aid physical rehabilitation therapies and assessments. An example is a study of the movement performance of seven-year-old children suffering from brachial plexus palsy caused by transverse myelitis <ns0:ref type='bibr' target='#b18'>[18]</ns0:ref>. In that study, the children's movement performance was improved by MIRA exergame training. Another case study demonstrates that MIRA exergames have positive effects and can be safely implemented for adult patients <ns0:ref type='bibr' target='#b16'>[16]</ns0:ref>. However, in both case studies (i.e., <ns0:ref type='bibr' target='#b18'>[18]</ns0:ref> and <ns0:ref type='bibr' target='#b16'>[16]</ns0:ref>), the physiotherapist used default exergame settings and no automation was considered.</ns0:p><ns0:p>In the above scenarios, the most suitable exergame settings are required for each patient's disability type. Due to the low engagement between physiotherapists and patients, physiotherapists use default settings, which invariably lowers the accuracy in playing the exergames and reduces the patients' performances. To overcome this problem, a recommender system (RS) is needed in the MIRA platform. However, to the best of our knowledge, this problem has not been addressed using an RS in the literature. In addition, the decision tree model applied to predict a patient's future rehabilitation performance is based on time, average acceleration, distance, moving time, and average speed. This prediction method uses the default exergame settings and the previous performances of patients who played the same exergame with the same side <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref>. Nevertheless, the prediction would be more accurate if the exergame settings were controlled automatically. Therefore, an RS is utilised in this paper to suggest appropriate settings for patients who use MIRA to improve their impaired movements.</ns0:p><ns0:p>RS is a subcategory of an information filtering system that aims to forecast or project the 'rating' or 'preference' of a person <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref>. Over the last ten years, RSs have been explored and used in various applications that include e-health, e-learning, e-commerce, and knowledge management systems <ns0:ref type='bibr' target='#b21'>[21]</ns0:ref> <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref>.
In the same manner, we deploy the RS in this research. Traditionally, an exergame records a patient's information and performance during a session. The recorded data is used to monitor the progress of the patient <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref>. Likewise, the system analyses patients' movements during the exercise and generates statistical data. Nonetheless, several challenges have been identified while creating tailored exergame schedules for patients. In this respect, this paper explores the use of ReComS as an interface for each exergame, using the patient's movement history as a benchmark. To address the problem related to the input setting, the ReComS approach applies k-means, K-nearest neighbours (K-NN), collaborative filtering (CF), and bacterial foraging optimisation algorithm (BFOA) to accurately predict input variables through an item settings dialogue box in the MIRA platform itself. From the above discussion, the primary contributions of this research include the following:</ns0:p><ns0:p>1. A proposed ReComS approach that suggests the most appropriate setting for enhancing rehabilitation patients' performances.</ns0:p><ns0:p>2. A novel deployment of RS in the MIRA platform to accurately suggest the most suitable settings needed to improve the limb movement of patients.</ns0:p><ns0:p>3. An enhancement of input variables' prediction accuracy using three newly-developed comparison methods named ReComS (K-NN and CF algorithms), ReComS+ (k-means, K-NN and CF algorithms) and ReComS++ (BFOA, k-means, K-NN, and CF algorithms).</ns0:p><ns0:p>The remainder of this paper is structured as follows: Section 2 describes the MIRA platform. The concepts used in this paper are defined in Section 3. In Section 4, the proposed methods are provided. Sections 5 and 6 present the empirical results and conclusion, respectively.</ns0:p></ns0:div>
<ns0:div><ns0:head>The MIRA Platform</ns0:head><ns0:p>The MIRA platform is an effective system that allows patients to play their way towards recovery <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref>. MIRA is a non-immersive type of VRT application developed to make physiotherapy entertaining and enjoyable for patients. The platform transforms prevailing physical therapy exercises into clinically-designed video games. Aside from improving patients' interest in exercising, an external sensor monitors and evaluates their adherence. MIRA contains a broad range of games and exercises for the upper limbs, lower limbs, and spine. Fig. <ns0:ref type='figure'>1</ns0:ref> illustrates three instances of games in the MIRA platform. First, the patient plays the Catch game with hip abduction movement (Fig. <ns0:ref type='figure'>1(a)</ns0:ref>). In the second image, the patient plays the Airplane game with elbow flexion in abduction movement (Fig. <ns0:ref type='figure'>1(b)</ns0:ref>). In the third example, the patient plays the Flight control game with general shoulder movement (Fig. <ns0:ref type='figure'>1(c)</ns0:ref>). All MIRA games are played following the rules of each game and movement.</ns0:p><ns0:p>Kinects and screens are connected to computers (as shown in Fig. <ns0:ref type='figure'>1</ns0:ref>) to provide the virtual environment. The physiotherapist utilises these devices and the MIRA platform to create a session for patients based on their ability and rehabilitation needs. The physiotherapist combines exercises and games with the specific difficulty setting, movement tolerance, and range of movement <ns0:ref type='bibr' target='#b23'>[23]</ns0:ref>, based on observations from the previous data of each patient. Through these settings, this research examines the input and output attributes of the scheduled games in order to auto-determine the input variables for each exergame. Traditionally, the physiotherapist tracks the exergame history of each patient (which reflects the movement threshold of the idle limb) and then suggests the values of the input settings in the dialogue box for future exergaming. This method of observation is costly and time-consuming because the physiotherapist needs to prepare a manual list of variables to track the patient's history. Thus, most physiotherapists use the default setting for all patients, which hampers the patients' performances, especially when they play using their idle limbs. In view of the aforementioned, this research proposes the ReComS approaches for learning the 'best' setting for each patient. It deploys the k-means, K-NN, and BFOA algorithms, in addition to the CF technique. The main reasons for using MIRA data include:</ns0:p><ns0:p>• MIRA data contains several exergame features that can be performed by moving the upper or lower limbs.</ns0:p><ns0:p>• Patients' data are normalised and arranged in a matrix of features. However, this matrix includes a high percentage of zeros (i.e., unknown values) because the exergame output variables differ due to patients' diverse movement skills.</ns0:p><ns0:p>• Some exergames generate a small number of records because only a few patients are interested in playing them. Our approach is discussed further in the following subsections.</ns0:p></ns0:div>
<ns0:div><ns0:head>Definition of Concepts a. Collaborative Filtering (CF)</ns0:head><ns0:p>CF, a popular technique in RSs that sends personalised recommendations to users based on their behaviour, has become widely used due to its effectiveness in recommending preferred items. The CF technique utilises product ratings provided by a collection of customers and recommends products that the target customer has not yet considered but will likely enjoy <ns0:ref type='bibr' target='#b24'>[24]</ns0:ref>.</ns0:p><ns0:p>The rating score (with a value between 1 and 5) is used to indicate whether a person likes a product or otherwise. These values are arranged in a matrix as rows. Thereafter, the similarity values between the target customer and other customers in the matrix are calculated to predict the customer's interest in the products <ns0:ref type='bibr' target='#b25'>[25]</ns0:ref>. In this work, the patients represent users and the generated output features of the MIRA platform represent items. The MIRA platform offers a vast range of features for several games and movements managed in a single matrix, whereas the preferences of customers are managed in the rating matrix for estimation via the CF technique. This matrix of features necessitates division using the k-means algorithm to minimise the prediction errors. Moreover, Algorithm 1 summarises the procedure involved in the CF technique.</ns0:p></ns0:div>
<ns0:div><ns0:head>Algorithm 1. Pseudocode of the CF technique</ns0:head></ns0:div>
<ns0:div><ns0:head>Input: Matrix features (patients, features) Output: A set of prediction scores for features Steps:</ns0:head><ns0:p>• Choose the target patient from the matrix features.</ns0:p><ns0:p>• Assign the similarity between the target patient and other patients utilising a similarity distance measure.</ns0:p><ns0:p>• Assign the prediction value for each feature using the prediction approach.</ns0:p><ns0:p>• Measure the accuracy of the prediction performance using an error function, such as the Root Mean Squared Error (RMSE) <ns0:ref type='bibr' target='#b26'>[26]</ns0:ref>.</ns0:p></ns0:div>
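To make Algorithm 1 concrete, the following minimal Python sketch runs its three steps (similarity, prediction, accuracy) over rows of a NumPy feature matrix. The function and variable names are illustrative assumptions, and the inverse-distance weighting shown here is one common choice rather than the exact prediction function of this work, which is given later in Eq. (8).

```python
import numpy as np

def cf_predict(features, target_idx, k=25):
    """Sketch of Algorithm 1: similarity, prediction, and accuracy steps."""
    target = features[target_idx]
    others = np.delete(features, target_idx, axis=0)

    # Similarity: squared Euclidean distance to every other patient.
    dist = ((others - target) ** 2).sum(axis=1)
    order = np.argsort(dist)
    nearest, near_dist = others[order[:k]], dist[order[:k]]
    weights = 1.0 / (1.0 + near_dist)          # closer patients weigh more (assumed scheme)

    # Prediction: mean-offset weighted average over the neighbours.
    offsets = nearest - nearest.mean(axis=1, keepdims=True)
    pred = target.mean() + (weights[:, None] * offsets).sum(axis=0) / weights.sum()

    # Accuracy: RMSE between the actual and predicted feature vectors.
    rmse = np.sqrt(((target - pred) ** 2).mean())
    return pred, rmse
```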
<ns0:div><ns0:head>b. k-means Algorithm</ns0:head><ns0:p>The k-means is a clustering algorithm that is often used in iterative optimisation, given its efficiency <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref>. Algorithm 2 displays the steps of k-means; its proximity measures include city block, hamming, cosine, correlation coefficient, and squared Euclidean Distance (ED). The ED between two points is the length of the straight line that connects them. Within the Euclidean plane, the distance between points $x = (x_1, x_2)$ and $y = (y_1, y_2)$ can be calculated using Eq. (1) <ns0:ref type='bibr' target='#b17'>[17]</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_0'>ED(x, y) = \sqrt{(x_1 - y_1)^2 + (x_2 - y_2)^2} \quad (1)</ns0:formula></ns0:div>
<ns0:div><ns0:p>a. Assign each record to the cluster with the nearest centroid.</ns0:p><ns0:p>b. For each cluster, calculate the new mean.</ns0:p></ns0:div>
<ns0:div><ns0:head>c. Calculate the junction among clusters to keep the k number of clusters.</ns0:head><ns0:p>Until convergence criteria are met.</ns0:p></ns0:div>
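A compact NumPy sketch of the Lloyd-style loop summarised in Algorithm 2 is given below, using the squared Euclidean distance of Eq. (1). The random initialisation and the convergence test are illustrative assumptions, not prescriptions of the MIRA implementation.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Cluster rows of X into k groups (Lloyd's algorithm, Euclidean distance)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]  # random init (assumed)
    for _ in range(n_iter):
        # Assignment step: nearest centroid by squared Euclidean distance.
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Update step: each centroid becomes the mean of its cluster.
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):      # convergence criteria met
            break
        centroids = new
    return labels, centroids
```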
<ns0:div><ns0:head>c. K-Nearest Neighbours Algorithm</ns0:head><ns0:p>The K-NN is a simple classification method used to analyse a large matrix of features or to provide recommendations <ns0:ref type='bibr' target='#b27'>[27]</ns0:ref>. When new data are required to be categorised, the K-NN algorithm computes the distance in values between the target record and other records. These records are ordered based on distance <ns0:ref type='bibr' target='#b28'>[28]</ns0:ref>. At the final stage, the first k records are chosen from the ordered list, i.e., the K nearest neighbours. The pseudocode in Algorithm 3 describes the stages of applying K-NN.</ns0:p></ns0:div>
<ns0:div><ns0:head>Algorithm 3. Pseudocode of K-nearest neighbours algorithm. Input:</ns0:head></ns0:div>
<ns0:div><ns0:head>Matrix Features (Patients, Features) Output:</ns0:head><ns0:p>A set of neighbours most like the target patient.</ns0:p></ns0:div>
<ns0:div><ns0:head>Steps:</ns0:head><ns0:p>1-Choose the target patient from the matrix features.</ns0:p><ns0:p>2-Assign the similarity values between the target patient and other patients by a similarity distance measure, as shown in Eq. (1). 3-Sort the patients from the lowest distance value to the highest distance value (from the highest similarity to the lowest similarity). 4-Choose k patients from the top of the sorted list.</ns0:p></ns0:div>
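The four steps of Algorithm 3 can be expressed directly in a few lines of Python; the sketch below assumes rows of a NumPy feature matrix and the squared Euclidean distance, with all names being illustrative.

```python
import numpy as np

def k_nearest_neighbours(features, target_idx, k):
    """Algorithm 3: indices of the k patients most similar to the target."""
    target = features[target_idx]                                  # step 1
    dist = ((features - target) ** 2).sum(axis=1).astype(float)    # step 2 (Eq. 1)
    dist[target_idx] = np.inf            # exclude the target record itself
    order = np.argsort(dist)             # step 3: lowest distance first
    return order[:k]                     # step 4: the k nearest neighbours
```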
<ns0:div><ns0:head>d. Bacterial Foraging Optimisation Algorithm</ns0:head><ns0:p>Optimisation algorithms have proven effective in several areas including RS [29] and healthcare <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref>. For instance, BFOA has been well-embraced in recent RS approaches for providing high-accuracy prediction [29] <ns0:ref type='bibr' target='#b26'>[26]</ns0:ref>. This motivates us to use BFOA in this experiment for learning patients' latent features and for optimising the output prediction. The BFOA is an evolutionary computational algorithm for global optimisation. It is used to classify and learn better convergence <ns0:ref type='bibr' target='#b31'>[30]</ns0:ref>. The BFOA mimics the foraging behaviour of E. coli bacteria in the human intestine and reuses it for global optimisation to produce effective solutions for wide-ranging problems <ns0:ref type='bibr' target='#b32'>[31]</ns0:ref>. Similarly, BFOA is utilised here to learn patients' latent features, which are classified as nearest neighbours within the features of the best cluster while other clusters are neglected. There are three main phases during the swarming evolution of bacteria: chemotaxis, reproduction, and elimination-dispersal. These phases are described as follows:</ns0:p><ns0:p>a. Chemotaxis: During this stage, each bacterium locates rich nutrients and avoids noxious substances. Patients' accurate features represent rich nutrients that can be tracked by learning the lowest error value throughout the learning iteration. The chemotaxis stage includes three processes: swimming, tumbling, and swarming. The bacterium swims for a certain period and tumbles while using its flagella to change its swimming direction <ns0:ref type='bibr' target='#b33'>[32]</ns0:ref>. The direction of movement after a tumble is given in Eq. (2).</ns0:p><ns0:formula xml:id='formula_1'>\beta^{i}(j+1,k,l) = \beta^{i}(j,k,l) + C_{i}\frac{\Delta_{i}}{\sqrt{\Delta_{i}^{T}\Delta_{i}}} \quad (2)</ns0:formula><ns0:p>where \beta^{i} represents the members of bacterium i (i.e., the latent features of patients), C_{i} is the step size taken in the direction of the tumble, j denotes the index of the chemotactic step, k refers to the index of the reproduction step, and l reflects the index of the elimination-dispersal event. Besides, \Delta_{i}/\sqrt{\Delta_{i}^{T}\Delta_{i}} is the random unit-length direction taken during the swimming phase. In the swarming mechanism, the latent features of a patient release attractant or repellant signals regarding other patients' latent features, as portrayed in Eq. (3).</ns0:p><ns0:formula xml:id='formula_2'>J_{cc}(\beta,\beta^{i}(j,k,l)) = \sum_{i=1}^{S}\left[-d_{attract}\exp\left(-w_{attract}\sum_{m=1}^{P}(\beta_{m}-\beta_{m}^{i})^{2}\right)\right] + \sum_{i=1}^{S}\left[h_{repellant}\exp\left(-w_{repellant}\sum_{m=1}^{P}(\beta_{m}-\beta_{m}^{i})^{2}\right)\right] \quad (3)</ns0:formula><ns0:p>where d_{attract} is the depth of the attractant (the magnitude of attractant secretion by a cell), w_{attract} is the width of the attractant signal, h_{repellant} is the height of the repellant effect, w_{repellant} is the width of the repellant signal, S is the number of bacteria, and P is the number of dimensions.</ns0:p><ns0:p>b. Reproduction: This phase deals with the feedback (RMSE) value, which acts as the fitness value. These values are obtained after training the target patients' features extracted through the current training stage using the k-means, K-NN, BFOA and CF methods. The RMSE values are saved in an array and then sorted (from smaller to larger values).</ns0:p>
<ns0:p>The lower half of the latent features, having larger (worse) fitness values, dies, while the remaining healthier half of the population splits into two identical parts of equal values. This phase keeps the bacteria population constant. Eq. (4) shows the health values for patients' latent features.</ns0:p><ns0:formula xml:id='formula_3'>J_{health}^{i} = \sum_{j=1}^{N_{c}+1} J(i,j,k,l) \quad (4)</ns0:formula><ns0:p>where i indexes the patients' latent features, j indexes the chemotactic steps, N_{c} is the number of chemotactic steps, k is the reproduction step, and l is the elimination-dispersal step.</ns0:p><ns0:p>c. Elimination-Dispersal: This phase provides the position-shifting probability for a limited number of the patients' latent features. Random vectors are produced and arranged in ascending order.</ns0:p><ns0:p>In this study, the ReComS, ReComS+ and ReComS++ approaches are proposed to recommend preferences for the MIRA platform. These preferences are used to determine the input settings of the exergames by learning the precise behaviours of patients. ReComS integrates the K-NN and CF methods to classify the predicted values by reducing the error value. The error value is obtained using the predicted and actual values of the previous session of the exergame. The ReComS+ approach improves the prediction performance of ReComS by integrating the k-means algorithm with K-NN and CF, which reduces the error value. However, this error value remains relatively high, which lowers the prediction performance of ReComS and ReComS+. ReComS++ further reduces the error value by optimising the predicted values and integrating the k-means, K-NN, CF, and BFOA algorithms.</ns0:p></ns0:div>
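For readers unfamiliar with BFOA, the following schematic Python sketch shows how the three phases above fit together when minimising a generic fitness function (here, an RMSE-like cost). All parameter values are illustrative placeholders and are not the factors of Table 3.

```python
import numpy as np

def bfoa_minimise(fitness, dim, S=20, Nc=30, Ns=4, Nre=4, Ned=2,
                  C=0.05, p_ed=0.25, seed=0):
    """Schematic BFOA: chemotaxis (tumble/swim), reproduction, elimination-dispersal."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1, 1, (S, dim))                 # bacteria positions
    for _ in range(Ned):                               # elimination-dispersal events
        for _ in range(Nre):                           # reproduction steps
            health = np.zeros(S)
            for _ in range(Nc):                        # chemotactic steps
                for i in range(S):
                    cost = fitness(pop[i])
                    delta = rng.uniform(-1, 1, dim)             # tumble
                    step = C * delta / np.sqrt(delta @ delta)   # unit direction, Eq. (2)
                    for _ in range(Ns):                         # swim while improving
                        cand = pop[i] + step
                        cand_cost = fitness(cand)
                        if cand_cost < cost:
                            pop[i], cost = cand, cand_cost
                        else:
                            break
                    health[i] += cost                  # accumulate health as in Eq. (4)
            # Reproduction: healthier half (lower cost) splits, the worse half dies.
            order = np.argsort(health)
            pop = np.concatenate([pop[order[:S // 2]]] * 2)
        # Elimination-dispersal: randomly relocate a fraction of the bacteria.
        relocate = rng.random(S) < p_ed
        pop[relocate] = rng.uniform(-1, 1, (int(relocate.sum()), dim))
    return min(pop, key=fitness)
```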
<ns0:div><ns0:head>a. Dataset</ns0:head><ns0:p>This study was carried out in the rehabilitation centre of Melaka, Malaysia, to analyse the data generated using the MIRA platform, with ethics approval no. PRPTAR.600-5 <ns0:ref type='bibr' target='#b27'>(27)</ns0:ref>. The MIRA platform patient data file in this study contains patients' personal information such as first and last names, patient ID, and birth date. It also entails information related to the games played, such as the session ID, name of the game, movement ID, movement name, and associated dates <ns0:ref type='bibr' target='#b23'>[23]</ns0:ref>.</ns0:p><ns0:p>Each selected game and movement acts as one exergame with its unique input variables in the item settings dialogue. The settings include the side used (left or right), duration, difficulty, tolerance, and minimum and maximum ranges. The values of these variables can be fixed based on the default values or adjusted by the physiotherapist after evaluating the performance of the patient. The MIRA platform can generate 26 variables based on the exergame or cognigame (a game that trains the cognitive function). Table <ns0:ref type='table'>1</ns0:ref> describes the most significant variables generated by the exergames.</ns0:p><ns0:p>The experimental data contained 3553 records generated by 61 patients with different types of diagnoses: 41 patients had a stroke, 14 patients had TBI, seven patients had SCI, one patient had CP, and two patients had humerus fractures. Fig. <ns0:ref type='figure'>2</ns0:ref> portrays an example of the MIRA setting for the animals' exergame with the elbow flexion movement. The item settings include six variables that can be manipulated by the physiotherapist or player. Table <ns0:ref type='table'>2</ns0:ref> presents a description of the exergame features generated by patients using the MIRA platform. In each session, a patient plays an exergame by moving his/her limbs according to the rules of the game and movement exercise. During the exercise, the physiotherapist predicts the variables of the input setting, such as difficulty, tolerance, minimum range, and maximum range, according to his/her observation, or adopts the default values, but a number of patients experienced difficulty playing the games. Afterwards, the physiotherapist deduced the accurate settings from the previous performance of patients in each exergame. Compared to using the default settings, a more accurate setting ensures patients play better. This indicates the significance of this research to the MIRA platform.</ns0:p><ns0:p>In this study, the ReComS approach is proposed to predict the variables of the input setting according to the data history of the patient. As ReComS is expected to provide low prediction accuracy, it is integrated with a clustering method (as used in similar experimental works [29] <ns0:ref type='bibr' target='#b24'>[24]</ns0:ref>) and referred to as the ReComS+ approach. ReComS+ provides good prediction accuracy. ReComS++ is developed by further integrating ReComS+ with the BFOA algorithm to learn the latent features of the patients and to lower the RMSE value throughout the learning iteration process. The experimental results of ReComS and ReComS+ are utilised to benchmark the prediction performance of the ReComS++ approach.</ns0:p></ns0:div>
<ns0:div><ns0:head>b. ReComS Approach</ns0:head><ns0:p>Most personal recommendation systems use the CF and the K-NN for providing personal recommendations. Here, the CF technique provides the target patient with personal recommendations according to the common behaviours of other patients. The K-NN method is used to obtain the nearest neighbours of each target patient based on their similarities <ns0:ref type='bibr' target='#b34'>[33]</ns0:ref>. Thus, ReComS integrates the K-NN algorithm with the CF technique for learning the personal behaviour of patients and predicting the input setting variables. The proposed ReComS approach assists the physiotherapist in collecting accurate data from patients who need to play exergames using the MIRA platform.</ns0:p><ns0:p>The framework of ReComS is arranged following the steps in Fig. <ns0:ref type='figure'>3</ns0:ref>. The ReComS approach assigns the target patient and manages all patients' features in the features matrix. K-NN is applied to classify the k nearest neighbours based on the similarities between the target patient and other patients. The RMSE function calculates the ReComS prediction accuracy based on the distance between the features of the target patient and the prediction values obtained using this approach.</ns0:p></ns0:div>
<ns0:div><ns0:head>d. ReComS+ Approach</ns0:head><ns0:p>ReComS+ is proposed to improve the prediction accuracy of ReComS; it should yield higher accuracy even though the RMSE of ReComS remains high. The k-means, K-NN, and CF methods are integrated into ReComS+ to provide the predicted variables as input settings in the MIRA dialogue box of each exergame. The k-means algorithm clusters the records of patients into k clusters. The cluster that contains the record of the target patient is selected to obtain the matrix of neighbours, which is integrated with the CF to provide the preference values, as shown in Fig. <ns0:ref type='figure'>4</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>e. ReComS++ Approach</ns0:head><ns0:p>Despite the high accuracy of the ReComS+ prediction, its error value is relatively high. This error value can be reduced by deploying learning methods. Thus, BFOA is integrated with ReComS+ for learning the behaviours of neighbours by lowering the error value during the iteration stages. The framework of ReComS++ encapsulates four stages that are needed to provide the preferences for patients' input settings, as shown in Fig. <ns0:ref type='figure' target='#fig_2'>5</ns0:ref>. These stages are described as follows:</ns0:p><ns0:p>i. Data preparation</ns0:p><ns0:p>• Reading data records and arranging variables in a matrix.</ns0:p><ns0:p>• Encoding the textual data (such as gender, diagnosis, game name, difficulty and side) using numbers.</ns0:p><ns0:p>• Normalising the generated variables of each exergame based on the duration by applying Eq. (5).</ns0:p><ns0:formula xml:id='formula_4'>F = F \cdot T_{d}/T_{F} \quad (5)</ns0:formula><ns0:p>where F is a feature value, T_{d} is the default duration (60 seconds) in this process, and T_{F} is the duration of the exergame. Eq. (5) is implemented for the variables whose values increase with duration. These include Time, Moving Time, Moving Time in the Exercise, Still or Idle Time, Distance, Points, and Repetition. Other variables do not increase with duration since they are considered as either average values or percentage values between 0 and 100.</ns0:p><ns0:p>• Normalising the variables into the range between 0 and 1. This is based on the dimension of each variable, feature rescaling, and the need to provide properly compatible values for machine learning algorithms. The normalisation is performed based on the standard data mining requirement to provide accurate variable approximation and prediction using Eq. (6) (a code sketch of Eqs. (5) and (6) is given after this list).</ns0:p><ns0:formula xml:id='formula_5'>F_{ij} = \frac{F_{ij} - X}{Y - X}(\partial - \varnothing) + \varnothing \quad (6)</ns0:formula><ns0:p>where F_{ij} is the value of record i and variable j, X is the least value and Y is the highest value in the whole matrix of variables, \partial is the highest target value (1), and \varnothing is the least target value (0).</ns0:p><ns0:p>• Assigning the target patient, target movement, and side, to find the latest record of the patient where he/she played the selected movement exergame on the target side. The variables of this record are arranged in the first row of the matrix of features.</ns0:p><ns0:p>• Selecting all records from the data containing the target movement of the target patient and putting the variables of these records in the matrix of features.</ns0:p><ns0:p>• Dividing the matrix of features into two parts: the first part has 70% of records for training, and the second part has 30% for evaluation.</ns0:p></ns0:div>
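A small Python sketch of the two normalisation steps in Eqs. (5) and (6) is shown below; the function names are illustrative assumptions.

```python
import numpy as np

def normalise_duration(value, duration, default=60.0):
    """Eq. (5): rescale a duration-dependent feature to the default 60 s."""
    return value * default / duration

def normalise_matrix(F, lo=0.0, hi=1.0):
    """Eq. (6): min-max rescale the whole feature matrix into [lo, hi]."""
    X, Y = F.min(), F.max()          # least and highest values in the matrix
    return (F - X) / (Y - X) * (hi - lo) + lo
```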
<ns0:div><ns0:head>ii. Clustering by k-means algorithm</ns0:head><ns0:p>The MIRA data consists of 30 games and 34 movements that provide 1020 kinds of features, which makes them challenging to analyse. The features are grouped into a single matrix. Hence, the k-means clustering algorithm is used to simplify the various types of features generated by playing the games and movements in the MIRA application. These features are divided into a set of clusters. The challenge of this experimental work is determining the accurate number of clusters. To address this issue, the MIRA data are tested with a set of k clusters ranging between 5 and 10. Based on the data collected, we assign the range from 5 to 10 clusters as the sufficient range of clusters. This is intended to avoid the k-means clustering problem (k problem) when using more than 10 clusters, because of the high number of zeros in each matrix of features. After that, the prediction performed by ReComS is assessed based on this set of clusters.</ns0:p></ns0:div>
<ns0:div><ns0:head>iii. Classification by K-NN algorithm</ns0:head><ns0:p>The K-NN is applied to retrieve records similar to the target record in the matrix. The challenge in this process is determining the accurate number of neighbours. A small number of neighbours yields an accurate prediction performance while a vast number of neighbours exhibits the lowest performance. However, a small number of neighbours is not sufficient to learn the accurate features of patients; a larger number is required to accurately learn the patients' features. Addressing this problem, BFOA is integrated with ReComS+ to improve the prediction accuracy. In this experimental work, the K-NN algorithm applies the squared ED, as given in Eq. (7).</ns0:p><ns0:formula xml:id='formula_6'>D(x,y) = \sum_{i=1}^{n}(x_{i} - y_{i})^{2} \quad (7)</ns0:formula><ns0:p>where D refers to the distance value between records x and y, while n denotes the number of features. Eq. (7) calculates the distance between the target record and the total records of the target cluster. The target record is hinged on the target cluster mainly because the distance between the target record and the centroid point of this cluster is smaller when compared to other clusters. After calculating the distance between the total records of the target cluster and the target record, the values are sorted in ascending order to derive closely related neighbours to the target record. Records having small distance values have the highest similarity to the target record, thus being the nearest neighbours. In this experimental work, the k of neighbours is determined based on a set of k, i.e., 25 neighbours, 50 neighbours, 75 neighbours and 100 neighbours. These four numbers of neighbours have been chosen according to the available features of each exergame within each cluster. These features constitute an effective solution for executing the required training processes for three reasons. First, most patients like to play only a few interesting MIRA exergames; thus, other exergames have fewer records, while the machine learning algorithms need a large number of records to facilitate the process of learning the latent features using the exergame output features. Second, interesting exergames can be predicted easily due to their rich output clusters, using the features most similar to the target exergame. The K-NN method can then find k neighbours in clusters of over 100 records, while it is more difficult in clusters with 100 records or fewer. Third, some patients need to play specific exergames to improve their idle movement skills. The number of output records of these specific exergames is small. Thus, the output cluster of the target matrix of such a special exergame does not converge, due to the gap between it and the popular exergames. Hence, there are only a few neighbours, such as 25 or 50, from these output clusters, while it is quite difficult to find 75 or 100 neighbours. In addition, more than 100 neighbours can be considered impossible or inaccurate due to the resulting poor convergence between the features of the target exergame and those of poor-performing clusters. Each k of neighbours is then evaluated using ReComS based on its prediction performance in comparison to other numbers of neighbours.</ns0:p></ns0:div>
<ns0:div><ns0:head>f. Developing CF Performance with BFOA</ns0:head><ns0:p>The notion of predicting values for the variables that constitute the item settings in MIRA, based on the behaviour of patients who have played a few exergames, is similar to the idea of predicting products for customers based on their preferences in a recommendation system that uses the CF technique. Obviously, the CF predicts score values for products while ReComS predicts variables for all input and output features associated with a specific game and a particular movement. Typically, the CF technique uses three functions to estimate values, as described next.</ns0:p></ns0:div>
<ns0:div><ns0:head> Similarity Function</ns0:head><ns0:p>This function provides the correlation between the target record (of game and movement) and total records. The similarity functions that apply the CF technique are Cosine Similarity <ns0:ref type='bibr' target='#b26'>[26]</ns0:ref> and PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53958:1:2:NEW 13 Apr 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
the Pearson Correlation Coefficient <ns0:ref type='bibr' target='#b35'>[34]</ns0:ref>. Note that when using the Cosine or Correlation coefficient for MIRA data, these functions generate some outliers due to the existence of zeros in the feature matrix. Thus, the Euclidean similarity function, as shown in Eq. (7), emerges as the most suitable similarity function to be applied to MIRA data, mainly because all the calculated similarity values are known.</ns0:p></ns0:div>
<ns0:div><ns0:head> Prediction Function</ns0:head><ns0:p>This is an important computational procedure obtained from the similarity values retrieved from the similarity function and the correlation between the total records. For the purpose of prediction in this experimental work, Eq. ( <ns0:ref type='formula'>8</ns0:ref>) has been proposed based on the current prediction function in the CF technique <ns0:ref type='bibr' target='#b25'>[25]</ns0:ref> after considering the difference between rating scores and features values of MIRA.</ns0:p><ns0:p>,</ns0:p><ns0:formula xml:id='formula_7'>𝑃 𝑖 = 𝑉 𝑎 + ∑ 𝑁 ℎ = 1 𝐷(𝐹 𝑎 ,𝐹 ℎ )(𝐹 ℎ,𝑖 -𝑉 ℎ ) ∑ 𝑁 ℎ = 1 𝐷(𝐹 𝑎 ,𝐹 ℎ ) (8)</ns0:formula><ns0:p>where is the predicted or projected value for feature , is the average value of all feature 𝑃 𝑖 𝑖 𝑉 𝑎 values for the target record, is the sum of neighbours, D is the distance similarity value 𝑁 between (feature value of target record) and F h (feature value of neighbour ). Also, 𝐹 𝑎 ℎ 𝐹 ℎ,𝑖 refers to the feature value i of the neighbour , whereas denotes the average value of all ℎ 𝑉 ℎ features of neighbour h.</ns0:p><ns0:p>In this work, Eq. ( <ns0:ref type='formula'>8</ns0:ref>) is used in ReComS and ReComS+ by employing the error function. Nevertheless, the generated output still has errors and the predicted values are over-fitted. Such over-fitting occurs when the predicted value is larger than the features generated by the target exergames. Fig. <ns0:ref type='figure'>6</ns0:ref> graphically exemplifies the generated features of the target exergame and the features predicted by the CF technique within the procedures of the ReComS approach.</ns0:p><ns0:p>In Fig. <ns0:ref type='figure'>6</ns0:ref>, eight predicted feature values are overfitted because these values are greater than the feature values of the target exergames. The remaining predicted features have lower fitting values due to their values are smaller than the feature values of the target exergames. Hence, the predicted features need to be normalised to fit/align these values with the target exergame features. Nonetheless, the ReComS and ReComS+ are inaccurate approaches in normalizing the predicted features. Thus, an optimised algorithm is embedded in the prediction method to normalise the prediction values. Notably, BFOA has been acknowledged as an optimisation algorithm commonly applied in recommendation systems <ns0:ref type='bibr' target='#b36'>[35]</ns0:ref> since this algorithm can exceptionally learn the deep features of each matrix. The contribution of the ReComS++ approach is represented in Eq. <ns0:ref type='bibr' target='#b8'>(9)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_8'>, 𝑃 𝑖 = 𝑉 𝑎 + 𝐵 𝑖 ∑ 𝑁 ℎ = 1 𝐷(𝐹 𝑎 ,𝐹 ℎ )(𝐹 ℎ,𝑖 -𝑉 ℎ ) ∑ 𝑁 ℎ = 1 𝐷(𝐹 𝑎 ,𝐹 ℎ )<ns0:label>(9)</ns0:label></ns0:formula><ns0:p> where B i is the bacteria value that can be learned by tracking feature F i, and the sum of bacterium members will be equal to the number of neighbours' features. These bacteria have been used to track all features of the neighbours (such that each column in the matrix is managed by a bacterium member) and provide accurately predicted input variables for MIRA. The remaining vectors of Eq.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:53958:1:2:NEW 13 Apr 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:p>(5) have been described in Eq. ( <ns0:ref type='formula' target='#formula_3'>4</ns0:ref>). 
The BFOA is implemented based on the algorithmic phases described in the Definition of Concepts section, while the values of the bacteria factors are listed in Table <ns0:ref type='table'>3</ns0:ref>.</ns0:p></ns0:div>
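The weighted prediction of Eq. (9) can be sketched in a few lines of NumPy; the names below are illustrative, and the bacteria vector B (one learned weight per feature column) is assumed to have been produced beforehand by the BFOA loop.

```python
import numpy as np

def predict_recoms_pp(target, neighbours, sim, B):
    """Eq. (9): CF prediction where each feature i is scaled by a learned
    bacterium value B[i].  `sim` holds D(F_a, F_h) for each neighbour h."""
    Va = target.mean()                           # average of the target record
    Vh = neighbours.mean(axis=1, keepdims=True)  # average of each neighbour
    num = (sim[:, None] * (neighbours - Vh)).sum(axis=0)
    return Va + B * num / sim.sum()
```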
<ns0:div><ns0:head> Benchmark Function</ns0:head><ns0:p>The proposed approaches are evaluated to examine the performance of the CF technique using RMSE <ns0:ref type='bibr' target='#b24'>[24]</ns0:ref> and Mean Absolute Error (MAE) <ns0:ref type='bibr' target='#b37'>[36]</ns0:ref>. These measures are used for calculating the differences between the predicted and observed values. In this article, RMSE is used to provides the distance between the features generated through the target patient and the predicted features by the CF technique due to the vast number of iterations needed for data testing using ReComS++. In other words, the total number of generated exergame features of all patients are divided into k-clusters using the k-means. Each cluster is classified by the K-NN algorithm that selects the most suitable features needed to provide accurate feedback. BFOA focuses on accurately learning the latent features. This is achieved by tracking the positive effects produced through classified features by reducing the RMSE value throughout the optimization stages. This is achieved by accurately learning the convergence among the generated features for various patients who played various exergames, as shown in Eq. <ns0:ref type='bibr' target='#b9'>(10)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_9'>𝑅𝑀𝑆𝐸 = 1 𝑅 𝑅 ∑ 𝑝 = 1 1 𝑛 𝑛 ∑ 𝑖 = 1 (𝐹 𝑎,𝑖 -𝑃 𝑖 ) ,<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>where RMSE is the average RMSE values for all records in the matrix of neighbours, R denotes the number of records in the training or testing sets, n refers to the number of features in the matrix, F a,i represents the value of feature i in the target record, and P i stands for the predicted value recommended to feature F i . The highest RMSE value reflects the lowest accuracy prediction performance.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>The performances of the three proposed approaches are evaluated in the following experimental procedures.</ns0:p></ns0:div>
<ns0:div><ns0:head>a. Evaluating the ReComS Approach</ns0:head><ns0:p>The K-NN algorithm classifies the integral features of a vast number of patients who played several games with varying movements. The CF computes the similarity between the target features and the features of each neighbour before predicting the new values. The performance of the predicted values is consequently assessed using the RMSE value. To benchmark the varied RMSE values for several target records based on patients' individual behaviours, the average RMSE is calculated to obtain accurate outputs. Fig. <ns0:ref type='figure'>7</ns0:ref> illustrates the prediction performance accuracy using this approach based on two sets, the training and testing sets. Both sets show that using 25 nearest neighbours yields the lowest performance accuracy compared with 50, 75, and 100 neighbours; the case of 100 neighbours yielded a more accurate performance than that of 25 neighbours. This deterioration of the prediction performance for small neighbour sets may be solved by other classification approaches.</ns0:p></ns0:div>
<ns0:div><ns0:head>b. Evaluating the ReComS+ Approach</ns0:head><ns0:p>The k-means clustering algorithm is applied in ReComS+ to address the limitation of ReComS using CF and K-NN. By deploying these methods, even a small number of nearest neighbours can result in a highly accurate performance. Fig. <ns0:ref type='figure'>8</ns0:ref> indicates that after integrating the k-means algorithm into ReComS, the prediction performance of CF improved and the feedback of the nearest neighbours was corrected. The outcomes, as depicted in Fig. <ns0:ref type='figure'>8</ns0:ref>, show that the prediction performance of CF using five clusters and 25 neighbours is better for both the training and testing sets, as compared to the results when 50, 75, and 100 neighbours are integrated with five clusters. Nevertheless, the number of clusters is not yet justified, as six or ten clusters may offer a higher prediction accuracy, as shown in Fig. <ns0:ref type='figure'>9</ns0:ref>. Hence, ReComS+ tests the prediction performance using k clusters to address the problem associated with cluster numbers.</ns0:p><ns0:p>In addition, Fig. <ns0:ref type='figure'>9</ns0:ref> shows that various numbers of clusters can provide similar prediction performance in ReComS+ for both the training and testing sets. The results show that 5 clusters provide the highest prediction accuracy on the training dataset when compared to the performance of ReComS+ with the other k clusters, which ranged between 6 and 10. The results on the testing dataset confirm that the prediction performance of ReComS+ is similar across all k clusters, with 6 clusters generating a low accuracy performance. For this reason, subsequent experimental work using ReComS++ applies 5 clusters. Though the accuracy performance has been improved using this approach, the RMSE is still high and the range of the predicted values should be normalised. For this reason, the BFOA is applied to normalise the predicted values and minimise the RMSE values.</ns0:p></ns0:div>
<ns0:div><ns0:head>c. Evaluating ReComS++ Approach</ns0:head><ns0:p>The BFOA is implemented in this work to normalise the predicted values of the ReComS+ approach, which utilised the CF, K-NN and k-means methods. The results proved that the predicted values are similar to the variables of the target record while emphasising the need to decrease RMSE values. Fig. <ns0:ref type='figure'>10</ns0:ref> presents the prediction performance of the ReComS++ approach, which employed CF, K-NN, k-means, and BFOA, for both the training and testing sets. The RMSE values through this approach appeared to be small, thus indicating high prediction accuracy for all sets of tested neighbours. The set of 25 neighbours provides the highest prediction accuracy when compared to the other sets of neighbours (i.e., 50, 75 and 100) for the MIRA training and testing datasets. The results of both training and testing sets are close, indicating that this approach provides accurate prediction values for all target records in the MIRA data.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The CF technique is applied to predict the values of future variables of the item settings. This technique is integrated with the K-NN algorithm in ReComS. It provides each exergame with predicted feature values related to the generated features of the target exergame. A few of these predicted features can be used to assign the variables of the exergame dialogue box setting. The ReComS approach provides a low prediction accuracy due to the high percentage of over-fitted predicted values. Hence, ReComS needs further improvement to classify the various output features of all exergames. For this reason, ReComS is subsequently improved by ReComS+, which utilises the k-means algorithm for grouping the various generated features into k clusters and then finding the features nearest to the target exergame features within the same cluster.</ns0:p></ns0:div>
<ns0:div><ns0:p>The ReComS+ approach chooses the best cluster and number of neighbours through its accurate predictions. The prediction performance of ReComS+ is more accurate than that of ReComS because the former accurately learns the convergence among exergame features using the k-means. Similarly, each cluster is classified to accurately learn the neighbours' features using K-NN. Nevertheless, the predicted values generated in ReComS+ vary and the prediction accuracy of ReComS+ is low due to the difficulty in learning the latent features of the vast amount of generated exergame features. Thus, the ReComS+ approach should be further improved using an optimisation algorithm that can effectively learn the latent features of the neighbours within each cluster.</ns0:p><ns0:p>BFOA is one of the efficient optimisation algorithms that have been used in improving the prediction performance of the CF technique in some recommendation systems [29] <ns0:ref type='bibr' target='#b26'>[26]</ns0:ref>. Accordingly, BFOA is utilised in the ReComS++ approach for reducing the over-fitted prediction values by learning the latent features of the neighbours within each cluster. The experimental approaches show that the ReComS++ approach has addressed the inherent challenges of the first and second approaches. Fig. <ns0:ref type='figure'>11</ns0:ref> shows the comparisons between the RMSE values of the three experimental approaches, ReComS, ReComS+, and ReComS++, on the MIRA training set. The ReComS++ approach provides the lowest RMSE value, indicating it has the highest prediction accuracy when compared with both the ReComS and ReComS+ approaches. Furthermore, Fig. <ns0:ref type='figure'>11</ns0:ref> illustrates the experimental outcomes derived from the MIRA testing dataset. The results are similar to those generated for the training dataset using the ReComS++ approach. This indicates that ReComS++ successfully addressed the drawbacks of the ReComS and ReComS+ approaches.</ns0:p></ns0:div>
<ns0:div><ns0:head>a. Criteria for the ReComS++ approach according to the output</ns0:head><ns0:p>A significant milestone in this work is determining the predicted difficulty level and the remaining variables in the dialogue box for the setting of each item. Fig. <ns0:ref type='figure'>12</ns0:ref> shows the minimum and maximum predicted values for the difficulty, tolerance, and the minimum and maximum ranges; these threshold intervals are determined based on the predicted and observed values obtained from the MIRA application in Melaka, Malaysia, while supervising patients who played MIRA games. Fig. <ns0:ref type='figure'>13</ns0:ref> presents two observations for two target records created from the output of the predicted variables using ReComS++. The first example in Fig. <ns0:ref type='figure'>13</ns0:ref> depicts the actual observation made by the physiotherapist for a patient who played the game with a movement exercise at an easy difficulty level with a tolerance of up to 30% (percentage of range of movement is 0-60%). The experimental approach then normalises these values into the range between 0 and 1, as shown in the figure. The experimental approach performs closely to the actual values in terms of difficulty, tolerance, minimum range, and maximum range. Based on the thresholds shown in Fig. <ns0:ref type='figure'>12</ns0:ref>, the experimental approach determines the final prediction values of the next item settings as Easy, 20 for tolerance, 0 for minimum range, and 70% for maximum range. The second example depicts the similar procedures needed to arrive at the final decision in accordance with the thresholds.</ns0:p></ns0:div>
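The mapping from a normalised prediction back to a concrete item setting can be sketched as follows; the cut-off values below are hypothetical placeholders rather than the exact threshold intervals of Fig. 12, and the snapped example reproduces the first observation (Easy, tolerance 20, minimum range 0, maximum range 70%).

```python
def to_difficulty(p):
    # Hypothetical difficulty cut-offs; the real intervals come from Fig. 12.
    if p < 0.33:
        return "Easy"
    if p < 0.66:
        return "Medium"
    return "Hard"

def to_percentage(p, step=10):
    # Snap a normalised prediction in [0, 1] to the nearest 10% step.
    return round(p * 100 / step) * step

settings = {"difficulty": to_difficulty(0.28),   # -> "Easy"
            "tolerance":  to_percentage(0.22),   # -> 20
            "min_range":  to_percentage(0.03),   # -> 0
            "max_range":  to_percentage(0.68)}   # -> 70
```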
<ns0:div><ns0:head>b. Evaluating the ReComS++ approach based on the physiotherapist observations</ns0:head><ns0:p>The ReComS++ approach, programmed using Java, is included in the MIRA system to provide physiotherapists and patients with recommended preferences. Fig. <ns0:ref type='figure'>14</ns0:ref> shows the interface of the preferences, where the physiotherapist (who helps patients to play MIRA exergames) can easily obtain the recommended preferences for the selected patient, movement, and side. To evaluate the ReComS++ approach, we obtained the evaluation file completed by the physiotherapists of the Perkeso Rehabilitation Centre in Melaka, Malaysia. The file records the physiotherapists' observations after using the preferences suggested by ReComS++ for a set of patients over a period of 5 weeks. The file contains 1182 records. Table <ns0:ref type='table' target='#tab_1'>4</ns0:ref> presents an example of the information provided in the file.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_1'>4</ns0:ref> presents the set of movements performed by a group of patients who play a set of exergames (movements and games) using the MIRA platform. Each exergame has four preferences that constitute the input setting variables in ReComS++ (i.e., difficulty, tolerance, minimum range and maximum range). Here, the physiotherapist observes each patient who performs the exergame and registers his/her activity performance as positive (P) or negative (N). The evaluation results are summarised in Table <ns0:ref type='table'>5</ns0:ref>. The table shows a higher percentage of positive preferences compared to negative preferences, which implies that ReComS++ effectively recommends accurate preferences for patients.</ns0:p></ns0:div>
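For reference, the percentages in Table 5 can be tallied from the evaluation file with a few lines of code; the `observation` field name below is an assumption about how each of the 1182 records stores the physiotherapist's mark.

```python
def positive_percentage(records):
    # records: iterable of dicts whose 'observation' field is 'P' or 'N'
    positives = sum(1 for r in records if r["observation"] == "P")
    return 100.0 * positives / len(records)
```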
<ns0:div><ns0:head>Conclusion</ns0:head><ns0:p>In most cases, patients train their disabled limbs by utilising the facilities offered at rehabilitation centres to regain limb functionality. VRT, such as MIRA, refers to a contemporary rehabilitation technique that aids patients in performing 'game-aided' exercises in order to increase their motivation and engagement in physical therapy. Nonetheless, physiotherapists who deal with this application need to predict the values of the input variables of the item settings for each patient manually, which is the main challenge in this domain. Therefore, in this study, we utilise a recommender system to suggest the most suitable settings for patients' movements based on their movement history. Since the exergames generate various features, automated analysis is required to provide a summary of the patient's (movement) performance. To address these challenges, three experimental approaches were proposed: 1) ReComS, combining CF and K-NN; 2) ReComS+, combining CF, K-NN and k-means; and 3) ReComS++, combining CF, K-NN, k-means and BFOA; the shortcomings of each were addressed by the learning procedures of the next. The experimental results demonstrated that ReComS+ yields more accurate predictions when compared with ReComS, while ReComS++ achieves a higher accuracy than ReComS+.</ns0:p><ns0:p>Overall, ReComS++ performs best for MIRA exergames as it provides MIRA with the most accurate predictions for the input setting dialogue box. It thus assists patients to perform MIRA exergames correctly. There are several promising directions for future work to further enhance the MIRA exergames. For example, when the records obtained from the MIRA database are sufficient for each exergame, machine learning algorithms can be used to analyse the data. Moreover, several classification, clustering, and optimisation algorithms can be integrated with the CF technique to improve the auto-prediction in the MIRA input dialogue box and to further understand the latent behaviours of rehab patients.</ns0:p></ns0:div><ns0:note type='other'>Figure 1: The case of a patient playing three game-based exercises using the MIRA platform.</ns0:note><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>w_attract denotes the means by which the chemical cohesiveness of the signal diffuses, h_repellant sets the height of the repellent (a propensity to avoid a nearby cell), and w_repellant defines the negligible area where the cell is relative to the diffusion of the chemical signal. S is the number of groups within the patients' latent features, P denotes the dimension of the search space, β_m is the latent features of group number m, and β_m^i represents latent feature number i in group m.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Algorithm 2: The k-means clustering algorithm.</ns0:head><ns0:label /><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Input:</ns0:cell></ns0:row><ns0:row><ns0:cell>D = {d1, d2, …, dn} //set of n data items; k //number of desired clusters</ns0:cell></ns0:row><ns0:row><ns0:cell>Output:</ns0:cell></ns0:row><ns0:row><ns0:cell>A set of k clusters</ns0:cell></ns0:row><ns0:row><ns0:cell>Steps:</ns0:cell></ns0:row><ns0:row><ns0:cell>1. Arbitrarily choose k data items from D as initial centroids.</ns0:cell></ns0:row><ns0:row><ns0:cell>2. Repeat</ns0:cell></ns0:row><ns0:row><ns0:cell>a. Allocate each item di to the cluster with the nearest centroid (Eq. (1)).</ns0:cell></ns0:row><ns0:row><ns0:cell>b. For each cluster, calculate the new mean.</ns0:cell></ns0:row><ns0:row><ns0:cell>c. Merge clusters where necessary to keep the k number of clusters.</ns0:cell></ns0:row><ns0:row><ns0:cell>Until convergence criteria are met.</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>An example of the information collected by the physiotherapist for MIRA and ReComS++.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Preference</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Dear Editor,
The authors would like to thank the editor and the reviewers for reviewing our work. We also appreciate the constructive comments of the reviewers.
We have edited the manuscript to address the concerns of the anonymous reviewers by including more details about this work.
Comments of Reviewer 1
The paper provides clear definition of concepts, terms and methods (Collaborative Filtering, Kmeans algorithm, etc.).
Sufficient context is provided to understand:
why exergames are useful for patients?
Thank you for this comment. The advantages of exergames have been further explained within the Introduction.
why rec sys were needed for the MIRA platform, why exergames are useful for patients.
The overall aim of the paper is clear: analysis of patients' movements and use of rec sys to make suggestions on the most appropriate settings for patients' movements.
The structure of the paper is clear; raw data and figures are shared.
Thank you for this comment. More descriptions have been added to the new manuscript.
However, the paper needs a professional English review (e.g. line 43 'who having' > 'who are having'; the overuse of which statements in line 487 and 488, and again in line 69 and 70; in line 47 'where is a' > 'where there is a', etc.).
Thank you for this comment. The English has been improved in the whole paper.
The innovative contributions for MIRA platform are clear (no rec sys before), however the paper needs further work to demonstrate and articulate why MIRA with a rec sys system is innovative compared to the available research ('to the best of the authors' knowledge, addressing this problem has been neglected in previous work' is based on trust, not evidence - we need evidence. Please add references of other works and how your work is innovative compared to others).
Thank you for this insightful comment. The other works and some references have been added to the introduction section.
Experimental design
As mentioned before: The overall aim of the paper is clear: analysis of patients' movements and use of rec sys to make suggestions on the most appropriate settings for patients' movements.
Research methods are clear both in the abstract and in the Materials and Methods section. Benchmarks are specified, as well as Dataset and equations used.
As mentioned above, it's needed some work to state how the research fills an identified knowledge gap, as there are probably other exergames systems developed by others as well.
Thank you for this comment. We have referenced some other works in the introductory part.
As mentioned above, the impact is clear for the MIRA platform, but not clear compared to the state of the art of exergames. If the novelty is provided by the use of recommender systems in an unexplored field (exergames), a clear statement is needed. More extensive state of the art research works in exergames are needed. It's also recommended to add in the conclusions a clear statement of how the usage of rec sys in the MIRA platform would benefit the field of exergames and provide a value compared to other works (it's already clear what results were obtained, just add the overall impact compared to the existing literature).
Thank you for this comment. We have added more description in the introduction section and in the description of MIRA platform. More reasons and examples about the target of this research in helping the MIRA patients have been included.
In the last section, conclusion, we summarized the impact of ReComS++ in lines [624-626].
Suggestion: line 609 'the experimental results demonstrated that the recoms+ approach appeared better compared to the recoms approach' ' please further develop this sentence specifying why (it's mentioned in the discussion of the results, but add a clear conclusive statement here). Also, 'it appeared better' is a vague statement -better in which way?. Same for line 610 about Recoms++.
Thank you for this comment. We have addressed this concern in the discussion [Lines: 549-550] and conclusion [Lines: 622-623].
In the conclusion, there is a reference of Recoms+ having 'better' experimental results than recoms, and recoms++ providing better accuracy performance than recoms+. Does this means recoms++ is the best approach because the accuracy is better, and is therefore the best approach for MIRA? If yes, please assert so. If not, please discuss why, in the evaluation, accuracy is not the only relevant parameter for identifying the best approach.
Thank you for this comment. We have summarized the benefit of ReComS++ approach and presented them clearly in lines [634-636].
Some observations are described in the Discussion section, just make clear conclusive statements on the Conclusion section.
Thank you for this comment. The results are now described in clear sentences.
Comments of Reviewer 2
Language needs proof-reading. A professional, native English-speaking proof-reader is required. There are too many language issues to single out, but this proof-reading is needed.
The language of the paper has been improved by a professional, native English-speaking proof-reader.
Conceptual clarity needs improving: consistently refer to either patients or users. For example, 'to suggest the most appropriate setting for patients to enhance the users' performance' ==> is the user the same as patient?
Thank you for this comment. It has been addressed and we have explained the relationship between user and patient in Lines [170-171].
In the other parts, we used the word “patient” instead of “user”.
The explanation for the Bacterial Foraging Optimisation Algorithm needs to be clarified. While the other two algorithms are commonly used for recommenders, this algorithm is not.
Specifically,
(a) why was it selected?
(b) how was it applied --- the current explanation is describing micro-biology, but the use case here was exergame. The explanation needs to reflect the use case.
Thank you for these comments. The explanation of BFOA has been updated to show the relationship between this algorithm and MIRA exergames and how it was implemented in this work in Lines [238-275].
The authors allude to these points in later sections of the paper, but they need to be first explained when introducing the algorithm.
Thank you for this comment. The reasons for implementing the algorithms (i.e., CF, KNN, k-means, and BFOA) in this work, and how they were implemented within MIRA data are explained in Lines [148-151, 238-241].
The number assignment of clusters needs to be clarified:
'MIRA data are tested in a set of k clusters ranging between 5 and 10 clusters'
>> why this test range?
The same for kNN:
'In this experimental work, the k of neighbours is determined based on a set of k (25 neighbours, 50 neighbours, 75 neighbours, 100 neighbours).'
>> why this range?
Thank you for this comment.
These concerns have been addressed in Lines [387-390, 409-426].
The inability of normalization to help overfitting needs to be explained:
'prediction values are over-fitting; hence they need to be normalised to fit the prediction values. Nonetheless, the normalisation method is inapt for this process.'
>> why inapt?
>> is there a reference applying BFOA in a similar task before? If so, please include.
Thank you for this comment. The overfitting has been explained, and we have added Fig. 6 to show an example of overfitted predictions compared with the output features created by each exergame, in Lines [453-462].
We added some references that show a similar task within other recommendation systems, in Lines [235-237].
The application of RMSE needs clarification - the authors state, correctly, that the metric is applied 'for calculating the differences between the predicted values and the values observed.' But what were these values in the current study? Combination of different variables? Which variables?
Thank you for this comment. We have added more explanation about RMSE metric in lines [480-496].
The study found that the method that combined three techniques was the best. However, this is typically the case for any machine learning situation - ensemble methods outperform single-method approaches. So, the authors need to clarify if there is anything unexpected in this finding, and what it means for future work - could different ensemble techniques be tested against each other?
Thank you for this comment. We have added more description in Lines [547-565].
Discussion should be enhanced with an evaluation of practical implement ability of the tested methods. While the most complex method yielded the best performance, is it feasible to implement? If so, under which conditions (and under which other methods could be better).
Discussion should be enhanced by discussing the weaknesses of the current work. What are they? Honestly explicate how the research could be improved.
Thank you for this comment. We have added more details about that in the discussion part.
discussion should be enhanced by adding directions for future research. What are the next steps? What other methods could be tested? How 'good' are the results in terms of practical implementation and how far are they from yielding a substantial positive impact to patients' recovery?
Thank you for this comment. We have added future works within the Conclusion part in Lines [636-641].
" | Here is a paper. Please give your review comments after reading it. |
152 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. Virtual reality is utilised in exergames to help patients with disabilities improve the movement of their limbs. Exergame settings, such as the game difficulty, play important roles in the rehabilitation outcome, and suboptimal exergame settings may adversely affect the accuracy of the results obtained. As such, the improvement in patients' movement performances falls below the desired expectations. In this paper, a recommender system is incorporated to suggest the most preferred movement setting for each patient, based on the movement history of the patient.</ns0:p><ns0:p>Method. The proposed recommender system (ReComS) suggests the most suitable settings necessary to optimally improve patients' rehabilitation performances. In the course of developing the recommender system, three methods are proposed and compared: ReComS (K-nearest neighbours and collaborative filtering algorithms), ReComS+ (k-means, K-nearest neighbours, and collaborative filtering algorithms) and ReComS++ (bacterial foraging optimisation, k-means, K-nearest neighbours, and collaborative filtering algorithms). The experimental datasets are collected using the Medical Interactive Recovery Assistant (MIRA) software platform. Result. Experimental results, validated by the patients' exergame performances, reveal that the ReComS++ approach predicts the best exergame settings for patients with 85.76% prediction accuracy.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Human disabilities could develop through cerebral palsy (CP), stroke, spinal cord injury (SCI), traumatic brain injury (TBI), and humerus fracture <ns0:ref type='bibr' target='#b30'>(Tousignant et al., 2014)</ns0:ref>. Rehabilitating these disabled patients can be achieved using traditional and modern treatments. Virtual reality <ns0:ref type='bibr' target='#b31'>(Turolla et al., 2013)</ns0:ref>, robotics <ns0:ref type='bibr' target='#b11'>(Díaz, Gil & Sánchez, 2011)</ns0:ref> <ns0:ref type='bibr' target='#b18'>(Maciejasz et al., 2014)</ns0:ref>, simulation, and exergames <ns0:ref type='bibr' target='#b9'>(Covarrubias et al., 2015)</ns0:ref> are commonly used in modern treatments. Exergames are quite promising as they furnish patients with new experiences while performing their daily rehabilitation exercises <ns0:ref type='bibr' target='#b16'>(Li et al., 2018a)</ns0:ref>. Rehabilitation is traditionally based on the assessment requirements drawn through physiotherapy. Rehabilitation therapy and assessment are provided by rehabilitation centres, where patients train their disabled limbs through a series of pre-determined exercises. Such procedures help to improve the movement of the limbs and restore their functionality.</ns0:p><ns0:p>One of the recently developed systems for training the lower limbs is the robotic system. Currently, there are five types: foot-plate-based gait trainers, treadmill gait trainers, overground gait trainers, active foot orthoses, and stationary gait and ankle trainers <ns0:ref type='bibr' target='#b11'>(Díaz, Gil & Sánchez, 2011)</ns0:ref>. These systems include passive robotic devices that assist the patient in training the idle limb(s) of the lower body. Virtual reality therapy (VRT) systems use virtual reality as an assistive tool for the rehabilitation process. In assistive technology, serious games offer multimodal functions and immersive characteristics and have been embedded into various technologies such as robotics, virtual reality, and simulators. Thus, serious games are a promising technology that can bring new experiences to people with disabilities performing their rehabilitation <ns0:ref type='bibr' target='#b17'>(Li et al., 2018b)</ns0:ref> <ns0:ref type='bibr' target='#b20'>(Merilampi et al., 2017)</ns0:ref>. In addition, exergames are VRT systems used by patients who suffer from movement disabilities in their idle limbs. Through several training procedures, exergames help patients to improve the physical movement of their muscles <ns0:ref type='bibr' target='#b15'>(Jaarsma et al., 2020)</ns0:ref> so that they can gradually move their idle parts. For example, one VRT system employs three exergames (the bike, the pedal boat, and the swimmer) and comprises a strengthening machine, a Kinect device, a large screen, and a computer <ns0:ref type='bibr' target='#b26'>(Pruna et al., 2018)</ns0:ref>. As for the spine, VRT is a non-invasive alternative with minimal negative or harmful effects <ns0:ref type='bibr' target='#b7'>(Chi et al., 2019)</ns0:ref>. VRT systems for the lower limbs and spine, however, remain underdeveloped, as most existing VRT systems cater only to the upper limbs.</ns0:p><ns0:p>Exergame therapy forms part of the rehabilitation approaches offered to patients in rehabilitation centres. 
Exergames refer to video games <ns0:ref type='bibr' target='#b12'>(Da Gama et al., 2016)</ns0:ref> that encourage patients to continue their exercise without feeling bored. The therapy consists of an iteration of exercises that focus mainly on strengthening a part of the patient's body, such as the knee. For effective results, it is essential that the patient performs the right movement following the rules of each exergame; otherwise, the benefits may not be pronounced and the desired results will be less noticeable <ns0:ref type='bibr' target='#b12'>(Da Gama et al., 2016)</ns0:ref>. Different devices provide the virtual environment based on the requirements of the VRT application. These include a large monitor, a virtual interface <ns0:ref type='bibr' target='#b6'>(Brokaw & Brewer, 2013)</ns0:ref>, Microsoft Kinect, Xbox <ns0:ref type='bibr' target='#b4'>(Baur et al., 2018)</ns0:ref>, a strengthening machine <ns0:ref type='bibr' target='#b26'>(Pruna et al., 2018)</ns0:ref>, a customised metal rig that holds a standard wheelchair, and robotic devices <ns0:ref type='bibr' target='#b27'>(Radman, Ismail & Bahari, 2018)</ns0:ref>.</ns0:p><ns0:p>The Medical Interactive Recovery Assistant (MIRA) platform is a new VRT application that presents a wide variety of games and movements for various rehabilitation needs. It consists of three parts: adapted movement-based interactive video games, the Kinect, and the leap motion sensors <ns0:ref type='bibr' target='#b21'>(Moldovan et al., 2017)</ns0:ref>. The Kinect sensor tracks motion and provides different interactions between the patient and the different types of exergames <ns0:ref type='bibr' target='#b19'>(Mcglinchey et al., 2015)</ns0:ref>. Leap motion tracks the hand's movement, combined with flexion gauges placed into a glove <ns0:ref type='bibr' target='#b5'>(Borja et al., 2018)</ns0:ref>. In other words, the exergames in MIRA are created specifically to aid physical rehabilitation therapies and assessments. One example is a study of the movement performance of seven-year-old children who suffer from brachial plexus palsy caused by transverse myelitis <ns0:ref type='bibr' target='#b10'>(Czakó, Silaghi & Vizitiu, 2017)</ns0:ref>. In that study, the movement performance of the children was improved by MIRA exergame training. Another case study demonstrates that MIRA exergames have positive effects and can be safely implemented for adult patients <ns0:ref type='bibr' target='#b19'>(Mcglinchey et al., 2015)</ns0:ref>. However, in both case studies (i.e., <ns0:ref type='bibr' target='#b19'>(Mcglinchey et al., 2015)</ns0:ref> and <ns0:ref type='bibr' target='#b10'>(Czakó, Silaghi & Vizitiu, 2017)</ns0:ref>), the physiotherapist used default exergame settings and no automation was considered. A prediction scoring method based on the k-means algorithm has been used to suggest a comfortable difficulty mode for rehabilitation patients <ns0:ref type='bibr' target='#b38'>(Zainal et al., 2019)</ns0:ref>. It analyses five variables generated by patients when playing MIRA exergames. However, there are various exergames with several variables that need to be analysed to find accurate prediction scores.</ns0:p><ns0:p>In the above scenarios, the most suitable exergame settings are required, considering each patient's disability type. As such, due to the low engagement between physiotherapists and patients, physiotherapists use default settings, which invariably lowers the accuracy in playing the exergames and reduces the patients' performances. 
To overcome this problem, a recommender system (RS) is needed in the MIRA platform. However, to the best of our knowledge, no previous work in the literature has addressed this problem using an RS. In addition, a decision tree model has been applied to predict a patient's future rehabilitation performance based on time, average acceleration, distance, moving time, and average speed. This prediction method uses the default exergame settings and the previous performances of patients who played the same exergame with the same side <ns0:ref type='bibr' target='#b36'>(Zainal et al., 2020)</ns0:ref>. Nevertheless, the prediction would be more accurate if the exergame settings were controlled automatically. Therefore, an RS is utilised in this paper to suggest appropriate settings for patients who use MIRA to improve their movement disabilities.</ns0:p><ns0:p>An RS is a subcategory of an information filtering system that aims to forecast or project the 'rating' or 'preference' of a person <ns0:ref type='bibr' target='#b14'>(Ismail et al., 2019)</ns0:ref>. Over the last ten years, RSs have been explored and used in various applications that include e-health, e-learning, e-commerce, and knowledge management systems <ns0:ref type='bibr' target='#b35'>(Xu, Zhang & Yan, 2018)</ns0:ref> <ns0:ref type='bibr' target='#b36'>(Zainal et al., 2020)</ns0:ref>. In the same manner, we deploy an RS in this research. Traditionally, an exergame records a patient's information and performance during a session. The recorded data is used to monitor the progress of the patient <ns0:ref type='bibr' target='#b9'>(Covarrubias et al., 2015)</ns0:ref>. Likewise, the system analyses patients' movements during the exercise and generates statistical data. Nonetheless, several challenges have been identified while creating tailored exergame schedules for patients. In this respect, this paper explores the use of ReComS as an interface for each exergame, using the patient's movement history as a benchmark. To address the problem related to the input setting, the ReComS approach applies k-means, K-nearest neighbours (K-NN), collaborative filtering (CF), and the bacterial foraging optimisation algorithm (BFOA) to accurately predict the input variables through the item settings dialogue box in the MIRA platform itself. From the above discussion, the primary contributions of this research include the following:</ns0:p><ns0:p>1. A proposed ReComS approach that suggests the most appropriate setting for enhancing rehabilitation patients' performances.</ns0:p><ns0:p>2. A novel deployment of an RS in the MIRA platform to accurately suggest the most suitable settings needed to improve the limb movement of patients.</ns0:p><ns0:p>3. An enhancement of the input variables' prediction accuracy using three newly-developed comparison methods named ReComS (K-NN and CF algorithms), ReComS+ (k-means, K-NN and CF algorithms) and ReComS++ <ns0:ref type='bibr'>(BFOA, k-means, K-NN, and CF algorithms)</ns0:ref>.</ns0:p><ns0:p>The remainder of this paper is structured as follows: Section 2 describes the MIRA platform. The concepts used in this paper are defined in Section 3. In Section 4, the proposed methods are provided. Sections 5 and 6 present the empirical results and conclusion, respectively.</ns0:p></ns0:div>
<ns0:div><ns0:head>The MIRA Platform</ns0:head><ns0:p>The MIRA platform is an effective system that allows patients to play their way towards recovery <ns0:ref type='bibr' target='#b38'>(Zainal et al., 2019)</ns0:ref>. MIRA is a non-immersive type of VRT application developed to make physiotherapy entertaining and enjoyable for patients. The platform transforms prevailing physical therapy exercises into clinically-designed video games. Aside from improving patients' interest in exercising, an external sensor monitors and evaluates their adherence. MIRA contains a broad range of games and exercises for the upper limbs, lower limbs, and spine. Fig. <ns0:ref type='figure'>1</ns0:ref> illustrates three instances of games in the MIRA platform. First, the patient plays the Catch game with hip abduction movement (Fig. <ns0:ref type='figure'>1(a)</ns0:ref>). In the second image, the patient plays the Airplane game with elbow flexion in abduction movement (Fig. <ns0:ref type='figure'>1(b)</ns0:ref>). In the third example, the patient plays the Flight control game with general shoulder movement (Fig. <ns0:ref type='figure'>1(c</ns0:ref>)). All MIRA games are played following the rules of each game and movement.</ns0:p><ns0:p>Kinect sensors and screens are connected to computers (as shown in Fig. <ns0:ref type='figure'>1</ns0:ref>) to provide the virtual environment. The physiotherapist utilises these devices and the MIRA platform to create a session for patients based on their ability and rehabilitation needs. The physiotherapist combines exercises and games with the specific difficulty setting, movement tolerance, and range of movement <ns0:ref type='bibr' target='#b34'>(Wilson et al., 2017)</ns0:ref>, based on observations of each patient's previous data. Through these settings, this research examines the input and output attributes of the scheduled games in order to auto-determine the input variables for each exergame. Traditionally, the physiotherapist tracks the exergame history of each patient (which reflects the movement threshold of the idle limb) and then suggests the values of the input settings in the dialogue box for future exergaming. This method of observation is costly and time-consuming because the physiotherapist needs to prepare a manual list of variables to track the patient's history. Thus, most physiotherapists use the default settings for all patients, which hampers the patients' performances, especially when they play using their idle limbs. In view of the aforementioned, this research proposes the ReComS approaches for learning the 'best' setting for each patient. It deploys the k-means, K-NN, and BFOA algorithms, in addition to the CF technique. The main reasons for using MIRA data include:</ns0:p><ns0:p>• MIRA data contains several exergame features that can be performed by moving the upper or lower limbs.</ns0:p><ns0:p>• Patients' data are normalised and arranged in a matrix of features. However, this matrix includes a high percentage of zeros (i.e., unknown values) because the exergame output variables differ due to patients' diverse movement skills.</ns0:p><ns0:p>• Some exergames generate a small number of records because only a few patients are interested in playing them. Our approach is discussed further in the following subsections.</ns0:p></ns0:div>
<ns0:div><ns0:head>Definition of Concepts</ns0:head></ns0:div>
<ns0:div><ns0:head>a. Collaborative Filtering (CF)</ns0:head><ns0:p>The use of CF (a popular technique in RSs) to send personalised recommendations to users based on their behaviour has become widespread due to its effectiveness in recommending preferred items. The CF technique utilises product ratings provided by a collection of customers and recommends products that the target customer has not yet considered but will likely enjoy <ns0:ref type='bibr'>(Al-Hadi et al., 2020)</ns0:ref>. The rating score (with a value between 1 and 5) is used to indicate whether a person likes a product. These values are arranged as rows in a matrix. Thereafter, the similarity values between the target customer and other customers in the matrix are calculated to predict the customer's interest in the products <ns0:ref type='bibr' target='#b23'>(Natarajan et al., 2020)</ns0:ref>. In this work, the patients represent users and the generated output features of the MIRA platform represent items. The considered MIRA platform offers a vast range of features for several games and movements managed in a single matrix, whereas the preferences of customers are managed in the rating matrix for estimation via the CF technique. This matrix of features necessitates division using the k-means algorithm to minimise the prediction errors. Algorithm 1 summarises the procedure involved in the CF technique.</ns0:p></ns0:div>
<ns0:div><ns0:head>Algorithm 1. Pseudocode of the CF technique</ns0:head></ns0:div>
<ns0:div><ns0:head>Input: Matrix features (patients, features) Output: A set of prediction scores for features Steps:</ns0:head><ns0:p>1. Choose the target patient from the matrix features.</ns0:p><ns0:p>2. Assign the similarity between the target patient and other patients utilising a similarity distance.</ns0:p><ns0:p>3. Assign the prediction value for each feature using the prediction approach.</ns0:p><ns0:p>4. Measure the prediction accuracy using an error function, such as the Root Mean Squared Error (RMSE) <ns0:ref type='bibr'>(Al-hadi et al., 2020)</ns0:ref>.</ns0:p></ns0:div>
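A minimal sketch of Algorithm 1 follows (illustrative Python; the authors' implementation is in Java). The exact prediction rule, Eq. (8), is introduced later, so a caller-supplied `predict` function stands in for step 3.

```python
import numpy as np

def collaborative_filtering(features, target_idx, predict):
    """features: patients x features matrix; predict: callable implementing Eq. (8)."""
    features = np.asarray(features, float)
    target = features[target_idx]                        # step 1: target patient
    others = np.delete(features, target_idx, axis=0)
    # step 2: similarity via the squared Euclidean distance between rows
    dist = np.sum((others - target) ** 2, axis=1)
    predicted = predict(target, others, dist)            # step 3: predict each feature
    error = float(np.sqrt(np.mean((predicted - target) ** 2)))   # step 4: RMSE
    return predicted, error
```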
<ns0:div><ns0:head>b. k-means Algorithm</ns0:head><ns0:p>The k-means is a clustering algorithm that is often used in iterative optimisation, given its efficiency <ns0:ref type='bibr' target='#b38'>(Zainal et al., 2019)</ns0:ref>. Algorithm 2 displays the k-means procedure; its proximity measures include city block, Hamming, cosine, correlation coefficient, and the squared Euclidean Distance (ED). The ED between two points is the length of the straight line that connects them. Within the Euclidean plane, the distance between points (x_1, y_1) and (x_2, y_2) can be calculated using Eq. (1) <ns0:ref type='bibr' target='#b5'>(Borja et al., 2018)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_0'>ED((x_1, y_1), (x_2, y_2)) = √((x_1 − x_2)² + (y_1 − y_2)²) (1)</ns0:formula></ns0:div>
<ns0:div><ns0:head>Algorithm 2: The k-means clustering algorithm.</ns0:head></ns0:div>
<ns0:div><ns0:head>Input:</ns0:head><ns0:p>D = {d_1, d_2, …, d_n} //set of n data items</ns0:p><ns0:p>k //number of desired clusters</ns0:p></ns0:div>
<ns0:div><ns0:head>Output: A set of k clusters</ns0:head><ns0:p>Steps:</ns0:p><ns0:p>1. Arbitrarily choose k data items from D as initial centroids.</ns0:p><ns0:p>2. Repeat</ns0:p><ns0:p>a. Allocate each item d_i to the cluster with the nearest centroid (Eq. (1)).</ns0:p><ns0:p>b. For each cluster, calculate the new mean.</ns0:p><ns0:p>c. Merge clusters where necessary to keep the k number of clusters.</ns0:p><ns0:p>Until convergence criteria are met.</ns0:p></ns0:div>
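A minimal runnable sketch of Algorithm 2 follows (illustrative Python, not the authors' Java code); re-seeding empty clusters with a random item is one simple way to realise step 2(c).

```python
import numpy as np

def kmeans(D, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    D = np.asarray(D, float)
    centroids = D[rng.choice(len(D), size=k, replace=False)]     # step 1
    for _ in range(iters):                                       # step 2: Repeat
        # 2(a): squared Euclidean distance to every centroid (order-equivalent to Eq. (1))
        dist = ((D[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dist.argmin(axis=1)
        # 2(b)/2(c): new means; re-seed empty clusters to keep k clusters
        new_centroids = np.array([D[labels == j].mean(axis=0)
                                  if np.any(labels == j) else D[rng.integers(len(D))]
                                  for j in range(k)])
        if np.allclose(new_centroids, centroids):                # convergence criterion
            break
        centroids = new_centroids
    return labels, centroids
```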
<ns0:div><ns0:head>c. K-Nearest Neighbours Algorithm</ns0:head><ns0:p>The K-NN is a simple classification method used to analyse a large matrix of features or to provide recommendations <ns0:ref type='bibr'>(Weisstein)</ns0:ref>. When new data need to be categorised, the K-NN algorithm computes the distance between the values of the target record and those of the other records. These records are ordered based on distance <ns0:ref type='bibr' target='#b29'>(Tarus, Niu & Mustafa, 2018)</ns0:ref>. At the final stage, the first k records are chosen from the ordered list, i.e., the K nearest neighbours. The pseudocode in Algorithm 3 describes the stages of applying K-NN.</ns0:p></ns0:div>
<ns0:div><ns0:head>Algorithm 3. Pseudocode of the K-nearest neighbours algorithm.</ns0:head><ns0:p>Input:</ns0:p><ns0:p>Matrix features (patients, features)</ns0:p><ns0:p>Output:</ns0:p><ns0:p>A set of neighbours most similar to the target patient.</ns0:p><ns0:p>Steps:</ns0:p><ns0:p>1. Choose the target patient from the matrix features.</ns0:p><ns0:p>2. Assign the similarity values between the target patient and other patients using a similarity distance measure, as shown in Eq. (1).</ns0:p><ns0:p>3. Sort the patients from the lowest distance value to the highest distance value (from the highest similarity to the lowest similarity).</ns0:p><ns0:p>4. Choose the first k patients from the sorted list.</ns0:p></ns0:div>
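A compact sketch of Algorithm 3 in illustrative Python:

```python
import numpy as np

def k_nearest_neighbours(features, target_idx, k):
    features = np.asarray(features, float)
    target = features[target_idx]                      # step 1: target patient
    dist = np.sum((features - target) ** 2, axis=1)    # step 2: squared Euclidean distance
    dist[target_idx] = np.inf                          # exclude the target itself
    order = np.argsort(dist)                           # step 3: ascending distance
    return order[:k]                                   # step 4: indices of the k neighbours
```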
<ns0:div><ns0:head>d. Bacterial Foraging Optimisation Algorithm</ns0:head><ns0:p>Optimisation algorithms have proven effective in several areas including RS <ns0:ref type='bibr' target='#b2'>(Al-Hadi et al., 2017)</ns0:ref> and healthcare <ns0:ref type='bibr' target='#b36'>(Zainal et al., 2020)</ns0:ref>. For instance, BFOA has been well-embraced in recent RS approaches for providing high prediction accuracy <ns0:ref type='bibr'>(Al-hadi et al., 2020</ns0:ref><ns0:ref type='bibr' target='#b2'>)(Al-Hadi et al., 2017)</ns0:ref>. This motivates us to use BFOA in this experiment for learning patients' latent features and for optimising the output prediction. The BFOA is an evolutionary computational algorithm for global optimisation. It is used to classify and learn better convergence <ns0:ref type='bibr' target='#b3'>(Amghar & Fizazi, 2017)</ns0:ref>. The BFOA models the foraging behaviour of E. coli bacteria in the human intestine and reuses it for global optimisation, producing effective solutions for wide-ranging problems <ns0:ref type='bibr' target='#b24'>(Naveen, Sathish Kumar & Rajalakshmi, 2015)</ns0:ref>. Similarly, BFOA is utilised here to learn patients' latent features, which are classified as nearest neighbours within the features of the best cluster, while the other clusters are neglected. There are three main phases during the swarming evolution of the bacteria: chemotaxis, reproduction, and elimination-dispersal. These phases are described as follows:</ns0:p><ns0:p>a. Chemotaxis: During this stage, each bacterium locates rich nutrients and avoids noxious substances. Patients' accurate features represent the rich nutrients, which can be tracked by learning the lowest error value throughout the learning iterations. The chemotaxis stage includes three processes: swimming, tumbling, and swarming. The bacterium swims for a certain period and tumbles while using its flagella to change its swimming direction <ns0:ref type='bibr' target='#b36'>(Yang et al., 2016)</ns0:ref>. The direction of movement after a tumble is given in Eq. (2).</ns0:p><ns0:formula xml:id='formula_2'>β^i(j+1, k, l) = β^i(j, k, l) + C(i)·Δ(i)/√(Δ(i)ᵀΔ(i)) (2)</ns0:formula><ns0:p>where β^i represents the members of bacterium i (i.e., the latent features of a patient), C(i) is the step size taken in the direction of the tumble, j denotes the index of the chemotactic step, k refers to the index of the reproduction step, and l reflects the index of the elimination-dispersal step. Besides, Δ(i)/√(Δ(i)ᵀΔ(i)) is the random unit-length direction generated during the swimming phase. In the swarming mechanism, the latent features of a patient release attractant or repellent signals regarding other patients' latent features, as portrayed in Eq. (3).</ns0:p><ns0:formula xml:id='formula_3'>J_cc(β, β^i(j,k,l)) = Σ_{i=1}^{S}[−d_attract·exp(−w_attract·Σ_{m=1}^{P}(β_m − β_m^i)²)] + Σ_{i=1}^{S}[h_repellant·exp(−w_repellant·Σ_{m=1}^{P}(β_m − β_m^i)²)] (3)</ns0:formula><ns0:p>where d_attract is the depth of the attractant, which establishes the magnitude of the attractant secreted by a cell, w_attract is the width of the attractant, which denotes the means by which the chemical cohesiveness of the signal diffuses, h_repellant sets the height of the repellent (a propensity to avoid a nearby cell), and w_repellant defines the negligible area where the cell is relative to the diffusion of the chemical signal. S is the number of groups within the patients' latent features, P denotes the dimension of the search space, β_m is the latent features of group number m, and β_m^i represents latent feature number i in group m.</ns0:p><ns0:p>b. Reproduction: In this phase, the least healthy half of the population is eliminated, while each bacterium in the healthier half of the population is separated into two equivalent parts having equal values. This phase keeps the bacteria population constant. Eq. (4) shows the health values for patients' latent features.</ns0:p><ns0:formula xml:id='formula_4'>J_health^i = Σ_{j=1}^{N_c+1} J(i, j, k, l) (4)</ns0:formula><ns0:p>where i indexes the patients' latent features, j indexes the chemotactic steps, N_c is the number of chemotactic steps, k is the reproduction step, and l is the elimination-dispersal step.</ns0:p><ns0:p>c. Elimination-Dispersal: This phase provides the position-shifting probability for the limited latent features of patients. Random vectors are produced and arranged in ascending order.</ns0:p></ns0:div>
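For illustration, a single chemotactic move (Eq. (2)) can be sketched as follows; the objective (health) evaluation and the surrounding reproduction and elimination-dispersal loops are omitted.

```python
import numpy as np

def chemotaxis_step(beta_i, C_i, rng):
    """One tumble of bacterium i, whose vector holds a patient's latent features."""
    delta = rng.uniform(-1.0, 1.0, size=beta_i.shape)  # random tumble direction Delta(i)
    unit = delta / np.sqrt(delta @ delta)              # unit-length direction of Eq. (2)
    return beta_i + C_i * unit                         # candidate kept if the error improves
```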
<ns0:div><ns0:head>Materials & Methods</ns0:head><ns0:p>In this study, the ReComS, ReComS+ and ReComS++ approaches are proposed to recommend preferences for the MIRA platform. These preferences are used to determine the input settings of the exergames by learning the precise behaviours of patients. ReComS integrates the K-NN and CF methods to classify the predicted values by reducing the error value. The error value is obtained using the projected and actual values of the previous session of the exergame. The ReComS+ approach improves the prediction performance of ReComS by integrating the k-means algorithm with K-NN and CF, which reduces the error value. However, this error value remains relatively high, lowering the prediction performance of both ReComS and ReComS+. ReComS++ further reduces the error value by optimising the prediction values and integrating the k-means, K-NN, CF, and BFOA algorithms.</ns0:p></ns0:div>
<ns0:div><ns0:head>a. Dataset</ns0:head><ns0:p>This study was carried out in the rehabilitation centre of Melaka, Malaysia, to analyse the data generated using the MIRA platform, with ethics approval no. PRPTAR.600-5(27) from Pusat Rehabilitasi Perkeso Sdn. Bhd. The MIRA platform patient data file in this study contains patients' personal information such as first and last names, patient ID, and birth date. It also entails information related to the games played, such as the session ID, name of the game, movement ID, movement name, and associated dates <ns0:ref type='bibr' target='#b34'>(Wilson et al., 2017)</ns0:ref>. Each selected game and movement acts as one exergame with its unique input variables in the item settings dialogue. The settings include the side used (left or right), duration, difficulty, tolerance, and minimum and maximum ranges. The values of these variables can be fixed at the default values or adjusted by the physiotherapist after evaluating the performance of the patient. The MIRA platform can generate 26 variables based on the exergame or cognigame (a game that trains cognitive function). Table <ns0:ref type='table'>1</ns0:ref> describes the most significant variables generated by the exergames.</ns0:p><ns0:p>The experimental data contained 3553 records generated by 61 patients with different types of diagnoses: 41 patients had a stroke, 14 patients had TBI, seven patients had SCI, one patient had CP, and two patients had humerus fractures. Patients provided written informed consent before the start of each experiment. Fig. <ns0:ref type='figure'>2</ns0:ref> portrays an example of the MIRA settings for the Animals exergame with the elbow flexion movement. The item settings include six variables that can be manipulated by the physiotherapist or player. Table <ns0:ref type='table'>2</ns0:ref> presents a description of the exergame features generated by patients using the MIRA platform. In each session, a patient plays an exergame by moving his/her limbs according to the rules of the game and movement exercise. During the exercise, the physiotherapist predicts the variables of the input setting, such as difficulty, tolerance, minimum range, and maximum range, according to his observation, or adopts the default values; however, a number of patients experienced difficulty playing the games under such settings. Afterwards, the physiotherapist deduced the accurate settings from the previous performance of patients in each exergame. Compared to using the default settings, a more accurate setting ensures patients play better. This indicates the significance of this research to the MIRA platform.</ns0:p><ns0:p>In this study, the ReComS approach is proposed to predict the variables of the input setting according to the data history of the patient. As ReComS is expected to provide low prediction accuracy, it is integrated with a clustering method (as used in similar experimental works <ns0:ref type='bibr' target='#b2'>(Al-Hadi et al., 2017</ns0:ref><ns0:ref type='bibr'>) (Al-Hadi et al., 2020)</ns0:ref>) and referred to as the ReComS+ approach. ReComS+ provides good prediction accuracy. ReComS++ is developed by further integrating ReComS+ with the BFOA algorithm to learn the latent features of the patients and to lower the RMSE value throughout the learning iteration process. The experimental results of ReComS and ReComS+ are utilised to benchmark the prediction performance of the ReComS++ approach.</ns0:p></ns0:div>
<ns0:div><ns0:head>b. ReComS Approach</ns0:head><ns0:p>Most personal recommendation systems use CF and K-NN for providing personal recommendations. Here, the CF technique provides the target patient with personal recommendations according to the common behaviours of other patients. The K-NN method is used to obtain the nearest neighbours of each target patient based on their similarities <ns0:ref type='bibr' target='#b25'>(Portugal, Alencar & Cowan, 2018)</ns0:ref>. Thus, ReComS integrates the K-NN algorithm with the CF technique for learning the personal behaviour of patients and predicting the input setting variables. The proposed ReComS approach assists the physiotherapist in collecting accurate data from patients who need to play exergames using the MIRA platform.</ns0:p><ns0:p>The framework of ReComS is arranged following the steps in Fig. <ns0:ref type='figure'>3</ns0:ref>. The ReComS approach selects the target patient and manages all patients' features in the features matrix. K-NN is applied to classify the k nearest neighbours based on the similarities between the target patient and other patients. The RMSE function calculates the ReComS prediction accuracy based on the distance between the features of the target patient and the prediction values obtained using this approach.</ns0:p></ns0:div>
<ns0:div><ns0:head>d. ReComS+ Approach</ns0:head><ns0:p>ReComS+ is proposed to improve the prediction accuracy of ReComS, whose RMSE remains high. The k-means, K-NN, and CF methods are integrated into ReComS+ to provide the predicted variables as the input settings in the MIRA dialogue box of each exergame. The k-means algorithm clusters the records of patients into k clusters. The cluster that contains the record of the target patient is selected to obtain the matrix of neighbours, which is integrated with the CF to provide the preference values, as shown in Fig. <ns0:ref type='figure'>4</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>e. ReComS++ Approach</ns0:head><ns0:p>Despite the high accuracy of the ReComS+ prediction, its error value is relatively high. This error value can be reduced by deploying learning methods. Thus, BFOA is integrated with ReComS+ for learning the behaviours of neighbours by lowering the error value during the iteration stages. The framework of ReComS++ encapsulates four stages that are needed to provide the preferences for patients' input settings, as shown in Fig. <ns0:ref type='figure'>5</ns0:ref>. These stages are described as follows:</ns0:p><ns0:p>i. Data preparation</ns0:p><ns0:p>• Reading data records and arranging the variables in a matrix.</ns0:p><ns0:p>• Encoding the textual data (such as gender, diagnosis, game name, difficulty and side) using numbers.</ns0:p><ns0:p>• Normalising the generated variables of each exergame based on the duration by applying Eq. (5), as sketched after this list.</ns0:p><ns0:formula xml:id='formula_5'>F = F·T_d/T_F (5)</ns0:formula><ns0:p>where F is a feature value, T_d is the default duration (i.e., 60 seconds in this process), and T_F is the duration of the exergame. Eq. (5) is implemented for the variables whose values increase with duration. These include Time, Moving Time, Moving Time in the Exercise, Still or Idle Time, Distance, Points, and Repetition. Other variables do not increase with duration since they are considered as either average values or percentage values between 0 and 100.</ns0:p><ns0:p>• Normalising the variables into the range between 0 and 1. This is based on the dimension of each variable, feature rescaling, and the need to provide properly compatible values for machine learning algorithms. The normalisation is performed based on the standard data mining requirement to provide accurate variable approximation and prediction, using Eq. (6).</ns0:p><ns0:formula xml:id='formula_6'>F_ij = ((F_ij − X)/(Y − X))·(∂ − ∅) + ∅ (6)</ns0:formula><ns0:p>where F_ij is the value of record i and variable j, X is the least value, and Y is the highest value in the whole matrix of variables. ∂ is the highest target value (1) and ∅ is the least target value (0).</ns0:p><ns0:p>• Assigning the target patient, target movement, and side, to find the latest record of the patient where he/she played the selected movement exergame on the target side. The variables of this record are arranged in the first row of the matrix of features.</ns0:p><ns0:p>• Selecting all records from the data containing the target movement of the target patient and putting the variables of these records in the matrix of features.</ns0:p><ns0:p>• Dividing the matrix of features into two parts: the first part holds 70% of the records for training and the second part holds 30% for evaluation.</ns0:p></ns0:div>
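The two normalisation steps, Eq. (5) and Eq. (6), can be sketched as follows (illustrative Python; the feature names are assumed labels for the duration-dependent variables listed above).

```python
import numpy as np

DURATION_FEATURES = {"time", "moving_time", "moving_time_in_exercise",
                     "still_time", "distance", "points", "repetition"}

def normalise_duration(value, exergame_duration, default_duration=60.0):
    # Eq. (5): rescale a duration-dependent feature to the 60-second default.
    return value * default_duration / exergame_duration

def min_max(F, lo=0.0, hi=1.0):
    # Eq. (6): min-max scaling of the whole matrix of variables into [lo, hi].
    F = np.asarray(F, float)
    X, Y = F.min(), F.max()
    return (F - X) / (Y - X) * (hi - lo) + lo
```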
<ns0:div><ns0:head>ii. Clustering by k-means algorithm</ns0:head><ns0:p>The MIRA data consists of 30 games and 34 movements that provide 1020 kinds of features, which makes them challenging to analyse. The features are grouped into a single matrix. Hence, the k-means clustering algorithm is used to simplify the various types of features generated by playing the games and movements in the MIRA application. These features are divided into a set of clusters. The challenge of this experimental work is in determining the appropriate number of clusters. To address this issue, the MIRA data are tested with a set of k clusters ranging between 5 and 10. Based on the data collected, we assign the range from 5 to 10 clusters as the sufficient range of clusters. This is intended to avoid the k-means clustering problem (the k problem) when using more than 10 clusters, because of the high number of zeros in each matrix of features. After that, the prediction performed by ReComS is assessed based on this set of clusters.</ns0:p></ns0:div>
<ns0:div><ns0:head>iii. Classification by K-NN algorithm</ns0:head><ns0:p>The K-NN is applied to retrieve records similar to the target record in the matrix. The challenge in this process is in determining the appropriate number of neighbours. A small number of neighbours yields an accurate prediction performance, while a vast number of neighbours exhibits the lowest performance; however, too few neighbours are not sufficient to learn the accurate features of patients, so a larger number is required to learn the patients' features accurately. To address this problem, BFOA is integrated with ReComS+ to improve the prediction accuracy. In this experimental work, the K-NN algorithm applies the squared ED, as given in Eq. (7).</ns0:p><ns0:formula xml:id='formula_7'>D(x, y) = Σ_{i=1}^{n}(x_i − y_i)² (7)</ns0:formula><ns0:p>where D refers to the distance value between records x and y, while n denotes the number of features. Eq. (7) calculates the distance between the target record and the total records of the target cluster. The target record is hinged on the target cluster mainly because the distance between the target record and the centroid point of this cluster is smaller when compared to other clusters. After calculating the distance between the total records of the target cluster and the target record, the values are sorted in ascending order to derive the neighbours most closely related to the target record. Records with small distance values have the highest similarity to the target record and are thus the nearest neighbours. In this experimental work, the k of neighbours is determined based on a set of k, i.e., 25, 50, 75 and 100 neighbours. These four numbers of neighbours have been chosen according to the available features of each exergame within each cluster. These features constitute an effective solution for executing the required training processes, for three reasons. First, most patients like to play only a few interesting MIRA exergames; thus, the other exergames have fewer records, while the machine learning algorithms need a large number of records to facilitate the process of learning the latent features using the exergame output features. Second, interesting exergames can be predicted easily due to their rich output clusters containing the features most similar to the target exergame; the K-NN method can then find k neighbours in clusters of over 100 records, while this is more difficult in clusters with 100 records or fewer. Third, some patients need to play specific exergames to improve their idle movement skills. The number of output records of these specific exergames is small; thus, the output cluster of the target matrix of such a special exergame does not converge, due to the gap between it and the popular exergames. Hence, only a few neighbours, such as 25 or 50, are available from these output clusters, while it is quite difficult to find 75 or 100 neighbours. In addition, more than 100 neighbours can be considered impossible or inaccurate due to the resulting poor convergence between the features of the target exergame and those of poor-performing clusters. Each k of neighbours is then evaluated using ReComS based on its prediction performance in comparison to the other numbers of neighbours.</ns0:p></ns0:div>
<ns0:div><ns0:head>f. Developing CF Performance with BFOA</ns0:head><ns0:p>The notion of predicting values for the variables that constitute the item settings in MIRA, based on the behaviour of patients who have played a few exergames, is similar to the idea of predicting products for customers based on their preferences in a recommendation system that uses the CF technique. The CF technique predicts the score values for products, while ReComS predicts variables for all input and output features associated with a specific game and a particular movement. Typically, the CF technique uses three functions to estimate values, as described next.</ns0:p></ns0:div>
<ns0:div><ns0:head> Similarity Function</ns0:head><ns0:p>This function provides the correlation between the target record (of game and movement) and total records. The similarity functions that apply the CF technique are Cosine Similarity <ns0:ref type='bibr'>(Al-hadi et al., 2020)</ns0:ref> and the Pearson Correlation Coefficient <ns0:ref type='bibr' target='#b28'>(Srifi et al., 2020)</ns0:ref>. Note that when using the Cosine or Correlation coefficient for MIRA data, these functions generate some outliers due to the existence of zeros in the feature matrix. Thus, the Euclidean similarity function, as shown in Eq. ( <ns0:ref type='formula' target='#formula_7'>7</ns0:ref>), emerges as the most suitable similarity function to be applied for MIRA data, mainly because all the calculated similarity values are known.</ns0:p></ns0:div>
<ns0:div><ns0:head> Prediction Function</ns0:head><ns0:p>This is an important computational procedure obtained from the similarity values retrieved from the similarity function and the correlation between the total records. For the purpose of prediction in this experimental work, Eq. ( <ns0:ref type='formula'>8</ns0:ref>) has been proposed based on the current prediction function in the CF technique <ns0:ref type='bibr' target='#b23'>(Natarajan et al., 2020)</ns0:ref> after considering the difference between rating scores and features values of MIRA.</ns0:p><ns0:p>,</ns0:p><ns0:formula xml:id='formula_8'>𝑃 𝑖 = 𝑉 𝑎 + ∑ 𝑁 ℎ = 1 𝐷(𝐹 𝑎 ,𝐹 ℎ )(𝐹 ℎ,𝑖 -𝑉 ℎ ) ∑ 𝑁 ℎ = 1 𝐷(𝐹 𝑎 ,𝐹 ℎ ) (8)</ns0:formula><ns0:p>where is the predicted or projected value for feature , is the average value of all feature 𝑃 𝑖 𝑖 𝑉 𝑎 values for the target record, is the sum of neighbours, D is the distance similarity value between 𝑁 (feature value of target record) and F h (feature value of neighbour ). Also, refers to the 𝐹 𝑎 ℎ 𝐹 ℎ,𝑖 feature value i of the neighbour , whereas denotes the average value of all features of ℎ 𝑉 ℎ neighbour h.</ns0:p><ns0:p>In this work, Eq. ( <ns0:ref type='formula'>8</ns0:ref>) is used in ReComS and ReComS+ by employing the error function. Nevertheless, the generated output still has errors and the predicted values are over-fitted. Such over-fitting occurs when the predicted value is larger than the features generated by the target exergames. Fig. <ns0:ref type='figure'>6</ns0:ref> graphically exemplifies the generated features of the target exergame and the features predicted by the CF technique within the procedures of the ReComS approach.</ns0:p><ns0:p>In Fig. <ns0:ref type='figure'>6</ns0:ref>, eight predicted feature values are overfitted because these values are greater than the feature values of the target exergames. The remaining predicted features have lower fitting values due to their values are smaller than the feature values of the target exergames. Hence, the predicted features need to be normalised to fit/align these values with the target exergame features.</ns0:p><ns0:p>Nonetheless, the ReComS and ReComS+ are inaccurate approaches in normalizing the predicted features. Thus, an optimised algorithm is embedded in the prediction method to normalise the prediction values. Notably, BFOA has been acknowledged as an optimisation algorithm commonly applied in recommendation systems <ns0:ref type='bibr' target='#b13'>(Hwangbo, Kim & Cha, 2018)</ns0:ref> since this algorithm can exceptionally learn the deep features of each matrix. The contribution of the ReComS++ approach is represented in Eq. ( <ns0:ref type='formula'>9</ns0:ref>).</ns0:p><ns0:p>,</ns0:p><ns0:formula xml:id='formula_9'>𝑃 𝑖 = 𝑉 𝑎 + 𝐵 𝑖 ∑ 𝑁 ℎ = 1 𝐷(𝐹 𝑎 ,𝐹 ℎ )(𝐹 ℎ,𝑖 -𝑉 ℎ ) ∑ 𝑁 ℎ = 1 𝐷(𝐹 𝑎 ,𝐹 ℎ ) (9)</ns0:formula><ns0:p> where B i is the bacteria value that can be learned by tracking feature F i, and the sum of bacterium members will be equal to the number of neighbours' features. These bacteria have been used to track all features of the neighbours (such that each column in the matrix is managed by a bacterium member) and provide accurately predicted input variables for MIRA. The remaining vectors of Eq. ( <ns0:ref type='formula' target='#formula_5'>5</ns0:ref>) have been described in Eq. ( <ns0:ref type='formula' target='#formula_4'>4</ns0:ref>). 
The BFOA is implemented based on the algorithmic phases described in Section 2.5 while the values of bacteria factors are listed in Table <ns0:ref type='table'>3</ns0:ref>.</ns0:p></ns0:div>
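A minimal sketch of the prediction step (not the authors' code), following Eqs. (8) and (9) literally: the Eq. (7) distance D(F_a, F_h) is used as the weighting term, as the text defines it, V_a and V_h are record averages, and the optional per-feature bacteria values B_i implement the ReComS++ scaling. All array names are illustrative assumptions.

import numpy as np

def predict_features(target, neighbours, bacteria=None):
    """Predict every feature P_i of `target` from its N neighbours."""
    v_a = target.mean()                               # V_a: average of target record
    v_h = neighbours.mean(axis=1)                     # V_h: average per neighbour
    d = np.sum((neighbours - target) ** 2, axis=1)    # Eq. (7), used as D(F_a, F_h)
    dev = neighbours - v_h[:, None]                   # F_{h,i} - V_h per feature
    weighted = (d @ dev) / d.sum()                    # weighted mean deviation
    if bacteria is None:
        return v_a + weighted                         # Eq. (8)
    return v_a + bacteria * weighted                  # Eq. (9): B_i per feature i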
<ns0:div><ns0:head> Benchmark Function</ns0:head><ns0:p>The proposed approaches are evaluated to examine the performance of the CF technique using RMSE <ns0:ref type='bibr'>(Al-Hadi et al., 2020)</ns0:ref> and Mean Absolute Error (MAE) <ns0:ref type='bibr' target='#b32'>(Wang, Yih & Ventresca, 2020)</ns0:ref>.</ns0:p><ns0:p>In this article, RMSE measure is used for calculating the differences between the variable of target patient and predicted values for same variables of target patient. These variables are the input setting variables (difficulty, tolerance, minimum range, and maximum range) and the generated variables by exergame such as average acceleration, average deceleration, moving time and other variables that are described in Table <ns0:ref type='table'>1</ns0:ref>. The predicted values are calculated by Equation 9 according to the similarity values for patients comparing to the target patient variables.</ns0:p><ns0:p>In other words, the total number of generated exergame features of all patients are divided into kclusters using the k-means. Each cluster is classified by the K-NN algorithm that selects the most suitable features needed to provide accurate feedback. BFOA focuses on accurately learning the latent features. This is achieved by tracking the positive effects produced through classified features by reducing the RMSE value throughout the optimization stages. This is achieved by accurately learning the convergence among the generated features for various patients who played various exergames, as shown in Eq. ( <ns0:ref type='formula' target='#formula_10'>10</ns0:ref>).</ns0:p><ns0:formula xml:id='formula_10'>𝑅𝑀𝑆𝐸 = 1 𝑅 𝑅 ∑ 𝑝 = 1 1 𝑛 𝑛 ∑ 𝑖 = 1 (𝐹 𝑎,𝑖 -𝑃 𝑖 ) ,<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>where RMSE is the average RMSE values for all records in the matrix of neighbours, R denotes the number of records in the training or testing sets, n refers to the number of features in the matrix, F a,i represents the value of feature i in the target record, and P i stands for the predicted value recommended to feature F i . The highest RMSE value reflects the lowest accuracy prediction performance.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>ReComS improves the performances of the following three experimental procedures.</ns0:p></ns0:div>
<ns0:div><ns0:head>a. Evaluating the ReComS Approach</ns0:head><ns0:p>The K-NN algorithm classifies integral features of a vast number of patients who played several games with varying movements. The CF computes the similarity between the target features and the features of each neighbour before predicting the new values. The performance of the predicted values is consequently assessed using the RMSE value. To benchmark the varied RMSE values for several target records based on patients' personality behaviours, the average RMSE is calculated to obtain accurate outputs. Fig. <ns0:ref type='figure'>7</ns0:ref> illustrates the prediction accuracy of this approach based on two sets, the training and testing sets. Both sets show that 25 nearest neighbours yield the lowest accuracy when compared to 50, 75, and 100 neighbours. The case of 100 neighbours yields more accurate performance than that of 25 neighbours. This deterioration of the prediction performance for small neighbour sets may be solved by additional classification approaches.</ns0:p></ns0:div>
<ns0:div><ns0:head>b. Evaluating the ReComS+ Approach</ns0:head><ns0:p>The k-means clustering algorithm is applied in ReComS+ to address the limitation of ReComS using CF and K-NN. With these methods, even a small number of nearest neighbours can result in highly accurate performance. Fig. <ns0:ref type='figure'>8</ns0:ref> indicates that after integrating the k-means algorithm into ReComS, the prediction performance of CF is improved and the feedback of the nearest neighbours is corrected. The outcomes, as depicted in Fig. <ns0:ref type='figure'>8</ns0:ref>, show that the prediction performance of CF using five clusters and 25 neighbours is better for both the training and testing sets, as compared to the results when 50, 75, and 100 neighbours are integrated with five clusters. Nevertheless, the number of clusters is not yet justified, as six or ten clusters may offer higher prediction accuracy, as shown in Fig. <ns0:ref type='figure'>9</ns0:ref>. Hence, ReComS+ tests the prediction performance over a range of k clusters to address the problem of choosing the number of clusters.</ns0:p><ns0:p>In addition, Fig. <ns0:ref type='figure'>9</ns0:ref> shows that various numbers of clusters can provide similar prediction performance for ReComS+ on both the training and testing sets. The results show that 5 clusters provide the highest prediction accuracy on the training dataset when compared to the performance of ReComS+ with the other k values, which range between 6 and 10. The accurate prediction of ReComS+ on the testing dataset confirms that its prediction performance is similar across the k clusters, apart from 6 clusters, which generate lower accuracy. For this reason, subsequent experimental work using ReComS++ applies 5 clusters. Though the accuracy of ReComS+ has been improved by this approach, the RMSE is still high and the range of the predicted values should be normalised. For this reason, the BFOA is applied to normalise the predicted values and minimise the RMSE values.</ns0:p></ns0:div>
<ns0:div><ns0:head>c. Evaluating the ReComS++ Approach</ns0:head><ns0:p>The BFOA is implemented in this work to normalise the predicted values of the ReComS+ approach, which utilised the CF, K-NN and k-means methods. The results prove that the predicted values are similar to the variables of the target record while emphasising the need to decrease the RMSE values. Fig. <ns0:ref type='figure'>10</ns0:ref> presents the prediction performance of the ReComS++ approach, which employs CF, K-NN, k-means, and BFOA, for both the training and testing sets. The RMSE values obtained with this approach are small, indicating high prediction accuracy for all sets of tested neighbours. The set of 25 neighbours provides the highest prediction accuracy when compared to the other sets of neighbours (i.e., 50, 75 and 100) for the MIRA training and testing datasets. The results of both training and testing sets are close, indicating that this approach provides accurate prediction values for all the target records in the MIRA data.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion and Future works</ns0:head><ns0:p>The CF technique is applied to predict the values of the future variables of the item settings. This technique is integrated with the K-NN algorithm in ReComS. It provides each exergame with predicted feature values related to the generated features of the target exergame. A few of these predicted features can be used to assign the variables of the exergame dialogue box settings. The ReComS approach provides low prediction accuracy due to the high percentage of over-fitted predicted values. Hence, ReComS needs further improvement to classify the various output features of all exergames. For this reason, ReComS is subsequently improved by ReComS+, which utilises the k-means algorithm for grouping the various generated features into k clusters and then finding the features nearest to the target exergame features within the same cluster.</ns0:p><ns0:p>The ReComS+ approach chooses the best cluster and number of neighbours through its accurate predictions. The prediction performance of ReComS+ is more accurate than that of ReComS because the former accurately learns the convergence among exergame features using k-means. Similarly, each cluster is classified to accurately learn the neighbours' features using K-NN. Nevertheless, the predicted values generated in ReComS+ vary and the prediction accuracy of ReComS+ is low due to the difficulty of learning the latent features of the vast amount of generated exergame features. Thus, the ReComS+ approach should be further improved using an optimisation algorithm that can effectively learn the latent features of the neighbours within each cluster.</ns0:p><ns0:p>BFOA is one of the efficient optimisation algorithms that have been used to improve the prediction performance of the CF technique in some recommendation systems <ns0:ref type='bibr'>(Al-hadi et al., 2020</ns0:ref><ns0:ref type='bibr' target='#b2'>)(Al-Hadi et al., 2017)</ns0:ref>. Accordingly, BFOA is utilised in the ReComS++ approach to reduce the over-fitted prediction values by learning the latent features of the neighbours within each cluster. Further, BFOA is used to ensure that the outlier data belong to the cluster. The experimental approaches show that ReComS++ has addressed the inherent challenges of the first and second approaches. Fig. <ns0:ref type='figure'>11</ns0:ref> shows the comparison between the RMSE values of the three experimental approaches, ReComS, ReComS+, and ReComS++, on the MIRA training set. The ReComS++ approach provides the lowest RMSE value, indicating the highest prediction accuracy when compared with both the ReComS and ReComS+ approaches. Furthermore, Fig. <ns0:ref type='figure'>11</ns0:ref> illustrates the experimental outcomes derived from the MIRA testing dataset. The results are similar to those generated for the training dataset using the ReComS++ approach. This indicates that ReComS++ successfully addressed the drawbacks of the ReComS and ReComS+ approaches.</ns0:p><ns0:p>There are several promising directions in which the ReComS++ approach can be integrated with other platforms that have profile settings for exergames. It may be used to learn latent features from the exergame output features of any platform with setting variables, in order to predict the input settings' variables.
Further, the ReComS++ approach currently provides correct predictions in 85% of cases; this result could be raised to 90% by exploring other machine learning methods and by reducing the computational time of the iterative learning in ReComS++. Future work can focus on specific variables such as the average correct-answer reaction time for cognitive exergames. The cognitive exergames and the repetition of range-of-motion exercises need more study to understand sudden patient movements and to predict suitable settings. Therefore, we intend to explore various linear regression techniques for predicting the optimal setting variables. In addition, we intend to explore several deep learning architectures to find the structure that best reduces the computational time and learns more accurate latent features of patients based on their personal behaviours while playing the exergames.</ns0:p></ns0:div>
<ns0:div><ns0:head>a. Criteria for the ReComS++ approach according to the output</ns0:head><ns0:p>A significant milestone in this work is determining the predicted difficulty level and the remaining variables in the dialogue box for the setting of each item. Fig. <ns0:ref type='figure'>12</ns0:ref> shows the minimum and maximum predicted values for the difficulty and tolerance, as well as the minimum and maximum ranges.</ns0:p><ns0:p>The threshold intervals are determined based on the predicted and observed values obtained from the MIRA application in Melaka, Malaysia, while supervising patients who played MIRA games. They are illustrated in Fig. <ns0:ref type='figure'>13</ns0:ref>, which presents two observations for two target records created from the output of the predicted variables using ReComS++. First, the example in Fig. <ns0:ref type='figure'>13</ns0:ref> depicts the actual observation made by the physiotherapist for a patient who played the game with a movement exercise at an easy (difficulty) level with a tolerance of up to 30% (percentage of range of movement of 0-60%). After that, the experimental approach normalises these values into the range between 0 and 1, as shown in the figure. The experimental approach performs closely to the actual values in terms of difficulty, tolerance, minimum range, and maximum range. Based on the threshold shown in Fig. <ns0:ref type='figure'>12</ns0:ref>, the experimental approach determines the final prediction values of the next item settings as Easy, 20 for tolerance, 0 for the minimum range and 70% for the maximum range. The second example depicts the similar procedures needed to arrive at the final decision in accordance with the threshold.</ns0:p></ns0:div>
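A minimal sketch of the threshold mapping described above (not the authors' code). The numeric cut-offs of Fig. 12 are not reproduced in the text, so the interval boundaries and the Medium/Hard level names below are hypothetical placeholders; only the shape of the mapping, from normalised 0-1 predictions to a discrete dialogue-box setting, follows the description.

def settings_from_prediction(difficulty, tolerance, min_range, max_range):
    """Map normalised (0-1) predictions to final item settings."""
    # Hypothetical difficulty intervals; the real ones come from Fig. 12.
    level = 'Easy' if difficulty < 0.4 else 'Medium' if difficulty < 0.7 else 'Hard'
    to_pct = lambda v: int(round(v * 10) * 10)        # snap to the nearest 10%
    return {'difficulty': level,
            'tolerance': to_pct(tolerance),           # e.g. 20
            'min_range': to_pct(min_range),           # e.g. 0
            'max_range': to_pct(max_range)}           # e.g. 70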
<ns0:div><ns0:head>b. Evaluating the ReComS++ approach based on the physiotherapist observations</ns0:head><ns0:p>The ReComS++ approach, programmed using Java, is included in the MIRA system to provide physiotherapists and patients with preferences. Fig. <ns0:ref type='figure'>14</ns0:ref> shows the interface of the preferences, where the physiotherapist (who helps patients to play MIRA exergames) can easily obtain the recommended preferences for the selected patient, movement, and side. To evaluate the ReComS++ approach, we obtained the evaluation file completed by the physiotherapists of the Perkeso Rehabilitation Centre in Melaka, Malaysia. The file records the physiotherapists' observations after using the preferences suggested by ReComS++ for a set of patients over a period of 5 weeks. The file contains 1182 records. Table <ns0:ref type='table' target='#tab_0'>4</ns0:ref> presents an example of the information provided in the file.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_0'>4</ns0:ref> presents the set of movements performed by a group of patients who played a set of exergames (movements and games) using the MIRA platform. Each exergame has four preferences that constitute the input setting variables in ReComS++ (i.e., difficulty, tolerance, minimum range and maximum range). Here, the physiotherapist observes each patient who performs the exergame and registers his/her activity performance as positive (P) or negative (N). The evaluation results are summarised in Table <ns0:ref type='table'>5</ns0:ref>. The table shows a higher percentage of positive preferences compared to negative preferences. This implies that ReComS++ effectively recommends accurate preferences for patients.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusion</ns0:head><ns0:p>In most cases, patients train their disabled limbs by utilising the facilities offered at rehabilitation centres to regain their limbs' functionality. VRT, such as MIRA, refers to a contemporary rehabilitation technique that aids patients in performing 'game-aided' exercises in order to increase their motivation and engagement in physical therapy. Nonetheless, physiotherapists who deal with this application need to predict the values of the input variables of the item settings for each patient manually, which is the main challenge in this domain. Therefore, in this study, we utilise a recommender system to suggest the most suitable settings for patients' movements based on their movement history. Since the exergames generate various features, automated analysis is required to provide a summary of the patient's (movement) performance. To address these challenges, three experimental approaches were proposed and their shortcomings were tested through learning procedures: 1) ReComS, with the CF and K-NN methods; 2) ReComS+, with the CF, K-NN and k-means methods; and 3) ReComS++, with the CF, K-NN, k-means and BFOA methods. The experimental results demonstrated that ReComS+ yields more accurate predictions when compared with ReComS, while ReComS++ achieves a higher accuracy as compared to ReComS+. Overall, ReComS++ performs best for MIRA exergames as it provides MIRA with the most accurate predictions for the input setting dialogue box. It thus assists patients to perform MIRA exergames correctly.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 1</ns0:head><ns0:p>The case of a patient playing three game-based exercises using the MIRA Platform.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>to denote the means by which the chemical cohesiveness of the signal diffuses; h_repellant sets the height of the repellent (a propensity to avoid a nearby cell); and w_repellant defines the negligible area where the cell is relative to the diffusion of the chemical signal. S is the number of groups within the patients' latent features, P denotes the dimension of the search space, β_m is the latent features of group number m, and β_i^m represents latent feature number i in group m. b. Reproduction: This phase deals with the feedback (RMSE) value, which acts as the fitness value. These values are obtained after training the target patients' features that have been extracted through the current training stage using the k-means, K-NN, BFOA and CF methods. The RMSE values are saved in an array before sorting (smaller and larger values). The lower half of the latent features, having larger fitness values, dies while the outstanding latent features of the other half reproduce.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,178.87,525.00,310.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,178.87,525.00,313.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,178.87,525.00,150.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,178.87,525.00,148.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,178.87,525.00,407.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,178.87,525.00,228.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,178.87,525.00,301.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,178.87,525.00,244.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='30,42.52,178.87,525.00,239.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='31,42.52,178.87,525.00,305.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='32,42.52,178.87,525.00,277.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='33,42.52,178.87,525.00,350.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='34,42.52,199.12,525.00,187.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='35,42.52,178.87,525.00,311.25' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 4 (on next page)</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>An example of information collected by the physiotherapist for MIRA and ReComS++.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Preference</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Dear Editor,
The authors would like to thank the editor and the reviewers for reviewing our work. We also appreciate the constructive comments of the reviewers.
We have edited the manuscript to address the concerns of the anonymous reviewers by including more details about this work.
Comments of Reviewer 1
The paper provides clear definition of concepts, terms, and methods (Collaborative Filtering, Kmeans algorithm, etc.).
Sufficient context is provided to understand:
why exergames are useful for patients?
Thank you for this comment. We have added further explanation of the advantages of exergames for patients within the Introduction section, such as:
“These games are quite promising as they furnish patients with new experiences while performing their daily exercises which are rehabilitation therapies (Li et al., 2018a).”
“Through several training procedures, exergames help patients to improve on the physical movements of their muscles (Jaarsma et al., 2020) so that they can move their idle parts gradually.”
In assistive technology, serious games come with multimodal functions and immersive characteristics that have been embedded into various types such as robotics, virtual reality, and simulator. Thus, serious games are promising technology that can bring new experiences for people with disabilities to perform their rehabilitation (Li et al., 2018b) (Merilampi et al., 2017).
why rec sys were needed for the MIRA platform, why exergames are useful for patients.
The overall aim of the paper is clear: analysis of patients' movements and use of rec sys to make suggestions on the most appropriate settings for patients' movements.
The structure of the paper is clear; raw data and figures are shared.
Thank you for this comment.
The ReComS methods are proposed to improve on the therapist's manual predictions by providing automatic predictions; this reason is given in the sentences: “Traditionally, the physiotherapist tracks the exergame history of each patient (that reflects the movement threshold of the idle limb) then suggests the values of input settings in the dialogue box for future exergaming. This method of observation is costly and time-consuming because the physiotherapist needs to prepare a manual list of variables to track the patient's history.”
In addition, ReComS aims to obtain more accurate automatic predictions for the MIRA platform than the decision tree model, one of the models developed for MIRA, which is described in: “In addition, the decision tree model applied to predict a patient’s rehabilitation future performance is based on time, average acceleration, distance, moving time, and average speed. This prediction method uses the default exergame settings and the previous performances of patients who played the same exergame with the same side (Zainal et al., 2020). In spite of that, the prediction will be more accurate if the exergame settings are controlled automatically.”
However, the paper needs a professional English review (e.g. line 43 'who having' > 'who are having'; the overuse of which statements in line 487 and 488, and again in line 69 and 70; in line 47 'where is a' > 'where there is a', etc.).
Thank you for this comment. The English has been improved throughout the paper by a professional native English-speaking proof-reader.
The innovative contributions for MIRA platform are clear (no rec sys before), however the paper needs further work to demonstrate and articulate why MIRA with a rec sys system is innovative compared to the available research ('to the best of the authors' knowledge, addressing this problem has been neglected in previous work' is based on trust, not evidence - we need evidence. Please add references of other works and how your work is innovative compared to others).
Thank you for this insightful comment. The other works and some references have been added to the introduction such as:
1) The positive effect of MIRA exergames for children who suffer from the brachial plexus palsy caused by transverse myelitis (Czakó, Silaghi & Vizitiu, 2017).
2) The positive effect of MIRA exergames for adult patients (Mcglinchey et al., 2015).
where in both case studies (Mcglinchey et al., 2015) and (Czakó, Silaghi & Vizitiu, 2017), a physiotherapist uses default exergames settings and no automation was considered.
3) The decision tree model
“the exergames in MIRA are created specifically to aid physical rehabilitation therapies and assessments. An example is that studies the movement performance of children who are seven years old and suffer from the brachial plexus palsy caused by transverse myelitis (Czakó, Silaghi & Vizitiu, 2017). In the study, the movement performance of children was improved by MIRA exergames training. Another case study demonstrates that MIRA exergames have positive effects and can be safely implemented for adult patients (Mcglinchey et al., 2015). However, in both case studies (i.e., (Mcglinchey et al., 2015) and (Czakó, Silaghi & Vizitiu, 2017)), a physiotherapist uses default exergames settings and no automation was considered.”
“In addition, the decision tree model applied to predict a patient’s rehabilitation future performance is based on time, average acceleration, distance, moving time, and average speed. This prediction method uses the default exergame settings and the previous performances of patients who played the same exergame with the same side (Zainal et al., 2020). In spite of that, the prediction will be more accurate if the exergame settings are controlled automatically.”
Experimental design
As mentioned before: The overall aim of the paper is clear: analysis of patients' movements and use of rec sys to make suggestions on the most appropriate settings for patients' movements.
Research methods are clear both in the abstract and in the Materials and Methods section. Benchmarks are specified, as well as Dataset and equations used.
As mentioned above, it's needed some work to state how the research fills an identified knowledge gap, as there are probably other exergames systems developed by others as well.
Thank you for this comment.
The existing works on the prediction scoring model and the decision tree have been added to this paper to strengthen our case study on the MIRA platform.
The decision tree model is used to suggest a comfortable difficulty mode for rehabilitation patients using the k-means algorithm. It analyses five variables that are generated by patients when playing MIRA exergames. However, there are various exergames with several variables that need to be analysed to develop the prediction performance.
The prediction scoring method is used to suggest a comfortable difficulty mode for rehabilitation patients using the k-means algorithm (Zainal et al., 2019). It analyses five variables that are generated by patients when playing MIRA exergames. However, there are various exergames with several variables that need to be analysed to find an accurate prediction score.
As mentioned above, the impact is clear for the MIRA platform, but not clear compared to the state of the art of exergames. If the novelty is provided by the use of recommender systems in an unexplored field (exergames), a clear statement is needed. More extensive state of the art research works in exergames are needed. It's also recommended to add in the conclusions a clear statement of how the usage of rec sys in the MIRA platform would benefit the field of exergames and provide a value compared to other works (it's already clear what results were obtained, just add the overall impact compared to the existing literature).
Thank you for this comment.
We have added this in the introduction section and in the description of the MIRA platform.
The ReComS approaches can be implemented to predict the input setting variables of other platforms' exergames.
“There are several potential promising directions where ReComS++ approach can be integrated with other platforms that have the profile settings for exergames. It may be used to learn latent features of the exergames output features by each platform that has settings variable for predicting the input settings’ variables.”
In the conclusion part, we summarized the impact of ReComS++
“Overall, ReComS++ performs best for MIRA exergames as it provides MIRA with the most accurate predictions for the input setting dialogue box. It thus assists patients to perform MIRA exergames correctly.”
Suggestion: line 609 'the experimental results demonstrated that the recoms+ approach appeared better compared to the recoms approach' ' please further develop this sentence specifying why (it's mentioned in the discussion of the results, but add a clear conclusive statement here). Also, 'it appeared better' is a vague statement -better in which way?. Same for line 610 about Recoms++.
Thank you for this comment.
We have addressed this concern in the discussion in:
“The prediction performance of ReComS+ is more accurate than that of ReComS because the former accurately learn the convergence among exergame features using the k-means. Similarly, each cluster is classified to accurately learn the neighbours’ features using K-NN. Nevertheless, the predicted values generated in ReComS+ vary and the prediction accuracy of ReComS+ is low due to the difficulty in learning the latent features of the vast amount of generated exergame features. Thus, the ReComS+ approach should be further improved using an optimization algorithm that can effectively learn the latent features of the neighbours within each cluster.”
“The ReComS++ approach provides the lowest RMSE value, indicating it has highest prediction accuracy when compared with both ReComS and ReComS+ approaches.”
In the conclusion, there is a reference of Recoms+ having 'better' experimental results than recoms, and recoms++ providing better accuracy performance than recoms+. Does this means recoms++ is the best approach because the accuracy is better, and is therefore the best approach for MIRA? If yes, please assert so. If not, please discuss why, in the evaluation, accuracy is not the only relevant parameter for identifying the best approach.
Thank you for this comment. We have improved these sentences in the conclusion:
“The experimental results demonstrated that ReComS+ yields more accurate predictions when compared with ReComS while ReComS++ achieves a higher accuracy as compared to ReComS+.”
“Overall, ReComS++ performs best for MIRA exergames as it provides MIRA with the most accurate predictions for the input setting dialogue box. It thus assists patients to perform MIRA exergames correctly.”
Some observations are described in the Discussion section, just make clear conclusive statements on the Conclusion section.
Thank you for this comment. The prediction performance of the ReComS, ReComS+ and ReComS++ approaches is described in clear sentences in the Discussion:
“The ReComS++ approach provides the lowest RMSE value, indicating it has highest prediction accuracy when compared with both ReComS and ReComS+ approaches.”
and in the Conclusion as:
“The experimental results demonstrated that ReComS+ yields more accurate predictions when compared with ReComS while ReComS++ achieves a higher accuracy as compared to ReComS+. Overall, ReComS++ performs best for MIRA exergames as it provides MIRA with the most accurate predictions for the input setting dialogue box. It thus assists patients to perform MIRA exergames correctly.”
Comments of Reviewer 2
Language needs proof-reading. A professional, native English-speaking proof-reader is required. There are too many language issues to single out, but this proof-reading is needed.
The language of the paper has been improved by a professional native English-speaking proof-reader.
Conceptual clarity needs improving consistently refer to either patients or users. For example, 'to suggest the most appropriate setting for patients to enhance the users' performance' ==> is the user the same as patient?
Thank you for this comment. It has been addressed and we have explained the relationship between user and patient in the CF part
“In this work, the patients represent users and the generated output features of the MIRA platform represent items.”.
In the other parts, we used the word “patient” instead of “user”.
The explanation for the Bacterial Foraging Optimisation Algorithm needs to be clarified. While the other two algorithms are commonly used for recommenders, this algorithm is not.
Specifically,
(a) why was it selected?
(b) how was it applied --- the current explanation is describing micro-biology, but the use case here was exergame. The explanation needs to reflect the use case.
Thank you for these comments. The reasons that motivated us to use BFOA as one of the optimisation algorithms are given in: “Optimisation algorithms have proven effective in several areas including RS (Al-Hadi et al., 2017) and healthcare (Zainal et al., 2020). For instance, BFOA has been well-embraced in recent RS approaches for providing high accuracy prediction (Al-hadi et al., 2020)(Al-Hadi et al., 2017). This motivates us to use BFOA in this experiment for learning patients’ latent features and for optimising the output prediction.” In other words, the optimisation algorithm is integrated with the ReComS+ approach to ensure the outlier data belong to one of the clusters.
The explanation of BFOA has been updated to show the relationship between this algorithm and MIRA exergames and how it was implemented in this work in:
“Similarly, BFOA is utilized to learn patients’ latent features which are classified as nearest neighbours within the features of the best cluster while other clusters will be neglected.”
The relation between patient features and the BFOA components is described in:
“Patients’ accurate features represent rich nutrients that can be tracked by learning the lowest error value throughout the learning iteration.”
The bacteria weights represent the latent features of patients using factor β_m, and β_i^m represents latent feature i within bacteria group m.
The authors allude to these points in later sections of the paper, but they need to be first explained when introducing the algorithm.
Thank you for this comment.
The reasons for implementing the algorithms (i.e., CF, KNN, k-means, and BFOA) in this work, and how they were implemented within MIRA data are explained in:
“the physiotherapist tracks the exergame history of each patient (that reflects the movement threshold of the idle limb) then suggests the values of input settings in the dialogue box for future exergaming. This method of observation is costly and time-consuming because the physiotherapist needs to prepare a manual list of variables to track the patient's history. Thus, most physiotherapists use the default setting for all patients which retards the patients’ performances, especially when they play using their idle limbs. In view of the aforementioned, this research proposes ReComS approaches for learning the “best” setting for each patient.”
• The CF is used for learning the personality features of patients.
• KNN is used for analysing a large matrix of features based on personality to find the nearest neighbours to the target patient.
• k-means is used for reducing the high dimensionality of a huge number of features.
• the BFOA is integrated with the ReComS+ approach to ensure the outlier data belong to one of the clusters.
The number assignment of clusters needs to be clarified:
'MIRA data are tested in a set of k clusters ranging between 5 and 10 clusters'
>> why this test range?
The same for kNN:
'In this experimental work, the k of neighbours is determined based on a set of k (25 neighbours, 50 neighbours, 75 neighbours, 100 neighbours).'
>> why this range?
Thank you for this comment.
The reasons for assigning the range of neighbours are described in:
“These four numbers of neighbours have been chosen according to the available features of each exergame within each cluster. These features constitute an effective solution for executing the required training processes for three reasons. First, most patients like to play only a few interesting MIRA exergames. Thus, other exergames have fewer records. Meanwhile, the machine learning algorithms need large number of records to facilitate the process of learning the latent features using the exergame output features. Second, interesting exergames can be predicted easily due to their rich output clusters using most similar features to the target exergame. Then, K-NN method can find k neighbours of over 100 records while it is more difficult in clusters with less than or equal 100 records. Third, some patients need to play specific exergames to improve their idle movement skills. The number of output records of these specific exergames is small. Thus, the output cluster of the target matrix of such special exergame does not converge due to the gap between them and the popular exergames. Hence, there is only a few neighbours such as 25 or 50 from these output clusters while it is quite difficult to find 75 or 100 neighbours. In addition, more than 100 neighbours can be considered as impossible or inaccurate due to the resulting poor convergence between the features of target exergame and those of poor-performing clusters.
The reasons for assigning the number of clusters are described in: “Based on the data collected, we assign the range from 5 to 10 clusters as the sufficient range of clusters. This is intended to avoid the k-means clustering problem (k problem) when using clusters above 10 because of the high number of zeros in each matrix of features.”
The inability of normalization to help overfitting needs to be explained:
'prediction values are over-fitting; hence they need to be normalised to fit the prediction values. Nonetheless, the normalisation method is inapt for this process.'
>> why inapt?
>> is there a reference applying BFOA in a similar task before? If so, please include.
Thank you for this comment.
The over-fitting has been explained in:
“Such over-fitting occurs when the predicted value is larger than the features generated by the target exergames. Fig. 6 graphically exemplifies the generated features of the target exergame and the features predicted by the CF technique within the procedures of the ReComS approach. In Fig. 6, eight predicted feature values are overfitted because these values are greater than the feature values of the target exergames. The remaining predicted features have lower fitting values due to their values are smaller than the feature values of the target exergames. Hence, the predicted features need to be normalised to fit/align these values with the target exergame features.” and we added Fig. 6 to show an example of overfitted predictions compared with output features that are created by each exergame.
We added some references that show a similar task within other recommendation systems, in “Optimisation algorithms have proven effective in several areas including RS (Al-Hadi et al., 2017) and healthcare (Zainal et al., 2020). For instance, BFOA has been well-embraced in recent RS approaches for providing high accuracy prediction (Al-hadi et al., 2020)(Al-Hadi et al., 2017). This motivates us to use BFOA in this experiment for learning patients’ latent features and for optimising the output prediction.”.
The application of RMSE needs clarification - the authors state, correctly, that the metric is applied 'for calculating the differences between the predicted values and the values observed.' But what were these values in the current study? Combination of different variables? Which variables?
• Thank you for this comment. We have added more explanation about RMSE metric in Benchmark Function part “In this article, RMSE measure is used for calculating the differences between the variable of target patient and predicted values for same variables of target patient. These variables are the input setting variables (difficulty, tolerance, minimum range, and maximum range) and the generated variables by exergame such as average acceleration, average deceleration, moving time and other variables that are described in Table 1. The predicted values are calculated by Equation 9 according to the similarity values for patients comparing to the target patient variables.”
The study found that the method that combined three techniques was the best. However, this is typically the case for any machine learning situation - ensemble methods outperform single-method approaches. So, the authors need to clarify if there is anything unexpected in this finding, and what it means for future work - could different ensemble techniques be tested against each other?
Thank you for this comment. In the discussion part, more details are summarised in:
1. The weakness of the ReComS approach is the high percentage of over-fitted predicted values.
2. The weakness of the ReComS+ approach lies in the outlier data, which reduce the prediction performance of this method.
3. The weakness of ReComS++ is its computational time, and the prediction accuracy needs to be upgraded from 85% to 90%.
4. Our team works in the direction of analysing MIRA data and predicting the future settings for patients' movements as our case study.
Discussion should be enhanced with an evaluation of practical implement ability of the tested methods. While the most complex method yielded the best performance, is it feasible to implement? If so, under which conditions (and under which other methods could be better).
Discussion should be enhanced by discussing the weaknesses of the current work. What are they? Honestly explicate how the research could be improved.
Thank you for this comment.
We have added details about the weak points of our approach within the future work.
The ReComS++ approach provides correct predictions in 85% of cases, and this result can be raised to 90% by exploring other machine learning methods. The ReComS++ approach uses many stages throughout the iterative learning (computational time). Future work can focus on specific variables such as the average correct-answer reaction time for cognition. The cognitive exergames and the repetition of range-of-motion exercises need more study to understand sudden patient movements and to predict suitable settings. Therefore, we intend to explore various linear regression techniques for predicting the optimal setting variables. In another direction, several deep learning architectures can be utilised to learn accurate latent features of patients and to explore accurate input dialogue box settings.
discussion should be enhanced by adding directions for future research. What are the next steps? What other methods could be tested? How 'good' are the results in terms of practical implementation and how far are they from yielding a substantial positive impact to patients' recovery?
Thank you for this comment.
We have added future works within the discussion part.
“There are several promising directions in which the ReComS++ approach can be integrated with other platforms that have profile settings for exergames. The ReComS++ approach provides correct predictions in 85% of cases, and this result can be raised to 90% by exploring other machine learning methods. The ReComS++ approach uses many stages throughout the iterative learning (computational time). Future work can focus on specific variables such as the average correct-answer reaction time for cognition. The cognitive exergames and the repetition of range-of-motion exercises need more study to understand sudden patient movements and to predict suitable settings. Therefore, we intend to explore various linear regression techniques for predicting the optimal setting variables. In another direction, several deep learning architectures can be utilised to learn accurate latent features of patients and to explore accurate input dialogue box settings.”
" | Here is a paper. Please give your review comments after reading it. |
153 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Ebooks of the future may respond to the emotional experience of the reader. (Neuro-) physiological measures could capture a reader's emotional state and use this to enhance the reading experience by adding matching sounds or to change the storyline therewith creating a hybrid art form in between literature and gaming. We describe the theoretical foundation of the emotional and creative brain and review the neurophysiological indices that can be used to drive future ebook interactivity in a real life situation. As a case study, we report the neurophysiological measurements of a bestselling author during nine days of writing which can potentially be used later to compare them to those of the readers. In designated calibration blocks, the artist wrote emotional paragraphs for emotional (IAPS) pictures. Analyses showed that we can reliably distinguish writing blocks from resting but we found no reliable differences related to the emotional content of the writing. The study shows that measurements of EEG, heart rate (variability), skin conductance, facial expression and subjective ratings can be done over several hours a day and for several days in a row. In follow-up phases, we will measure 300 readers with a similar setup.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>Introduction</ns0:head><ns0:p>The sales of ebooks are rapidly increasing and are expected to surpass that of printed books in the near future. In its basic form, an ebook is an electronic version of the printed book. However, the devices used to access an ebook (ereader, tablet, etc.) have more capabilities than just displaying the book and turning pages on request of the reader. The device may enable true bidirectional interaction with the reader, which is a significant innovation compared to the one-directional printed book. This interactivity may substantially change the future of the ebook as artistic form and may result in new interactive media products that only slightly resemble the basic version of the printed book as sold today.</ns0:p></ns0:div>
<ns0:div><ns0:p>Together with scientific and cultural organizations we have started to explore the potential of interactive ebooks. One of the key questions is which reader parameters or actions (other than turning pages) are useful for interactive ebooks. One of the driving forces behind this exploration was the prominent Dutch writer Arnon Grunberg, who also had a genuine interest in what his readers actually experience while reading his work, or more generally stated: 'Is reading a novel good for you?' (the writer himself takes a devil's advocate stance and postulates the possibility that reading literature has a detrimental influence <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>). From neuroscientific data, we know that reading is a complex task involving many brain areas (<ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>, see <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref> for a recent review and <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref> for individual differences in narrative comprehension) and that reading can (at least temporarily) alter connectivity in an individual's brain <ns0:ref type='bibr' target='#b4'>[5,</ns0:ref><ns0:ref type='bibr' target='#b5'>6]</ns0:ref>. However, just reading text doesn't make one more social or empathetic. This may only happen after so-called 'emotional transportation' <ns0:ref type='bibr' target='#b6'>[7,</ns0:ref><ns0:ref type='bibr' target='#b7'>8]</ns0:ref>, i.e. as a reader one needs to be involved at an emotional level. It is postulated that there are no effects of reading non-fiction and also no effects of reading fiction when there is no emotional transportation <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref>. A similar concept (immersion) is used in the 'fiction feeling hypothesis' <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref>, which postulates that negative, high-arousal text activates the affective empathic network <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref>, which facilitates immersion in the text. In an experiment, participants read neutral and fearful sections of the Harry Potter saga and the results indeed showed a relation between neuronal activation pattern and subjectively rated immersion. Emotional experience is not only an essential catalyst, but also important in choosing which book to read, experiencing the content <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref> and interpreting the narrative <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref>.</ns0:p><ns0:p>All the above led us to develop a research project to measure readers' emotions while reading an ebook. Emotional state can be a key parameter to drive interactivity in future ebooks and may be viable in a real-life situation using recent Brain-Computer Interface (BCI) technology.</ns0:p></ns0:div>
<ns0:div><ns0:p>In addition, we were interested in measuring the emotions of the writer during the writing process to be able to compare the reader's emotional state while reading a certain paragraph to that of the author during writing that same paragraph. Capturing the emotional state of the writer (both through neurophysiology and subjective ratings) became our case study and is reported in this paper to illustrate the implementation of the theoretical foundation and the use of sensor technology, and to investigate whether prolonged physiological measurements are feasible in a real-life situation. The framework described here is the basis for follow-up studies in which several hundreds of readers will read the book before publication while being measured with a similar setup as used here with the author <ns0:ref type='bibr' target='#b13'>[14]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.1'>Art, beauty and neuroscience</ns0:head><ns0:p>There is a growing interest in using neurophysiological measures to assess media, including paintings, music and films. An important question that has fascinated and divided researchers from both the neurosciences and the humanities is whether brain activity can provide insight into what true art and beauty is. From an applied point of view, the relevant question is whether an individual's brain pattern is informative of his or her appraisal of the piece of art.</ns0:p><ns0:p>Research of Zeki and colleagues, amongst others, has shown that there is a functional specialization for perceptual versus aesthetic judgments in the brain <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref> and that there is a difference in activation pattern for paintings experienced as beautiful by an individual and those experienced as ugly. This finding is independent of the kind of painting: portrait, landscape, still life, or abstract <ns0:ref type='bibr' target='#b15'>[16]</ns0:ref>. Hasson and colleagues <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref> used fMRI to assess the effects of different styles of filmmaking on brain patterns and suggest that neurophysiological sensing techniques can be used by the film industry to better assess its products. The latter was done by <ns0:ref type='bibr'>[18]</ns0:ref>, who measured skin conductance as an affective benchmark for movies, and by <ns0:ref type='bibr' target='#b17'>[19]</ns0:ref>, who measured cardiovascular and electrodermal signals and found a high degree of simultaneity between viewers, but also large individual differences with respect to effect size.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.1'>Art, beauty and neuroscience</ns0:head><ns0:p>There is a growing interest in using neurophysiological measures to assess media, including paintings, music and films. An important question that has fascinated and divided researchers from both the neurosciences and the humanities is whether brain activity can provide insight in what true art and beauty is. From an applied point of view, the relevant question is whether an individual's brain pattern is informative of his or her appraisal of the piece of art.</ns0:p><ns0:p>Research of Zeki and colleagues, amongst others, has shown that there is a functional specialization for perceptual versus aesthetic judgments in the brain <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref> and that there is a difference in activation pattern for paintings experienced as beautiful by an individual and those experienced as ugly. This finding is independent of the kind of painting: portrait, landscape, still life, or abstract <ns0:ref type='bibr' target='#b15'>[16]</ns0:ref>. Hasson and colleagues <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref> used fMRI to assess the effects of different styles of filmmaking on brain patterns and suggest that neurophysiological sensing techniques can be used by the film industry to better assess its products. The latter was done by <ns0:ref type='bibr'>[18]</ns0:ref> who measured skin conductance as an affective benchmark for movies and by <ns0:ref type='bibr' target='#b17'>[19]</ns0:ref> who measured Manuscript to be reviewed</ns0:p><ns0:p>Computer Science cardiovascular and electrodermal signals and found a high degree of simultaneity between viewers, but also large individual differences with respect to effect size. So far, interactivity based on viewers emotional state has not moved beyond a few artistic experiments: 'unsound' by Filmtrip and SARC (http://www.filmtrip.tv/) and 'Many Worlds' by Alexis Kirke (http://www.alexiskirke.com/). In this paper we look at the neuroscience behind both the creative and the emotional brain and how emotional state can be captured using wearable, mobile technology that is usable while reading an ebook. We will also explore the possibilities opened up after capturing a reader's emotional state and what the ebook of the future might look like. The paper also presents the data of the writer during the creation of emotional text <ns0:ref type='bibr' target='#b18'>[20]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>The emotional brain</ns0:head><ns0:p>Stimuli evoking emotions are processed in the brain through specific pathways and with the involvement of several brain areas. In other words, the emotional brain is a network of collaborating brain areas and not a single location <ns0:ref type='bibr' target='#b19'>[21,</ns0:ref><ns0:ref type='bibr' target='#b20'>22]</ns0:ref>. The majority of the sensory information entering the brain goes to the primary sensory areas, but a small part of the information goes to the amygdala, part of the limbic system deep inside the human brain. A main driver of the amygdala is danger: in case of a potential threat to the organism, the amygdala is able to respond quickly and prepare the body for action without much stimulus processing. The amygdala enables the release of stress hormones leading to peripheral effects, for instance increased heart rate to pump more blood to the lungs and muscles. After the amygdala, processing continues through the cingulate cortex, the ventromedial prefrontal cortex and finally the dorsolateral prefrontal cortex. Only in the dorsolateral prefrontal cortex is the processing stream through the amygdala integrated with the more cognitive processing stream from the sensory cortices. The emotional experience is a result of the interpretation of both processing routes, taking into account the context and previous experiences. This integration and interpretation of information is a typical function of the prefrontal cortex <ns0:ref type='bibr' target='#b21'>[23]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1'>Psychological framework of emotions</ns0:head><ns0:p>Before we can discuss how we can measure emotional state, we should first look into the frameworks to classify emotions. There are many psychological frameworks available. Classic work by Paul Ekman <ns0:ref type='bibr' target='#b22'>[24]</ns0:ref> and James Russell <ns0:ref type='bibr' target='#b23'>[25]</ns0:ref> shows that there are several basic emotions: fear, disgust, anger, happiness, sadness and surprise. This set of six basic emotions has been expanded through the years with numerous subclasses. From a neuroscientific point of view, an important question is whether these emotions each have their own (unique) neuronal location or circuit (i.e. a discrete model <ns0:ref type='bibr' target='#b24'>[26]</ns0:ref>), or vary along several independent dimensions (i.e. a dimensional model <ns0:ref type='bibr' target='#b25'>[27]</ns0:ref>), a matter that is still under debate. As described above, experiencing an emotion is the result of the integration and interpretation of numerous information streams by an extended network of brain areas, which makes a discrete model unlikely. Therefore, we adopt a dimensional model, or more specifically the circumplex model of emotion <ns0:ref type='bibr' target='#b23'>[25,</ns0:ref><ns0:ref type='bibr' target='#b26'>28,</ns0:ref><ns0:ref type='bibr' target='#b27'>29]</ns0:ref>, with valence and arousal as its main dimensions. Both dimensions are reflected in distinct patterns</ns0:p></ns0:div>
<ns0:div><ns0:p>of brain activation, spatially as well as temporally. Arousing words show a different pattern (compared to neutral words) mainly in the early processing stages (i.e. within 400 ms after presentation, including the following ERP components: early posterior negativity (EPN), P1, N1, P2, and N400), while the difference between positively versus negatively valenced words shows in later processing stages (between 500 and 800 ms after presentation, including the late positive complex (LPC)) <ns0:ref type='bibr' target='#b28'>[30]</ns0:ref>. In the spatial domain, arousal is linked to amygdala activity and valence to the cingulate cortex and the orbitofrontal cortex <ns0:ref type='bibr' target='#b29'>[31]</ns0:ref><ns0:ref type='bibr' target='#b30'>[32]</ns0:ref><ns0:ref type='bibr' target='#b31'>[33]</ns0:ref><ns0:ref type='bibr' target='#b32'>[34]</ns0:ref><ns0:ref type='bibr' target='#b33'>[35]</ns0:ref>. Excellent reviews are given by <ns0:ref type='bibr' target='#b34'>[36]</ns0:ref> and <ns0:ref type='bibr' target='#b35'>[37]</ns0:ref>. Based on her review, Citron <ns0:ref type='bibr' target='#b36'>[38]</ns0:ref> comes to the conclusion that positive and negative valence may differ with respect to the cognitive functions they activate and are not necessarily a continuous dimension. Although a novel is more than a collection of individual words, there has been very little research on the physiological reactions to reading larger pieces of text (see the first section of this paper), but a lot on reading individual words. This project aims to fill that void.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>Emotion classification using neurophysiological measures</ns0:head><ns0:p>With the circumplex model as point of departure, we can start to identify physiological signals that reflect the arousal and valence of emotions and that can potentially be measured while reading outside a laboratory environment. We will look at a broader range of methods used to induce an emotional state than written words, and at a broader set of physiological measures than EEG and fMRI (see, for example, <ns0:ref type='bibr' target='#b36'>[38]</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2.1'>Valence</ns0:head><ns0:p>Valence requires central nervous system indices, as it is less clearly reflected in peripheral measures. Wearable sensors like EEG are not able to measure activity in deeper structures like the limbic system, but as reviews show <ns0:ref type='bibr' target='#b34'>[36,</ns0:ref><ns0:ref type='bibr' target='#b35'>37]</ns0:ref>, valence is strongly linked to later processing stages involving more superficial brain structures related to cognitive processing.</ns0:p><ns0:p>Valence is reflected in (late) ERP components <ns0:ref type='bibr' target='#b37'>[39]</ns0:ref><ns0:ref type='bibr' target='#b38'>[40]</ns0:ref><ns0:ref type='bibr' target='#b39'>[41]</ns0:ref>, in the power in specific EEG frequency bands like alpha <ns0:ref type='bibr' target='#b41'>[42,</ns0:ref><ns0:ref type='bibr' target='#b42'>43]</ns0:ref>, in the relative power in different EEG bands <ns0:ref type='bibr' target='#b43'>[44]</ns0:ref>, and in asymmetrical alpha activity in the prefrontal cortex <ns0:ref type='bibr' target='#b21'>[23,</ns0:ref><ns0:ref type='bibr' target='#b44'>[45]</ns0:ref><ns0:ref type='bibr' target='#b45'>[46]</ns0:ref><ns0:ref type='bibr' target='#b46'>[47]</ns0:ref>, indicating increased left prefrontal cortex activity for positive valence and increased right prefrontal cortex activity for negative valence. However, power in the different frequency bands and hemispheric asymmetry are under the influence of many factors, which may only partially correspond to emotional valence. For example, hemispheric asymmetry has been linked to stress <ns0:ref type='bibr' target='#b47'>[48]</ns0:ref> and to the tendency to approach versus avoid stimuli <ns0:ref type='bibr' target='#b48'>[49]</ns0:ref>, and low power in the alpha band may be caused by the fact that stimuli with high valence may attract more attention <ns0:ref type='bibr' target='#b49'>[50,</ns0:ref><ns0:ref type='bibr' target='#b50'>51]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2.2'>Arousal</ns0:head><ns0:p>Arousal is less clearly linked to brain activation patterns, except for activity in the amygdala and the reticular formation <ns0:ref type='bibr' target='#b27'>[29]</ns0:ref>, which are difficult to measure with wearable sensors like EEG.</ns0:p><ns0:p>However, arousal is reasonably clearly reflected through a relatively strong activation of the sympathetic as compared to the parasympathetic autonomous nervous system. Arousal can be measured peripherally through, for instance, skin conductance (increasing conductance with increasing arousal <ns0:ref type='bibr' target='#b51'>[52]</ns0:ref>), heart rate variability (HRV), especially high frequency HRV as this is exclusively affected by the parasympathetic system (reduced high frequency HRV with increased</ns0:p></ns0:div>
<ns0:div><ns0:p>arousal [53]), pupil size, heart rate (HR) and respiration frequency (all increased with increased arousal, although this pattern is not consistent over studies, see <ns0:ref type='bibr'>[54]</ns0:ref> for an elaborate overview).</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3'>Current state of the art in emotion capture</ns0:head><ns0:p>The state-of-the-art in emotion detection using neurophysiological indices is that we are able to distinguish several valence and arousal levels in a lab environment when subjects are sitting still and sufficient control data is gathered beforehand to train classification algorithms (see <ns0:ref type='bibr'>[55]</ns0:ref> for an overview). However, it is important to note that the relation between physiology and emotion is not straightforward. Different studies with different stimuli and contexts report different types of correlations [54,56]. It is thus important to study relations between (neuro-)physiology and emotion within the context and under the circumstances of interest [57,58]. An important step in this project is to bring neurophysiological signals out of the lab and explore their potential value in daily life [55,58]. Monitoring and using the (neuro-)physiological signals of readers is new, and entertainment in general is a good first case to transfer the technology from the laboratory to real life. This transition will come with several challenges, ranging from coping with external noise due to movement artifacts to multitasking users and usability aspects such as prolonged usage [58-60]. First steps in this transition have recently been made in studies investigating EEG signals in gaming [61] and music perception in realistically moving participants [62]. Here we also present the case of the writer, who wore physiological sensors for several hours a day and for 9 days in a row.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>The creative brain</ns0:head><ns0:p>The current case study focused on the writer and his emotional signals during the creative writing process. Our primary goal was to implement and learn about the transition from laboratory to real life before upscaling the set-up to hundreds of readers, and to capture the emotional signals of the writer as a function of the emotional content of the written paragraphs. We deemed it worthwhile, nevertheless, to have a quick look at the creative brain as well. Most people would agree that creative abilities make us unique in the animal kingdom. Interestingly, we understand little of the processes that drive or facilitate creativity and still debate the definition of creativity, although most agree upon the importance of both novelty and usefulness (see <ns0:ref type='bibr'>[63]</ns0:ref> for an elaborate discussion). Like emotion, creativity is not related to a single brain area but rather to networks of brain areas. Based on an extensive review, Dietrich and Kanso even state that 'creativity is everywhere' <ns0:ref type='bibr' target='#b52'>[64]</ns0:ref>; see also <ns0:ref type='bibr' target='#b53'>[65]</ns0:ref>. Having said that, recent neuroimaging studies seem to show that creativity involves common cognitive and emotional brain networks also active in everyday tasks, especially those involved in combining and integrating information. For the current project, it is useful to distinguish two different types of creative processes as described by Dietrich <ns0:ref type='bibr' target='#b54'>[66]</ns0:ref>. The first can be called controlled creativity, often in relation to finding creative solutions for a particular, given problem. This creative process is controlled through the prefrontal cortex <ns0:ref type='bibr' target='#b55'>[67]</ns0:ref>, which guides the search for information and the combination of information within a given solution space. This is a powerful mechanism, which is bound, though, by limitations of the prefrontal cortex, for instance with respect to the number of solutions that can be processed in working memory. The second type can be named spontaneous creativity, often in relation to artistic expression. This form of creativity comes without the restrictive control from the prefrontal cortex, and the process differs from controlled creativity qualitatively (e.g. solutions are not bound by rational rules like the rules of physics) and quantitatively (the number of solutions is not restricted by, for instance, the limited capacity of working memory). Spontaneous</ns0:p></ns0:div>
<ns0:div><ns0:p>creativity is linked to unconscious processes (of which dreaming may be an extreme form).</ns0:p><ns0:p>However, the prefrontal cortex becomes involved in spontaneous creativity when solutions eventually reach the conscious mind, and the prefrontal cortex is required to evaluate them and bring them to further maturity.</ns0:p><ns0:p>Recent data show us that less activity in the dorsolateral prefrontal cortex links to increased spontaneous creativity in, for instance, musicians <ns0:ref type='bibr' target='#b56'>[68,</ns0:ref><ns0:ref type='bibr' target='#b57'>69]</ns0:ref>, and increased activation to increased controlled creativity. Results also show that there is a burst of wide-spread gamma activity about 300 ms before the moment of insight in spontaneous creativity. Gamma activity is, amongst other features, linked to binding pieces of information. A burst of gamma activity is indicative of finding (and binding) a new combination of chunks of (existing) information. Fink and Benedek <ns0:ref type='bibr' target='#b58'>[70]</ns0:ref> underline the importance of internally oriented attention during creative ideation in a more general sense, reflected in an increase in alpha power. Creativity is also linked to hemispheric asymmetry. A meta-analysis <ns0:ref type='bibr' target='#b59'>[71]</ns0:ref> showed that the right hemisphere has a larger role in creative processes than the left hemisphere. This is confirmed by patient research <ns0:ref type='bibr' target='#b60'>[72,</ns0:ref><ns0:ref type='bibr' target='#b61'>73]</ns0:ref>. A lesion in the right medial prefrontal cortex hinders the creation of original solutions, while a lesion in the left medial prefrontal cortex seems to be beneficial for spontaneous creativity. However, experiments with creative students <ns0:ref type='bibr' target='#b63'>[74]</ns0:ref> and extremely creative professionals from science and arts <ns0:ref type='bibr' target='#b64'>[75]</ns0:ref> both show bilateral cerebellum involvement, seemingly confirming the statement that 'creativity is everywhere in the brain'. However, these findings are general findings and may not be applicable to the creative writing process <ns0:ref type='bibr' target='#b65'>[76]</ns0:ref>. For instance, creative writing seems to result in increased activity in the left prefrontal cortex (presumably because of its links to important language areas in the left hemisphere), except when writing emotional text, for which activity in the right hemisphere seems to be greater. This shows that the body of knowledge on the creative brain is growing but still limited, and identifying neural correlates of the creative writing process requires further research. Another interesting debate is whether creative writing is a skill one can develop like skilled behavior in sports and music, or possibly even non-creative, non-fiction writing like scientists and journalists do. Lotze and colleagues found that the caudate nucleus (involved in skilled behavior) was active in experienced creative writers but not in novices [77,78], indicating that creative writing can indeed be a (trainable) skill.</ns0:p><ns0:p><<Figure 1. Location of the 28 electrodes in the 10-20 system.>> <<Figure 2. Writer showing the neurophysiological sensors (A) and during writing (B).>></ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1.3'>Experimental protocol</ns0:head><ns0:p>Table <ns0:ref type='table'>1</ns0:ref> gives the outline of the experimental protocol for one day. Table <ns0:ref type='table'>2</ns0:ref> gives the details on the experimental protocol. <<Table 1. Experimental protocol for one day.>> <<Table 2. Specification of the experimental protocol.>></ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1.4'>Data processing</ns0:head><ns0:p>Our intention was to use the calibration sessions of each day to identify differences in physiological markers that could be linked to the emotional content of the written paragraph.</ns0:p><ns0:p>When we can reliably establish this 'ground truth', it could consecutively be used to analyze the data gathered during the writing blocks. After checking the synchronization between the different data streams, the physiological data of the calibration session were separated into 10 epochs corresponding to 1 min rest eyes open, 1 min rest eyes closed, 6x2 min 'emotional writing' (each corresponding to one of six different emotional pictures and descriptors), and again 1 min rest eyes open and 1 min rest eyes closed.</ns0:p><ns0:p>EEG. The EEG data were processed using the following pipeline: re-referencing to channel TP10, rejection of channels with very large variance (channels O1, Oz and O2 were very noisy and were removed completely from the dataset), band-pass filtering from 0.5 to 43 Hz, and down-sampling to 250 Hz. Initially, the EEG data of the remaining channels were used in an Independent Component Analysis (ICA) to identify and remove potential artifacts. However, the ICA revealed that potential artifacts were non-stationary (i.e. changing over time) and therefore difficult to identify, and thus no more data were removed. The power in the different frequency bands, delta (0-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), SMR (13-16 Hz), beta (16-30 Hz) and gamma, was used as features in the classification.</ns0:p></ns0:div>
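For readers who want to reproduce this kind of feature extraction, a minimal sketch in Python (NumPy/SciPy) is given below. This is not the authors' original code: the filter order, the Welch window length, and the 30-43 Hz range assumed for the gamma band are our own choices, since the paper does not specify them.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, welch, decimate

FS = 1000  # Mobita sampling rate (Hz)
# Band edges follow the paper; the gamma upper bound (43 Hz) is an
# assumption, chosen to match the 0.5-43 Hz band-pass filter.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "smr": (13, 16), "beta": (16, 30), "gamma": (30, 43)}

def preprocess(eeg, ref_idx, bad_idx):
    """eeg: (n_channels, n_samples) array sampled at FS.
    Re-reference to TP10, drop noisy channels (O1, Oz, O2),
    band-pass 0.5-43 Hz and down-sample 1000 Hz -> 250 Hz."""
    eeg = eeg - eeg[ref_idx]                  # re-reference to channel TP10
    eeg = np.delete(eeg, bad_idx, axis=0)     # reject high-variance channels
    sos = butter(4, [0.5, 43.0], btype="band", fs=FS, output="sos")
    eeg = sosfiltfilt(sos, eeg, axis=1)       # zero-phase band-pass filter
    return decimate(eeg, 4, axis=1), FS // 4  # anti-aliased down-sampling

def band_powers(epoch, fs):
    """Mean Welch PSD per channel within each frequency band."""
    freqs, psd = welch(epoch, fs=fs, nperseg=2 * fs, axis=1)
    return {name: psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
            for name, (lo, hi) in BANDS.items()}
```

In practice one would call band_powers on each of the 10 calibration epochs and concatenate the per-channel band values into one feature vector per epoch.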
<ns0:div><ns0:p>Peripheral physiology. As a measure of heart rate, we determined the mean interval between successive R-peaks in the ECG (RRI) for each epoch and converted this to mean Heart Rate (meanHR = 1/meanRRI). Four measures of heart rate variability were derived. The root mean squared successive difference between the RRIs (rmssdRRI) reflects high frequency heart rate variability. We also determined heart rate variability in the low, medium and high band using a spectral analysis (HRVlow, HRVmed, HRVhigh). High-frequency heart rate variability was computed as the power in the high frequency range (0.15-0.5 Hz) of the RRI over time using Welch's method applied after spline interpolation; similarly for mid-frequency (0.07-0.15 Hz) and low-frequency (0-0.07 Hz) heart rate variability. No anomalies were present in the ECG data, so no data were removed. From the ESK, the mean ESK over the epochs was calculated. For the ESK we removed one outlier (the contentment epoch on day 2).</ns0:p><ns0:p>Facial expression. The images from the close-up camera were analysed offline using Noldus FaceReader software. The output for each epoch consists of intensity values for the following classifications: Neutral, Happy, Sad, Angry, Surprised, Scared, Disgusted.</ns0:p></ns0:div>
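The HRV measures can be sketched along the same lines. The snippet below is again an illustrative reconstruction, not the original analysis code: the paper specifies spline interpolation and Welch's method but not the resampling rate, so the 4 Hz grid used here is an assumption.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

HRV_BANDS = {"HRVlow": (0.0, 0.07), "HRVmed": (0.07, 0.15),
             "HRVhigh": (0.15, 0.5)}  # band edges as given in the paper

def hrv_features(r_peaks_s):
    """r_peaks_s: R-peak times (seconds) detected in one epoch's ECG."""
    rri = np.diff(r_peaks_s)                      # inter-beat intervals (s)
    feats = {"meanHR": 1.0 / rri.mean(),          # meanHR = 1 / meanRRI
             "rmssdRRI": np.sqrt(np.mean(np.diff(rri) ** 2))}
    # Interpolate the irregular RRI series onto an even 4 Hz grid
    # (assumed rate) so that Welch's method can be applied.
    t = r_peaks_s[1:]
    fs = 4.0
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    rri_even = interp1d(t, rri, kind="cubic")(grid)
    freqs, psd = welch(rri_even, fs=fs, nperseg=min(len(rri_even), 256))
    for name, (lo, hi) in HRV_BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        feats[name] = np.trapz(psd[mask], freqs[mask])  # band power
    return feats
```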
<ns0:div><ns0:head>Classification analysis using EEG and peripheral physiology features.</ns0:head><ns0:p>To determine how well various feature sets could predict the emotional state of the author during the calibration session, we performed a classification analysis. Classification was performed using the Donders Machine Learning Toolbox <ns0:ref type='bibr' target='#b68'>[79]</ns0:ref>. Two types of classifiers were used: a linear Support Vector Machine (SVM) and an elastic net model with logistic regression <ns0:ref type='bibr' target='#b69'>[80]</ns0:ref>. As input we used the features, standardized to have mean 0 and standard deviation 1 on the basis of the data from the training set. One-tailed binomial tests were used to determine whether classification accuracy was significantly higher than chance.</ns0:p><ns0:p>Subjective questionnaires. The data of the feelings grid, VAS, and DES full questionnaires were not pre-processed but analysed directly. We only statistically analysed the main effects of day (9 levels) and session (start of day and end of day for the DES full, and start of day, end of block 1, end of block 2, and end of day for the feelings grid and VAS). The DES full scores were analysed using nonparametric statistics with the alpha level Bonferroni-adjusted for the number of comparisons.</ns0:p><ns0:p>Feelings grid and VAS scores were analysed with a parametric ANOVA.</ns0:p></ns0:div>
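Returning to the classification analysis described above: the paper used the Donders Machine Learning Toolbox, and the fragment below sketches an equivalent setup in scikit-learn for readers without access to that toolbox. The elastic net mixing parameter (l1_ratio) is our assumption, as the paper does not report it.

```python
import numpy as np
from scipy.stats import binomtest
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fit_and_test(X_train, y_train, X_test, y_test, model="svm"):
    """Standardize features on the training set only, classify the test
    set, and compare accuracy to chance with a one-tailed binomial test."""
    if model == "svm":
        clf = SVC(kernel="linear")
    else:  # elastic net regularization via logistic regression
        clf = LogisticRegression(penalty="elasticnet", solver="saga",
                                 l1_ratio=0.5, max_iter=5000)
    pipe = make_pipeline(StandardScaler(), clf)  # scaler fit on train only
    pipe.fit(X_train, y_train)
    n_correct = int((pipe.predict(X_test) == y_test).sum())
    p_value = binomtest(n_correct, len(y_test), p=0.5,
                        alternative="greater").pvalue
    return n_correct / len(y_test), p_value
```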
<ns0:div><ns0:head n='4.1.5'>Procedures</ns0:head><ns0:p>We started the measurements on the day the writer started a new novella to be used in phase 2 of the project. We adjusted the measurements to his usual daily writing schedule comprising two blocks: one in the morning and one in the late afternoon or early evening. He normally writes for about two hours and fills the time in between with other activities (including other writing activities). During a writing block, he was engaged in other activities as well, such as answering emails and phone calls, but never during the instrumentation and calibration. All activities during the measurement blocks were logged by the experimenter, who was always present during the measurements. We measured for nine consecutive days. At the end of the day, the experimenter and the writer would make a specific schedule for the next day. The writer also reflected on his experiences over the day, including the user experience of wearing the equipment and being observed. The day before the start of the experiment, the protocol, instructions etc. were explained in great detail, the writer signed the informed consent, his workplace was instrumented and the equipment was tested. Besides the addition of the equipment, the writer's workplace was not altered in any way, to give the writer the best opportunity to behave as usual. On each measurement day, the experimenter came to the apartment as scheduled and followed the protocol as detailed above. At the end of the day, all data were encrypted and saved to an external hard disk.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Results</ns0:head></ns0:div>
<ns0:div><ns0:head n='4.2.1'>Classification of baseline vs. emotion conditions in the calibration blocks</ns0:head><ns0:p>First, we determined whether the feature set contained information to discriminate the baseline conditions (Eyes Open and Eyes Closed) from the emotional (writing) epochs using binary classification (baseline vs. non-baseline). For this purpose we performed a 'leave-one-day-out' cross validation using the SVM classifier. This method is to be preferred over random N-fold cross validation since it better accounts for possible correlations between data during the day <ns0:ref type='bibr' target='#b70'>[81]</ns0:ref>. Still, the results when using random folds were found to be comparable to the results of the analysis presented here. It is also important to compensate for the imbalance in the number of conditions, with 36 baseline blocks and 54 emotional blocks in the set. All reported performance scores follow a binomial distribution, and the variability of a binomial distribution follows directly from the average score and the number of measurements (the distribution is not well approximated by a Gaussian distribution, and therefore the variance is not a good indicator of the variability in the results). For larger numbers of measurements, the variance of the estimated proportion is approximately equal to p*(1-p)/N (with p estimated by the score and N the number of measurements).</ns0:p><ns0:p>When all six physiology features (i.e. meanHR, HRVlow, HRVmed, HRVhigh, rmssdRRI, meanESK) were used as input to the classifier, the average model performance (over all days) was 71%, with a hit-rate (score for correctly classifying baseline blocks) of 58% and a False-Alarm-rate (FA-rate, i.e. fraction of falsely classified emotional blocks) of 20%, resulting in an equal cases (in the situation in which both conditions occur equally frequently) performance of 69% (p < .01). Individual ANOVAs with condition as independent variable (baseline vs writing) and physiological measure as dependent variable showed significant differences for the cardiovascular measures (see Figure 3). When only the EEG features were used as input, the equal cases performance was 92% (p < .01). Three effects in the EEG stood out: (1) a frontal increase in slow (delta, theta) activity, (2) a wide-spread suppression of alpha, and (3) a central increase in beta and gamma activity. The first effect is most likely caused by eye movements. The second effect relates to the suppression of the brain's idle state during rest. The increase in gamma activity may be related to creative processes as described in Section 3. If we only use the alpha and gamma features in the classifier, equal class performance is 88%, indicating that a reliable difference can be obtained without using features that may be contaminated with eye movements.</ns0:p><ns0:p>When all features (peripheral physiology and EEG) were used as input, the average model performance was also 92%, with a hit-rate of 89% and a False-Alarm-rate (FA-rate) of 6%, resulting in an equal cases performance of 92% (p < .01), a non-significant improvement relative to using only EEG features, indicating that the added value of incorporating features other than EEG ones is small in this case.</ns0:p><ns0:p>A closer inspection of the feature weights in the classification model showed that the highest weights are attributed to the delta (0-4 Hz) and theta (4-8 Hz) bands in channels Fp1, Fpz and Fp2 (i.e. frontal channels). The equal class performance of a classification model using only these six features is 0.84 (compared to 0.92 for a model using all features). Figure 4 summarizes the power distribution for the different frequency bands, averaged over the rest and the writing blocks, together with the weights of the features in the classification model.</ns0:p></ns0:div>
Slow (0-4 Hz) EEG frequency bands may pick up eye movements and should therefore be evaluated with caution (note that eye movements were not removed from the EEG data): eye movements measured indirectly in the EEG signal mask the information in the primary EEG. Even if this feature yields a reliable classifier for the current experimental setup, we consider it an artifact.
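For concreteness, the cross-validation scheme and the 'equal cases' score used in this section can be sketched as follows (our reconstruction in Python/scikit-learn, not the original code). Balanced accuracy, the mean of the hit-rate and the correct-rejection rate, corresponds to the equal cases performance: with a hit-rate of 58% and an FA-rate of 20%, (0.58 + 0.80) / 2 = 0.69, the 69% reported above.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def leave_one_day_out(X, y, days):
    """X: epoch features; y: 0 = baseline, 1 = emotional writing;
    days: day label per epoch (the cross-validation groups)."""
    preds = np.empty_like(y)
    for train, test in LeaveOneGroupOut().split(X, y, groups=days):
        pipe = make_pipeline(StandardScaler(), SVC(kernel="linear"))
        pipe.fit(X[train], y[train])          # fit on all other days
        preds[test] = pipe.predict(X[test])   # predict the held-out day
    hit_rate = (preds[y == 0] == 0).mean()    # baseline correctly classified
    fa_rate = (preds[y == 1] == 0).mean()     # emotional misread as baseline
    equal_cases = 0.5 * (hit_rate + (1.0 - fa_rate))
    return hit_rate, fa_rate, equal_cases
```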
<ns0:div><ns0:head n='4.2.2'>Classification of valence and arousal in the calibration blocks</ns0:head><ns0:p>Model performance was determined for classifying low vs. high valence and arousal using 10-fold cross validation over a range of parameters:</ns0:p></ns0:div>
<ns0:div><ns0:p>o Features from EEG, physiology or both;</ns0:p><ns0:p>o Binary classification for predicting outcomes higher than the median value, or using only the extreme values, i.e. lower than the 0.33-quantile or higher than the 0.66-quantile;</ns0:p><ns0:p>o Using raw or normalized features, in which case the features were normalized by dividing by the average feature value for the Eyes-Open conditions (for that day);</ns0:p><ns0:p>o Using the SVM or the elastic net classifier with logistic regression.</ns0:p><ns0:p>A sketch of the binarization and normalization steps is given after this list. In none of the cases did we find classification performance deviating significantly from chance performance. Since classification performance using the whole set of EEG data did not result in above-chance performance, we did not continue using specific subsets only, e.g. to look at the power in specific EEG frequency bands like alpha <ns0:ref type='bibr' target='#b41'>[42,</ns0:ref><ns0:ref type='bibr' target='#b42'>43]</ns0:ref>, at the relative power in different EEG bands <ns0:ref type='bibr' target='#b38'>[40]</ns0:ref>, or at asymmetrical alpha activity in the prefrontal cortex. Individual ANOVAs on the physiological measures confirmed these observations: all F-values < 0.63 and all p-values > .67, see also Figure <ns0:ref type='figure' target='#fig_12'>3</ns0:ref>. Because building a reliable valence and arousal classification algorithm using the calibration data turned out to be impossible, we could not further classify the novel-writing data.</ns0:p></ns0:div>
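As referenced in the list above, a minimal sketch of the two label-construction variants and the eyes-open normalization follows; the exact implementation in the original analysis may differ.

```python
import numpy as np

def binarize(scores, extremes=False):
    """Median split, or keep only epochs below the 0.33-quantile and above
    the 0.66-quantile. Returns labels plus a mask of retained epochs."""
    if not extremes:
        return ((scores > np.median(scores)).astype(int),
                np.ones(len(scores), dtype=bool))
    lo, hi = np.quantile(scores, [0.33, 0.66])
    keep = (scores < lo) | (scores > hi)      # drop the middle third
    return (scores[keep] > hi).astype(int), keep

def normalize_features(features, eyes_open_mean):
    """Divide each day's features by that day's mean eyes-open value."""
    return features / eyes_open_mean
```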
<ns0:div><ns0:head n='4.2.3'>Facial expression in the calibration blocks</ns0:head><ns0:p>We used the FaceReader® output directly in the analysis and found no significant differences between the different emotional paragraphs. Generally, the facial expression of the writer was classified as neutral (about 30%), sad (about 25%) or angry (about 20%). The remaining 25% was dispersed over happy, surprised, scared and disgusted.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2.4'>Subjective questionnaires</ns0:head><ns0:p>The DES full showed neither differences over the days (1-9) nor over sessions (start vs. end of day). Analysis of the feelings grid scores showed a significant effect of arousal over sessions: F(3, 31) = 4.57, p < .01. A post-hoc LSD test showed a significant difference between the start of the day and both the end of block 2 and the end of the day. The analyses of the VAS scores showed no effect over days, but a large effect over sessions for happy: F(3, 31) = 3.65, p < .03, optimistic: F(3, 31) = 6.28, p < .01 and flow: F(3, 31) = 6.76, p < .001, and a trend for relaxed: F(3, 31) = 2.38, p < .09. The means of the significant effects over session are presented in Figure <ns0:ref type='figure' target='#fig_14'>5</ns0:ref>. The figure shows that happy, optimism and flow are rated high at the start of the day but systematically decrease over the writing sessions, with a stabilization or reversal at the end of the day. For arousal, this effect is inverted. These trends are confirmed by post-hoc LSD tests.</ns0:p><ns0:p>In the daily debriefing session at the end of the day, the writer indicated that the EEG cap was uncomfortable at the start, but that he got used to wearing the cap and the other physiological sensors. He experienced the cameras as more obtrusive and disturbing than the physiological sensors. He elaborated on this in several public interviews (e.g., in The New York Times: www.nyti.ms/1dGxkFR).</ns0:p><ns0:p><<Figure 5. Significant changes in subjective ratings over the course of a writing day.>></ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3'>Discussion</ns0:head></ns0:div>
<ns0:div><ns0:head n='4.3.1'>Set-up and user experience</ns0:head><ns0:p>The case study primarily focussed on measuring neurophysiological indices over prolonged periods outside a laboratory environment, before applying the technology in a large-scale experiment with the readers of the novel. Inspection of the signals revealed that, except for EEG channels O1, Oz and O2, we were able to record reliable signals in a real-life situation using wearable/wireless sensor technology, and that the setup was comfortable enough for the writer to work for hours a day wearing the sensors. The noise in the occipital channels may be caused by (neck) muscle activity related to mouse and keyboard actions. The ICA analysis indicated that potential artifacts were non-stationary (i.e. changed over time), an effect similar to what we find with readers <ns0:ref type='bibr' target='#b13'>[14]</ns0:ref>. Non-stationarities may be more common in real-world, multitasking environments and hamper the identification and removal of artifacts <ns0:ref type='bibr'>[58]</ns0:ref>. This increases the relevance of adding EMG and EOG sensors to the sensor suite. Data analysis may also benefit from a higher electrode density, allowing more advanced techniques for artifact removal and EEG analysis to be applied. Recent electrode developments may enable this without reducing usability and comfort over prolonged periods of use.</ns0:p><ns0:p>Although measuring physiology outside a well-controlled laboratory environment is challenging, the data show reliable differences between resting state and writing, which indicates a sufficient signal-to-noise ratio in the data. It could still be the case that this 'writing detector' is triggered by artifacts like eye movements or the muscle activity that comes with typing (however, the EEG channels most prone to these muscle artifacts (O1, Oz and O2) were removed from the dataset). If we look at the weight of the different features in the classification algorithm, we see</ns0:p></ns0:div>
<ns0:div><ns0:p>that most weight is attributed to the delta (0-4 Hz) and theta (4-8 Hz) bands in the frontal channels (Fp1, Fpz, Fp2). Low frequency bands should be evaluated with caution, as they may reflect eye movements rather than signals in the primary EEG. This is especially relevant for the current dataset since (eye movement) artifacts were non-stationary and could not be reliably identified and removed. However, the classifier is also based on the suppression of alpha and increased central gamma activity during writing, which matches the expected pattern for creative writing. In addition, the current differences in physiology, like increased heart rate and decreased heart rate variability (see Figure <ns0:ref type='figure' target='#fig_12'>3</ns0:ref>), do fit with an interpretation of low (rest) and high cognitive activity (writing) and not just with simple muscle activity. To exclude the aforementioned artifacts, a comparison should be made between writing about an emotion, writing down mundane instructions, and, for instance, copying text or making random typing movements.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3.2'>Neurophysiology of emotional writing</ns0:head><ns0:p>Since there are no data available on the neurophysiological correlates of emotional writing, we based our expectations on research into physiological responses to many repetitions of, for instance, emotional pictures or sounds. In this domain, recent experiments show that changes in emotional state can also be reliably identified with a restricted number of repetitions or even a single trial (especially for longer epochs like those in our study).</ns0:p><ns0:p>Therefore, we expected to be able to see changes in physiological state as a function of emotional content, despite the limited number of repetitions. However, we were not able to link specific neurophysiological indices to the emotional content of the writing. We have three possible explanations: (1) the quantity and/or quality of the data was not sufficient, (2) writing is a cognitive rather than an emotional task for this particular author, and (3) the task involved a multitude of emotional, creative and cognitive processes concealing the single-task indices found in single-task laboratory experiments. The first explanation pleads for expanding the data set using more authors and possibly more sessions than we were currently able to gather. We should keep in mind, nevertheless, that the current data was sufficiently reliable to classify rest from writing with 92% accuracy, and the employed classification methods are sensitive enough to be used on smaller datasets. This forces us to look into alternative explanations as well before upscaling. One such explanation is that, for this particular writer, the writing process itself may predominantly be a cognitive task and unrelated to the emotional content, i.e. the writer does not experience a particular emotion himself when writing about it. The neurophysiological pattern found in writing compared to rest, and the facial expression (often classified as neutral), fit with the signature of a cognitive task. Based on the vast production of the writer, and as confirmed in later discussion with him, this is a viable option. In hindsight, the time pressure (2 minutes per item), the strict instruction (write about this particular emotion fitting with this particular picture), the time of day (always in the morning before the writing block started), and the presence of the experimenter may all have triggered cognitively controlled creativity rather than emotional or spontaneous creativity. The third factor that may have played a role in the current results is the task setting, which may have resulted in multiple processes (including but not limited to emotional, associative, creative, linguistic and motor planning processes). The resulting brain activity patterns may not be comparable to those for passive viewing of emotional pictures in a laboratory environment.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3.3'>Subjective ratings</ns0:head><ns0:p>The ratings of arousal, happy, optimistic and flow seem to show the same pattern. At the start of the day, the writer is in a 'relaxed, good mood', but his mood seems to dwindle during the writing, with increasing arousal. At the end of the day, after the last writing session, this pattern stabilizes</ns0:p></ns0:div>
<ns0:div><ns0:p>or is reversed. This profile in part reflects the circadian modulation of mood and related aspects.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>The ebook of the future</ns0:head><ns0:p>One may ask if uncovering the brain states associated with art will de-mythologize the process: will art lose its meaning, beauty or purity when reduced to the activity of groups of neurons? Will we eventually reveal the mechanisms of art and thus render it mechanical? Will scientists be able to develop a drug that makes everyone a best-selling author? Will this knowledge increase the 'creativity rat race' for artistic and creative success, as cognitive enhancers may do in the 'cognitive rat race' in the academic world <ns0:ref type='bibr' target='#b74'>[85]</ns0:ref>? We think not, but raising and discussing these questions is of utmost importance for the field <ns0:ref type='bibr'>[58]</ns0:ref>. A more interesting debate is whether creative writing is a skill one can develop like skilled behavior in sports and music, or possibly even non-creative writing like scientists and journalists do on a daily basis. Creative skills are important outside the arts and the creative industry, and their importance is widely acknowledged in an innovative and knowledge-based economy. We would like to expand our research into (spontaneous) creativity to answer important questions and develop appropriate tests and tools to measure spontaneous creativity (which may require 24-hour measurements). Current ebooks have the ability to track reader behavior, and ebook retailers are actively gathering (anonymous) data on their readers, on parameters such as the books the reader has finished (or not), how fast, where reading was discontinued and for how long, and which words were looked up in a connected dictionary <ns0:ref type='bibr' target='#b75'>[86]</ns0:ref>. None of this information is directly used for the benefit of the reader; it serves manufacturers and publishers only. The basis for our approach is to measure the readers' state and behavior to make them the primary beneficiaries, for instance through enhancing the reader experience. Many approaches are foreseeable. A relatively simple one, not yet interactive, is to use the emotional response to give better-informed</ns0:p></ns0:div>
<ns0:div><ns0:p>advice on other books the reader may enjoy. In a similar way, readers may want to share their emotional profile, for instance by posting it on social media or through new communities of people with similar frames of mind around a specific book. Real interactivity may also come in many forms. For instance, the emotional response may be used to add music or other multisensory stimuli to further intensify the experience, or ultimately to change the storyline or the flow of the book. This may lead to new media products that are somewhere in between literature, movies and games.</ns0:p><ns0:note type='other'>Figure captions</ns0:note></ns0:div>
<ns0:div><ns0:head>Instrumentation of the participant and check of measurement systems</ns0:head><ns0:p>The physiological sensors were attached to the writer and the signals were checked for their integrity. Video cameras, screen and keyboard logging were switched on and checked. The Mobita recorder was time-linked to the Observer XT system by using the Mobita to type a specific series of key strokes recorded by the accelerometers in the Mobita, the uLog module of the Observer XT and the overview camera.</ns0:p></ns0:div>
<ns0:div><ns0:head>Calibration of emotional state</ns0:head><ns0:p>At the beginning of each day the following calibration data were recorded in fixed order (the full specification is given with Table 2).</ns0:p><ns0:p>DES self: the DES self was a paper and pen test with the instruction 'Please mark the emotions that best reflect your feelings over the past measurement period'. It contained the same 20 items as the DES full on an A4-sized form, but with only one tick box to their right.</ns0:p></ns0:div>
<ns0:div><ns0:head>DES book</ns0:head><ns0:p>The DES book was the same as the DES self, except for the instruction: 'Please mark the emotions that best describe the section you wrote in the past measurement period'.</ns0:p><ns0:p>Open questions during the debriefing, the writer answered several open questions about the use of substances (coffee, tea, cigarettes, medication etc.), the experience of flow, satisfaction about the progress, significant moments during the writing etc. The experimenter could expand the open questions based on observations made during the day.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>is not straightforward. Different studies with different stimuli and contexts report different types of correlations [54,56]. It is thus important to study relations between (neuro-) physiology and emotion within the context and under the circumstances of interest [57,58]. An important step in this project, is to bring neurophysiological signals out of the lab and explore their potential value in daily life [55,58]. Monitoring and using the (neuro-) physiological signals of readers is new, and entertainment in general is a good first case to transfer the technology from the laboratory to real life. This transition will come with several challenges ranging from coping with external noise due to movement artifacts, multitasking users, and usability aspects such as prolonged usage [58-60]. First steps in this transition have recently been made in studies investigating EEG signals in gaming [61] and into music perception in realistically moving participants [62].</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Location of the 28 electrodes in the 10-20 system.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Writer showing the neurophysiological sensors (A) and during writing (B).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Results of the physiological measures Heart Rate (A) and rmssd RRI (B) as function of the different calibration blocks. Eyes open and Eyes closed blocks were measured before and after the emotional blocks. The order of the emotional blocks was balanced over days. Error bars denote the standard error of the mean.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The left column shows the power in the different frequency bands in the rest blocks (eyes open and eyes closed combined) and the middle column in the writing blocks. The right column gives the weights of the features in the classification model.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Significant changes in subjective ratings over the course of a writing day.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='15,192.61,99.40,226.77,248.03' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='20,192.61,127.20,226.77,283.46' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,192.61,72.00,226.77,329.39' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:note>4 The case study

4.1 Methods

4.1.1 Participant

Arnon Grunberg (http://www.arnongrunberg.com/) participated in the study. Arnon Grunberg was born in 1971 and has lived in New York since 1995. He writes novels, short stories, columns, essays and plays. His work has been awarded several national and international prizes and has been translated into 30 languages. He participated voluntarily, being aware that his participation would not be anonymous. All data were collected in November 2014 in Arnon's apartment in New York. The Institutional Review Board of TNO Human Factors (TCPE Soesterberg, The Netherlands) approved the study after inclusion of specific sections in the informed consent regarding privacy and data dissemination. Arnon read and signed the informed consent before data gathering began.

4.1.2 Apparatus

We used commercially available hardware and software to record physiological parameters, facial expression and text entry. In addition, paper and pencil were used for the subjective questionnaires described in the next section. All neurophysiological signals were recorded using a wearable Mobita® 32-channel physiological signal amplifier system sampling at 1000 Hz (TMSi, Hengelo, The Netherlands, http://www.tmsi.com/). The available channels were used for EEG (28 TMSi water-based electrodes, see Figure 1 for the layout of the electrodes), ECG (two pre-gelled disposable TMSi snap electrodes) and Endosomatic Skin Potential ESK (a pair of TMSi finger electrodes). The Mobita® has built-in accelerometers, which we used to log possible activity of the writer and to synchronize the physiological data to other data gathered through a Noldus Observer XT® system (Noldus IT, Wageningen, The Netherlands, http://www.noldus.com/). This system recorded the images from two IP cameras (one providing an overview of the work space and one providing a close-up of the writer's face for later analysis of his facial expression), a continuous screen dump of the writer's PC screen, and the writer's keystrokes (Noldus uLog tool®). The writer used his normal work space and own PC, see Figure 2.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 2</ns0:head><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 1.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Experimental protocol for one day.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 2.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Specification of the experimental protocol.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Start of the day</ns0:cell><ns0:cell>Instrumentation of the participant and check of measurement systems</ns0:cell></ns0:row><ns0:row><ns0:cell>Calibration of emotional state</ns0:cell><ns0:cell>Subjective questionnaires: feelings grid, VAS, DES full</ns0:cell></ns0:row><ns0:row><ns0:cell>Block 1</ns0:cell><ns0:cell>Continuous monitoring of physiology and facial expression; continuous logging of text entry; continuous observation by experimenter; subjective questionnaires at significant events 1: feelings grid, DES self; subjective questionnaires at end of block: feelings grid, VAS, DES self, DES book</ns0:cell></ns0:row><ns0:row><ns0:cell>Block 2</ns0:cell><ns0:cell>Similar to Block 1</ns0:cell></ns0:row><ns0:row><ns0:cell>End of the day</ns0:cell><ns0:cell>Subjective questionnaires: feelings grid, VAS, DES full; open questions</ns0:cell></ns0:row></ns0:table><ns0:note>1 Significant events could include a writer's block, or a moment of great insight etc., to be identified by the participant himself. Eventually, no 'significant events' were indicated.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_12'><ns0:note>At the beginning of each day the following calibration data were recorded in fixed order:

o 1 minute of rest with eyes open
o 1 minute of rest with eyes closed
o 6 blocks of 2 minutes filled with writing a paragraph with the following instruction: 'write a paragraph on this picture with emotion x, as if you are writing a paragraph in your novel'. This instruction was accompanied by an A4-sized, full-color picture from the IAPS database [75] matching emotion x (disgust, fear, sadness, amusement, contentment, excitement). We selected 60 pictures from the IAPS database, 10 for each emotion: disgust (3061, 7360, 7361, 8230, 9290, 9330, 9373, 9390, 9490, 9830), fear (1052, 1110, 1113, 1301, 1302, 1321, 1931, 3280, 5970, 5972), sadness (2130, 2271, 2312, 2490, 8010, 9120, 9190, 9210, 9331, 9912), amusement (1340, 1810, 1811, 1920, 2092, 2344, 2352, 2791, 7195, 8600), contentment (1500, 2150, 2160, 2058, 2304, 2530, 2550, 2560, 4700, 5201), and excitement (8030, 8031, 8034, 8116, 8117, 8200, 8220, 8260, 8370, 8440). The order of the emotions was balanced over the days; each picture was only used once during the experiment.
o 1 minute of rest with eyes open
o 1 minute of rest with eyes closed

Feelings grid: the feelings grid [76] was a pen and paper A4-sized form with the instruction 'Please indicate how you feel RIGHT AT THIS MOMENT. Place an 'X' in the box closest to how you are feeling at this time.' The form consisted of a 10x10 square grid with the following markers: middle-top: arousal; middle-bottom: sedation, sleepiness; left-top: anger, stress; right-top: joy, excitement; left-bottom: depression, sadness; right-bottom: relaxation, contentment; left-middle: unpleasant; right-middle: pleasant.

VAS (visual analog scale): the VAS was a pen and paper test with the instruction 'Please mark how you feel RIGHT AT THIS MOMENT'. Four scales were printed on one A4: relaxed - agitated, happy - sad, optimistic - pessimistic, state of flow - no flow.

DES full: the DES full [77] was a paper and pen test with the instruction 'Please indicate how each emotion reflects how you feel RIGHT AT THIS MOMENT.' It depicted the following 20 items on an A4-sized paper with five tick boxes to their right representing not at all (1) - completely (5): amusement, awe, contentment, gratitude, hope, love, pride, sexual desire, joy, interest, anger, sad, scared, disgust, contemptuous, embarrassed, repentant, ashamed, surprised, sympathetic.</ns0:note></ns0:figure>
</ns0:body>
" | "Thank you for your submission to PeerJ Computer Science. I am writing to inform you that in my opinion as the Academic Editor for your article, your manuscript 'Interactive ebooks based on the reader’s physiological responses' (#CS-2015:10:7008:0:1:REVIEW) requires some minor revisions before we could accept it for publication.
The comments supplied by the reviewers on this revision are pasted below. My comments are as follows:
Editor's comments
I agree with the reviewers that this is an interesting and well constructed study. I suggest that you pay particular attention to addressing the following reviewer comments:
• Reviewer 1 points out that the title refers to the final goal of the research rather than the results of this particular study.
• Both Reviewers 1 and 2 suggest clarifications and additions to improve the presentation and explanation of results of the study.
• Reviewer 1 suggests a discussion of how this study relates to the work presented in Onton & Makeig 2009.
Reviewer 1 suggests that the literature review be greatly reduced and perhaps be reworked as a separate review article. Reviewer 2 and I do not feel strongly about the length of the review section, so I will leave it to your decision as to whether you wish to retain the literature review in this present article or reduce it.
> We acknowledge the view of reviewer 1 but since this paper is intended to describe the basis for an elaborate research project, we decided that it would be more useful to keep the literature overview in rather than to split it into two papers.
If you are willing to undertake these changes, please submit your revised manuscript (with any rebuttal information*) to the journal within 45 days.
* Resubmission checklist:
When resubmitting, in addition to any revised files (e.g. a clean manuscript version, figures, tables, which you will add to the 'Primary Files' upload section), please also provide the following two items:
1. A rebuttal Letter: A single document where you address all the Editor and reviewers' suggestions or requirements, point-by-point.
2. A 'Tracked Changes' version of your manuscript: A document that shows the tracking of the revisions made to the manuscript. You can also choose to simply highlight or mark in bold the changes if you prefer.
Accepted formats for the rebuttal letter and tracked changes document are: DOCX (preferred), DOC, or PDF.
As you previously uploaded a single manuscript file for your initial submission you will need to upload any primary high resolution image and table files separately if you have not already done so.
PeerJ does not offer copyediting, so please ensure that your revision is free from errors and that the English language meets our standards: uses clear and unambiguous text, is grammatically correct, and conforms to professional standards of courtesy and expression.
Sally Jo Cunningham
Academic Editor for PeerJ Computer Science
________________________________________
Reviewer Comments
Reviewer 1 (Jonathan Touryan)
Basic reporting
In this study the authors attempt to determine the affective or emotional state of a well-known writer, engaged in original composition, from a range of physiological and neurophysiological signals. The authors attempt to induce and classify emotional states of the writer in response to an IAPS picture set. While successful at separating calibration from writing sessions, the authors were not able to isolate individual emotional states. The research presented in this manuscript is the beginning of an ambitious study to identify/classify the emotional state of readers in response to a written (emotional) narrative. The ultimate goal being an ebook-like system that can adapt to the mental state of the individual reader.
Experimental design
The work described in this manuscript was conducted with an appropriate amount of scientific rigor, given the real-world task and environment. As such, it will contribute to the growing field of applied neuroscience and real-world neuroimaging. However, there are several revisions that would significantly improve this manuscript (see below).
Validity of the findings
The title and abstract are misleading and imply that this study develops/uses an interactive ebook system – this text must be revised. The final use of this technology can be discussed in the intro, or more appropriately, the discussion section.
> We reworded the title and abstract keeping in mind that the paper is the basis of the elaborate research project.
Related to this comment is the structure of the manuscript, which at first reads like a review/theory article. In the introduction alone there are sections that cover ebooks, art and neuroscience, psychology and neurophysiology of emotions (valence and arousal), as well as cognitive processes that underlie creativity. This is too verbose an introduction for a study that only resulted in a differentiation between calibration and writing sessions. While optional, I suggest the authors split this manuscript into a review article and a shorter case study.
> Thank you for this suggestion. After some deliberation we decided to maintain the manuscript as it is and not split it up because the review is directly linked to the elaborate study and not intended as only a stand-alone manuscript.
The other major concern is related to the neurophysiological analysis described in the manuscript. While the ultimate goal of the study was to classify emotions, the manuscript would be improved by adding a figure that outlines the EEG components that were most discriminative between calibration and writing sessions. What were the most discriminative features/electrodes? This is also related to my concern with the artifact processing steps described in the manuscript. While individual electrodes with high variance (e.g. Oz) were excluded, there appears to be no attempt at removing (1) EOG components and (2) epochs of EMG or other noise artifacts. How can the authors be sure that the difference between calibration and writing sessions was not due to eye or muscle movements (e.g. picture scanning or jaw clenching)? Eye movement metrics especially have been shown to differentiate between even relatively similar tasks, with blinks and saccades often reflected in the Alpha spectrum of frontal channels. The manuscript would be significantly improved if these concerns were addressed with the addition of EEG spectral and/or summary figure(s).
> We like the suggestion for a figure outlining the most discriminative features and added it to the manuscript. We already used ICA in the original analyses in an attempt to remove artifacts (but failed; please see the remarks in the revision and further below). So no, we cannot be sure that the differences between conditions were not due to artifacts, but there is converging evidence, such as the changes in central gamma and in peripheral physiology, that more is at hand than just artifacts. We had already discussed this and have now expanded the discussion further.
Given the results, it would seem prudent for the next study to have a control component/condition that includes non-emotional writing or non-cognitive writing (e.g. copying) to see if these sessions can be differentiated from the emotional ones.
> Yes, we agree that this is a good recommendation and discussed this in the manuscript as well (section 4.3.1.). Of course, this primarily applies to studying the creative writing process and less to assessing the readers as we will do (or actually did) in the next phase.
The authors should address the relationship of the present study to Onton & Makeig 2009 (“High-frequency Broadband Modulations of Electroencephalographic Spectra”). Would the authors’ classification approach have been more successful if they had used methods similar to Onton & Makeig’s?
> We agree that Onton & Makeig’s approach is an advanced and powerful method and can certainly improve classification over more mundane algorithms as applied here. Unfortunately, our experimental and EEG measurement setup does not allow us to apply these advanced methods. Onton & Makeig used a 256-electrode setup (i.e. an order of magnitude larger than our setup) and participants were asked to imagine emotions while sitting still with closed eyes (while our participant typed with his eyes open).
The subjective ratings (figure 4) and the participant’s commentary about the comfort and obtrusiveness of the monitoring systems was very informative and relevant for future research in this area.
> Thank you.
Comments for the author
Minor Points:
The majority of the (vertical) figure axes are unlabeled or without units. Please include labels and units in future revisions.
> Done.
Typos and Grammar:
Line 36: change to 'Sales of ebooks are...'
Line 175: add commas '...through, for instance, skin...'
Lines 173 through 179: Sentence is unwieldy. Please break into parts or reduce the examples.
Line 188: Begin sentence with 'An important step...'
Line 189: either remove the word 'potential' or the word 'added'
Line 190: add a comma '...is new, and entertainment...'
Line 192: change 'artefacts' to 'artifacts'
Line 196: remove the word 'use'
Line 216: change to 'This is a powerful mechanism...'
Line 256: change to 'Arnon Grunberg was...'
Line 483: 'scientist' should be plural
> Thank you, all done.
Reviewer 2 (Peter König)
Basic reporting
van Erp et al. suggest an interactive form of ebooks. The story would take into account physiological signals of the reader and thereby create an interactive format. Tests of the concept on the author during writing were partly successful, differentiating writing phases from non-writing phases, but it was not yet possible to differentiate parts of the story with different emotional content. The manuscript presents an intriguing idea and is well written.
> Thank you.
Introduction
line 51 - include an up to date authoritative review, e.g. Dehaene et al. Nature Review Neuroscience 2015
> Done (ref. [5]).
line 74 - recording EEG and further physiological measures of several hundreds of subjects is a lot of work. Make such a promise only if you really follow up on it.
> We agree, and we actually already did gather the data of hundreds of readers in two experiments, the first (with data of 57 participants reading under ideal lab conditions) has recently been accepted for publication, the second (with data of hundreds of participants reading in a temporary EEG lab at a museum) is in preparation.
line 100 - this section is devoid of references. Add at least one, e.g. Dalgleish Nature Review Neuroscience 2004 or Tovote et al 2015
> You are correct, that seemed to have slipped through. We added the standard papers as suggested.
line 129 - a side remark: contentment was not listed as a primary emotion.
line 171 - Arousal and emotions also have a significant influence on eye movements. This in turn is important for EEG measurements e.g. Kaspar and König 2012 Front Hum Neurosci
> We agree. This implies that confounded eye movements may increase (EEG) classification performance. Please note that even if these “false” cues were present in our EEG data, we still don’t find a reliable emotion classification. Please see the updated artifact removal section.
Experimental design
Methods
line 269 - the setup is nice, I like it.
> Thank you.
line 298 - You should separate eye movement induced artefacts in the EEG recordings by ICA based methods as in Plöchl et al 2012 Front Hum Neurosci. In principle you can live with indirect measurement of eye movements in the EEG signal, but then you lose most of the information in the primary EEG. In the present case the number of electrodes is moderate and it has to be tested whether enough channels remain to make statements on e.g. DLPFC. At the very least, this has to be discussed. It might contribute to the high performance of EEG features in line 377 or low performance in line 386.
> Of course we agree and followed the approach as suggested but didn’t clearly state that in the original manuscript because the ICA did not result in artifact removal. The ICA analysis revealed that potential artifacts were not limited to a small number of components. This suggests that the ICA noise components are nonstationary, i.e. that the noise pattern changes over time. This makes it hard to remove artifacts without affecting the information in the primary EEG. Because we did not record EOG or EMG, we kept all data (except for the three EEG channels with a very large variation). We added the information on the ICA analysis to the manuscript (as it should have been from the start) and discuss the potential effects of eye movements as requested.
Validity of the findings
line 351 - The statistical methods are valid
Results
ok
Comments for the author
Discussion
line 429 - The EEG recordings might be reliable, but whether it is really EEG is a different question. Same issue as above; automated cleaning/separating algorithms are available.
> See above.
line 462 - “i.e. the writer does not experience a particular emotion himself when writing about it. “ is this so?
> We tried to verify this with the writer, but he did not have a clear (introspective) answer to that question. He would neither agree nor disagree. Therefore we used the word ‘may’.
The discussion could address methodological aspects. For example increasing the number of EEG channels could be really helpful.
> Good suggestion. Added.
signed Peter König
________________________________________
© 2015, PeerJ, Inc. PO Box 614 Corte Madera, CA 94976, USA
" | Here is a paper. Please give your review comments after reading it. |
154 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Ebooks of the future may respond to the emotional experience of the reader. (Neuro-) physiological measures could capture a reader's emotional state and use it to enhance the reading experience, for instance by adding matching sounds or by changing the storyline, thereby creating a hybrid art form between literature and gaming. We describe the theoretical foundation of the emotional and creative brain and review the neurophysiological indices that can be used to drive future ebook interactivity in a real life situation. As a case study, we report the neurophysiological measurements of a bestselling author during nine days of writing; these can later be compared to those of the readers. In designated calibration blocks, the artist wrote emotional paragraphs for emotional (IAPS) pictures. Analyses showed that we can reliably distinguish writing blocks from resting, but we found no reliable differences related to the emotional content of the writing. The study shows that measurements of EEG, heart rate (variability), skin conductance, facial expression and subjective ratings can be done over several hours a day and for several days in a row. In follow-up phases, we will measure 300 readers with a similar setup.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>Introduction</ns0:head><ns0:p>The sales of ebooks are rapidly increasing and are expected to surpass those of printed books in the near future. In its basic form, an ebook is an electronic version of the printed book. However, the devices used to access an ebook (ereader, tablet, etc.) have more capabilities than just displaying the book and turning pages on request of the reader. The device may enable true bidirectional interaction with the reader, which is a significant innovation compared to the one-directional printed book. This interactivity may substantially change the future of the ebook as an artistic form and may result in new interactive media products that only slightly resemble the basic version of the printed book as sold today.</ns0:p><ns0:p>technology. In addition, we were interested in measuring the emotions of the writer during the writing process to be able to compare the reader's emotional state while reading a certain paragraph to that of the author while writing that same paragraph. Capturing the emotional state of the writer (both through neurophysiology and subjective ratings) became our case study and is reported in this paper to illustrate the use of sensor technology and to investigate whether prolonged physiological measurements are feasible in a real life situation. The framework described here is the basis for follow-up studies in which several hundreds of readers will read the book before publication while being measured with a similar setup as used here with the author <ns0:ref type='bibr' target='#b13'>[14]</ns0:ref>. The applied, real life perspective guided the selection of theoretical models and measurement methods.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.1'>Art, beauty and neuroscience</ns0:head><ns0:p>There is a growing interest in using neurophysiological measures to assess media, including paintings, music and films. Research in this area is still at the forefront of cognitive neuroscience and results and theoretical foundations are still under debate. An important question that has fascinated and divided researchers from both the neurosciences and the humanities is whether brain activity can provide insight in what true art and beauty is. From an applied point of view, the relevant question is whether an individual's brain pattern is informative of his or her appraisal of the piece of art. Research of Zeki and colleagues, amongst others, has shown that there is a functional specialization for perceptual versus aesthetic judgments in the brain <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref> and that there is a difference in activation pattern for paintings experienced as beautiful by an individual and those experienced as ugly. This finding is independent of the kind of painting: portrait, landscape, still life, or abstract <ns0:ref type='bibr' target='#b15'>[16]</ns0:ref>. Hasson and colleagues <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref> used fMRI to assess the effects of different styles of filmmaking on brain PeerJ Comput. Sci. reviewing PDF | (CS-2015:10:7008:2:0:CHECK 1 Apr 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science patterns and suggest that neurophysiological sensing techniques can be used by the film industry to better assess its products. The latter was done by <ns0:ref type='bibr' target='#b17'>[18]</ns0:ref> who measured skin conductance as an affective benchmark for movies and by <ns0:ref type='bibr' target='#b18'>[19]</ns0:ref> who measured cardiovascular and electrodermal signals and found a high degree of simultaneity between viewers, but also large individual differences with respect to effect size. So far, interactivity based on viewers emotional state has not moved beyond a few artistic experiments: 'unsound' by Filmtrip and SARC (http://www.filmtrip.tv/) and 'Many Worlds' by Alexis Kirke (http://www.alexiskirke.com/). In this paper we look at the (applied) neuroscience behind both the creative and the emotional brain and how emotional state can be captured using wearable, mobile technology that is usable while reading an ebook. We will also explore the possibilities opened up after capturing a reader's emotional state and what the ebook of the future might look like. The paper also presents the data of the writer during the creation of emotional text <ns0:ref type='bibr' target='#b19'>[20]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>The emotional brain</ns0:head><ns0:p>Stimuli evoking emotions are processed in the brain through specific pathways and with the involvement of several brain areas. In other words, the emotional brain is a network of collaborating brain areas and not a single location <ns0:ref type='bibr' target='#b20'>[21,</ns0:ref><ns0:ref type='bibr' target='#b21'>22]</ns0:ref>. The majority of the sensory information entering the brain goes to the primary sensory areas, but a small part of the information goes to the amygdala, part of the limbic system deep inside the human brain. A main driver of the amygdala is danger: in case of a potential threat to the organism, the amygdala is able to respond quickly and prepare the body for action without much stimulus processing. The amygdala enables the release of stress hormones leading to peripheral effects, for instance increased heart rate to pump more blood to the lungs and muscles. After the amygdala, processing continues through the cingulate cortex, the ventromedial prefrontal cortex and finally Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>the dorsolateral prefrontal cortex. Only in the dorsolateral prefrontal cortex is the processing stream through the amygdala integrated with the more cognitive processing stream from the sensory cortices. The emotional experience is a result of the interpretation of both processing routes taking into account the context and previous experiences. This integration and interpretation of information is a typical function of the prefrontal cortex <ns0:ref type='bibr' target='#b22'>[23]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1'>Psychological framework of emotions</ns0:head><ns0:p>Before we can discuss how we can measure emotional state, we should first look into the frameworks to classify emotions. There are many psychological frameworks available. Classic work by Paul Ekman <ns0:ref type='bibr' target='#b23'>[24]</ns0:ref> and James Russell <ns0:ref type='bibr' target='#b24'>[25]</ns0:ref> shows that there are several basic emotions: fear, disgust, anger, happiness, sadness and surprise. This set of six basic emotions has been expanded through the years with numerous subclasses. From a neuroscientific point of view, an important question is whether these emotions each have their own (unique) neuronal location or circuit (i.e. a discrete model <ns0:ref type='bibr' target='#b25'>[26]</ns0:ref>), or vary along several independent dimensions (i.e, a dimensional model <ns0:ref type='bibr' target='#b26'>[27]</ns0:ref>), a matter that is still under debate. As described above, experiencing an emotion is the result of the integration and interpretation of numerous information streams by an extended network of brain areas which makes a discrete model unlikely. Therefore, we adopt a dimensional model, or more specifically the circumplex model of emotion <ns0:ref type='bibr' target='#b24'>[25,</ns0:ref><ns0:ref type='bibr' target='#b27'>28,</ns0:ref><ns0:ref type='bibr' target='#b28'>29]</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The circumplex model of emotion stems from the ratings of individual, written words.</ns0:p><ns0:p>Neuro imaging studies confirm the two-dimensional model of valence and arousal and although there may be complex interactions between both dimensions, they both have a different signature of brain activation, spatially as well as temporally. Arousing words show a different pattern (compared to neutral words) mainly in the early processing stages (i.e. within 400 ms after presentation including the following ERP components: early posterior negativity (EPN), P1, N1, P2, and N400) while the difference between positively versus negatively valenced words shows in later processing stages (between 500 and 800 ms after presentation including the late positive complex (LPC)) <ns0:ref type='bibr' target='#b29'>[30]</ns0:ref>. In the spatial domain, arousal is linked to amygdala activity and valence to the cingulate cortex and the orbitofrontal cortex <ns0:ref type='bibr' target='#b30'>[31]</ns0:ref><ns0:ref type='bibr' target='#b31'>[32]</ns0:ref><ns0:ref type='bibr' target='#b32'>[33]</ns0:ref><ns0:ref type='bibr' target='#b33'>[34]</ns0:ref><ns0:ref type='bibr' target='#b34'>[35]</ns0:ref>. Excellent reviews are given by <ns0:ref type='bibr' target='#b36'>[36]</ns0:ref> and <ns0:ref type='bibr' target='#b37'>[37]</ns0:ref>. Based on her review, Citron <ns0:ref type='bibr' target='#b38'>[38]</ns0:ref> comes to the conclusion that positive and negative valence may differ with respect to the cognitive functions they activate and are not necessarily a continuous dimension. Although a novel is more than a collection of individual words, there has been very little research on the physiological reactions to reading larger pieces of text (see the first section of this paper), but a lot to reading individual words. This project aims to fill that void.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>Emotion classification using neurophysiological measures</ns0:head><ns0:p>With the circumplex model as point of departure, we can start to identify physiological signals that reflect the arousal and valence of emotions and that can potentially be measured while reading outside a laboratory environment. We will look at a broader range of methods used to induce an emotional state than written words and at a broader set of physiological measures than EEG and fMRI. For example, <ns0:ref type='bibr' target='#b38'>[38]</ns0:ref> induced emotional state by letting subjects imagine pleasant, unpleasant, aroused and relaxed situations and measured effects on EEG, heartrate, skin exclusively affected by the parasympathic system (reduced high frequency HRV with increased arousal <ns0:ref type='bibr' target='#b54'>[53]</ns0:ref>), pupil size, heart rate (HR) and respiration frequency (all increased with increased arousal, although this pattern is not consistent over studies, see <ns0:ref type='bibr' target='#b56'>[54]</ns0:ref> for an elaborate overview).</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3'>Current state of the art in (applied) emotion capture</ns0:head><ns0:p>The state-of-the-art in emotion detection using neurophysiological indices is that we are able to distinguish several valence and arousal levels in a lab environment when subjects are sitting still and sufficient control data is gathered beforehand to train classification algorithms (see <ns0:ref type='bibr' target='#b57'>[55]</ns0:ref> for an overview). However, it is important to note that the relation between physiology and emotion is not straightforward. Different studies with different stimuli and contexts report different types of correlations <ns0:ref type='bibr' target='#b56'>[54,</ns0:ref><ns0:ref type='bibr' target='#b58'>56]</ns0:ref>. It is thus important to study relations between (neuro-) physiology and emotion within the context and under the circumstances of interest <ns0:ref type='bibr' target='#b59'>[57,</ns0:ref><ns0:ref type='bibr' target='#b60'>58]</ns0:ref>. An important step in this project, is to bring neurophysiological signals out of the lab and explore their potential value in daily life <ns0:ref type='bibr' target='#b57'>[55,</ns0:ref><ns0:ref type='bibr' target='#b60'>58]</ns0:ref>. Monitoring and using the (neuro-) physiological signals of readers is new, and entertainment in general is a good first case to transfer the technology from the laboratory to real life. This transition will come with several challenges ranging from coping with external noise due to movement artifacts, multitasking users, and usability aspects such as prolonged usage <ns0:ref type='bibr' target='#b60'>[58]</ns0:ref><ns0:ref type='bibr' target='#b61'>[59]</ns0:ref><ns0:ref type='bibr' target='#b62'>[60]</ns0:ref>. First steps in this transition have recently been made in studies investigating EEG signals in gaming <ns0:ref type='bibr' target='#b63'>[61]</ns0:ref> and into music perception in realistically moving participants <ns0:ref type='bibr' target='#b64'>[62]</ns0:ref>. Here we also present the case of the writer wearing physiological sensors for several hours a day and for 9 days in a row.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>The creative brain</ns0:head><ns0:p>The current case study focused on the writer and his emotional signals during the creative Manuscript to be reviewed</ns0:p><ns0:p>Computer Science writing process. Our primary goal was to implement and learn about the transition from laboratory to real life before upscaling the set-up to hundreds of readers, and to capture the emotional signals of the writer as function of the emotional content of the written paragraphs.</ns0:p><ns0:p>We deemed it worthwhile, nevertheless, to have a quick look at the creative brain as well. Most people would agree that creative abilities make us unique in the animal kingdom. Interestingly, we understand little of the processes that drive or facilitate creativity and still debate on the definition of creativity, although most agree upon the importance of both novelty and usefulness (see <ns0:ref type='bibr' target='#b65'>[63]</ns0:ref> for an elaborate discussion). Similar to the neuroscience of art and beauty, neuroscientific research into creativity can still be characterised as embryonic and neuroscientific models are not widely established yet.</ns0:p><ns0:p>Like emotion, creativity is not related to a single brain area but rather to networks of brain areas.</ns0:p><ns0:p>Based on an extensive review, Dietrich and Kanso even state that 'creativity is everywhere' <ns0:ref type='bibr' target='#b66'>[64]</ns0:ref> see also <ns0:ref type='bibr' target='#b67'>[65]</ns0:ref>. Having said that, recent neuroimaging studies seem to show that creativity involves common cognitive and emotional brain networks also active in everyday tasks, especially those involved in combining and integrating information. For the current project, it is useful to distinguish two different types of creative processes as described by Dietrich <ns0:ref type='bibr' target='#b68'>[66]</ns0:ref>. The first can be called controlled creativity often in relation to finding creative solutions for a particular, given problem. This creative process is controlled through the prefrontal cortex <ns0:ref type='bibr' target='#b69'>[67]</ns0:ref> that guides the search for information and the combination of information within a given solution space. A powerful mechanism which is bound, though, by limitations of the prefrontal cortex, for instance with respect to the number of solutions that can be processed in working memory. The second type can be named spontaneous creativity, often in relation to artistic expression. This form of creativity comes without the restrictive control from the prefrontal cortex, and the process differs Manuscript to be reviewed</ns0:p><ns0:p>Computer Science from controlled creativity qualitatively (e.g. solutions are not bound by rational rules like the rules of physics) and quantitatively (the number of solutions is not restricted by for instance the limited capacity of working memory). Spontaneous creativity is linked to unconscious processes (of which dreaming may be an extreme form). 
However, the prefrontal cortex becomes involved in spontaneous creativity when solutions will eventually reach the conscious mind, and the prefrontal cortex is required to evaluate them and bring them to further maturity.</ns0:p><ns0:p>Recent data show us that less activity in the dorsolateral prefrontal cortex links to increased spontaneous creativity in, for instance, musicians <ns0:ref type='bibr' target='#b70'>[68,</ns0:ref><ns0:ref type='bibr' target='#b72'>69]</ns0:ref>, and increased activation to increased controlled creativity. Results also show that there is a burst of wide-spread gamma activity about 300 ms before the moment of insight in spontaneous creativity. Gamma activity is, amongst other features, linked to binding pieces of information. A burst of gamma activity is indicative of finding (and binding) a new combination of chunks of (existing) information. Fink</ns0:p><ns0:p>and Benedek <ns0:ref type='bibr' target='#b73'>[70]</ns0:ref> underline the importance of internally oriented attention during creative ideation in a more general sense, reflected in an increase in alpha power.</ns0:p><ns0:p>Creativity is also linked to hemispheric asymmetry. A meta-analysis <ns0:ref type='bibr' target='#b74'>[71]</ns0:ref> showed that the right hemisphere has a larger role in creative processes than the left hemisphere. This is confirmed by patient research <ns0:ref type='bibr' target='#b75'>[72,</ns0:ref><ns0:ref type='bibr' target='#b76'>73]</ns0:ref>. A lesion in the right medial prefrontal cortex hinders the creation of original solutions while a lesion in the left medial prefrontal cortex seems to be beneficial for spontaneous creativity. However, experiments with creative students <ns0:ref type='bibr' target='#b78'>[74]</ns0:ref> and extremely creative professionals from science and arts <ns0:ref type='bibr' target='#b79'>[75]</ns0:ref> both show bilateral cerebellum involvement, seemingly confirming the statement that 'creativity is everywhere in the brain'. However, these findings are general findings and may not be applicable to the creative writing process <ns0:ref type='bibr' target='#b80'>[76]</ns0:ref>. For instance, creative writing seems to result in increased activity in the left Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>prefrontal cortex (presumably because of its links to important language areas in the left hemisphere), except when writing emotional text, for which activity in the right hemisphere seems to be greater. This shows that the body of knowledge on the creative brain is growing but still limited and identifying neural correlates of the creative writing process requires further research. Another interesting debate is whether creative writing is a skill one can develop like skilled behavior in sports and music, or possibly even non-creative, non-fiction writing like scientists and journalists do. Lotze and colleagues found that the caudate nucleus (involved in skilled behavior) was active in experienced creative writers but not in novices <ns0:ref type='bibr' target='#b81'>[77,</ns0:ref><ns0:ref type='bibr' target='#b82'>78]</ns0:ref>, indicating that creative writing can indeed be a (trainable) skill. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science <<Figure 1. Location of the 28 electrodes in the 10-20 system.>> <<Figure 2. Writer showing the neurophysiological sensors (A) and during writing (B).>></ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1.3'>Experimental protocol</ns0:head><ns0:p>Table <ns0:ref type='table'>1</ns0:ref> gives the outline of the experimental protocol for one day. Table <ns0:ref type='table'>2</ns0:ref> gives the details on the experimental protocol. <<Table 1. Experimental protocol for one day.>> <<Table 2. Specification of the experimental protocol.>></ns0:p></ns0:div><ns0:div><ns0:head n='4.1.4'>Data processing</ns0:head><ns0:p>Our intention was to use the calibration sessions of each day to identify differences in physiological markers that could be linked to the emotional content of the written paragraph. Once this 'ground truth' is reliably established, it can subsequently be used to analyze the data gathered during the writing blocks. After checking the synchronization between the different data streams, the physiological data of the calibration session were separated into 10 epochs corresponding to 1 min rest eyes open, 1 min rest eyes closed, 6x2 min 'emotional writing' (each corresponding to one of six different emotional pictures and descriptors), and again 1 min rest eyes open, 1 min rest eyes closed.</ns0:p><ns0:p>EEG. The EEG data were processed using the following pipeline: re-referencing to channel TP10, rejection of channels with very large variance (channels O1, Oz and O2 were very noisy and removed completely from the dataset), band-pass filtering 0.5 to 43 Hz, and down-sampling to 250 Hz. Initially, the EEG data of the remaining channels were entered into an Independent Component Analysis (ICA) to identify and remove potential artifacts. However, the ICA revealed that the potential artifacts were non-stationary (i.e. changing over time) and therefore difficult to identify, so no further data were removed. The power in the frequency bands delta (0-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), SMR (13-16 Hz), beta (16-30 Hz) and gamma (30-80 Hz) was used as features in the classification.</ns0:p></ns0:div>
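For illustration, a minimal Python sketch of this EEG feature pipeline could look as follows. The paper does not specify its tooling, so the scipy-based implementation, the channels-by-samples array layout and the channel-name list are assumptions; note also that the reported 0.5-43 Hz band-pass caps recoverable gamma power well below the stated 80 Hz.

```python
# Minimal sketch of the described EEG pipeline (assumed layout: channels x samples).
import numpy as np
from scipy.signal import butter, filtfilt, welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "smr": (13, 16), "beta": (16, 30), "gamma": (30, 80)}

def preprocess(eeg, ch_names, fs):
    eeg = eeg - eeg[ch_names.index("TP10")]           # re-reference to TP10
    keep = [i for i, ch in enumerate(ch_names)
            if ch not in ("O1", "Oz", "O2")]          # reject high-variance channels
    b, a = butter(4, [0.5, 43], btype="band", fs=fs)  # 0.5-43 Hz band pass
    eeg = filtfilt(b, a, eeg[keep], axis=1)
    step = int(fs // 250)                             # naive decimation to 250 Hz
    return eeg[:, ::step], fs / step                  # (band pass already anti-aliases)

def band_powers(epoch, fs):
    """Mean band power per channel for one epoch, via Welch's method.
    NB: the 43 Hz filter above truncates the nominal 30-80 Hz gamma band."""
    f, pxx = welch(epoch, fs=fs, nperseg=int(2 * fs), axis=1)
    return {name: pxx[:, (f >= lo) & (f < hi)].mean(axis=1)
            for name, (lo, hi) in BANDS.items()}
```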
<ns0:div><ns0:p>Peripheral physiology. As a measure of heart rate, we determined the mean interval between successive R-peaks in the ECG (RRI) for each epoch and converted this to the mean heart rate (meanHR = 1/meanRRI). Four measures of heart rate variability were derived. The root mean squared successive difference between the RRIs (rmssdRRI) reflects high-frequency heart rate variability. We also determined heart rate variability in the low, medium and high band using a spectral analysis (HRVlow, HRVmed, HRVhigh). High-frequency heart rate variability was computed as the power in the high frequency range (0.15-0.5 Hz) of the RRI over time using Welch's method applied after spline interpolation; similarly for mid-frequency (0.07-0.15 Hz) and low-frequency (0-0.07 Hz) heart rate variability. No anomalies were present in the ECG data so no data were removed. From the ESK, the mean ESK over the epochs was calculated. For the ESK we removed one outlier (contentment epoch on day 2).</ns0:p></ns0:div>
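A sketch of these heart-rate features is given below, assuming R-peak times (in seconds) have already been extracted from the ECG; the 4 Hz resampling rate for the interpolated RRI series is our assumption, not a value reported in the paper.

```python
# Sketch of the heart-rate features described above (r_peaks_s: R-peak times in s).
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import welch

def hr_features(r_peaks_s):
    rri = np.diff(r_peaks_s)                       # R-R intervals in seconds
    feats = {"meanHR": 1.0 / rri.mean(),           # mean heart rate (Hz; *60 for bpm)
             "rmssdRRI": np.sqrt(np.mean(np.diff(rri) ** 2))}
    # Spline-interpolate the irregular RRI series onto an even grid, then
    # estimate band powers with Welch's method, as described above.
    t, fs = r_peaks_s[1:], 4.0                     # 4 Hz grid is an assumption
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    rri_even = CubicSpline(t, rri)(grid)
    f, pxx = welch(rri_even, fs=fs, nperseg=min(256, len(rri_even)))
    for name, lo, hi in [("HRVlow", 0.0, 0.07), ("HRVmed", 0.07, 0.15),
                         ("HRVhigh", 0.15, 0.5)]:
        feats[name] = pxx[(f >= lo) & (f < hi)].sum()
    return feats
```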
<ns0:div><ns0:p>Facial expression. The images from the close-up camera were analysed offline using Noldus FaceReader software. Output for each epoch are intensity values for the following classifications: Neutral, Happy, Sad, Angry, Surprised, Scared, Disgusted.</ns0:p><ns0:p>Subjective questionnaires. The data of the feelings grid, VAS, and DES full questionnaires were not pre-processed but directly analysed. We only statistically analysed the main effects of day (9 levels) and session (start of day and end of day for DES full, and start of day, end of block 1, end of block 2, end of day for feelings grid and VAS). The DES full scores were analysed using nonparametric statistics with the alpha level Bonferroni-adjusted for the number of comparisons. Feelings grid and VAS scores were analysed with a parametric ANOVA.</ns0:p></ns0:div><ns0:div><ns0:head>Classification analysis using EEG and peripheral physiology features.</ns0:head><ns0:p>To determine how well various feature sets could predict the emotional state of the author during the calibration session, we performed a classification analysis. Classification was performed using the Donders Machine Learning Toolbox <ns0:ref type='bibr' target='#b83'>[79]</ns0:ref>. Two types of classifiers were used: a linear Support Vector Machine (SVM) and an elastic net model with logistic regression <ns0:ref type='bibr' target='#b85'>[80]</ns0:ref>. As input we used the features, standardized to mean 0 and standard deviation 1 on the basis of data from the training set. One-tailed binomial tests were used to determine whether classification accuracy was significantly higher than chance.</ns0:p></ns0:div>
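The analysis itself was run with the Donders Machine Learning Toolbox; the scikit-learn sketch below is an illustrative analogue of the described setup (training-set standardization, linear SVM, elastic net logistic regression, one-tailed binomial test), not the authors' code, and the hyperparameters are assumptions.

```python
# Illustrative scikit-learn analogue of the classification setup described above.
from scipy.stats import binomtest
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Pipelines ensure standardization is fit on the training folds only.
svm = make_pipeline(StandardScaler(), SVC(kernel="linear"))
enet = make_pipeline(StandardScaler(),
                     LogisticRegression(penalty="elasticnet", solver="saga",
                                        l1_ratio=0.5, max_iter=5000))

def above_chance(n_correct, n_total, chance=0.5):
    """One-tailed binomial test of accuracy against chance level."""
    return binomtest(n_correct, n_total, p=chance, alternative="greater").pvalue
```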
<ns0:div><ns0:head n='4.1.5'>Procedures</ns0:head><ns0:p>We started the measurements on the day the writer started with a new novella to be used in phase 2 of the project. We adjusted the measurements to his usual daily writing schedule comprising two blocks: one in the morning and one in the late afternoon or early evening. He normally writes for about two hours and fills the time in between with other activities (including other writing activities). During a writing block, he was engaged in other activities as well like answering emails and phone calls etc., but never during the instrumentation and calibration. All activities during the measurement blocks were logged by the experimenter who was always present during the measurements. We measured for nine consecutive days. At the end of the day, the experimenter and the writer would make a specific schedule for the next day. The writer also reflected on his experiences over the day, including the user experience of wearing the equipment and being observed. The day before the start of the experiment, the protocol, instructions etc. were explained in great detail, the writer signed the informed consent, his workplace was instrumented and the equipment tested. Besides the addition of the equipment, the writer's workplace was not altered in any way to give the writer the best opportunity to behave as usual. On each measurement day, the experimenter came to the apartment as scheduled and followed the protocol as detailed above. At the end of the day, all data were encrypted and saved to an external hard disk.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Results</ns0:head></ns0:div>
<ns0:div><ns0:head n='4.2.1'>Classification of baseline vs. emotion conditions in the calibration blocks</ns0:head><ns0:p>First, we determined whether the feature set contained information to discriminate the baseline conditions (Eyes Open and Eyes Closed) from the emotional (writing) epochs using binary classification (baseline vs. non-baseline). For this purpose we performed a 'leave-one-day-out' cross validation using the SVM classifier. This method is to be preferred over random N-fold cross validation since it better accounts for possible correlations between data during the day <ns0:ref type='bibr' target='#b86'>[81]</ns0:ref>. Still, the results when using random folds were found to be comparable to the results of the analysis presented here. It is also important to compensate for the imbalance in the number of conditions, with 36 baseline blocks and 54 emotional blocks in the set. All reported performance scores follow a binomial distribution and the variability of the binomial distribution follows directly from the average score and the number of measurements (the distribution is not well approximated with a Gaussian distribution and therefore the variance is not a good indicator of the variability in the results). For larger numbers of measurements the variability is approximately equal to p*(1-p)/N (with p estimated by the score, N the number of measurements).</ns0:p><ns0:p>When all six physiology features (i.e. meanHR, HRVlow, HRVmed, HRVhigh, rmssdRRI, meanESK) were used as input to the classifier, the average model performance (over all days) was 71%, with a hit-rate (score for correctly classifying baseline blocks) of 58% and a False-Alarm-rate (FA-rate, i.e. fraction of falsely classified emotional blocks) of 20%, resulting in an equal cases (in the situation in which both conditions occur equally frequently) performance of 69% (p < .01). Individual ANOVAs with condition as independent variable (baseline vs writing) and physiological measure as dependent variable showed significant differences for the heart rate variability measures only (all F values > 5.83, all p values < .02). Figure <ns0:ref type='figure' target='#fig_12'>3</ns0:ref> gives the HR and rmssd RRI as function of calibration block. <<Figure 3. Results of the physiological measures Heart Rate (A) and rmssd RRI (B) as function of the different calibration blocks. Eyes open and Eyes closed blocks were measured before and after the emotional blocks. The order of the emotional blocks was balanced over days. Error bars denote the standard error of the mean.>></ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref> summarizes the power distribution for the different frequency bands averaged over the rest epochs (left column) and the writing epochs (middle column). When the EEG features were used as input, the average model performance (over all days) was 92% with a hit-rate of 86% and a False-Alarm-rate (FA-rate) of 4%, resulting in an equal cases performance of 91% (p < .01). The model weights are depicted in the right column of Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>. Inspecting Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref> shows that there are three major differences between rest and writing reflected in the model weights. The writing blocks show: (1) increased frontal power in the delta and theta bands, (2) widespread suppression of alpha, and (3) a central increase in beta and gamma activity. The first effect is most likely caused by eye movements. The second effect relates to the suppression of the brain's idle state during rest. The increase in gamma activity may be related to creative processes as described in Section 3, but gamma components are also susceptible to muscle artifacts caused by e.g. jaw clenching and forehead movements.
If we only use the alpha and gamma features in the classifier, the equal class performance is 88%, indicating that a reliable difference can be obtained without using features that may be contaminated with eye movements.</ns0:p><ns0:p>When all features (peripheral physiology and EEG) were used as input, the average model performance was also 92%, with a hit-rate of 89% and a False-Alarm-rate (FA-rate) of 6%, resulting in an equal cases performance of 92% (p < .01), a non-significant improvement relative to using only EEG features, indicating that the added value of incorporating features other than EEG ones is small in this case.</ns0:p><ns0:p>A closer inspection of the feature weights in the classification model showed that the highest weights are attributed to the delta (0-4 Hz) and theta (4-8 Hz) bands in channels Fp1, Fpz, Fp2 (i.e. frontal channels). The equal class performance of a classification model using only these six features is 0.84 (compared to 0.92 for a model using all features). Slow (0-4 Hz) frequency bands of EEG may pick up eye movements and should be evaluated with caution (please note that eye movements were not removed from the EEG data). Indirect measurement of eye movements in the EEG signal masks the information in the primary EEG. Even in case it is a reliable classifier for the current experimental setup, we consider it an artifact.</ns0:p></ns0:div>
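The 'leave-one-day-out' scheme and the 'equal cases' score can be expressed compactly; in the sketch below the dummy feature matrix, labels and day indices are placeholders for the real data, and balanced accuracy stands in for the equal-cases performance (the mean of hit rate and correct-rejection rate).

```python
# Sketch of leave-one-day-out cross validation (X, y, days are dummy stand-ins).
import numpy as np
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 20))        # 90 calibration epochs x 20 features (dummy)
y = rng.integers(0, 2, size=90)      # 1 = baseline, 0 = emotional block (dummy)
days = np.repeat(np.arange(9), 10)   # nine recording days, ten epochs each

svm = make_pipeline(StandardScaler(), SVC(kernel="linear"))
y_hat = cross_val_predict(svm, X, y, groups=days, cv=LeaveOneGroupOut())
equal_cases = balanced_accuracy_score(y, y_hat)   # (hit rate + CR rate) / 2

# Binomial variability of a proportion p over N measurements, per p*(1-p)/N above.
var_p = equal_cases * (1 - equal_cases) / len(y)
```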
<ns0:div><ns0:head n='4.2.2'>Classification of valence and arousal in the calibration blocks</ns0:head><ns0:p>Model performance was determined for classifying low vs. high valence and arousal using 10-fold cross validation over a range of parameters: (1) features from EEG, physiology or both; (2) binary classification for predicting outcomes higher than the median value or using only the extreme values, i.e. lower than the 0.33-quantile or higher than the 0.66-quantile; (3) using raw or normalized features, in which case the features were normalized by dividing by the average feature value for the Eyes-Open conditions (for that day); and (4) using an SVM or an elastic net classifier with logistic regression.</ns0:p><ns0:p>In none of the cases did we find classification performance deviating significantly from chance performance. Since classification performance using the whole set of EEG data did not result in above-chance performance, we did not continue using specific subsets only, e.g. to look at the power in specific EEG frequency bands like alpha <ns0:ref type='bibr' target='#b42'>[42,</ns0:ref><ns0:ref type='bibr' target='#b43'>43]</ns0:ref>, at the relative power in different EEG bands <ns0:ref type='bibr' target='#b40'>[40]</ns0:ref> and at asymmetrical alpha activity in the prefrontal cortex. Individual ANOVAs on the physiological measures confirmed these observations: all F-values < 0.63 and all p values > .67, see also Figure <ns0:ref type='figure' target='#fig_12'>3</ns0:ref>. Because building a reliable valence and arousal classification algorithm using the calibration data turned out to be impossible, we could not further classify the novel writing data.</ns0:p></ns0:div>
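A minimal sketch of the two label constructions and the Eyes-Open normalization compared above is given below; the array and argument names are our assumptions.

```python
# Sketch of the compared label/feature variants (names are assumptions).
import numpy as np

def binarize(ratings, extremes=False):
    """Median split, or extreme-tertile split keeping only values
    below the .33 quantile or above the .66 quantile."""
    ratings = np.asarray(ratings, float)
    if extremes:
        lo, hi = np.quantile(ratings, [0.33, 0.66])
        mask = (ratings < lo) | (ratings > hi)
        return mask, (ratings[mask] > hi).astype(int)
    mask = np.ones(len(ratings), bool)
    return mask, (ratings > np.median(ratings)).astype(int)

def normalize_by_eyes_open(features, day, eyes_open_mean_per_day):
    # Divide each feature by that day's average Eyes-Open feature value.
    return features / eyes_open_mean_per_day[day]
```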
<ns0:div><ns0:head n='4.2.3'>Facial expression in the calibration blocks</ns0:head><ns0:p>We used the FaceReader® output directly in the analysis and found no significant differences between the different emotional paragraphs. Generally, the facial expression of the writer was classified as neutral (about 30%), sad (about 25%) or angry (about 20%). The remaining 25% was dispersed over happy, surprised, scared and disgusted.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2.4'>Subjective questionnaires</ns0:head><ns0:p>The DES full showed neither differences over the days (1-9) nor over sessions (start - end of day). Analysis of the feelings grid scores showed a significant effect of arousal over sessions: F(3, 31) = 4.57, p < .01. A post-hoc LSD test showed a significant difference between the start of the day and the end of block 2 and the end of the day. The analyses of the VAS scores showed no effect over days, but a large effect over sessions of happy: F(3, 31) = 3.65, p < .03, optimistic: F(3, 31) = 6.28, p < .01 and flow: F(3, 31) = 6.76, p < .001, and a trend for relaxed: F(3, 31) = 2.38, p < .09. The means of the significant effects over session are presented in Figure 5. The figure shows that happy, optimism and flow are rated high at the start of the day but systematically decrease over the writing sessions with a stabilization or reversal at the end of the day. For arousal, this effect is inverted. These trends are confirmed by post-hoc LSD tests.</ns0:p><ns0:p>In the daily debriefing session at the end of the day, the writer indicated that the EEG cap was uncomfortable at the start but that he got used to wearing the cap and the other physiological sensors. He experienced the cameras as more obtrusive and disturbing than the physiological sensors. He elaborated on this in several public interviews (e.g., in The New York Times: www.nyti.ms/1dGxkFR). <<Figure 5. Significant changes in subjective ratings over the course of a writing day.>></ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3'>Discussion</ns0:head></ns0:div>
<ns0:div><ns0:head n='4.3.1'>Set-up and user experience</ns0:head><ns0:p>The case study primarily focussed on measuring neurophysiological indices over prolonged periods outside a laboratory environment before applying the technology in a large-scale experiment with the readers of the novel. Inspection of the signals revealed that, except for EEG channels O1, Oz and O2, we were able to record reliable signals in a real life situation using wearable / wireless sensor technology and that the setup was comfortable enough for the writer to work for hours a day wearing the sensors. The noise in the occipital channels may be caused by (neck) muscle activity related to mouse and keyboard actions. The ICA analysis indicated that potential artifacts were non-stationary (i.e. changed over time), an effect similar to what we find with readers <ns0:ref type='bibr' target='#b13'>[14]</ns0:ref>. Non-stationarities may be more common in real-world, multitasking environments and hamper identification and removal of artifacts <ns0:ref type='bibr' target='#b60'>[58]</ns0:ref>. This increases the relevance of adding EMG and EOG sensors to the sensor suite. Data analysis may also benefit from a higher electrode density, allowing more advanced techniques for artifact removal and EEG analysis to be applied. Recent electrode developments may enable this without reducing usability and comfort over prolonged periods of use.</ns0:p><ns0:p>Although measuring physiology outside a well-controlled laboratory environment is challenging, the data show reliable differences between resting state and writing, which indicates a sufficient signal-to-noise ratio in the data. It could still be the case that this 'writing detector' is triggered by artifacts like eye movements or muscle activity that comes with typing (however, the EEG channels most prone to these muscle artifacts (O1, Oz and O2) were removed from the dataset). If we look at the weight of the different features in the classification algorithm, we see that most weight is attributed to the delta (0-4 Hz) and theta (4-8 Hz) bands in the frontal channels (Fp1, Fpz, Fp2). Low frequency bands should be evaluated with caution as they may reflect eye movements rather than signals in the primary EEG. This is especially relevant for the current dataset since (eye movement) artifacts were non-stationary and could not be reliably identified and removed. However, the classifier is also based on suppression of alpha and increased central gamma activity during writing. This matches the expected pattern for creative writing, although one should note that gamma can also be affected by e.g. jaw clenching artifacts. In addition, the current differences in physiology like increased heart rate and decreased heart rate variability (see Figure <ns0:ref type='figure' target='#fig_12'>3</ns0:ref>) do fit with an interpretation of low (rest) and high (writing) cognitive activity and not just with simple muscle activity. To exclude the aforementioned artifacts, a comparison should be made between writing about an emotion, writing down mundane instructions, and for instance copying text or making random typing movements.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3.2.'>Neurophysiology of emotional writing</ns0:head><ns0:p>Since there are no data available about neurophysiological correlates of emotional writing, we based our expectations on research into physiological responses to the presentation of many repetitions of, for instance, emotional pictures or sounds. In this domain, recent experiments show that changes in emotional state can also be reliably identified with a restricted number of repetitions or even a single trial (especially for longer epochs like in our study).</ns0:p><ns0:p>Therefore, we expected to be able to see changes in physiological state as a function of emotional content, despite the limited number of repetitions. However, we were not able to link specific neurophysiological indices to the emotional content of the writing. We have three possible explanations: (1) the quantity and/or quality of the data was not sufficient, (2) writing is a cognitive rather than an emotional task for this particular author, and (3) the task involved a multitude of emotional, creative and cognitive processes concealing the single-task indices found in single-task laboratory experiments. The first explanation pleads for expanding the data set using more authors and possibly more sessions than we were currently able to gather. We should keep in mind, nevertheless, that the current data were sufficiently reliable to classify rest from writing with 92% accuracy, and the employed classification methods are sensitive enough to be used on smaller datasets. This forces us to look into alternative explanations as well before upscaling. One such explanation is that for this particular writer, the writing process itself may predominantly be a cognitive task and unrelated to the emotional content, i.e. the writer does not experience a particular emotion himself when writing about it. The neurophysiological pattern found in writing compared to rest and the facial expression (often classified as neutral) fit with the signature of a cognitive task. Based on the vast production of the writer and as confirmed in later discussion with him, this is a viable option. In hindsight, the time pressure (2 minutes per item), the strict instruction (write about this particular emotion fitting with this particular picture), the time of day (always in the morning before the writing block started), and the presence of the experimenter may all have triggered cognitive, controlled creativity rather than emotional or spontaneous creativity. The third factor that may have played a role in the current results is the task setting, which may have resulted in multiple processes (including but not limited to emotional, associative, creative, linguistic and motor planning processes). The resulting brain activity patterns may not be comparable to those for passive viewing of emotional pictures in a laboratory environment.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3.3'>Subjective ratings</ns0:head><ns0:p>The ratings of arousal, happy, optimistic and flow seem to show the same pattern. At the start of the day, the writer is in a 'relaxed, good mood' but his mood seems to dwindle during the writing with increasing arousal. At the end of the day, after the last writing session, this pattern stabilizes or is reversed. This profile in part reflects the circadian modulation of mood and related aspects.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>The ebook of the future</ns0:head><ns0:p>One may ask if uncovering brain states associated with art will de-mythologize the process: will art lose its meaning, beauty or purity when reduced to activity of groups of neurons? Will we eventually reveal the mechanisms of art and thus render it mechanical? Will scientists be able to develop a drug that makes everyone a best-selling author? Will this knowledge increase the 'creativity rat race' for artistic and creative success as cognitive enhancers may do in the 'cognitive rat race' in the academic world <ns0:ref type='bibr' target='#b90'>[85]</ns0:ref>? We think not -but raising and discussing these Manuscript to be reviewed</ns0:p><ns0:p>Computer Science questions is of utmost importance for the field <ns0:ref type='bibr' target='#b60'>[58]</ns0:ref>. A more interesting debate is whether creative writing is a skill one can develop like skilled behavior in sports and music, or possibly even non-creative writing like scientists and journalists do on a daily basis. Creative skills are important outside the arts and the creative industry and their importance is widely acknowledged in an innovative and knowledge-based economy. We would like to expand our research into (spontaneous) creativity to answer important questions and develop appropriate tests and tools to measure spontaneous creativity (which may require 24 hour measurements).</ns0:p><ns0:p>Current ebooks have the ability to track reader behavior and ebook retailers are actively gathering (anonymous) data of their readers on parameters such as the books the reader has finished (or not), how fast, where reading was discontinued and for how long and which words were looked up in a connected dictionary <ns0:ref type='bibr' target='#b91'>[86]</ns0:ref>. None of this information is directly used for the benefit of the reader but serves manufacturers and publishers only. The basis for our approach is to measure the readers' state and behavior to make them the primary beneficiaries, for instance through enhancing the reader experience. There are many approaches foreseeable. A relatively simple one that is not interactive yet is to use the emotional response to give better informed advice on other books the reader may enjoy. In a similar way, readers may want to share their emotional profile, for instance by posting it on social media or through new communities of people with similar frames of mind around a specific book. Real interactivity may also come in many forms. For instance, the emotional response may be used to add music or other multisensory stimuli to further intensify the experience or ultimately change the storyline or the flow of the book. This may lead to new media products that are somewhere in between literature, movies and games. Manuscript to be reviewed</ns0:p><ns0:note type='other'>Figure captions</ns0:note><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>PeerJ</ns0:head><ns0:label /><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2015:10:7008:2:0:CHECK 1 Apr 2016)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 4.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The left column shows the power in the different frequency bands in the rest blocks (eyes open and eyes closed combined) and the middle column in the writing blocks. The right column gives the weights of the features in the classification model.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>… (writing) and not just with simple muscle activity. To exclude the aforementioned artifacts, a …</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>… found in writing compared to rest, and the facial expression (often classified as neutral) fit with …</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Location of the 28 electrodes in the 10-20 system.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Writer showing the neurophysiological sensors (A) and during writing (B).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Results of the physiological measures Heart Rate (A) and rmssd RRI (B) as a function of the different calibration blocks. Eyes open and Eyes closed blocks were measured before and after the emotional blocks. The order of the emotional blocks was balanced over days. Error bars denote the standard error of the mean.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The left column shows the power in the different frequency bands in the rest blocks (eyes open and eyes closed combined) and the middle column in the writing blocks. The right column gives the weights of the features in the classification model.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Significant changes in subjective ratings over the course of a writing day.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table /><ns0:note>gives the details on the experimental protocol.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head /><ns0:label /><ns0:figDesc>Inspection of the signals revealed that, except for EEG channels O1, Oz and O2, we were able to record reliable signals in a real-life situation using wearable/wireless sensor technology, and that the setup was comfortable enough for the writer to work for hours a day wearing the sensors. The noise in the occipital channels may be caused by (neck) muscle activity related to mouse and keyboard actions. The ICA analysis indicated that potential artifacts were non-stationary (i.e. changed over time), an effect similar to …</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 1.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Experimental protocol for one day.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 2.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Specification of the experimental protocol.</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "PeerJ
Thank you for your submission to PeerJ Computer Science. I am writing to inform you that, in my opinion as the Academic Editor for your article, your manuscript 'Toward physiological indices of emotional state driving future ebook interactivity' (#CS-2015:10:7008:1:1:REVIEW) requires some minor revisions before we can accept it for publication.
________________________________________
Reviewer Comments
Reviewer 1 (Jonathan Touryan)
Minor Concerns:
Some aspects of the introduction present neuroscientific theories or frameworks (e.g. the emotional brain) as more established than they currently are. Many of the components of the introduction (emotion, beauty, creativity) describe research at the forefront of cognitive neuroscience and are, at present, vigorously debated.
> We are aware that there is debate on several of the recent findings and models, and we added extra nuance to the manuscript where appropriate.
The authors are appropriately guarded and circumspect in their interpretation of the frontal delta and theta components. Indeed, these are likely to be eye movement related features. However, the gamma components are also likely to be (at least partially) driven by EMG. Even though the occipital channels were removed to mitigate neck muscle artifacts, jaw clenching and forehead movements can also be reflected in the gamma bands across the scalp.
> Added to the results and discussion of gamma.
The embedded image quality was very low making some of the figures very hard to interpret (especially new figure 4). Hopefully, this will be resolved during the publication process.
> Correct. The low resolution is due to the PeerJ-produced PDF for reviewing.
Comments for the author
The revisions in response to the reviewers' concerns have significantly improved the manuscript. The (substantial) introduction now provides an appropriate background and literature review for this very ambitious study. The abstract, introduction, results and discussion now flow and provide a more coherent narrative. The expanded results likewise provide more insight into the EEG classification component. My previous concerns have been addressed and I now support publication of this manuscript.
> Thank you.
Reviewer 2 (Peter König)
Basic reporting
In the first round I gave a rather positive evaluation. With the revision, however, I'm not impressed. The authors were very reluctant to include the suggestions and the manuscript has not been improved.
The relation to the existing literature is not adequate. Yet the authors step up their claims by adding the statement that 'We describe the theoretical foundation of the emotional and creative brain and review the neurophysiological indices that can be used to drive future ebook interactivity in a real life situation.' If this has aspects of a review, then it is mandatory that the current state of the art is referenced and the claims made are adapted to what is delivered in the present manuscript.
> We prepared the revision with utmost care and paid full attention to the original comments. The manuscript was neither intended nor claimed to be a full review paper, but a description of basic research findings relevant for the goals of the applied research project (physiological emotion detection for ebook interactivity) and a first proof of concept completed by measuring the writer over prolonged periods. This required a careful balance between the multitude of available models, the relevance of individual fundamental brain studies for applied neuroscientific purposes, and the levels of detail and depth of analysis of the case study, all keeping in mind the page limit for the manuscript. To the authors, this combination and balance is relevant for fellow applied scientists, and exactly the reason why we decided not to split up the document into two parts as discussed with reviewer 1. Of course, if we were to split the manuscript into a review part and an experimental part, both manuscripts would have more space, and keeping the parts together implies that we have to be very selective about what to include and what not. This comes with compromises and we sincerely hope that the reviewer can understand the choices we made. Of course, if there are highly relevant (applied) papers we missed, we would be very happy to include them.
The nature of the signal measured is a central question. In EEG measurements many artifactual sources are known. Eye movements are a prominent example. To state (line 406) 'Initialy, the EEG data of the remaining channels were used in an Independent Component Analysis (ICA) to identify and remove potential artifacts. However, the ICA revealed that potential artifacts were non-stationary (i.e. changing over time) and therefore difficult to identify and thus no more data were removed.', or in short, 'there are so many artifacts, we leave them in', is not good enough. I'd expect cleaning the EEG data of artifacts, or at the very least a thorough discussion of the issue with appropriate references to state-of-the-art techniques. Otherwise it is not clear whether the recordings refer to brain activity (as implied by EEG and BCI) or to eye movements or muscle activity.
> We acknowledge that it is important to remove artifacts and that there are many approaches and "schools" for doing so. For this n=1 study it is extremely difficult, and some would even call it arbitrary, to remove non-stationarities without the risk of (selectively) removing real data. Since this is only a case study with a restricted scope and generalizability, we feel that it is not justified to add a whole new discussion on this topic, but rather to save this for the datasets in the later phases of the project, in which we do want to report generalizable findings and will no longer be reporting an n=1 study with limited results.
Based on the best practices of experts and current state-of-the-art perspectives (e.g. following Uriguen, J.A. & Garcia-Zapirain, B. (2015). EEG artefact removal – state-of-the-art and guidelines. J. Neural Eng., 12, 031001), we chose the following approach:
1. Obvious bad channels were removed from the analysis (O1, Oz, O2). With the removal of these channels we expect to have removed the largest part of the EMG artifacts.
2. Standard ICA methods were tested to remove any (potential) additional artifacts from the data. As reported, no single ICA component could be removed, due to the apparent non-stationarity of the ICA noise components. (A minimal code sketch of these two steps is given after this response.)
In our case it is hard, or maybe impossible, to remove (potential) artifacts without removing part of the EEG signal as well, especially since a reference signal (e.g. an EOG signal) is lacking. Moreover, removing certain components or parts of the data based on visual inspection would introduce unwanted subjectivity into the process (and does not assure that all artifacts are indeed removed). We therefore decided not to remove any additional (potential) artifacts. The result is that we cannot conclude that the results are not (partly) due to artifacts. We are aware of this, discuss this issue, and include several statements stressing the need for caution. It is what it is (idiomatic phrase).
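A minimal sketch of the two-step approach described above, written with MNE-Python, follows for illustration. The file name and parameter values are assumptions, not the authors' actual pipeline:

import mne

# File name and parameter values are hypothetical; this only illustrates
# the two steps described in the response above.
raw = mne.io.read_raw_fif("writer_day1_raw.fif", preload=True)

# Step 1: drop the noisy occipital channels (EMG-contaminated, as reported).
raw.drop_channels(["O1", "Oz", "O2"])

# Step 2: fit a standard ICA decomposition (high-pass filtering first is
# common practice for ICA stability).
raw.filter(l_freq=1.0, h_freq=None)
ica = mne.preprocessing.ICA(n_components=20, random_state=42)
ica.fit(raw)

# Inspect the component time courses; with non-stationary noise no single
# component cleanly captures the artifacts, so none are excluded here,
# mirroring the decision reported in the text.
ica.plot_sources(raw)
ica.exclude = []
raw_clean = ica.apply(raw.copy())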
" | Here is a paper. Please give your review comments after reading it. |
155 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>We provide a systematic approach to validate the results of clustering methods on weighted networks, in particularly for the cases where the existence of a community structure is unknown. Our validation of clustering comprises a set of criteria for assessing their significance and stability. To test for cluster significance, we introduce a set of community scoring functions adapted to weighted networks, and systematically compare their values to those of a suitable null model. For this we propose a switching model to</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>produce randomized graphs with weighted edges while maintaining the degree distribution constant. To test for cluster stability, we introduce a non parametric bootstrap method combined with similarity metrics derived from information theory and combinatorics. In order to assess the effectiveness of our clustering quality evaluation methods, we test them on synthetically generated weighted networks with a ground truth community structure of varying strength based on the stochastic block model construction. When applying the proposed methods to these synthetic ground truth networks' clusters, as well as to other weighted networks with known community structure, these correctly identify the best performing algorithms, which suggests their adequacy for cases where the clustering structure is not known. We test our clustering validation methods on a varied collection of well known clustering algorithms applied to the synthetically generated networks and to several real world weighted networks. All our clustering validation methods are implemented in R, and will be released in the upcoming package clustAnalytics.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Clustering of networks is a popular research field, and a wide variety of algorithms have been proposed over the years. However, determining how meaningful the results are can often be difficult, as well as choosing which algorithm better suits a particular data set. This paper focuses specifically on weighted networks (that is, those in which the connections between nodes have an assigned numerical value representing some property of the data), and we propose novel methods to validate the community partitions of these networks obtained by any given clustering algorithm. In particular our clustering validation methods focus on two of the most important aspects of cluster assessment: the significance and the stability of the resulting clusters.</ns0:p><ns0:p>We consider clusters produced by a clustering algorithm to be significant if there are strong connections within each cluster, and weaker connections (or fewer edges) between different clusters. This notion can be quantified and formalized by applying several community scoring functions (also known as quality functions in <ns0:ref type='bibr' target='#b14'>Fortunato (2010)</ns0:ref>), that gauge either the intra-cluster or inter-cluster density. Then, it can be determined that the partition of a network into clusters is significant if it obtains better scores than those for a comparable network with uniformly distributed edges.</ns0:p><ns0:p>On the other hand, stability measures how much a clustering remains unchanged under small perturbations of the network. In the case of weighted networks, these could include the addition and removal of vertices, as well as the perturbation of edge weights. This is consistent with the idea that meaningful clusters should capture an inherent structure in the data and not be overly sensitive to small and/or local variations, or the particularities of the clustering algorithm.</ns0:p><ns0:p>Our goal is to provide a systematic approach to perform these two important clustering validation criteria, which can be used when the underlying structure of a network is unknown, because in this case different algorithms might produce completely different results, and it is not trivial to determine which ones are more adequate, if any at all.</ns0:p><ns0:p>To assess the significance of communities structure, in general weighted networks, we provide a collection of community scoring functions that measure some topological characteristics of the groundtruth communities as defined by Yang and Leskovec in <ns0:ref type='bibr' target='#b41'>(Yang and Leskovec, 2015)</ns0:ref> for unweighted networks. Our scoring functions are proper extensions of theirs to weighted networks. A separate case is the clustering coefficient, a popular scoring function in the analysis of unweighted networks. We examined several existing definitions for the weighted case, being most relevant to us the descriptions given by <ns0:ref type='bibr' target='#b3'>Barrat et al. (2004)</ns0:ref>, <ns0:ref type='bibr' target='#b36'>Saramäki et al. (2007)</ns0:ref>, and <ns0:ref type='bibr' target='#b23'>McAssey and Bijma (2015)</ns0:ref>, and found the latter to be the most versatile (for instance, it can be used in complete graphs where all the information is given by the values of the edges, such as those generated from correlation networks). 
The clustering coefficient of <ns0:ref type='bibr' target='#b23'>McAssey and Bijma (2015)</ns0:ref> is defined in terms of an integral, and we provide an efficient way of computing it. Then, to evaluate the significance (in a statistical sense) of the scores produced by any scoring function, we compare them against null models with similar properties but without any expectations of a community structure. For this we propose an extension to weighted graphs of the switching model <ns0:ref type='bibr' target='#b25'>(Milo et al., 2003)</ns0:ref> which produces random graphs by rewiring edges while maintaining the vertex degree sequence. The idea is that a significant community in any given network should present much better scores than those of the randomly generated ones.</ns0:p><ns0:p>As for the stability of clusters, it has been studied more widely for algorithms that work on Euclidean data (as opposed to networks, weighted or not). For instance, von Luxburg (2010) uses both resampling and adding noise to generate perturbed versions of the data. <ns0:ref type='bibr' target='#b17'>Hennig (2007)</ns0:ref> introduces bootstrap resampling (with and without perturbation) to evaluate cluster stability. Also for Euclidean data, <ns0:ref type='bibr' target='#b37'>Vendramin et al. (2010)</ns0:ref> introduce a systematic approach for cluster evaluation that combines cluster quality criteria with similarity and dissimilarity metrics between partitions, and searches for correlations between them. Our approach consists of a bootstrap technique with perturbations adapted to clustering on networks, that resembles what Hennig does for Euclidean data. That is, the set of vertices is resampled multiple times, and the clustering algorithms are applied to the resulting induced networks. In this case, the perturbations are applied to the edge weights after resampling the vertices, but the standard bootstrap method without perturbation can be used on all networks, weighted or not.</ns0:p><ns0:p>To compare how the clusters of the resampled networks differ from the originals, we use three measures. The adjusted Rand index <ns0:ref type='bibr' target='#b19'>(Hubert and Arabie, 1985)</ns0:ref> is a similarity measure that counts the rate of pairs of vertices that are in agreement on both partitions, corrected for chance. Additionally, we use measures derived from information theory to compare partitions such as the recently introduced Reduced Mutual Information <ns0:ref type='bibr' target='#b28'>(Newman et al., 2020)</ns0:ref>, which corrects some of the issues with the original mutual information or its normalized version <ns0:ref type='bibr' target='#b9'>(Danon et al., 2005)</ns0:ref>. For example, giving maximal scores when one of the partitions is trivial, which in our case would mean that failed algorithms that split most of the network into single vertex clusters would be considered very stable. Other attempts at providing adjusted versions of the mutual information include <ns0:ref type='bibr' target='#b11'>Dom (2002)</ns0:ref>, <ns0:ref type='bibr' target='#b38'>Vinh et al. (2010)</ns0:ref> and <ns0:ref type='bibr' target='#b43'>Zhang (2015)</ns0:ref>.</ns0:p><ns0:p>The other information theory measure we employ for the sake of comparison and control is the Variation of Information (VI) <ns0:ref type='bibr' target='#b24'>(Meilȃ, 2007)</ns0:ref>. 
The VI is a distance measure (as opposed to a similarity measure, like the Rand index and mutual information) that actually satisfies the properties of a proper metric.</ns0:p><ns0:p>We apply these clustering quality evaluation methods on several real world weighted networks together with a varied collection of well known clustering algorithms. Additionally, we also test them on synthetic weighted networks based on the stochastic block model construction <ns0:ref type='bibr' target='#b18'>(Holland et al., 1983;</ns0:ref><ns0:ref type='bibr' target='#b40'>Wang and Wong, 1987)</ns0:ref>, which allows us to have predefined clusters whose strength can be adjusted through a parameter, and which we can compare to the results of the algorithms for their evaluation.</ns0:p><ns0:p>Our main contributions are the following: a switching model for randomizing weighted networks while maintaining the degree distribution, and its use together with the scoring functions we adapted to the weighted case, to provide a general approach to the validation of significance of clustering results.</ns0:p><ns0:p>An implementation of a bootstrap method with perturbation adapted to weighted networks, to test for The remainder of this paper is organized as follows. Section 2 contains the details of our methods for assessing clustering significance and stability, as well as a description of the clustering algorithms we will put to test and the datasets. Section 3 presents the discussion of our experiments. Section 4 presents our conclusions. Finally, we put in an Appendix (section 5) the technical details on the time complexity of our algorithms. Our experiments results figures are reported separately in the Supplemental Tables <ns0:ref type='table'>S1 file,</ns0:ref> which the reader can consult conveniently.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>MATERIALS AND METHODS</ns0:head><ns0:p>To determine if the partition of a graph into communities given by a clustering algorithm provides significant results, we use the scoring functions defined in section 2.1. Our method consists in evaluating these functions on clusters produced by a given algorithm on both the original graph and on multiple samples of randomized graphs generated from the original graph (see section 2.2). Then, for each function we see how the score of the original graph clusters compares to the scores of the randomized graph clusters, as we can define a relative score (score of the original graph over the mean of the scores of the random graphs). We also observe the percentile rank of the original graph's score in the distribution of scores from the randomized graphs. Depending on the nature of the scoring function, a significant cluster structure will be associated with percentile ranks either close to 1 (for scores in which higher is better) or 0 (when lower is better).</ns0:p><ns0:p>For testing cluster stability, we implement a bootstrap resampling on the set of vertices of the network, plus the addition of a perturbation to the weights of the edges in the induced graph. The details of this methodology are described in section 2.3. The Variation of Information (VI), Reduced Mutual Information (RMI) and Adjusted Rand Index (ARI) introduced in section 2.5 are used as similarity measures. Then, the bootstrap statistics are the values of these similarity measures comparing the resampled bootstrap graphs to the original one.</ns0:p><ns0:p>On our experiments we evaluate the results of clustering on a selection of networks with different community structure (section 2.7) with several well-known clustering algorithms (section 2.6). Additionally, we also test the algorithms on synthetic graphs with a preset community structure constructed using stochastic block models (section 2.4). By varying one of the parameters of the model (λ ), we generate networks that range from being mostly uniform (that is, with no community structure) to having very strong communities. This allows us to see how our evaluation methods respond in a controlled environment where the existence or not of strong clusters in the network is known.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1'>Community scoring functions</ns0:head><ns0:p>Here we will provide functions which will evaluate the division of networks into clusters, specifically when the edges have weights. Using the scoring functions for communities in unweighted networks given in <ns0:ref type='bibr' target='#b41'>(Yang and Leskovec, 2015)</ns0:ref> as a reference, we propose generalizations of most of them to the weighted case.</ns0:p></ns0:div>
<ns0:div><ns0:head>Basic definitions.</ns0:head><ns0:p>Let G(V, E) be an undirected graph of order n = |V | and size m = |E|. In the case of a weighted graph 1 G(V, Ẽ), we will denote m = ∑ e∈ Ẽ w(e) the sum of all edge weights. Given S ⊂ G a subset of vertices of the graph, we have n S = |S|, m S = |{(u, v) ∈ E : u ∈ S, v ∈ S}|, and in the weighted case mS = ∑ (u,v)∈ Ẽ:u,v∈S w <ns0:ref type='bibr'>((u, v)</ns0:ref>). We use w uv instead of w <ns0:ref type='bibr'>((u, v)</ns0:ref>). Note that if we treat an unweighted graph as a weighted graph with weights 0 and 1 (1 if two vertices are connected by an edge, 0 otherwise), then m = m and m S = mS for all S ⊂ V . Associated to G there is its adjacency matrix A(G) = (A i j ) 1≤i, j≤n where A i j = 1 if (i, j) ∈ E, 0 otherwise. We insist that A(G) only take binary values 0 or 1 to indicate existence of edges, even in the case of weighted graphs. For the weights we will always use the weight function w((i, j)) = w i j .</ns0:p><ns0:p>The following definitions will also be needed later on:</ns0:p><ns0:formula xml:id='formula_0'>• c S = |{(u, v) ∈ E : u ∈ S, v ∈ S}|</ns0:formula><ns0:p>is the number of edges connecting S to the rest of the graph.</ns0:p><ns0:p>• cS = ∑ (u,v)∈E:u∈S,v ∈S w uv is the natural extension of c S to weighted graphs; the sum of weights of all edges connecting S to G \ S.</ns0:p><ns0:p>1 For every variable or function defined over the unweighted graph, will use a '∼' to denote its weighted counterpart </ns0:p><ns0:formula xml:id='formula_1'>max u∈S ∑ v ∈S w uv d(u) ↓ Average ODF 1 n s ∑ u∈S |{(u,v)∈E:v ∈S}| d(u) 1 n s ∑ u∈S ∑ v ∈S w uv d(u)</ns0:formula><ns0:p>Table <ns0:ref type='table'>1</ns0:ref>. Community scoring functions f (S) for weighted and unweighted networks.</ns0:p><ns0:p>• d(u) = ∑ v =u w uv is the natural extension of the vertex degree d(u) to weighted graphs; the sum of weights of edges incident to u.</ns0:p><ns0:p>• d S (u) = |{v ∈ S : (u, v) ∈ E}| and dS (u) = ∑ v∈S w uv are the (unweighted and weighted, respectively) degrees 2 restricted to the subgraph S.</ns0:p><ns0:p>• d m and dm are the median values of d(u), u ∈ V . 3</ns0:p><ns0:p>The left column in table <ns0:ref type='table'>1</ns0:ref> shows the community scoring functions for unweighted networks defined in <ns0:ref type='bibr' target='#b41'>(Yang and Leskovec, 2015)</ns0:ref>. These functions characterize some of the properties that are expected in networks with a strong community structure, with more ties between nodes in the same community than connecting them to the exterior. There are scoring functions based on internal connectivity (internal density, edges inside, average degree), external connectivity (expansion, cut ratio) or a combination of both (conductance, normalized cut, and maximum and average out degree fractions). Uparrow (respectively, downarrow) indicates the higher (resp., lower) the scoring function value the stronger the clustering.</ns0:p><ns0:p>On the right column we propose generalizations to the scoring functions which are suitable for weighted graphs while most closely resembling their unweighted counterparts. 
Note that for graphs which only have weights 0 and 1 (essentially unweighted graphs) each pair of functions is equivalent (any definition that didn't satisfy this wouldn't be a generalization at all).</ns0:p><ns0:p>• Internal Density, Edges Inside, Average Degree: These definitions are easily and naturally extended by replacing the number of edges by the sum of their weights.</ns0:p><ns0:p>• Expansion: Average number of edges connected to the outside of the community, per node. For weighted graphs, average sum of edges connected to the outside, per node.</ns0:p><ns0:p>• Cut Ratio: Fraction of edges leaving the cluster, over all possible edges. The proposed generalization is reasonable because edge weights are upper bounded by 1 and therefore relate easily to the unweighted case. In more general weigthed networks, however, this could take values well over 1 while lacking many 'potential' edges (as edges with higher weights would distort the measure). In general bounded networks (with bound other than 1) it would be reasonable to divide the result by the bound, which would result in the function taking values between 0 and 1 (0 with all possible edges being 0 and 1 when all possible edges reached the bound).</ns0:p><ns0:p>• Conductance and Normalized Cut: Again, these definitions are easily extended using the methods described above.</ns0:p><ns0:p>• Maximum and Average Out Degree Fraction: Maximum and average fractions of edges leaving the cluster over the degree of the node. Again, in the weighted case the number of edges is replaced by the sum of edge weights.</ns0:p><ns0:p>Some of the introduced functions (internal density, edges inside, average degree, clustering coefficient) take higher values the stronger the clusterings are, while the others (expansion, cut ratio, conductance, normalized cut, out degree fraction) do the opposite.</ns0:p><ns0:p>Clustering coefficient.</ns0:p><ns0:p>Another possible scoring function for communities is the clustering coefficient or transitivity: the fraction of closed triplets over the number of connected triplets of vertices. A high internal clustering coefficient (computed on the graph induced by the vertices of a community) matches the intuition of a well connected and cohesive community inside a network, but its generalization to weighted networks is not trivial.</ns0:p><ns0:p>There have been several attempts to come up with a definition of the clustering coefficient for weighted networks. One is proposed in <ns0:ref type='bibr' target='#b3'>(Barrat et al., 2004</ns0:ref>) and is given by</ns0:p><ns0:formula xml:id='formula_2'>c i = 1 d(i)(d(i)−1) ∑ j,h w i j +w ih 2 A i j A jh A ih .</ns0:formula><ns0:p>Note that this gives a local (i.e. defined for each vertex) clustering coefficient.</ns0:p><ns0:p>While this may work well on some weighted networks, in the case of complete networks (e.g. 
such as those built from correlation of time series as in <ns0:ref type='bibr' target='#b35'>(Renedo and Arratia, 2016</ns0:ref>)), we obtain</ns0:p><ns0:formula xml:id='formula_3'>c i = 1 d(i)(d(i) − 1) ∑ j,h w i j + w ih 2 (1) = ∑ jh w i j + ∑ jh w ih d(i)(n − 2) • 2 = (n − 2) ∑ j w i j + (n − 2) ∑ h w ih d(i)(n − 2) • 2 = (n − 2) d(i) + (n − 2) d(i) d(i)(n − 2) • 2 = 1,</ns0:formula><ns0:p>which doesn't give any information about the network.</ns0:p><ns0:p>An alternative was proposed in <ns0:ref type='bibr' target='#b23'>(McAssey and Bijma, 2015)</ns0:ref> with complete weighted networks in mind (with weights in the interval [0, 1]), which makes it more adequate for our case.</ns0:p><ns0:p>• For t ∈ [0, 1] let A t be the adjacency matrix with elements A t i j = 1 if w i j ≥ t and 0 otherwise.</ns0:p><ns0:p>• Let C t the clustering coefficient of the graph defined by A t .</ns0:p><ns0:p>• The resulting weighted clustering coefficient is defined as</ns0:p><ns0:formula xml:id='formula_4'>C = 1 0 C t dt (2)</ns0:formula><ns0:p>For networks where the weights are either not bounded or bounded into a different interval than [0, 1], the most natural approach is to simply take</ns0:p><ns0:formula xml:id='formula_5'>C = 1 w w 0 C t dt, (<ns0:label>3</ns0:label></ns0:formula><ns0:formula xml:id='formula_6'>)</ns0:formula><ns0:p>where w can be either the upper bound or, in the case of networks with no natural bound, the maximum edge weight. The computation of this integral, which can be expressed as a sum of the values of C t (a finite amount) is detailed in the appendix (5.1.1).</ns0:p><ns0:p>It is a desirable property that the output of scoring functions remain invariant under uniform scaling, that is, if we multiply all edge weights by a constant φ > 0, as the community structure of the network would be the same. This holds for all of the measures of the third group, which combine the notions of internal and external connectivity.</ns0:p><ns0:p>This means that these scores will be less biased in favour of networks with high overall weight (for the internal connectivity based scores) or low overall weight (for the external connectivity ones). It Manuscript to be reviewed</ns0:p><ns0:p>Computer Science though, the total weight is kept constant, so even scores without this property could still give valuable information.</ns0:p><ns0:p>Let Gφ (V, Ẽφ ) be the weigthed graph obtained by multiplying all edge weights in Ẽ by real positive number φ . In this case, <ns0:ref type='bibr'>u)</ns0:ref>. This means that the internal density, edges inside, average degree, expansion and cut ratio behave linearly (with respect to their edge weights). Conductance, normalized cut and maximum and average out degree fractions, on the other hand, remain constant under these transformations. Since the notion of community structure is generally considered in relation to the rest of the network (a subset of vertices belong to the same community because they are more connected among themselves than to vertices outside of the community), it seems reasonable to consider that the same partitions on two graphs whose weights are the same up to a multiplicative positive constant factor have the same scores. 
This makes the scores in the third group, the only ones for which this property holds, more adequate in principle.</ns0:p><ns0:formula xml:id='formula_7'>n S φ = n S , m S φ = φ m S , c S φ = φ c S , d S φ (u) = φ d S (</ns0:formula><ns0:p>For the chosen definition of clustering coefficient this property also holds, as all terms in the integral in equation ( <ns0:ref type='formula' target='#formula_5'>3</ns0:ref>) behave linearly (the proof is immediate with a change of variables), and that linear factor cancels out.</ns0:p></ns0:div>
<ns0:div><ns0:head>Modularity</ns0:head><ns0:p>As for the modularity <ns0:ref type='bibr' target='#b27'>(Newman, 2006b)</ns0:ref>, it is defined as:</ns0:p><ns0:formula xml:id='formula_8'>Q = 1 2 m ∑ i j w i j − d(i) d( j) 2 m δ (c i , c j ).<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>Then, by multiplying the edges by a constant φ > 0, we get the graph Gφ (V, Ẽφ ) of modularity:</ns0:p><ns0:formula xml:id='formula_9'>Q φ = 1 2 mφ ∑ i j φ w i j − φ 2 d(i) d( j) 2 mφ δ (c i , c j ) = 1 2 mφ φ ∑ i j w i j − d(i) d( j) 2 m δ (c i , c j ) = Q,<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>which means that modularity is also invariant under uniform scaling.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>Randomized graph</ns0:head><ns0:p>The algorithm proposed here to generate a random graph which will serve as a null model is a modification of the switching algorithm described in <ns0:ref type='bibr' target='#b25'>(Milo et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b33'>Rao et al., 1996)</ns0:ref>. It produces a graph with the same weighted degree sequence as the original, but otherwise as independent from it as possible. Each step of this algorithm involves randomly selecting two edges AC and BD and replacing them with the new edges AD and BC (provided they didn't exist already). This leaves the degrees of each vertex A, B,C and D unchanged while shuffling the edges of the graph.</ns0:p><ns0:p>One way to adapt this algorithm to our weighted graphs (more specifically, complete weighted graphs, with weights in [0, 1]) is, given vertices A, B, C and D, transfer a certain weight w from w AC to w AD , and from w BD to w BC 4 . We will select only sets of vertices {A, B,C, D} such that w AC > w AD and w BD > w BC , that is, we will be transferring weight from 'heavy' edges to 'weak' edges. For any value of w, the weighted degree of the vertices remains constant, but if it is not chosen carefully there could be undesirable side effects.</ns0:p></ns0:div>
<ns0:div><ns0:head>Selection of w</ns0:head><ns0:p>We distinguish between two types of weighted networks: those with an upper bound on the possible values of their edge weights given by the nature of the data (usually 1, such as in the Forex correlation network -see §2.7 below), and those without (such as social networks where edge weights count the number of interactions between nodes). Networks with negative weights have not been studied here, so 0 will be a lower bound in all cases.</ns0:p><ns0:p>However, in the case of networks which are upper and lower bounded, this results in a very large number of edge weights attaining the bounds, which might be undesirable (particularly networks like the Forex network, in which very few edges, if any, have weights 0 or 1) and give new randomized graphs that look nothing like the original data. 4 Recall w i j refers to the weight of the edge between vertices i and j</ns0:p></ns0:div>
<ns0:div><ns0:head>6/19</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57675:1:1:NEW 1 May 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The value of w that most closely translates the essence of the switching method for unweighted graphs would perhaps be the maximum that still keeps all edges within their set bounds. This method seems particularly suited to sparse graphs with no upper bound, because it eliminates (by reducing its weight to zero) at least an edge per iteration. Other methods without this property could dramatically increase the edge density of the graph, constantly adding edges by transferring weight to them, while rarely removing them.</ns0:p><ns0:p>However, in the case of very dense graphs such as the Forex correlation network (or any other graph similarly constructed from a correlation measure), this method results in a large number of edge weights attaining the bounds (and in the case of the lower bound 0, removing the edge), which can reduce this density dramatically.</ns0:p><ns0:p>As an alternative, to produce a new set of edges with a similar distribution to those of the original network, we can impose the sample variance (i.e. 1 n−1 ∑ n i, j=1 (w i j − m) 2 , where m is the mean) to remain constant after applying the transformation, and find the appropriate value of w. The variance remains constant if and only if the following equality holds:</ns0:p><ns0:formula xml:id='formula_10'>(w AC − m) 2 + (w BD − m) 2 + (w AD − m) 2 + (w BC − m) 2 = (w AC − w − m) 2 + (w BD − w − m) 2 + (w AD + w − m) 2 + (w BC + w − m) 2 (6) ⇐⇒ 2 w2 + w(−w AC − w BD + w AD + w BC ) = 0.</ns0:formula><ns0:p>The solutions to this equation are w = 0 (which is trivial and corresponds to not applying any transformation to the edge weights) and w = w AC +w BD −w AD −w BC 2 .</ns0:p><ns0:p>While this alternative can result in some weights falling outside of the bounds, in the networks we studied it is very rare, so it is enough to discard these few steps to obtain the desired results.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref> shows how the graph size decreases as the algorithm iterates with the maximum weight method, which also produces a dramatic increase in the variance. The constant variance method on the other hand doesn't remove any edges and the the size stays constant (as well as the variance, which is constant by definition, so the their corresponding lines coincide at 1).</ns0:p><ns0:p>However, applying the constant variance method on networks that are sparsely connected (such as most reasonably big social networks) results in a big increase in the graph size, to the point of actually becoming complete weighted graphs (see figure <ns0:ref type='figure'>2</ns0:ref>). Meanwhile, the maximum weight method doesn't significantly alter the size of the graph.</ns0:p><ns0:p>Therefore, we will use the constant variance method only for very densely connected networks, such as correlation networks, which are in fact complete weighted graphs. For sparse networks, the maximum weight method will be the preferred choice.</ns0:p><ns0:p>Note that if all edge weights are either 0 or 1, in both cases this algorithm is equivalent to the original switching algorithm for discrete graphs, as in every step the transferred weight will be one if the switch can be made without creating double edges, or zero otherwise (which corresponds to the case in which the switch cannot be made).</ns0:p></ns0:div>
<ns0:div><ns0:head>Number of iterations</ns0:head><ns0:p>To determine the number T m of iterations for the algorithm to sufficiently 'shuffle' the network (where m is the size of the graph, and T a parameter we select), we study the variation of information <ns0:ref type='bibr' target='#b24'>(Meilȃ, 2007)</ns0:ref> of the resulting clustering (in this case using the Louvain algorithm, though other clustering algorithms could be used instead) with respect to the initial one. (In section 2.5 we discuss variation of information, and other clustering similarity metrics that we use in this work, and in section 2.6 we detail all the clustering algorithms that we put to test.) Figures <ns0:ref type='figure'>1 and 2</ns0:ref> show a plateau where the variation of information stops increasing after around T = 1 (which corresponds to one iteration per edge of the initial graph). This is consistent with the results for the original algorithm in <ns0:ref type='bibr' target='#b25'>(Milo et al., 2003)</ns0:ref> for unweighted graphs, and we can also select T = 100 as a value that is by far high enough to obtain a sufficiently mixed graph.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3'>Bootstrap with perturbation</ns0:head><ns0:p>Non-parametric bootstrap, with and without perturbation or 'jittering', has been used to study the stability of clusters of euclidean data sets <ns0:ref type='bibr' target='#b17'>(Hennig, 2007)</ns0:ref>. For graphs, bootstrap resampling can be done on the set of vertices, and then build the resampled graph with the edges that the original graph induces on them (i.e. two resampled vertices will be joined by an edge if and only if they were adjacent in the original graph, with the same weight in the case of weighted graphs). As for adding noise to avoid duplicate elements, it can be added to the edge weights. We suggest generating that noise from a normal distribution truncated to stay within the bounds of the edge weights of each graph (which means it can be truncated on one or both sides depending on the graph).</ns0:p><ns0:p>Then, to deal with copies of the same vertex on the resampled graph, it seems necessary to add heavy edges between them to reflect the idea that a vertex and its copy should be similar and well connected between each other. Not doing so would incentivize the clustering methods to separate them in different clusters, because they generally try to separate poorly connected vertices. We can distinguish two cases:</ns0:p><ns0:p>• Graphs with edge weights built from correlations or other similar graphs which by their nature have a specific upper bound on the edge weights (usually 1): We assign the value of the upper bound to the edge weight. After applying the perturbation, this will result in a weight which will be close to that upper bound.</ns0:p><ns0:p>• Other weighted graphs, where no particular upper bound to the edge weights is known: To assign these edges very high weights (to reflect the similarity that duplicate vertices should have in the resampled network) within the context of the network, one option is to sample values from the highest weights (e.g. the top 5%) of the original edge set.</ns0:p></ns0:div>
<ns0:div><ns0:head>8/19</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57675:1:1:NEW 1 May 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.4'>Synthetic ground truth models</ns0:head><ns0:p>Another way of comparing and assessing the fit of a clustering algorithm is to compare it to a ground truth community structure if there is one, which is seldom known in reality. Alternatively one can synthetically generate a graph with a ground truth community structure. This will allow us to verify that the results of the algorithm match the expected outcome. For the particular case of time series correlation networks one can generate the time series using a suitable model that imposes a community structure with respect to correlations, such as the Vector Autoregressive (VAR) model construction in <ns0:ref type='bibr' target='#b2'>(Arratia and Cabaña, 2013)</ns0:ref>, and then compute the values of the edges accordingly.</ns0:p><ns0:p>A common benchmark for clustering algorithm evaluation is the family of graphs with a pre-determined community structure generated by the l-partition model <ns0:ref type='bibr' target='#b5'>(Condon and Karp, 2001;</ns0:ref><ns0:ref type='bibr' target='#b16'>Girvan and Newman, 2002;</ns0:ref><ns0:ref type='bibr' target='#b14'>Fortunato, 2010)</ns0:ref>. It is essentially a block-based extension of the Erdös-Renyi model, with l blocks of g vertices, and with probabilities p in and p out of having edges within the same block and between different blocks respectively.</ns0:p><ns0:p>A more general approach is the stochastic block model (SBM) <ns0:ref type='bibr' target='#b18'>(Holland et al., 1983;</ns0:ref><ns0:ref type='bibr' target='#b40'>Wang and Wong, 1987)</ns0:ref>, which uses a probability matrix P (which has to be symmetric in the undirected case) to determine probabilities of edges between blocks. P i j will be the probability of having an edge between any given pair of vertices belonging to blocks i and j respectively. Then, having higher values in the diagonal than in the rest of the matrix will produce strongly connected communities. Note that subgraph induced by each community is in itself an Erdös-Renyi graph (with p = P ii for the community i). This model also allows having blocks of different sizes. While this model can itself be used for community detection by trying to fit it to any given graph <ns0:ref type='bibr' target='#b22'>(Lee and Wilkinson, 2019)</ns0:ref>, here we will simply use it as a tool to generate graphs of a predetermined community structure.</ns0:p><ns0:p>To obtain a weighted SBM (WSBM) graph, we propose a variation of the model which produces multigraphs, which can then be easily converted into weighted graphs by setting all edge weights as their corresponding edge count. In this case, probability matrix of the original SBM will be treated as the matrix of expectations between edges of each pair of blocks. Then, we simply add edges one by one with the appropriate probability (the same at each step) that will allow each weight expectation to match its defined value. By definition, the probability of the edge added at step k to join vertices i and j is given by</ns0:p><ns0:formula xml:id='formula_11'>P(e k = (i, j)) = E i j #steps ,<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>where E i j is the expected number of edges between them given by the expectation matrix. 
The sum of these probabilities for all vertices must add up to one, which gives</ns0:p><ns0:formula xml:id='formula_12'>#steps = 1 2 ∑ (|C i ||C j |E i, j ).<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>Note that the 1 2 factor is added because we are using undirected graphs, and we don't want to count edges (i, j) and ( j, i) twice.</ns0:p><ns0:p>This process produces a binomially distributed weight for each edge, though these distributions are not independent, so independently sampling each edge weight from the appropriate binomial distribution would not be equivalent.</ns0:p><ns0:p>We will use a graph sampled from this model with block sizes (40, 25, 25, 10), with a parametrized expectation matrix: </ns0:p><ns0:formula xml:id='formula_13'>    0.</ns0:formula><ns0:p>With λ = 1 the network will be quite uniform, but as it increases, the high values in the diagonal compared to the rest of the matrix will result in a very strong community structure, which should be detected by the clustering algorithms.</ns0:p><ns0:p>There are other possible extensions of the stochastic block model to weighted networks such as <ns0:ref type='bibr' target='#b0'>(Aicher et al., 2014)</ns0:ref> most cases), the edge distributions obtained in <ns0:ref type='bibr' target='#b0'>(Aicher et al., 2014)</ns0:ref> are not independent from each other, so the results are not exactly equivalent. For instance, in our case the total network weight is fixed and will not vary between samples.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.5'>Clustering similarity measures</ns0:head><ns0:p>To compare and measure how similar two clusterings of the same network are, we will use two measures based on information theory, the Variation of Information (VI) and the Reduced Mutual Information (RMI), and another more classical measure, the (adjusted) Rand index which relates to the accuracy. All of these measures are constructed upon the contingency table of the labeling, which is summarized in table 2, and the terms are explained below. </ns0:p><ns0:formula xml:id='formula_15'>c rs = |P r ∩ P ′ s |<ns0:label>(12)</ns0:label></ns0:formula><ns0:p>Define the probability P(r) (respectively, P(s)) of an object chosen uniformly at random has label r (resp. s), and the probability P(r, s) that it has both labels r and s, that is</ns0:p><ns0:formula xml:id='formula_16'>P(r) = a r n , P(s) = b s n , P(r, s) = c rs n (13)</ns0:formula></ns0:div>
<ns0:div><ns0:head>Variation of information</ns0:head><ns0:p>The variation of information between two clusterings, a criterion introduced in <ns0:ref type='bibr' target='#b24'>(Meilȃ, 2007)</ns0:ref>, is defined as follows.</ns0:p><ns0:p>Definition 2.1. The entropy of a partition P = {P 1 , ..., P R } of a set is given by:</ns0:p><ns0:formula xml:id='formula_17'>H (r) = − R ∑ r=1 P(r) log (P(r)) ,<ns0:label>(14)</ns0:label></ns0:formula><ns0:p>Definition 2.2. The mutual information is defined as: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_18'>I(r; s) =</ns0:formula></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Intuitively, the mutual information measures how much knowing the membership of an element of the set in partition P reduces the uncertainty of its membership in P ′ . This is consistent with the fact that the mutual information is bounded between zero and the individual partition entropies 0 ≤ I(r; s) ≤ min{H (r), H (s)},</ns0:p><ns0:p>and the right side equality holds if and only if one of the partitions is a refinement of the other.</ns0:p><ns0:p>Consequently, the variation of information will be 0 if and only if the partitions are equal (up to permutations of indices of the parts), and will get bigger the more the partitions differ. It also satisfies the triangle inequality, so it is a metric in the space of partitions of any given set.</ns0:p></ns0:div>
<ns0:div><ns0:head>Reduced Mutual Information</ns0:head><ns0:p>The mutual information <ns0:ref type='bibr' target='#b6'>(Cover and Thomas, 1991)</ns0:ref>, often in its normalized form is one of the most widely used measures to compare graph partitions in cluster analysis. More recently <ns0:ref type='bibr' target='#b28'>(Newman et al., 2020)</ns0:ref> proposed the Reduced Mutual Information (RMI), an improved version which corrects the high mutual information values given to quite dissimilar partitions in some cases. For instance, if one of the partition is the trivial one splitting the network into n clusters of one element each, the standard mutual information will always take the maximal value (1, in the case of the normalized mutual information), even if the other is completely different. More generally, any partitions will always have maximal mutual information with all of their filtrations. This is crucial when comparing clustering algorithms, as some algorithms will output trivial partitions into single-element clusters when they fail to find a clustering structure. Therefore, it would not be possible to reliably measure the stability of these clustering methods with the standard mutual information.</ns0:p><ns0:p>Given r and s two labelings of a set of n elements, the Reduced Mutual Information is defined as:</ns0:p><ns0:formula xml:id='formula_20'>RMI(r; s) = I(r; s) − 1 n log Ω(a, b)<ns0:label>(18)</ns0:label></ns0:formula><ns0:p>where Ω(a, b) is an integer equal to the number of R × S non-negative integer matrices with row sums a = {a r } and column sums b = {b s }. Details on how to compute or approximate Ω(a, b) are given in the appendix 5.2.</ns0:p><ns0:p>The Reduced Mutual Information can also be defined in a normalized form (NRMI), in the same way the standard mutual information is, by dividing it by the average of the values of the reduced mutual information of labelings a and b with themselves:</ns0:p><ns0:formula xml:id='formula_21'>NRMI(r; s) = RMI(r; s) 1 2 [RMI(r; r) + RMI(s; s)] = I(r; s) − 1 n log Ω(a, b) 1 2 [H(r) + H(s) − 1 n (log Ω(a, a) + log Ω(b, b))] (19)</ns0:formula><ns0:p>We will use this normalized form to be able to compare more easily the results of networks with different number of nodes, as well as to compare them to other similarity measures.</ns0:p></ns0:div>
<ns0:div><ns0:head>Rand index</ns0:head><ns0:p>The Rand Index (RI) and the different measures derived from it <ns0:ref type='bibr' target='#b19'>(Hubert and Arabie, 1985)</ns0:ref> are based on the idea of counting pairs of elements that are classified similarly and dissimilarly across the two partitions P and P ′ . There are four types of pairs of elements:</ns0:p><ns0:p>• type I: elements are in the same class both in P and P ′</ns0:p><ns0:p>• type II: elements are in different classes both in P and P ′</ns0:p><ns0:p>• type III: elements are in different classes in P and in the same class in P ′ .</ns0:p><ns0:p>• type IV: elements are in the same class in P and different classes in P ′ .</ns0:p><ns0:p>Then, similar partitions would have many pairs of elements of types I and II (agreements) and few of type III and IV (disagreements). The Rand index is defined as the ratio of agreements over the total number of pairs of elements.</ns0:p></ns0:div>
<ns0:div><ns0:head>11/19</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57675:1:1:NEW 1 May 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Using the terms of the contingency table (table 2), the Rand index is given by</ns0:p><ns0:formula xml:id='formula_22'>RI(r, s) = n 2 + 2 ∑ r s n rs 2 − [ R ∑ r=1 a r 2 + S ∑ s=1 b s 2 ] (20)</ns0:formula><ns0:p>An adjusted form of the Rand index <ns0:ref type='bibr' target='#b19'>(Hubert and Arabie, 1985)</ns0:ref> introduces a correction to account for all the pairings that match on both partitions because of random chance. The Adjusted Rand Index (ARI) is defined as:</ns0:p><ns0:formula xml:id='formula_23'>ARI(r; s) = Index − Expected Index Maximum Index − Expected Index , (<ns0:label>21</ns0:label></ns0:formula><ns0:formula xml:id='formula_24'>)</ns0:formula><ns0:p>which in terms of the contingency table (table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>) can be expressed as:</ns0:p><ns0:formula xml:id='formula_25'>ARI(r; s) = ∑ rs c rs 2 − [∑ r a r 2 ∑ s b s 2 ]/ n 2 1 2 [∑ r a r 2 + ∑ s b s 2 ] − [∑ r a r 2 ∑ s b s 2 ]/ n 2 (22)</ns0:formula></ns0:div>
<ns0:div><ns0:head n='2.6'>Clustering algorithms</ns0:head><ns0:p>We have selected five well known state-of-the-art clustering algorithms based on different approaches, and all suitable for weighted graphs. They will be applied to all of the networks to then evaluate the results:</ns0:p><ns0:p>1. Louvain method <ns0:ref type='bibr' target='#b4'>(Blondel et al., 2008)</ns0:ref>, a multi-level greedy algorithm for modularity optimization.</ns0:p><ns0:p>We use the original algorithm, without the resolution parameter (i.e. with resolution γ = 1).</ns0:p><ns0:p>2. Leading eigenvector method <ns0:ref type='bibr' target='#b26'>(Newman, 2006a)</ns0:ref>, based on spectral optimization of modularity.</ns0:p><ns0:p>3. Label propagation <ns0:ref type='bibr' target='#b32'>(Raghavan et al., 2007)</ns0:ref>, a fast algorithm in which nodes are iteratively assigned to the communities most frequent in their neighbors.</ns0:p><ns0:p>4. Walktrap <ns0:ref type='bibr' target='#b30'>(Pons and Latapy, 2005)</ns0:ref>, based on random walks. 5. Spin-glass <ns0:ref type='bibr' target='#b34'>(Reichardt and Bornholdt, 2006)</ns0:ref>, tries to find communities in graphs via a spin-glass model and simulated annealing.</ns0:p><ns0:p>In any application, the choice of the clustering algorithm will be hugely dependant on the characteristics of the dataset, as well as its size. The methods proposed here, though, can be applied to evaluate any combination of weighted graph and clustering algorithm.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.7'>Data</ns0:head><ns0:p>• Zachary's karate club: Social network of a university karate club <ns0:ref type='bibr' target='#b42'>(Zachary, 1977)</ns0:ref>. The vertices are its 34 members, and the edge weights are the number of interactions between each pair of them. In this case, we have a 'ground truth' clustering, which corresponds to the split of the club after a conflict, resulting in two clusters.</ns0:p><ns0:p>• Forex network: Network built from correlations between time series of exchange rate returns <ns0:ref type='bibr' target='#b35'>(Renedo and Arratia, 2016)</ns0:ref>. It was built from the 13 most traded currencies, with data from January 2009. It is a complete graph of 78 edges (corresponding to pairs of currencies) and has edge weights bounded between 0 and 1.</ns0:p><ns0:p>• News on Corporations network: In this network, the nodes are a list of relevant companies, while the weighted edges between them are set by the number of times they have appeared together in news stories over a certain period of time (in this instance, on 2019-03-13). It has 899 nodes and 13469 edges.</ns0:p><ns0:p>• Social network: A Facebook-like social network for students from the University of California, Irvine <ns0:ref type='bibr' target='#b29'>(Opsahl and Panzarasa, 2009)</ns0:ref>. It has 1899 nodes (students) and 20296 edges, weighted by the number of characters of the messages sent between users.</ns0:p></ns0:div>
<ns0:div><ns0:p>• Enron emails: a network composed of email communications among Enron employees <ns0:ref type='bibr' target='#b21'>(Klimt and Yang, 2004)</ns0:ref>. The version of the dataset used here is available in the igraphdata R package <ns0:ref type='bibr' target='#b7'>(Csardi, 2015)</ns0:ref>, and consists of a multigraph with 184 vertices (users) and 125,409 edges, corresponding to emails between users. We convert it to an undirected weighted graph by using as weights for the edges the number of edges in the multigraph (i.e. the number of emails between the corresponding users).</ns0:p></ns0:div>
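The Enron conversion described above can be sketched with igraph and igraphdata as follows; the attribute-combination settings shown are one plausible way to collapse parallel edges into summed weights, not necessarily the exact code used for the paper:

```r
library(igraph)
library(igraphdata)

data(enron)           # directed multigraph: one edge per email
E(enron)$weight <- 1  # each parallel edge contributes weight 1
# Collapse all edges between each pair of users into a single undirected
# edge whose weight is the number of emails exchanged:
g <- as.undirected(enron, mode = "collapse",
                   edge.attr.comb = list(weight = "sum", "ignore"))
```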
<ns0:div><ns0:head n='2.8'>Software</ns0:head><ns0:p>All the methods proposed here are implemented in R (R Core Team, 2015), and will be released in an upcoming package. This includes all of the significance functions and the adaptations to the existing bootstrap methods to make them work on weighted graphs. All the code interacts with igraph objects <ns0:ref type='bibr' target='#b8'>(Csardi and Nepusz, 2006)</ns0:ref> for easy testing and manipulation of the graphs, as well as allowing the use of already implemented clustering methods and other existing functions for exploring graphs. The more computationally intensive parts such as the switching model have been written in C++ for better efficiency, and are called from R through Rcpp <ns0:ref type='bibr' target='#b13'>(Eddelbuettel and François, 2011;</ns0:ref><ns0:ref type='bibr' target='#b12'>Eddelbuettel, 2013)</ns0:ref>.</ns0:p><ns0:p>Our code is available online: https://github.com/martirm/cluster_assessment.</ns0:p></ns0:div>
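Assuming the repository above is structured as an installable R package (an assumption on our part; its README is authoritative), it can be tried directly from GitHub:

```r
# install.packages("remotes")  # one-time setup
remotes::install_github("martirm/cluster_assessment")
```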
<ns0:div><ns0:head n='3'>DISCUSSION</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.1'>Cluster Significance</ns0:head><ns0:p>As explained in the Materials and Methods section, to test for cluster significance of a given clustering algorithm, we apply the scoring functions defined in section 2.1 to the clustering produced on the original graph and on randomized versions obtained by the method described in section 2.2. It should be expected that whenever the communities found by an algorithm on the original graph are significant, they will receive better scores than those found by the same algorithm on a graph with no actual community structure.</ns0:p><ns0:p>The results of computing these scores on the clustering obtained by the algorithms on each of the networks can be seen on tables S1.1, S1.2, S1.3, S1.4, S1.5 and S1.6 (recall that ↑ identifies scores for which higher is best, and ↓ means lower is best). For each combination of scoring function and algorithm, we represent its value on the original network, its mean across multiple samples of its randomized switching model, and the percentile rank of the original score in the distribution of randomized graph scores. This percentile rank value serves as a statistical test of significance for each of the scores: a score is significant if its value is more extreme (either higher or lower, depending on its type) than most of the distribution.</ns0:p><ns0:p>It is important to note that some of the scores greatly depend on the number of clusters, and cannot adequately compare partitions in which that number differs. For instance, internal density can easily be high on small communities, while it will generally take lower values on bigger ones, even when they are very well connected. This can result in networks with no apparent community structure having high overall internal density scores just because they are partitioned into many small clusters.</ns0:p><ns0:p>In comparison, scores that combine both internal and external connectivity (conductance, normalized cut, out degree fractions), clustering coefficient, and modularity suffer less from this effect and seem more adequate in most circumstances. These also happen to be the scores that are invariant under the multiplication of the weights by positive constants (see section 2.1).</ns0:p><ns0:p>We suggest focusing on the relative scores (the score of the actual network over the mean of the randomized ones) to simplify the process of interpreting the results, especially when trying to compare graphs of different nature. With relative scores, anything that differs significantly from 1 will suggest that the clustering is strong. For instance, in figure <ns0:ref type='figure' target='#fig_7'>3</ns0:ref> we have the modularity of the stochastic block model for each algorithm, and for different values of the parameter λ (which will give increasingly stronger clusters). While the algorithms find results closer to the ground truth the bigger λ is, only the relative scores give us that insight. 
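The relative score and percentile rank described above reduce to a few lines of R; in this illustrative sketch (names are ours), scores_random would hold the scores obtained by clustering many switching-model samples:

```r
# Relative score and percentile rank of the original score within the
# distribution of scores from randomized (switching-model) graphs.
significance_summary <- function(score_orig, scores_random) {
  c(relative   = score_orig / mean(scores_random),
    percentile = mean(scores_random <= score_orig))
}
```

A significant clustering then shows percentile ranks close to 1 for scores where higher is better, and close to 0 where lower is better.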
However, when comparing several clustering methods on the same network (and not simply trying to determine if a single given method produces significant results), absolute scores are more meaningful to determine which one is best.</ns0:p><ns0:p>For the weighted stochastic block model graph, the clustering algorithms get results closer to the ground truth clustering as the λ parameter grows, as one would expect, and for λ > 30 the results perfectly match the ground truth clustering outcome in almost all cases (a bit earlier for the Louvain, Walktrap and spin-glass cases). The relative scores match these results, and get better as λ increases as well (figure <ns0:ref type='figure' target='#fig_7'>3</ns0:ref>). Note that in figure <ns0:ref type='figure' target='#fig_7'>3</ns0:ref>, there are some jumps for the relative modularity in the spin-glass case, which are caused by the instability of this algorithm (see section 3.2). This effect is no longer present when the structure of the network is stronger (λ > 8).</ns0:p><ns0:p>In table <ns0:ref type='table'>S1</ns0:ref>.1, corresponding to λ = 15, we can see how for the Louvain algorithm, the scores are more extreme (lower when lower is better, higher when higher is better) than those of the randomized network in almost all circumstances. In the case of the leading eigenvector algorithm the scores are slightly worse, but almost all of them still fall within statistical significance (if we consider p-values < 0.05). In both cases, the only metric that is better in the random network is internal density, due to the smaller size of the detected clusters (which is why by itself internal density isn't a reliable metric, as even in a network with very poor community structure it will be high for certain partitions into very small clusters that arise by chance). For both the label propagation and the Walktrap algorithms, the real network scores are not as close to the edge of the distribution of random scores, but they are still much better than the mean in all meaningful cases (the only exceptions are the internal density and edges inside, which are hugely dependent on cluster size and are therefore inadequate to compare partitions with a different number of clusters).</ns0:p><ns0:p>In the case of the karate club network (table <ns0:ref type='table'>S1</ns0:ref>.2), the label propagation algorithm gets the closest results to the ground truth clustering, and this is reflected in most scores being better than those of other methods. This doesn't apply to the modularity though, which is always higher for Louvain and spin-glass, which produce identical clusters (this is to be expected, because Louvain is a method based on modularity optimization).</ns0:p><ns0:p>On the Forex graph (table <ns0:ref type='table'>S1</ns0:ref>.3), we can see that both the leading eigenvector and Walktrap algorithms produce almost identical results splitting the network into two clusters, while the spin-glass algorithm splits it into three and Louvain into four. 
The scores which are based on external connectivity give better results to Walktrap and leading eigenvector, while the spin-glass partition has a slightly better clustering coefficient and better modularity (with Louvain having very similar values in those two scores).</ns0:p><ns0:p>It is also important to disregard the results of the scoring functions whenever the algorithms fail to distinguish any communities and either group the whole network together or separate each element into its own cluster (such as the label propagation algorithm on the Forex network, seen in table <ns0:ref type='table'>S1</ns0:ref>.3). In this case, the scores which are based on external connectivity will be optimum, as the cut c̃_S of the partition is 0, but that of course doesn't give any information at all. In addition, the normalized cut and conductance might not be well defined in this case, as it is possible to have a division by 0 for some of the clusters.</ns0:p><ns0:p>As for the news on corporations graph (table <ns0:ref type='table'>S1</ns0:ref>.4), the results and in particular the number of clusters vary greatly between algorithms (from 82 clusters for the Walktrap to only 2 for the label propagation). While the label propagation algorithm scores well on some measures due to successfully splitting off very weakly connected components of the network, others such as the clustering coefficient or internal density are very low. Louvain and spin-glass have very similar scores across most measures and seem to be the best, though leading eigenvector does have better conductance and normalized cut. In this case the high variation in the number of clusters across algorithms that still score highly could suggest that there is not a single predominant community structure in the network.</ns0:p><ns0:p>In the Enron graph (table <ns0:ref type='table'>S1</ns0:ref>.5), Louvain also produces the best results for most scores, particularly in conductance and normalized cut, and it significantly surpasses all other algorithms while having larger clusters, with the only exception of label propagation, which partitions the network into much smaller clusters. The spin-glass algorithm stands out as having by far the worst results across all scores, even though its number of clusters (12) is the same as in leading eigenvector and similar to Louvain.</ns0:p><ns0:p>For the social network (table <ns0:ref type='table'>S1</ns0:ref>.6), the Louvain, leading eigenvector and label propagation algorithms produce the same number of clusters (with spin-glass being also very close), which allows an unbiased comparison of scores. In this case, leading eigenvector has better results for almost all scores, except for the clustering coefficient and modularity, for which Louvain is again the best algorithm. This huge disparity may be explained by the fact that modularity compares edge weights to a null model that considers the degrees of their incident vertices, and doesn't only discriminate between internal and external edges (as most of the scoring functions do).</ns0:p><ns0:p>Overall the Louvain algorithm seems to be the best at finding significant clusters, performing consistently well on a variety of weighted networks of very different nature. 
It is worth noting, though, that there are some limitations to it (and to all modularity-based methods in general) in terms of the resolution limit <ns0:ref type='bibr' target='#b15'>(Fortunato and Barthélemy, 2007)</ns0:ref> that can appear when there are small communities in large networks, although there are methods to address it, such as the use of a resolution parameter <ns0:ref type='bibr' target='#b1'>(Arenas et al., 2008)</ns0:ref>.</ns0:p></ns0:div>
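For reference, recent versions of igraph expose such a resolution parameter directly in their Louvain implementation (γ = 1 recovers the variant used in this paper; values above 1 favour more, smaller communities):

```r
# Hypothetical usage; requires an igraph version with the resolution argument.
communities_fine <- cluster_louvain(g, resolution = 1.5)
```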
<ns0:div><ns0:head n='3.2'>Cluster Stability</ns0:head><ns0:p>Using the non-parametric bootstrap method described in section 2.3, we resample the networks 999 times (R = 999), apply the clustering algorithms to them, and compare the results to the original clustering with the metrics from section 2.5. Stable clusterings are expected to persist through the process, giving small mean values of the variation of information, and high (close to 1) values of the normalized reduced mutual information and the Rand index. The results of the same method applied to the randomized versions of each network (see section 2.2) are also included, to have reference values for the stability of networks where there is no community structure. If the values of the clustering similarity measures, for the original and randomized networks, happened to be close together, that would suggest that the chosen algorithm produces a very unstable clustering on the network.</ns0:p><ns0:p>We observe in table <ns0:ref type='table'>S1</ns0:ref>.7 that for the stochastic block model example graph, all algorithms except for spin-glass produce very stable clusters, which is consistent with the fact that we chose parameters to give it a very strong community structure. Meanwhile, clustering algorithms applied to the Zachary and Forex networks (tables S1.8 and S1.9) produce clusters which are not as stable, but still much better than their baseline randomized counterparts. Note that the stability values for the label propagation algorithm in the Forex network (table <ns0:ref type='table'>S1</ns0:ref>.9) should be ignored, as in that instance the output is a single cluster (see table <ns0:ref type='table'>S1</ns0:ref>.3), which doesn't give any information. It is clear that while it works on less dense networks, the label propagation algorithm is not useful for complete weighted networks and it fails to give results that are at all meaningful.</ns0:p><ns0:p>On the news on corporations graph (table <ns0:ref type='table'>S1</ns0:ref>.10), spin-glass is again the most unstable algorithm, with results for the RMI and ARI (which are both close to 0) that suggest that the clusters of the original network and all the resampled ones are completely unrelated. In this case the label propagation algorithm is the most stable, while the rest of the algorithms are not as good. This might be in part explained by the fact that its clusters are much bigger than in other networks, which allows them to remain strongly connected after small perturbations.</ns0:p><ns0:p>Finally, we observe that algorithms on the Enron graph (…).</ns0:p><ns0:p>As a general remark on stability from all our experiments, the spin-glass algorithm is the most unstable across the networks we tested, which are a diverse representation of different kinds of weighted networks.</ns0:p></ns0:div>
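A minimal sketch of this resampling loop in R (function names are ours; for brevity it omits the edge-weight perturbation and reclusters the graph induced by the unique resampled vertices):

```r
boot_stability <- function(g, cluster_fn, similarity, R = 999) {
  base <- membership(cluster_fn(g))
  replicate(R, {
    vids <- unique(sample(vcount(g), replace = TRUE))  # bootstrap the vertices
    gb   <- induced_subgraph(g, vids)                  # induced resampled graph
    similarity(base[vids], membership(cluster_fn(gb))) # e.g. ARI, NRMI or VI
  })
}

# Example: distribution of adjusted Rand indices for Louvain
# (mclust::adjustedRandIndex is one available ARI implementation).
ari_values <- boot_stability(g, cluster_louvain, mclust::adjustedRandIndex)
```

Means of the resulting VI, NRMI and ARI distributions are the stability statistics reported in the tables.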
<ns0:div><ns0:head n='4'>CONCLUSIONS</ns0:head><ns0:p>We have successfully observed how the community scoring functions, combined with the switching model, can easily help distinguish networks which have a community structure from others that don't.</ns0:p><ns0:p>A combination of a network and a clustering algorithm can be said to produce significant clusters when their scores stand out from the distribution of scores produced by the same algorithm on the collection of randomized graphs produced by the switching model. The experiments conducted on the stochastic block model networks of varying community strength support this hypothesis. This will be useful when working with networks for which there is little information available, and one wants to determine whether the results obtained from any given clustering algorithms do reflect an actual community structure or if they are simply given by chance.</ns0:p><ns0:p>We recommend avoiding the scoring functions that can be heavily influenced by variables like the number of clusters or their size (like internal density, which favours smaller clusters), because the information they provide is hard to interpret in a systematic manner. In comparison, functions that combine internal and external connectivity, like conductance or normalized cut, seem more robust. However, we observed a tendency of these measures to favour partitions into fewer, bigger clusters, which makes it difficult to compare partitions with a different number of parts. In contrast, neither modularity nor the clustering coefficient seems to be so dependent on the number of communities in the partition, which is a relevant advantage.</ns0:p><ns0:p>We remark that our approach consists of a global analysis of the partitions, but it is possible to perform similar evaluations based on individual scores of each cluster. In this case, some of the scoring functions that haven't proved very useful might provide a more meaningful insight into the local structure of the partition, as it is possible to have both strong and weak clusters in the same network.</ns0:p><ns0:p>Additionally, the use of the switching model to generate randomized graphs provides a valuable point of reference, especially when we don't have much information on the structure of the network. The methods proposed here to test cluster significance can also be used with any other scoring functions, which could even be customized depending on the characteristics that one might want to prioritize in any given network clustering, such as giving more emphasis to internal connectivity, or external connectivity, or scores that naturally favour larger or smaller clusters.</ns0:p><ns0:p>On a more particular note, and according to our analyses, the Louvain algorithm, and to a lesser extent, the Walktrap algorithm, seem to be the most stable while producing significant clusterings, as specified by our scoring functions and across all networks considered. This reaffirms Louvain as one of the state-of-the-art clustering algorithms.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>APPENDIX</ns0:head><ns0:p>5.1 Computational complexity of scoring functions</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1.1'>Weighted clustering coefficient</ns0:head><ns0:p>Let Γ be the number of connected triplets in the graph and γ the number of closed triplets (i.e. 3 times the number of triangles). As before, Γ(t) and γ(t) are their respective values when only edges with weight greater than or equal to t are considered. Then, the clustering coefficient or transitivity is defined as:</ns0:p><ns0:formula xml:id='formula_26'>C = \frac{1}{\bar{w}} \int_{t \geq 0} \frac{\gamma(t)}{\Gamma(t)}\, dt \quad (23)</ns0:formula><ns0:p>This is an integral of a step function that takes a finite number of values (bounded by the number of different edge weights), which we will compute as follows:</ns0:p><ns0:p>1. Construct a hash table of all edges with their corresponding weights, to be able to check whether there is an edge between any two vertices (and obtain its weight) in constant time. Complexity: O(m)</ns0:p><ns0:p>2. Construct a hash table for each vertex containing all its neighbors. This can be done by iterating once over the edges and updating the corresponding tables at each step. It will be used to iterate over the connected triplets incident to each vertex. Complexity: O(m)</ns0:p><ns0:p>3. Construct a sorted list containing the edge weights at which either a connected triplet or a triangle appears (i.e. the maximum edge weight of that triangle or triplet), with an associated variable for each indicating whether it corresponds to a triangle or a triplet. For this, we iterate over the connected triplets using the hash tables from step 2, and for each, we check if it also forms a triangle by checking the hash table from step 1 (which allows each iteration to be done in constant amortized time). This step has complexity O(Γ log Γ), as the list has Γ + γ elements, and (Γ + γ) ∈ O(Γ).</ns0:p><ns0:p>4. We iterate the list from step 3 and compute the cumulative sums of connected triplets and closed connected triplets (which correspond to γ(t) and Γ(t) for increasing values of t in the list). This gives us all values of γ(t)/Γ(t), from which we compute the integral (equation 23). This involves O(Γ) steps of constant complexity.</ns0:p><ns0:p>Therefore, the overall complexity of the algorithm is O(m + Γ log Γ). Because Γ is bounded by m², we can also express the complexity only in terms of m (which will then be O(m² log m)), but that bound is not tight in most graphs.</ns0:p></ns0:div>
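For illustration, equation (23) can also be computed directly in R by thresholding the graph at each distinct edge weight; this is a simple O(|W| · m) version of the step-function summation, not the O(m + Γ log Γ) algorithm described above:

```r
weighted_clustering_coef <- function(g) {
  w_levels <- sort(unique(E(g)$weight))
  w_max <- max(w_levels)
  # C_t is constant on each interval (w_{i-1}, w_i]; evaluate it once per level
  Ct <- sapply(w_levels, function(t)
    transitivity(delete_edges(g, E(g)[E(g)$weight < t]), type = "global"))
  Ct[is.nan(Ct)] <- 0                  # thresholds with no connected triplets
  sum(Ct * diff(c(0, w_levels))) / w_max
}
```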
<ns0:div><ns0:head n='5.1.2'>Other scoring functions</ns0:head><ns0:p>Computing m̃, m̃_S, c̃_S (for all values of S), as well as all vertex degrees and out degrees, has complexity O(m), as it can be done sequentially by reading the edge list and updating the appropriate values as necessary. This means that all scoring functions except for the clustering coefficient, which are derived from these values, can be computed very efficiently.</ns0:p></ns0:div>
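The single pass over the edge list is straightforward to express; the sketch below (our own illustrative code) accumulates m̃_S and c̃_S for every cluster and returns the conductance of each, matching the definition in table 1:

```r
cluster_conductance <- function(g, memb) {
  el <- as_edgelist(g, names = FALSE)
  w  <- E(g)$weight
  k  <- max(memb)
  m_in <- m_cut <- numeric(k)
  for (e in seq_along(w)) {                    # single O(m) pass
    c1 <- memb[el[e, 1]]; c2 <- memb[el[e, 2]]
    if (c1 == c2) {
      m_in[c1] <- m_in[c1] + w[e]              # internal weight m~_S
    } else {
      m_cut[c1] <- m_cut[c1] + w[e]            # cut weight c~_S, counted
      m_cut[c2] <- m_cut[c2] + w[e]            # for both endpoint clusters
    }
  }
  m_cut / (2 * m_in + m_cut)                   # conductance per cluster
}
```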
<ns0:div><ns0:head n='5.2'>Methods for counting contingency tables</ns0:head><ns0:p>To compute the number of contingency tables with fixed margins needed to obtain the value of the reduced mutual information, we mainly use the analytical approximation suggested by <ns0:ref type='bibr' target='#b28'>Newman et al. (2020)</ns0:ref>, which works whenever the number of clusters is substantially smaller than the number of nodes. This works well in most of the cases we study, except for the News graph when clustered with the Walktrap algorithm, which produces many single node clusters. For this case, we use a hybrid approach combining the analytical approximation for the clusters with more than one element, and then extending it to the full contingency table with the Markov chain Monte Carlo method described by <ns0:ref type='bibr'>Diaconis and Gangolli (1995)</ns0:ref>. This estimates the size of the set by defining a nested sequence of subsets and obtaining the ratio between the size of each one of them and its predecessor with a Monte Carlo approximation.</ns0:p><ns0:p>Our solution consists of sorting and rearranging the rows and columns of the original contingency table so that smaller elements sit at the top left part of the table. Then, we use the analytical approximation on the submatrix formed by rows and columns with sums strictly greater than one (which will sit on the bottom right corner). This will be the size of the first subset of the chain, and the rest are estimated successively with the Markov chain Monte Carlo method.</ns0:p><ns0:p>This method works well on the contingency tables generated by the Walktrap clustering on our News graph, unlike the analytical approximation alone, which is inaccurate, or the Monte Carlo method alone, which is much slower. However, if the RMI is to be used to compare partitions of very large graphs, establishing some general criteria to determine the largest subset that can be analytically estimated with enough accuracy might be needed, with the goal of minimizing the need for costly Monte Carlo approximations. This topic has a lot of potential for future work, which we hope to address in our clustAnalytics package.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Normalized size, variance and variation of information for the Louvain clustering after applying the proposed algorithm on the Forex graph. Horizontal axis is on logarithmic scale.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Consider a set of n elements and two labelings or partitions, one labeled by integers r = 1, . . . , R and the other labeled by integers s = 1, . . . , S, say P = {P_1, . . . , P_R} and P′ = {P′_1, . . . , P′_S}. Define a_r as the number of elements with label r in the first partition, b_s as the number of elements with label s in the second partition, and c_{rs} as the number of elements with label r in the first partition and label s in the second. Formally, a_r = |P_r| = \sum_s c_{rs}.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>I(r; s) = \sum_{r,s} P(r, s) \log \frac{P(r, s)}{P(r)P(s)} (15) Definition 2.3. The variation of information of partitions P and P′ is given by: VI(r; s) = H(r) + H(s) − 2I(r; s)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Scores of the weighted stochastic block model as a function of the parameter λ , for each of the algorithms</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. VI distance between the ground truth clustering and the result of each of the algorithms for the weighted stochastic block model (WSBM), as a function of the parameter λ .</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Contingency table of partitions P and P′, with labelings r and s.</ns0:figDesc><ns0:table>
        P′_1   P′_2   ...   P′_S  | sum
P_1     c_11   c_12   ...   c_1S  | a_1
P_2     c_21   c_22   ...   c_2S  | a_2
...     ...    ...    ...   ...   | ...
P_R     c_R1   c_R2   ...   c_RS  | a_R
sum     b_1    b_2    ...   b_S   | n = ∑ c_ij
</ns0:table></ns0:figure>
<ns0:note place='foot' n='2'>We assume the weight function w_uv is defined for every pair of vertices u, v of the weighted graph, with w_uv = 0 if there is no edge between them.</ns0:note><ns0:note place='foot' n='3'>To prevent confusion between the function d_S(·) and the median value d_m (which only depends on G), we will always refer to subgraphs of G with uppercase letters.</ns0:note>
</ns0:body>
" | "Dear Editor,
First of all, we would like to thank you for managing our manuscript and
the reviewers for their constructive remarks. We completed the requested major
revision of our manuscript entitled Clustering assessment in weighted networks,
Paper ID: CS-2021:02:57675, submitted to PeerJ Computer Science, on the basis
of the observations made by the reviewers.
We apologise for the longer time we took in revising our work in order to
comply with all reviewers' comments. This was mostly due to an error that was
overlooked by all (reviewers and us alike), and which we found when working through
the reviewers’ observations. It was an error in the values of the Reduced Mutual
Information for the walktrap algorithm in table 13, which was caused by the
data not meeting the assumptions for the analytical approximation used. We
implemented an alternative method which combines the analytical approximation on a subset of the table with a Markov chain Monte Carlo approximation
to extend it to the full set of contingency tables, which we briefly describe
in the appendix.
We have removed figure 3 from the original version and moved all our experimental results tables to a separate Supplemental Tables S1 file. We state
this at the end of the Introduction.
Below you will find the details of our responses to each of the reviewers' observations. The reviewers' observations and our responses have been organized
as a series of numbered questions (Q) and answers (A).
Thanks again for all your feedback and support that helped us improve the
paper.
Best regards,
Argimiro Arratia and Martí Renedo Mirambell
Reviewer 1
(Q1) In equation 1 explain the steps that were performed.
(A1) We have explained the substitutions that are used in the equation,
which should make the steps clearer.
(Q2) The Louvain algorithm has parameters that can be modified, the parameters used must be presented.
(A2) We have clarified that we use resolution γ = 1.
(Q3) The 1/2 used in equation 21 can be applied to graphs in addition to
the directed ones, for example in weighted graphs (Ex: value -0.5/0.5).
(A3) The need for the 1/2 factor is to maintain consistency of the model.
The procedure uses an unweighted multigraph which is then converted into a
weighted graph by taking as edge weights the number of parallel edges between
each pair of vertices. That’s why in this case we only need to distinguish between
the directed multigraph case (which wouldn’t need the 1/2 factor) and the
undirected one, which is what we use in the article.
(Q4) The data in section 2.7 must be made available for download or the
work must explain where they are.
(A4) Clarified in section 2.8 that the datasets are available in the Github
repository together with the code.
(Q5) The article could feature a discussion session
(A5) We have reorganized the paper in such a way that it includes an explicit
Discussion section. See also answers to Reviewer 3 where further details on the
rewriting of the paper are given.
(Q6) The ”scoring functions” of section 2.1 must be referenced.
(A6) The definitions of scoring functions for weighted networks are originally
ours. These extend related scoring functions for unweighted networks listed in
Yang and Leskovec 2015. We have referenced this contribution of Yang and
Leskovec at the beginning of section 2.1.
(Q7) Present equation 2 also as a summation.
(A7) We have added an appendix where we describe how the integral is
computed. We have put a reference to that section after the equation and
direct the reader to the implementation. Thanks for this observation.
(Q8) In equation 18 add parentheses.
(A8) Parentheses added.
(Q9) The computational complexity of the methods used must be explained.
Many use the adjacency matrix, it has a high cost of n².
(A9) We have included an appendix dedicated to explaining the computational
complexity of the scoring functions. In general we do not compute from the
adjacency matrix but we represent graphs as adjacency lists. Thanks for this
observation.
(Q10) Say the scale of the graphics. The y-axis in figure 4 did not appear
to me.
(A10) We have improved the graphic and added the y-axis label to make
it clearer that it represents the Variation of Information (VI).
Reviewer 2
(Q1) For the abstract, the authors should give some specific performance analyses for the verification experiments.
(A1) In the abstract we have written a general statement about performance: ”When applying the proposed methods to these synthetic ground truth
networks’ clusters, as well as to other weighted networks with known community
structure these correctly identify the best performing algorithms, which suggests
their adequacy for cases where the clustering structure is not known.” We feel
that adding technical details about performance analysis is not fitting for an abstract. We have added an Appendix where we discuss these performance details
for algorithms, and within the Discussion section we comment extensively
on the results.
(Q2) For the introduction, the background and the research significance
should be re-expressed. Furthermore, the challenges and contributions should
be explicitly summarized. There needs a paragraph at the end of this part to
describe the organization of this paper.
(A2) We have added a paragraph at the end of the Introduction giving an
outline of the paper. Thanks for reminding us of this. We have also rewritten
parts of the Introduction to stress our major contributions and major goals,
which are to give criteria for assessing clustering significance and stability. We
have made clear what is meant by significance and stability. Thanks for
pointing this out to us.
(Q3) There exist a lot of acronyms without definitions for the first time, such
as VAR, SBM, WSBM, and wSBM, etc., and some full names have been referred
to repeatedly after the first acronyms, such as reduced mutual information.
Please check the whole manuscript to revise them.
(A3) We have thoroughly revised the paper and introduced the acronyms
where each definition first appears (VI, SBM, WSBM, RMI, NRMI, VAR).
(Q4) In section 2.1, the definition and presentation of modularity should be
given sub-title to highlight liking other metrics. Further, the variable Q has
the same name with different definitions as in Section “Number of iterations”.
Please revise the second variable name. In a complex network, Q is always
considered modularity.
(A4) We added the modularity subtitle, and we renamed the variable Q to
T to avoid confusion with the modularity.
(Q5) For Figure 2, the “Forex club graph” should be corrected as “Forex
graph” and “club” is “karate club graph”. For the second sub-figure, are the
lines of variance and VI overlapped, please give the related explanation.
(A5) The names have been fixed. As for the second subfigure, the lines that
overlap correspond to size and variance, which both remain constant. We added
a comment explicitly pointing this out to make it clearer to the reader.
(Q6) Based on Figures 4 and 5, the trend analysis referred to the λ > 20
and λ = 15 may be inaccurate. Furthermore, why Figure 4 has not compared
with the algorithm spin-class?
(A6) The spin-glass algorithm was indeed missing from the plots in figures 4 and 5,
and it has been added. The analysis of figure 4 was fixed, and we now mention
λ > 30 at which point all the Variation of Information plots sit at or very close
to 0 (which means the clusters found by the algorithms perfectly match the
ground truth). Also a comment on the behaviour of spin-glass in figure 4 for
lower λ values has been added. Finally, note that in the final version figures 4
and 5 are renumbered 3 and 4, since we have suppressed the original figure 3.
(Q7) In Section 3, some of the analysis referred to the Figures or Tables are
inaccurate, for example, “the leading eigenvector algorithm the effect isn’t as
pronounced” in line 451 on page 13, “internal density” in line 476 on page 13.
Please check the related content in terms of the tables.
(A7) We have rephrased this sentence to reflect the fact that the scores
for leading eigenvector are slightly worse than for Louvain (”the effect isn’t as
pronounced” could indeed be misleading and needlessly ambiguous).
(Q8) Please improve the pixel for each figure.
(A8) We have improved the figures for clarity. We
tried to make them large enough to be readable while not occupying too much
space. They are all in vector format to make sure they look sharp and can
be zoomed into if necessary.
(Q9) The manuscript has been presented in a disorganized way, the theoretical presentation and the experimental design should be given in different
sections. Furthermore, the experiments and analysis should be reorganized.
(A9) We have reorganized the paper so that theory and experiments are in
different sections. And within theory (Materials and Methods section) we clearly
separate in subsections the different methods we designed and their use. In the
Discussion of experiments we also clearly separate the tests for significance from
the tests for stability. See further comments on reorganization below. Thanks
for pointing this out to us.
Reviewer 3
(Q1) Despite the importance of evaluating the performance of such methods, the
current work, in fact, is not able of telling readers how well a method performs.
As the authors themselves say at the conclusion, the metrics help distinguish
networks, which have a community structure from others that do not. It does
not tell how well the metric can cluster or form groups, maintaining properties.
(A1) We emphasize the importance of distinguishing networks with community structure from those that don't, because this is key in measuring cluster
significance. On networks with no significant community structure, many clustering algorithms will still give us a partition, so it is important to know whether
that output is actually meaningful, especially whenever we study a network for
which we have no prior information.
(Q2) I suggest authors work on article organization, first by presenting the
things you propose. Second, in the following section, I suggest presenting the
experimental design. In this section, you can present carefully all the material
and methods. You detail the way you generate random graphs. The way you
generate perturbations.
(A2) We made some improvements to the organization of the article, following the structure suggested by the reviewer. We added a few paragraphs at the
beginning of section 2 where we describe the experimental design, referencing
the subsections where each particular method is described, as well as making
some changes and better presenting the discussion section. This should help
the reader follow the structure and organization of the article.
(Q3) Validity of the findings. the findings do no support the main goal of
the paper. As you are presenting new metrics because the existing metrics do
not perform well (or are not appropriate) to weighted graphs, you must show
this clearly. Moreover, it is not clear that your metrics show how a clustering
method performs. The way you conclude the article, you are just telling us that
you can differentiate random graphs from graphs with clusters.
(A3) Since our work focuses exclusively on weighted networks we disregarded
any unweighted metrics because they can’t properly capture the properties of
the network (for this the edge weights are essential). For instance, correlation
networks such as our Forex network simply become an n-clique if they are
treated as unweighted networks, rendering all these metrics useless. Following
this reviewer's advice we have reorganized the presentation and rewritten part
of the conclusions, making our objectives clear, which are to give criteria
for assessing significance and stability of clustering algorithms.
(Q4) The abstract is confusing. I suggest focusing on the real paper contribution. Developing a way to generate random graphs is not a contribution of
the article. There are many methods to this purpose.
(A4) We have reorganized some ideas in the abstract and introduction to
emphasize our contributions. Thank you for pointing this out.
We understand that the reviewer refers to the switching model method for
generating random graphs. In that regard, while there are many existing methods to produce random graphs with prescribed degree distributions, ours is
specifically made for weighted graphs, and we propose two variants that suit
different kinds of graphs (complete graphs where all the information is given by
the edge weights, and graphs with more sparse sets of edges). This is important
when testing cluster significance (as we want to compare to random graphs with
otherwise similar structure), which is why we consider it a contribution.
" | Here is a paper. Please give your review comments after reading it. |
156 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>We provide a systematic approach to validate the results of clustering methods on weighted networks, particularly for the cases where the existence of a community structure is unknown. Our validation of clustering comprises a set of criteria for assessing their significance and stability. To test for cluster significance, we introduce a set of community scoring functions adapted to weighted networks, and systematically compare their values to those of a suitable null model. For this we propose a switching model to produce randomized graphs with weighted edges while maintaining the degree distribution constant. To test for cluster stability, we introduce a non-parametric bootstrap method combined with similarity metrics derived from information theory and combinatorics. In order to assess the effectiveness of our clustering quality evaluation methods, we test them on synthetically generated weighted networks with a ground truth community structure of varying strength based on the stochastic block model construction. When applying the proposed methods to these synthetic ground truth networks' clusters, as well as to other weighted networks with known community structure, these correctly identify the best performing algorithms, which suggests their adequacy for cases where the clustering structure is not known. We test our clustering validation methods on a varied collection of well known clustering algorithms applied to the synthetically generated networks and to several real world weighted networks. All our clustering validation methods are implemented in R, and will be released in the upcoming package clustAnalytics.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Clustering of networks is a popular research field, and a wide variety of algorithms have been proposed over the years. However, determining how meaningful the results are can often be difficult, as well as choosing which algorithm better suits a particular data set. This paper focuses specifically on weighted networks (that is, those in which the connections between nodes have an assigned numerical value representing some property of the data), and we propose novel methods to validate the community partitions of these networks obtained by any given clustering algorithm. In particular our clustering validation methods focus on two of the most important aspects of cluster assessment: the significance and the stability of the resulting clusters.</ns0:p><ns0:p>We consider clusters produced by a clustering algorithm to be significant if there are strong connections within each cluster, and weaker connections (or fewer edges) between different clusters. This notion can be quantified and formalized by applying several community scoring functions (also known as quality functions in <ns0:ref type='bibr' target='#b14'>Fortunato (2010)</ns0:ref>), that gauge either the intra-cluster or inter-cluster density. Then, it can be determined that the partition of a network into clusters is significant if it obtains better scores than those for a comparable network with uniformly distributed edges.</ns0:p><ns0:p>On the other hand, stability measures how much a clustering remains unchanged under small perturbations of the network. In the case of weighted networks, these could include the addition and removal of vertices, as well as the perturbation of edge weights. This is consistent with the idea that meaningful clusters should capture an inherent structure in the data and not be overly sensitive to small and/or local variations, or the particularities of the clustering algorithm.</ns0:p><ns0:p>Our goal is to provide a systematic approach to perform these two important clustering validation criteria, which can be used when the underlying structure of a network is unknown, because in this case different algorithms might produce completely different results, and it is not trivial to determine which ones are more adequate, if any at all.</ns0:p><ns0:p>To assess the significance of communities structure, in general weighted networks, we provide a collection of community scoring functions that measure some topological characteristics of the groundtruth communities as defined by Yang and Leskovec in <ns0:ref type='bibr' target='#b41'>(Yang and Leskovec, 2015)</ns0:ref> for unweighted networks. Most of these topological characteristics focus on the relation between the external and internal connectivity of clusters, density of edges and degree distributions. Our scoring functions are proper extensions of those in <ns0:ref type='bibr' target='#b41'>(Yang and Leskovec, 2015)</ns0:ref> to weighted networks. A separate case is the clustering coefficient, a popular scoring function in the analysis of unweighted networks. We examined several existing definitions for the weighted case, being most relevant to us the descriptions given by <ns0:ref type='bibr' target='#b3'>Barrat et al. (2004)</ns0:ref>, <ns0:ref type='bibr' target='#b36'>Saramäki et al. 
(2007)</ns0:ref>, and <ns0:ref type='bibr' target='#b23'>McAssey and Bijma (2015)</ns0:ref>, and found the latter to be the most versatile (for instance, it can be used in complete graphs where all the information is given by the values of the edges, such as those generated from correlation networks). The clustering coefficient of <ns0:ref type='bibr' target='#b23'>McAssey and Bijma (2015)</ns0:ref> is defined in terms of an integral, and we provide an efficient way of computing it. Then, to evaluate the significance (in a statistical sense) of the scores produced by any scoring function, we compare them against null models with similar properties but without any expectations of a community structure. For this we propose an extension to weighted graphs of the switching model <ns0:ref type='bibr' target='#b25'>(Milo et al., 2003)</ns0:ref> which produces random graphs by rewiring edges while maintaining the vertex degree sequence. The idea is that a significant community in any given network should present much better scores than those of the randomly generated ones.</ns0:p><ns0:p>As for the stability of clusters, it has been studied more widely for algorithms that work on Euclidean data (as opposed to networks, weighted or not). For instance, von Luxburg (2010) uses both resampling and adding noise to generate perturbed versions of the data. <ns0:ref type='bibr' target='#b17'>Hennig (2007)</ns0:ref> introduces bootstrap resampling (with and without perturbation) to evaluate cluster stability. Also for Euclidean data, <ns0:ref type='bibr' target='#b37'>Vendramin et al. (2010)</ns0:ref> introduce a systematic approach for cluster evaluation that combines cluster quality criteria with similarity and dissimilarity metrics between partitions, and searches for correlations between them. Our approach consists of a bootstrap technique with perturbations adapted to clustering on networks, that resembles what Hennig does for Euclidean data. That is, the set of vertices is resampled multiple times, and the clustering algorithms are applied to the resulting induced networks. In this case, the perturbations are applied to the edge weights after resampling the vertices, but the standard bootstrap method without perturbation can be used on all networks, weighted or not.</ns0:p><ns0:p>To compare how the clusters of the resampled networks differ from the originals, we use three measures. The adjusted Rand index <ns0:ref type='bibr' target='#b19'>(Hubert and Arabie, 1985)</ns0:ref> is a similarity measure that counts the rate of pairs of vertices that are in agreement on both partitions, corrected for chance. Additionally, we use measures derived from information theory to compare partitions such as the recently introduced Reduced Mutual Information <ns0:ref type='bibr' target='#b28'>(Newman et al., 2020)</ns0:ref>, which corrects some of the issues with the original mutual information or its normalized version <ns0:ref type='bibr' target='#b9'>(Danon et al., 2005)</ns0:ref>. For example, giving maximal scores when one of the partitions is trivial, which in our case would mean that failed algorithms that split most of the network into single vertex clusters would be considered very stable. Other attempts at providing adjusted versions of the mutual information include <ns0:ref type='bibr' target='#b11'>Dom (2002)</ns0:ref>, <ns0:ref type='bibr' target='#b38'>Vinh et al. 
(2010)</ns0:ref> and <ns0:ref type='bibr' target='#b43'>Zhang (2015)</ns0:ref>.</ns0:p><ns0:p>The other information theory measure we employ for the sake of comparison and control is the Variation of Information (VI) <ns0:ref type='bibr' target='#b24'>(Meilă, 2007)</ns0:ref>. The VI is a distance measure (as opposed to a similarity measure, like the Rand index and mutual information) that actually satisfies the properties of a proper metric.</ns0:p><ns0:p>We apply these clustering quality evaluation methods on several real world weighted networks together with a varied collection of well known clustering algorithms. Additionally, we also test them on synthetic weighted networks based on the stochastic block model construction <ns0:ref type='bibr' target='#b18'>(Holland et al., 1983;</ns0:ref><ns0:ref type='bibr' target='#b40'>Wang and Wong, 1987)</ns0:ref>, which allows us to have predefined clusters whose strength can be adjusted through a parameter, and which we can compare to the results of the algorithms for their evaluation.</ns0:p><ns0:p>Our main contributions are the following: a switching model for randomizing weighted networks while maintaining the degree distribution, and its use together with the scoring functions we adapted to the weighted case, to provide a general approach to the validation of significance of clustering results. An implementation of a bootstrap method with perturbation adapted to weighted networks, to test for
<ns0:div><ns0:p>stability. A model to generate benchmark weighted networks based on the stochastic block model, which we use to test our methodology for stability and significance of clusters. Additionally we contribute with an R package clustAnalytics, which contains all the functions and methods for cluster analysis that are explained in this paper.</ns0:p><ns0:p>The remainder of this paper is organized as follows. Section 2 contains the details of our methods for assessing clustering significance and stability, as well as a description of the clustering algorithms we will put to test and the datasets. Section 3 presents the discussion of our experiments. Section 4 presents our conclusions. Finally, we put in an Appendix (section 5) the technical details on the time complexity of our algorithms. Our experimental results tables are reported separately in the Supplemental Tables S1 file, which the reader can consult conveniently.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>MATERIALS AND METHODS</ns0:head><ns0:p>To determine if the partition of a graph into communities given by a clustering algorithm provides significant results, we use the scoring functions defined in section 2.1. Our method consists of evaluating these functions on clusters produced by a given algorithm on both the original graph and on multiple samples of randomized graphs generated from the original graph (see section 2.2). Then, for each function we see how the score of the original graph clusters compares to the scores of the randomized graph clusters, as we can define a relative score (score of the original graph over the mean of the scores of the random graphs). We also observe the percentile rank of the original score of the graph in the distribution of scores from the randomized graphs. Depending on the nature of the scoring function, a significant cluster structure will be associated with percentile ranks either close to 1 (for scores in which higher is better) or 0 (when lower is better).</ns0:p><ns0:p>For testing cluster stability, we implement a bootstrap resampling on the set of vertices of the network, plus the addition of a perturbation to the weights of the edges in the induced graph. The details of this methodology are described in section 2.3. The Variation of Information (VI), Reduced Mutual Information (RMI) and Adjusted Rand Index (ARI) introduced in section 2.5 are used as similarity measures. Then, the bootstrap statistics are the values of these similarity measures comparing the resampled bootstrap graphs to the original one.</ns0:p><ns0:p>In our experiments we evaluate the results of clustering on a selection of networks with different community structure (section 2.7) with several well-known clustering algorithms (section 2.6). Additionally, we also test the algorithms on synthetic graphs with a preset community structure constructed using stochastic block models (section 2.4). By varying one of the parameters of the model (λ), we generate networks that range from being mostly uniform (that is, with no community structure) to having very strong communities. This allows us to see how our evaluation methods respond in a controlled environment where the existence or not of strong clusters in the network is known.</ns0:p></ns0:div>
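In code, the significance procedure amounts to scoring the clustering of the original graph against clusterings of many degree-preserving randomizations. As a stand-in for the weighted switching model of section 2.2 (which is not part of stock igraph), the sketch below uses igraph's unweighted rewiring:

```r
# One randomized sample: rewire edges while keeping the degree sequence.
g_random <- rewire(g, keeping_degseq(niter = 100 * ecount(g)))
score_random <- modularity(g_random,
                           membership(cluster_louvain(g_random)))
```

Repeating this for many samples yields the reference distribution against which the original graph's score is compared.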
<ns0:div><ns0:head n='2.1'>Community scoring functions</ns0:head><ns0:p>Here we will provide functions which will evaluate the division of networks into clusters, specifically when the edges have weights. Using the scoring functions for communities in unweighted networks given in <ns0:ref type='bibr' target='#b41'>(Yang and Leskovec, 2015)</ns0:ref> as a reference, we propose generalizations of most of them to the weighted case.</ns0:p></ns0:div>
<ns0:div><ns0:head>Basic definitions.</ns0:head><ns0:p>Let G(V, E) be an undirected graph of order n = |V| and size m = |E|. In the case of a weighted graph 1 G(V, Ẽ), we will denote m̃ = ∑_{e ∈ Ẽ} w(e) the sum of all edge weights. Given a subset of vertices S ⊂ V, we have n_S = |S|, m_S = |{(u, v) ∈ E : u ∈ S, v ∈ S}|, and in the weighted case m̃_S = ∑_{(u,v) ∈ Ẽ : u,v ∈ S} w((u, v)). We use w_uv instead of w((u, v)). Note that if we treat an unweighted graph as a weighted graph with weights 0 and 1 (1 if two vertices are connected by an edge, 0 otherwise), then m = m̃ and m_S = m̃_S for all S ⊂ V. Associated to G there is its adjacency matrix A(G) = (A_ij)_{1≤i,j≤n}, where A_ij = 1 if (i, j) ∈ E, 0 otherwise. We insist that A(G) only take binary values 0 or 1 to indicate existence of edges, even in the case of weighted graphs. For the weights we will always use the weight function w((i, j)) = w_ij.</ns0:p><ns0:p>The following definitions will also be needed later on:</ns0:p><ns0:p>• c_S = |{(u, v) ∈ E : u ∈ S, v ∉ S}| is the number of edges connecting S to the rest of the graph.</ns0:p><ns0:p>• c̃_S = ∑_{(u,v) ∈ E : u ∈ S, v ∉ S} w_uv is the natural extension of c_S to weighted graphs; the sum of weights of all edges connecting S to G \ S.</ns0:p><ns0:p>• d̃(u) = ∑_{v ≠ u} w_uv is the natural extension of the vertex degree d(u) to weighted graphs; the sum of weights of edges incident to u.</ns0:p><ns0:p>• d_S(u) = |{v ∈ S : (u, v) ∈ E}| and d̃_S(u) = ∑_{v ∈ S} w_uv are the (unweighted and weighted, respectively) degrees 2 restricted to the subgraph S.</ns0:p><ns0:p>• d_m and d̃_m are the median values of d(u) and d̃(u), u ∈ V. 3</ns0:p><ns0:table>
Scoring function    unweighted f(S)                                weighted f(S)
Internal density ↑  m_S / (n_S(n_S−1)/2)                           m̃_S / (n_S(n_S−1)/2)
Edges inside ↑      m_S                                            m̃_S
Average degree ↑    2m_S / n_S                                     2m̃_S / n_S
Expansion ↓         c_S / n_S                                      c̃_S / n_S
Cut ratio ↓         c_S / (n_S(n−n_S))                             c̃_S / (n_S(n−n_S))
Conductance ↓       c_S / (2m_S + c_S)                             c̃_S / (2m̃_S + c̃_S)
Normalized cut ↓    c_S/(2m_S+c_S) + c_S/(2(m−m_S)+c_S)            c̃_S/(2m̃_S+c̃_S) + c̃_S/(2(m̃−m̃_S)+c̃_S)
Max ODF ↓           max_{u∈S} |{(u,v)∈E : v∉S}| / d(u)             max_{u∈S} ∑_{v∉S} w_uv / d̃(u)
Average ODF ↓       (1/n_S) ∑_{u∈S} |{(u,v)∈E : v∉S}| / d(u)       (1/n_S) ∑_{u∈S} ∑_{v∉S} w_uv / d̃(u)
</ns0:table><ns0:p>Table 1. Community scoring functions f(S) for weighted and unweighted networks.</ns0:p><ns0:p>The left column in table 1 shows the community scoring functions for unweighted networks defined in <ns0:ref type='bibr' target='#b41'>(Yang and Leskovec, 2015)</ns0:ref>. These functions characterize some of the properties that are expected in networks with a strong community structure, with more ties between nodes in the same community than connecting them to the exterior. There are scoring functions based on internal connectivity (internal density, edges inside, average degree), external connectivity (expansion, cut ratio) or a combination of both (conductance, normalized cut, and maximum and average out degree fractions). An up arrow (respectively, down arrow) indicates that the higher (resp., lower) the scoring function value, the stronger the clustering.</ns0:p><ns0:p>On the right column we propose generalizations to the scoring functions which are suitable for weighted graphs while most closely resembling their unweighted counterparts. Note that for graphs which only have weights 0 and 1 (essentially unweighted graphs) each pair of functions is equivalent (any definition that did not satisfy this would not be a generalization at all).</ns0:p><ns0:p>• Internal Density, Edges Inside, Average Degree: These definitions are easily and naturally extended by replacing the number of edges by the sum of their weights.</ns0:p><ns0:p>• Expansion: Average number of edges connected to the outside of the community, per node.
For weighted graphs, it is the average, per node, of the sum of weights of the edges connecting the community to the outside.</ns0:p><ns0:p>• Cut Ratio: Fraction of edges leaving the cluster, over all possible edges. The proposed generalization is reasonable when edge weights are upper bounded by 1, as they then relate easily to the unweighted case. In more general weighted networks, however, this function could take values well over 1 while the cluster still lacks many 'potential' edges (as edges with higher weights would distort the measure). In networks bounded by some value other than 1, it would be reasonable to divide the result by the bound, so that the function takes values between 0 and 1 (0 when all possible edges have weight 0, and 1 when all possible edges reach the bound).</ns0:p><ns0:p>• Conductance and Normalized Cut: Again, these definitions are easily extended using the methods described above.</ns0:p><ns0:p>• Maximum and Average Out Degree Fraction: Maximum and average fractions of edges leaving the cluster over the degree of the node. Again, in the weighted case the number of edges is replaced by the sum of edge weights.</ns0:p><ns0:p>Some of the introduced functions (internal density, edges inside, average degree, clustering coefficient) take higher values the stronger the clusterings are, while the others (expansion, cut ratio, conductance, normalized cut, out degree fraction) do the opposite.</ns0:p><ns0:p>Clustering coefficient.</ns0:p><ns0:p>Another possible scoring function for communities is the clustering coefficient or transitivity: the fraction of closed triplets over the number of connected triplets of vertices. A high internal clustering coefficient (computed on the graph induced by the vertices of a community) matches the intuition of a well connected and cohesive community inside a network, but its generalization to weighted networks is not trivial.</ns0:p><ns0:p>There have been several attempts to come up with a definition of the clustering coefficient for weighted networks. One is proposed in <ns0:ref type='bibr' target='#b3'>(Barrat et al., 2004)</ns0:ref> and is given by</ns0:p><ns0:formula xml:id='formula_2'>\tilde{c}_i = \frac{1}{\tilde{d}(i)(d(i)-1)} \sum_{j,h} \frac{w_{ij}+w_{ih}}{2}\, A_{ij} A_{jh} A_{ih}.</ns0:formula><ns0:p>Note that this gives a local (i.e. defined for each vertex) clustering coefficient.</ns0:p><ns0:p>While this may work well on some weighted networks, in the case of complete networks (e.g.
such as those built from correlation of time series, as in <ns0:ref type='bibr' target='#b35'>(Renedo and Arratia, 2016)</ns0:ref>), we obtain</ns0:p><ns0:formula xml:id='formula_3'>\tilde{c}_i = \frac{1}{\tilde{d}(i)(d(i)-1)} \sum_{j,h} \frac{w_{ij}+w_{ih}}{2} = \frac{\sum_{j,h} w_{ij} + \sum_{j,h} w_{ih}}{\tilde{d}(i)(n-2) \cdot 2} = \frac{(n-2)\sum_{j} w_{ij} + (n-2)\sum_{h} w_{ih}}{\tilde{d}(i)(n-2) \cdot 2} = \frac{(n-2)\tilde{d}(i) + (n-2)\tilde{d}(i)}{\tilde{d}(i)(n-2) \cdot 2} = 1,   (1)</ns0:formula><ns0:p>which does not give any information about the network.</ns0:p><ns0:p>An alternative was proposed in <ns0:ref type='bibr' target='#b23'>(McAssey and Bijma, 2015)</ns0:ref> with complete weighted networks in mind (with weights in the interval [0, 1]), which makes it more adequate for our case.</ns0:p><ns0:p>• For t ∈ [0, 1], let A^t be the adjacency matrix with elements A^t_ij = 1 if w_ij ≥ t, and 0 otherwise.</ns0:p><ns0:p>• Let C_t be the clustering coefficient of the graph defined by A^t.</ns0:p><ns0:p>• The resulting weighted clustering coefficient is defined as</ns0:p><ns0:formula xml:id='formula_4'>C = \int_0^1 C_t\, dt.   (2)</ns0:formula><ns0:p>For networks where the weights are either not bounded or bounded in an interval other than [0, 1], the most natural approach is to simply take</ns0:p><ns0:formula xml:id='formula_5'>C = \frac{1}{w} \int_0^{w} C_t\, dt,   (3)</ns0:formula><ns0:p>where w can be either the upper bound or, in the case of networks with no natural bound, the maximum edge weight. The computation of this integral, which can be expressed as a sum of the (finitely many) values of C_t, is detailed in the appendix (5.1.1).</ns0:p><ns0:p>It is a desirable property that the output of the scoring functions remains invariant under uniform scaling, that is, when we multiply all edge weights by a constant φ > 0, since the community structure of the network stays the same. This holds for all of the measures of the third group, which combine the notions of internal and external connectivity.</ns0:p><ns0:p>This means that these scores will be less biased in favour of networks with high overall weight (for the internal connectivity based scores) or low overall weight (for the external connectivity ones).</ns0:p></ns0:div>
<ns0:div><ns0:p>It is particularly interesting for networks with weights that are not naturally upper bounded by one, and it facilitates comparisons between networks with completely different weight distributions. When we compare each network's scores to those of a randomized counterpart generated by the switching model, though, the total weight is kept constant, so even scores without this property can still give valuable information.</ns0:p><ns0:p>Let G̃_φ(V, Ẽ_φ) be the weighted graph obtained by multiplying all edge weights in Ẽ by a real positive number φ. In this case, n_{S,φ} = n_S, m̃_{S,φ} = φ m̃_S, c̃_{S,φ} = φ c̃_S, and d̃_{S,φ}(u) = φ d̃_S(u). This means that the internal density, edges inside, average degree, expansion and cut ratio behave linearly (with respect to their edge weights). Conductance, normalized cut and maximum and average out degree fractions, on the other hand, remain constant under these transformations. Since the notion of community structure is generally considered in relation to the rest of the network (a subset of vertices belong to the same community because they are more connected among themselves than to vertices outside of the community), it seems reasonable to consider that the same partitions on two graphs whose weights are the same up to a multiplicative positive constant factor have the same scores. This makes the scores in the third group, the only ones for which this property holds, more adequate in principle.</ns0:p><ns0:p>For the chosen definition of clustering coefficient this property also holds, as all terms in the integral in equation (3) behave linearly (the proof is immediate with a change of variables), and that linear factor cancels out.</ns0:p></ns0:div>
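<ns0:p>As an illustration of these definitions, the following R sketch (using igraph; the function name and the example graph are ours, not part of any released package) computes the weighted conductance of Table 1 for a vertex set S and checks its invariance under uniform scaling of the weights:</ns0:p>

library(igraph)

# Weighted conductance of a vertex set S, following Table 1:
# conductance(S) = c~_S / (2 m~_S + c~_S). A minimal sketch.
weighted_conductance <- function(g, S) {
  Sc <- setdiff(seq_len(vcount(g)), S)
  m_S <- sum(E(g)[S %--% S]$weight)   # total weight inside S
  c_S <- sum(E(g)[S %--% Sc]$weight)  # total weight of the cut
  c_S / (2 * m_S + c_S)
}

set.seed(1)
g <- sample_gnp(30, 0.2)
E(g)$weight <- runif(ecount(g))
S <- 1:10

g2 <- g
E(g2)$weight <- 5 * E(g)$weight       # uniform scaling, phi = 5
all.equal(weighted_conductance(g, S),
          weighted_conductance(g2, S))  # TRUE: invariant under scaling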
<ns0:div><ns0:head>Modularity</ns0:head><ns0:p>As for the modularity <ns0:ref type='bibr' target='#b27'>(Newman, 2006b)</ns0:ref>, it is defined as:</ns0:p><ns0:formula xml:id='formula_8'>Q = \frac{1}{2\tilde{m}} \sum_{ij} \left( w_{ij} - \frac{\tilde{d}(i)\,\tilde{d}(j)}{2\tilde{m}} \right) \delta(c_i, c_j).   (4)</ns0:formula><ns0:p>Then, by multiplying the edge weights by a constant φ > 0, we get the graph G̃_φ(V, Ẽ_φ), of modularity:</ns0:p><ns0:formula xml:id='formula_9'>Q_\varphi = \frac{1}{2\tilde{m}\varphi} \sum_{ij} \left( \varphi w_{ij} - \frac{\varphi^2\, \tilde{d}(i)\,\tilde{d}(j)}{2\tilde{m}\varphi} \right) \delta(c_i, c_j) = \frac{\varphi}{2\tilde{m}\varphi} \sum_{ij} \left( w_{ij} - \frac{\tilde{d}(i)\,\tilde{d}(j)}{2\tilde{m}} \right) \delta(c_i, c_j) = Q,   (5)</ns0:formula><ns0:p>which means that modularity is also invariant under uniform scaling.</ns0:p></ns0:div>
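<ns0:p>The invariance in equation (5) is easy to verify numerically with igraph's modularity(); a minimal sketch, in which the random graph and the partition are arbitrary choices for illustration:</ns0:p>

library(igraph)

set.seed(2)
g <- sample_gnp(40, 0.15)
E(g)$weight <- runif(ecount(g))
memb <- membership(cluster_louvain(g))  # any fixed partition works here

q  <- modularity(g, memb, weights = E(g)$weight)
qf <- modularity(g, memb, weights = 7 * E(g)$weight)  # phi = 7
all.equal(q, qf)  # TRUE, as in equation (5)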
<ns0:div><ns0:head n='2.2'>Randomized graph</ns0:head><ns0:p>The algorithm proposed here to generate a random graph which will serve as a null model is a modification of the switching algorithm described in <ns0:ref type='bibr' target='#b25'>(Milo et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b33'>Rao et al., 1996)</ns0:ref>. It produces a graph with the same weighted degree sequence as the original, but otherwise as independent from it as possible. Each step of this algorithm involves randomly selecting two edges AC and BD and replacing them with the new edges AD and BC (provided they did not exist already). This leaves the degrees of each vertex A, B, C and D unchanged while shuffling the edges of the graph.</ns0:p><ns0:p>One way to adapt this algorithm to our weighted graphs (more specifically, complete weighted graphs, with weights in [0, 1]) is, given vertices A, B, C and D, to transfer a certain weight w from w_AC to w_AD, and from w_BD to w_BC.⁴ We will select only sets of vertices {A, B, C, D} such that w_AC > w_AD and w_BD > w_BC, that is, we will be transferring weight from 'heavy' edges to 'weak' edges. For any value of w, the weighted degree of the vertices remains constant, but if w is not chosen carefully there could be undesirable side effects.</ns0:p></ns0:div>
<ns0:div><ns0:head>Selection of w</ns0:head><ns0:p>We distinguish between two types of weighted networks: those with an upper bound on the possible values of their edge weights given by the nature of the data (usually 1, such as in the Forex correlation network -see §2.7 below), and those without (such as social networks where edge weights count the number of interactions between nodes). Networks with negative weights have not been studied here, so 0 will be a lower bound in all cases.</ns0:p><ns0:p>⁴ Recall that w_ij refers to the weight of the edge between vertices i and j.</ns0:p></ns0:div>
<ns0:div><ns0:p>The value of w that most closely translates the essence of the switching method for unweighted graphs would perhaps be the maximum that still keeps all edges within their set bounds. This method seems particularly suited to sparse graphs with no upper bound, because it eliminates (by reducing its weight to zero) at least one edge per iteration. Other methods without this property could dramatically increase the edge density of the graph, constantly adding edges by transferring weight to them while rarely removing them.</ns0:p><ns0:p>However, in the case of graphs which are upper and lower bounded, and in particular very dense graphs such as the Forex correlation network (or any other graph similarly constructed from a correlation measure), this method results in a large number of edge weights attaining the bounds (and, in the case of the lower bound 0, removing the edge). This is undesirable for networks like the Forex network, in which very few edges, if any, have weights 0 or 1: it can reduce the density dramatically and give randomized graphs that look nothing like the original data.</ns0:p><ns0:p>As an alternative, to produce a new set of edges with a distribution similar to that of the original network, we can impose that the sample variance (i.e. \frac{1}{n-1}\sum_{i,j=1}^{n}(w_{ij} - \bar{w})^2, where \bar{w} is the mean edge weight) remain constant after applying the transformation, and find the appropriate value of w. The variance remains constant if and only if the following equality holds:</ns0:p><ns0:formula xml:id='formula_12'>(w_{AC} - \bar{w})^2 + (w_{BD} - \bar{w})^2 + (w_{AD} - \bar{w})^2 + (w_{BC} - \bar{w})^2 = (w_{AC} - w - \bar{w})^2 + (w_{BD} - w - \bar{w})^2 + (w_{AD} + w - \bar{w})^2 + (w_{BC} + w - \bar{w})^2   (6)
\iff 2w^2 + w(-w_{AC} - w_{BD} + w_{AD} + w_{BC}) = 0.</ns0:formula><ns0:p>The solutions to this equation are w = 0 (which is trivial and corresponds to not applying any transformation to the edge weights) and w = (w_{AC} + w_{BD} - w_{AD} - w_{BC})/2.</ns0:p><ns0:p>While this alternative can result in some weights falling outside of the bounds, in the networks we studied this is very rare, so it is enough to discard these few steps to obtain the desired results.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref> shows how the graph size decreases as the algorithm iterates with the maximum weight method, which also produces a dramatic increase in the variance. The constant variance method, on the other hand, does not remove any edges, and the size stays constant (as does the variance, which is constant by definition, so their corresponding lines coincide at 1).</ns0:p><ns0:p>However, applying the constant variance method on networks that are sparsely connected (such as most reasonably big social networks) results in a big increase in the graph size, to the point of actually producing complete weighted graphs (see figure <ns0:ref type='figure'>2</ns0:ref>). Meanwhile, the maximum weight method does not significantly alter the size of the graph.</ns0:p><ns0:p>Therefore, we will use the constant variance method only for very densely connected networks, such as correlation networks, which are in fact complete weighted graphs.</ns0:p></ns0:div>
For sparse networks, the maximum weight method will be the preferred choice.</ns0:p><ns0:p>Note that if all edge weights are either 0 or 1, in both cases this algorithm is equivalent to the original switching algorithm for discrete graphs, as in every step the transferred weight will be one if the switch can be made without creating double edges, or zero otherwise (which corresponds to the case in which the switch cannot be made).</ns0:p></ns0:div>
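<ns0:p>A sketch of one step of this weighted switching model, operating on a symmetric weight matrix with weights in [0, 1]; the function name, and the handling of invalid or out-of-bound steps by simply leaving the matrix unchanged, are our illustrative choices:</ns0:p>

# One step of the weighted switching model on a symmetric weight matrix W
# with weights in [0, 1]. Transfers weight from the heavy edges (A,C), (B,D)
# to (A,D), (B,C), keeping every weighted degree and the variance constant.
switch_step <- function(W) {
  v <- sample(nrow(W), 4)                # candidate vertices A, B, C, D
  A <- v[1]; B <- v[2]; C <- v[3]; D <- v[4]
  if (W[A, C] <= W[A, D] || W[B, D] <= W[B, C]) return(W)  # no valid switch
  w <- (W[A, C] + W[B, D] - W[A, D] - W[B, C]) / 2  # constant-variance choice
  new_w <- c(W[A, C] - w, W[B, D] - w, W[A, D] + w, W[B, C] + w)
  if (any(new_w < 0 | new_w > 1)) return(W)  # discard out-of-bound steps
  W[A, C] <- W[C, A] <- new_w[1]
  W[B, D] <- W[D, B] <- new_w[2]
  W[A, D] <- W[D, A] <- new_w[3]
  W[B, C] <- W[C, B] <- new_w[4]
  W   # row sums (weighted degrees) are unchanged
}

<ns0:p>For the maximum weight variant, w would instead be min(w_AC, w_BD, 1 − w_AD, 1 − w_BC), the largest transfer that keeps all four edges within bounds (min(w_AC, w_BD) when there is no upper bound). Iterating such steps T·m times yields the randomized graph, as discussed next.</ns0:p>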
<ns0:div><ns0:head>Number of iterations</ns0:head><ns0:p>To determine the number T·m of iterations for the algorithm to sufficiently 'shuffle' the network (where m is the size of the graph, and T a parameter we select), we study the variation of information <ns0:ref type='bibr' target='#b24'>(Meilă, 2007)</ns0:ref> of the resulting clustering (in this case using the Louvain algorithm, though other clustering algorithms could be used instead) with respect to the initial one. (In section 2.5 we discuss variation of information and the other clustering similarity metrics that we use in this work, and in section 2.6 we detail all the clustering algorithms that we put to the test.) Figures <ns0:ref type='figure'>1 and 2</ns0:ref> show a plateau where the variation of information stops increasing after around T = 1 (which corresponds to one iteration per edge of the initial graph). This is consistent with the results for the original algorithm in <ns0:ref type='bibr' target='#b25'>(Milo et al., 2003)</ns0:ref> for unweighted graphs, and we can also select T = 100 as a value that is by far high enough to obtain a sufficiently mixed graph.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3'>Bootstrap with perturbation</ns0:head><ns0:p>Non-parametric bootstrap, with and without perturbation or 'jittering', has been used to study the stability of clusters of euclidean data sets <ns0:ref type='bibr' target='#b17'>(Hennig, 2007)</ns0:ref>. For graphs, bootstrap resampling can be done on the set of vertices, and then build the resampled graph with the edges that the original graph induces on them (i.e.</ns0:p><ns0:p>two resampled vertices will be joined by an edge if and only if they were adjacent in the original graph, with the same weight in the case of weighted graphs). As for adding noise to avoid duplicate elements, it can be added to the edge weights. We suggest generating that noise from a normal distribution truncated to stay within the bounds of the edge weights of each graph (which means it can be truncated on one or both sides depending on the graph).</ns0:p><ns0:p>Then, to deal with copies of the same vertex on the resampled graph, it seems necessary to add heavy edges between them to reflect the idea that a vertex and its copy should be similar and well connected between each other. Not doing so would incentivize the clustering methods to separate them in different clusters, because they generally try to separate poorly connected vertices. We can distinguish two cases:</ns0:p><ns0:p>• Graphs with edge weights built from correlations or other similar graphs which by their nature have a specific upper bound on the edge weights (usually 1): We assign the value of the upper bound to the edge weight. After applying the perturbation, this will result in a weight which will be close to that upper bound.</ns0:p><ns0:p>• Other weighted graphs, where no particular upper bound to the edge weights is known: To assign these edges very high weights (to reflect the similarity that duplicate vertices should have in the Manuscript to be reviewed</ns0:p><ns0:p>Computer Science resampled network) within the context of the network, one option is to sample values from the highest weights (e.g. the top 5%) of the original edge set.</ns0:p></ns0:div>
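<ns0:p>A sketch of this resampling step for the bounded (e.g. correlation) case, on a symmetric weight matrix with weights in [0, 1]; the noise level σ and the clipping used in place of a properly truncated normal are our simplifications:</ns0:p>

# Bootstrap with perturbation for a complete weighted graph given its
# symmetric weight matrix W with weights in [0, 1].
bootstrap_graph <- function(W, sigma = 0.05) {
  n <- nrow(W)
  idx <- sample(n, replace = TRUE)             # resample vertices
  Wb <- W[idx, idx]                            # edges induced by the original graph
  copies <- outer(idx, idx, "==") & !diag(n)   # pairs of copies of one vertex
  Wb[copies] <- 1                              # copies get the upper-bound weight
  noise <- matrix(rnorm(n^2, 0, sigma), n)
  noise <- (noise + t(noise)) / 2              # keep the matrix symmetric
  Wb <- pmin(pmax(Wb + noise, 0), 1)           # stay within the weight bounds
  diag(Wb) <- 0
  Wb
}

<ns0:p>Repeating this resampling R times, clustering each replicate, and comparing the results to the original clustering with the measures of section 2.5 gives the stability estimates used in section 3.2.</ns0:p>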
<ns0:div><ns0:head n='2.4'>Synthetic ground truth models</ns0:head><ns0:p>Another way of comparing and assessing the fit of a clustering algorithm is to compare its output to a ground truth community structure, which is seldom known in reality. Alternatively, one can synthetically generate a graph with a ground truth community structure. This will allow us to verify that the results of the algorithm match the expected outcome. For the particular case of time series correlation networks, one can generate the time series using a suitable model that imposes a community structure with respect to correlations, such as the Vector Autoregressive (VAR) model construction in <ns0:ref type='bibr' target='#b2'>(Arratia and Cabaña, 2013)</ns0:ref>, and then compute the values of the edges accordingly.</ns0:p><ns0:p>A common benchmark for clustering algorithm evaluation is the family of graphs with a pre-determined community structure generated by the l-partition model <ns0:ref type='bibr' target='#b5'>(Condon and Karp, 2001;</ns0:ref><ns0:ref type='bibr' target='#b16'>Girvan and Newman, 2002;</ns0:ref><ns0:ref type='bibr' target='#b14'>Fortunato, 2010)</ns0:ref>. It is essentially a block-based extension of the Erdös-Renyi model, with l blocks of g vertices, and with probabilities p_in and p_out of having edges within the same block and between different blocks respectively.</ns0:p><ns0:p>A more general approach is the stochastic block model (SBM) <ns0:ref type='bibr' target='#b18'>(Holland et al., 1983;</ns0:ref><ns0:ref type='bibr' target='#b40'>Wang and Wong, 1987)</ns0:ref>, which uses a probability matrix P (which has to be symmetric in the undirected case) to determine the probabilities of edges between blocks. P_ij will be the probability of having an edge between any given pair of vertices belonging to blocks i and j respectively. Then, having higher values in the diagonal than in the rest of the matrix will produce strongly connected communities. Note that the subgraph induced by each community is in itself an Erdös-Renyi graph (with p = P_ii for community i). This model also allows blocks of different sizes. While this model can itself be used for community detection by trying to fit it to any given graph <ns0:ref type='bibr' target='#b22'>(Lee and Wilkinson, 2019)</ns0:ref>, here we will simply use it as a tool to generate graphs of a predetermined community structure.</ns0:p><ns0:p>To obtain a weighted SBM (WSBM) graph, we propose a variation of the model which produces multigraphs, which can then be easily converted into weighted graphs by setting all edge weights as their corresponding edge count. In this case, the probability matrix of the original SBM will be treated as the matrix of expected edge weights between each pair of blocks. Then, we simply add edges one by one with the appropriate probability (the same at each step) that will allow each weight expectation to match its defined value. By definition, the probability of the edge added at step k joining vertices i and j is given by</ns0:p><ns0:formula xml:id='formula_13'>P(e_k = (i, j)) = \frac{E_{ij}}{\#\mathrm{steps}},   (7)</ns0:formula><ns0:p>where E_{ij} is the expected number of edges between them, given by the expectation matrix.</ns0:p></ns0:div>
<ns0:div><ns0:p>The sum of these probabilities over all pairs of vertices must add up to one, which gives</ns0:p><ns0:formula xml:id='formula_14'>\#\mathrm{steps} = \frac{1}{2} \sum_{i,j} |C_i|\,|C_j|\,E_{ij}.   (8)</ns0:formula><ns0:p>Note that the 1/2 factor is added because we are using undirected graphs, and we do not want to count edges (i, j) and (j, i) twice.</ns0:p><ns0:p>This process produces a binomially distributed weight for each edge, though these distributions are not independent, so independently sampling each edge weight from the appropriate binomial distribution would not be equivalent.</ns0:p><ns0:p>We will use a graph sampled from this model with block sizes (40, 25, 25, 10), with the parametrized expectation matrix</ns0:p><ns0:formula xml:id='formula_15'>E = \begin{pmatrix} 0.03\lambda & 0.01 & 0.01 & 0.03 \\ 0.01 & 0.02\lambda & 0.05 & 0.02 \\ 0.01 & 0.05 & 0.02\lambda & 0.01 \\ 0.03 & 0.02 & 0.01 & 0.03\lambda \end{pmatrix}.</ns0:formula><ns0:p>With λ = 1 the network will be quite uniform, but as λ increases, the high values in the diagonal compared to the rest of the matrix will result in a very strong community structure, which should be detected by the clustering algorithms.</ns0:p><ns0:p>There are other possible extensions of the stochastic block model to weighted networks, such as <ns0:ref type='bibr' target='#b0'>(Aicher et al., 2014)</ns0:ref>, which can have edges sampled from any exponential family distribution. While our approach produces binomially distributed edge weights (which can be approximated by a Poisson distribution in most cases), the edge weights obtained with our construction are not independent from each other, unlike those of <ns0:ref type='bibr' target='#b0'>(Aicher et al., 2014)</ns0:ref>, so the results are not exactly equivalent. For instance, in our case the total network weight is fixed and will not vary between samples.</ns0:p></ns0:div>
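<ns0:p>Since each of the #steps edges is added independently with the same probabilities, the whole vector of edge counts can be drawn at once from a multinomial distribution; a sketch of this construction (function and variable names are ours), using the expectation matrix above:</ns0:p>

library(igraph)

# Sample a WSBM graph: edge counts after #steps independent, identically
# distributed edge additions form a multinomial vector over vertex pairs.
sample_wsbm <- function(block_sizes, E_mat) {
  blocks <- rep(seq_along(block_sizes), block_sizes)
  n <- sum(block_sizes)
  pairs <- t(combn(n, 2))  # each unordered pair once (the 1/2 factor of eq. 8)
  ev <- E_mat[cbind(blocks[pairs[, 1]], blocks[pairs[, 2]])]  # expected weights
  nsteps <- round(sum(ev))                                    # equation (8)
  wts <- rmultinom(1, size = nsteps, prob = ev)[, 1]          # binomial weights
  keep <- wts > 0
  g <- make_empty_graph(n, directed = FALSE)
  add_edges(g, as.vector(t(pairs[keep, , drop = FALSE])), weight = wts[keep])
}

lambda <- 15
E_mat <- matrix(c(0.03 * lambda, 0.01, 0.01, 0.03,
                  0.01, 0.02 * lambda, 0.05, 0.02,
                  0.01, 0.05, 0.02 * lambda, 0.01,
                  0.03, 0.02, 0.01, 0.03 * lambda), nrow = 4)
g <- sample_wsbm(c(40, 25, 25, 10), E_mat)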
<ns0:div><ns0:head n='2.5'>Clustering similarity measures</ns0:head><ns0:p>To compare and measure how similar two clusterings of the same network are, we will use two measures based on information theory, the Variation of Information (VI) and the Reduced Mutual Information (RMI), and another more classical measure, the (adjusted) Rand index, which relates to the accuracy. All of these measures are built upon the contingency table of the labelings, which is summarized in Table 2, and whose terms are explained below.</ns0:p><ns0:p>Consider a set of n elements and two labelings or partitions, one labeled by integers r = 1, ..., R and the other labeled by integers s = 1, ..., S, say P = {P_1, ..., P_R} and P′ = {P′_1, ..., P′_S}. Define a_r as the number of elements with label r in the first partition, b_s as the number of elements with label s in the second partition, and c_rs as the number of elements with label r in the first partition and label s in the second. Formally, a_r = |P_r|, b_s = |P′_s|, and</ns0:p><ns0:formula xml:id='formula_16'>c_{rs} = |P_r \cap P'_s|.   (12)</ns0:formula><ns0:p>Define the probability P(r) (respectively, P(s)) that an object chosen uniformly at random has label r (resp. s), and the probability P(r, s) that it has both labels r and s, that is</ns0:p><ns0:formula xml:id='formula_17'>P(r) = \frac{a_r}{n}, \quad P(s) = \frac{b_s}{n}, \quad P(r, s) = \frac{c_{rs}}{n}.   (13)</ns0:formula></ns0:div>
<ns0:div><ns0:head>Variation of information</ns0:head><ns0:p>The variation of information between two clusterings, a criterion introduced in <ns0:ref type='bibr' target='#b24'>(Meilă, 2007)</ns0:ref>, is defined as follows.</ns0:p><ns0:p>Definition 2.1. The entropy of a partition P = {P_1, ..., P_R} of a set is given by:</ns0:p><ns0:formula xml:id='formula_18'>H(r) = -\sum_{r=1}^{R} P(r) \log(P(r)).   (14)</ns0:formula><ns0:p>Definition 2.2. The mutual information is defined as:</ns0:p><ns0:formula xml:id='formula_19'>I(r; s) = \sum_{r=1}^{R} \sum_{s=1}^{S} P(r, s) \log \frac{P(r, s)}{P(r)\,P(s)}.   (15)</ns0:formula><ns0:p>Definition 2.3. The variation of information between the two partitions is then:</ns0:p><ns0:formula>VI(r; s) = H(r) + H(s) - 2\, I(r; s).   (16)</ns0:formula></ns0:div>
<ns0:div><ns0:p>Intuitively, the mutual information measures how much knowing the membership of an element of the set in partition P reduces the uncertainty of its membership in P′. This is consistent with the fact that the mutual information is bounded between zero and the individual partition entropies,</ns0:p><ns0:formula>0 \le I(r; s) \le \min\{H(r), H(s)\},   (17)</ns0:formula><ns0:p>and the right side equality holds if and only if one of the partitions is a refinement of the other.</ns0:p><ns0:p>Consequently, the variation of information will be 0 if and only if the partitions are equal (up to permutations of the indices of the parts), and will get bigger the more the partitions differ. It also satisfies the triangle inequality, so it is a metric in the space of partitions of any given set.</ns0:p></ns0:div>
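<ns0:p>Equations (14)-(16) translate directly into a few lines of R; a sketch using natural logarithms, which can be cross-checked against igraph's compare() (the results should coincide up to the base of the logarithm used):</ns0:p>

# Entropy, mutual information, and variation of information from two
# membership vectors r and s (a sketch; the function name is ours).
vi_distance <- function(r, s) {
  P  <- table(r, s) / length(r)    # joint distribution P(r, s)
  Pr <- rowSums(P); Ps <- colSums(P)
  H <- function(p) -sum(p[p > 0] * log(p[p > 0]))
  I <- sum(P[P > 0] * log(P[P > 0] / outer(Pr, Ps)[P > 0]))
  H(Pr) + H(Ps) - 2 * I
}

library(igraph)
r <- c(1, 1, 2, 2, 3); s <- c(1, 1, 1, 2, 2)
c(vi_distance(r, s), compare(r, s, method = "vi"))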
<ns0:div><ns0:head>Reduced Mutual Information</ns0:head><ns0:p>The mutual information <ns0:ref type='bibr' target='#b6'>(Cover and Thomas, 1991)</ns0:ref>, often in its normalized form, is one of the most widely used measures to compare graph partitions in cluster analysis. More recently, <ns0:ref type='bibr' target='#b28'>(Newman et al., 2020)</ns0:ref> proposed the Reduced Mutual Information (RMI), an improved version which corrects the high mutual information values given to quite dissimilar partitions in some cases. For instance, if one of the partitions is the trivial one splitting the network into n clusters of one element each, the standard mutual information will always take the maximal value (1, in the case of the normalized mutual information), even if the other partition is completely different. More generally, any partition will always have maximal mutual information with all of its refinements. This is crucial when comparing clustering algorithms, as some algorithms output trivial partitions into single-element clusters when they fail to find a clustering structure. Therefore, it would not be possible to reliably measure the stability of these clustering methods with the standard mutual information.</ns0:p><ns0:p>Given r and s two labelings of a set of n elements, the Reduced Mutual Information is defined as:</ns0:p><ns0:formula xml:id='formula_21'>RMI(r; s) = I(r; s) - \frac{1}{n} \log \Omega(a, b),   (18)</ns0:formula><ns0:p>where Ω(a, b) is an integer equal to the number of R × S non-negative integer matrices with row sums a = {a_r} and column sums b = {b_s}. Details on how to compute or approximate Ω(a, b) are given in the appendix (5.2).</ns0:p><ns0:p>The Reduced Mutual Information can also be defined in a normalized form (NRMI), in the same way the standard mutual information is, by dividing it by the average of the values of the reduced mutual information of the labelings a and b with themselves:</ns0:p><ns0:formula xml:id='formula_22'>NRMI(r; s) = \frac{RMI(r; s)}{\frac{1}{2}[RMI(r; r) + RMI(s; s)]} = \frac{I(r; s) - \frac{1}{n} \log \Omega(a, b)}{\frac{1}{2}\left[H(r) + H(s) - \frac{1}{n}(\log \Omega(a, a) + \log \Omega(b, b))\right]}.   (19)</ns0:formula><ns0:p>We will use this normalized form to be able to compare more easily the results of networks with different numbers of nodes, as well as to compare them to other similarity measures.</ns0:p></ns0:div>
<ns0:div><ns0:head>Rand index</ns0:head><ns0:p>The Rand Index (RI) and the different measures derived from it <ns0:ref type='bibr' target='#b19'>(Hubert and Arabie, 1985)</ns0:ref> are based on the idea of counting pairs of elements that are classified similarly and dissimilarly across the two partitions P and P ′ . There are four types of pairs of elements:</ns0:p><ns0:p>• type I: elements are in the same class both in P and P ′</ns0:p><ns0:p>• type II: elements are in different classes both in P and P ′</ns0:p><ns0:p>• type III: elements are in different classes in P and in the same class in P ′ .</ns0:p><ns0:p>• type IV: elements are in the same class in P and different classes in P ′ .</ns0:p><ns0:p>Then, similar partitions would have many pairs of elements of types I and II (agreements) and few of type III and IV (disagreements). The Rand index is defined as the ratio of agreements over the total number of pairs of elements. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:p>Using the terms of the contingency table (Table 2), the Rand index is given by</ns0:p><ns0:formula>RI(r, s) = \frac{\binom{n}{2} + 2\sum_{r,s}\binom{c_{rs}}{2} - \left[\sum_{r=1}^{R}\binom{a_r}{2} + \sum_{s=1}^{S}\binom{b_s}{2}\right]}{\binom{n}{2}}.   (20)</ns0:formula><ns0:p>An adjusted form of the Rand index <ns0:ref type='bibr' target='#b19'>(Hubert and Arabie, 1985)</ns0:ref> introduces a correction to account for all the pairings that match on both partitions by random chance. The Adjusted Rand Index (ARI) is defined as:</ns0:p><ns0:formula>ARI(r; s) = \frac{\mathrm{Index} - \mathrm{Expected\ Index}}{\mathrm{Maximum\ Index} - \mathrm{Expected\ Index}},   (21)</ns0:formula><ns0:p>which in terms of the contingency table (Table 2) can be expressed as:</ns0:p><ns0:formula>ARI(r; s) = \frac{\sum_{rs}\binom{c_{rs}}{2} - \left[\sum_{r}\binom{a_r}{2}\sum_{s}\binom{b_s}{2}\right]/\binom{n}{2}}{\frac{1}{2}\left[\sum_{r}\binom{a_r}{2} + \sum_{s}\binom{b_s}{2}\right] - \left[\sum_{r}\binom{a_r}{2}\sum_{s}\binom{b_s}{2}\right]/\binom{n}{2}}.   (22)</ns0:formula></ns0:div>
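<ns0:p>Both indices can be computed from the contingency table in a few lines of base R; a sketch (the function name is ours):</ns0:p>

# Rand index and adjusted Rand index, equations (20) and (22).
rand_indices <- function(r, s) {
  ct <- table(r, s)
  a <- rowSums(ct); b <- colSums(ct); n <- sum(ct)
  sa <- sum(choose(a, 2)); sb <- sum(choose(b, 2)); sc <- sum(choose(ct, 2))
  ri   <- (choose(n, 2) + 2 * sc - sa - sb) / choose(n, 2)
  eidx <- sa * sb / choose(n, 2)              # expected index
  ari  <- (sc - eidx) / ((sa + sb) / 2 - eidx)
  c(RI = ri, ARI = ari)
}

rand_indices(c(1, 1, 2, 2, 3), c(1, 1, 1, 2, 2))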
<ns0:div><ns0:head n='2.6'>Clustering algorithms</ns0:head><ns0:p>We have selected five well known state-of-the-art clustering algorithms based on different approaches, all suitable for weighted graphs. They will be applied to all of the networks, and the results then evaluated:</ns0:p><ns0:p>1. Louvain method <ns0:ref type='bibr' target='#b4'>(Blondel et al., 2008)</ns0:ref>, a multi-level greedy algorithm for modularity optimization. We use the original algorithm, without the resolution parameter (i.e. with resolution γ = 1).</ns0:p><ns0:p>2. Leading eigenvector method <ns0:ref type='bibr' target='#b26'>(Newman, 2006a)</ns0:ref>, based on spectral optimization of modularity.</ns0:p><ns0:p>3. Label propagation <ns0:ref type='bibr' target='#b32'>(Raghavan et al., 2007)</ns0:ref>, a fast algorithm in which nodes are iteratively assigned to the communities most frequent among their neighbors.</ns0:p><ns0:p>4. Walktrap <ns0:ref type='bibr' target='#b30'>(Pons and Latapy, 2005)</ns0:ref>, based on random walks.</ns0:p><ns0:p>5. Spin-glass <ns0:ref type='bibr' target='#b34'>(Reichardt and Bornholdt, 2006)</ns0:ref>, which tries to find communities in graphs via a spin-glass model and simulated annealing.</ns0:p><ns0:p>In any application, the choice of the clustering algorithm will be hugely dependent on the characteristics of the dataset, as well as its size. The methods proposed here, though, can be applied to evaluate any combination of weighted graph and clustering algorithm, as sketched below.</ns0:p></ns0:div>
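<ns0:p>All five algorithms are implemented in igraph and use the weight edge attribute by default; a sketch of how they can be applied, here to the weighted karate network shipped with the igraphdata package:</ns0:p>

library(igraph)
library(igraphdata)

data(karate)  # Zachary's weighted karate club network
algorithms <- list(louvain     = cluster_louvain,
                   eigenvector = cluster_leading_eigen,
                   label_prop  = cluster_label_prop,
                   walktrap    = cluster_walktrap,
                   spinglass   = cluster_spinglass)
# note: cluster_spinglass requires a connected graph
partitions <- lapply(algorithms, function(f) membership(f(karate)))
sapply(partitions, max)  # number of communities found by each method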
<ns0:div><ns0:head n='2.7'>Data</ns0:head><ns0:p>• Zachary's karate club: Social network of a university karate club <ns0:ref type='bibr' target='#b42'>(Zachary, 1977)</ns0:ref>. The vertices are its 34 members, and the edge weights are the number of interactions between each pair of them. In this case, we have a 'ground truth' clustering, which corresponds to the split of the club into two clusters after a conflict.</ns0:p><ns0:p>• Forex network: Network built from correlations between time series of exchange rate returns <ns0:ref type='bibr' target='#b35'>(Renedo and Arratia, 2016)</ns0:ref>. It was built from the 13 most traded currencies with data from January 2009. It is a complete graph of 78 edges (corresponding to pairs of currencies) and has edge weights bounded between 0 and 1.</ns0:p><ns0:p>• News on Corporations network: In this network, a list of relevant companies are the nodes, while the weighted edges between them are set by the number of times they have appeared together in news stories over a certain period of time (in this instance, on 2019-03-13). It has 899 nodes and 13469 edges.</ns0:p><ns0:p>• Social network: A Facebook-like social network for students from the University of California, Irvine <ns0:ref type='bibr' target='#b29'>(Opsahl and Panzarasa, 2009)</ns0:ref>. It has 1899 nodes (students) and 20296 edges, weighted by the number of characters of the messages sent between users.</ns0:p></ns0:div>
<ns0:div><ns0:p>• Enron emails: a network composed of email communications among Enron employees <ns0:ref type='bibr' target='#b21'>(Klimt and Yang, 2004)</ns0:ref>. The version of the dataset used here is available in the igraphdata R package <ns0:ref type='bibr' target='#b7'>(Csardi, 2015)</ns0:ref>, and consists of a multigraph with 184 vertices (users) and 125,409 edges, corresponding to emails between users. We convert it to an undirected weighted graph by using as weight for each edge the number of corresponding edges in the multigraph (i.e. the number of emails between the corresponding users), as sketched below.</ns0:p></ns0:div>
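<ns0:p>A sketch of this conversion in igraph (attribute-combination details may vary slightly between igraph versions; loops, i.e. self-emails, are dropped by simplify()):</ns0:p>

library(igraph)
library(igraphdata)

data(enron)                # directed multigraph: one edge per email
E(enron)$weight <- 1       # one unit of weight per email
enron_w <- simplify(as.undirected(enron, mode = "each"),
                    edge.attr.comb = list(weight = "sum", "ignore"))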
<ns0:div><ns0:head n='2.8'>Software</ns0:head><ns0:p>All the methods proposed here are implemented in R (R Core Team, 2015), and will be released in an upcoming package. This includes all of the significance functions and the adaptations of the existing bootstrap methods to make them work on weighted graphs. All the code interacts with igraph objects <ns0:ref type='bibr' target='#b8'>(Csardi and Nepusz, 2006)</ns0:ref> for easy testing and manipulation of the graphs, as well as allowing the use of already implemented clustering methods and other existing functions for exploring graphs. The more computationally intensive parts, such as the switching model, have been written in C++ for better efficiency, and are called from R through Rcpp <ns0:ref type='bibr' target='#b13'>(Eddelbuettel and François, 2011;</ns0:ref><ns0:ref type='bibr' target='#b12'>Eddelbuettel, 2013)</ns0:ref>.</ns0:p><ns0:p>Our code is available online: https://github.com/martirm/cluster_assessment.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>DISCUSSION</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.1'>Cluster Significance</ns0:head><ns0:p>As explained in the Materials and Methods section, to test for cluster significance of a given clustering algorithm, we apply the scoring functions defined in section 2.1 to the clustering produced on the original graph and on randomized versions obtained by the method described in section 2.2. It should be expected that whenever the communities found by an algorithm on the original graph are significant, they will receive better scores than those found by the same algorithm on a graph with no actual community structure.</ns0:p><ns0:p>The results of computing these scores on the clustering obtained by the algorithms on each of the networks can be seen on tables S1.1, S1.2, S1.3, S1.4, S1.5 and S1.6 (recall that ↑ identifies scores for which higher is best, and ↓ means lower is best). For each combination of scoring function and algorithm, we represent its value on the original network, its mean across multiple samples of its randomized switching model, and the percentile rank of the original score in the distribution of randomized graph scores. This percentile rank value serves as a statistical test of significance for each of the scores: a score is significant if its value is more extreme (either higher or lower, depending on its type) than most of the distribution.</ns0:p><ns0:p>It is important to note that some of the scores greatly depend on the number of clusters, and cannot adequately compare partitions in which that number differs. For instance, internal density can easily be high on small communities, while it will generally take lower values on bigger ones, even when they are very well connected. This can result in networks with no apparent community structure having high overall internal density scores just because they are partitioned into many small clusters.</ns0:p><ns0:p>In comparison, scores that combine both internal and external connectivity (conductance, normalized cut, out degree fractions), clustering coefficient, and modularity suffer less from this effect and seem more adequate in most circumstances. These also happen to be the scores that are invariant under the multiplication of the weights by positive constants (see section 2.1).</ns0:p><ns0:p>We suggest focusing on the relative scores (the score of the actual network over the mean of the randomized ones) to simplify the process of interpreting the results, especially when trying to compare graphs of different nature. With relative scores, anything that differs significantly from 1 will suggest that the clustering is strong. For instance, in figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref> we have the modularity of the stochastic block model for each algorithm, and for different values of the parameter λ (which will give increasingly stronger clusters). While the algorithms find results closer to the ground truth the bigger λ is, only the relative scores give us that insight. 
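<ns0:p>The percentile rank computation itself is straightforward; a sketch in which cluster_fun, score_fun and randomize are placeholders for, e.g., cluster_louvain, a scoring function from section 2.1, and the switching model of section 2.2:</ns0:p>

# Percentile rank of the original score within the distribution of scores
# obtained on R randomized graphs (all arguments except g are placeholders).
percentile_rank <- function(g, cluster_fun, score_fun, randomize, R = 99) {
  s0 <- score_fun(g, cluster_fun(g))
  sr <- replicate(R, {
    gr <- randomize(g)
    score_fun(gr, cluster_fun(gr))
  })
  mean(sr <= s0)  # near 1 (resp. 0) is significant for "higher is better"
                  # (resp. "lower is better") scores
}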
However, when comparing several clustering methods on the same network (and not simply trying to determine whether a single given method produces significant results), absolute scores are more meaningful to determine which one is best.</ns0:p><ns0:p>For the weighted stochastic block model graph, the clustering algorithms get results closer to the ground truth clustering the bigger the λ parameter is, as one would expect, and for λ > 30 the results perfectly match the ground truth clustering outcome in almost all cases (a bit earlier for the Louvain, Walktrap and spin-glass cases, see figure 4). The relative scores match these results, and get better as λ increases as well (figure 3). Note that in figure 3, there are some jumps for the relative modularity in the spin-glass case, which are caused by the instability of this algorithm (see section 3.2). This effect is no longer present when the structure of the network is stronger (λ > 8). In table S1.1, corresponding to λ = 15, we can see how for the Louvain algorithm, the scores are more extreme (lower when lower is better, higher when higher is better) than those of the randomized network in almost all circumstances. In the case of the leading eigenvector algorithm the scores are slightly worse, but almost all of them still fall within statistical significance (if we consider p-values < 0.05). In both cases, the only metric that is better in the random network is internal density, due to the smaller size of the detected clusters (which is why by itself internal density is not a reliable metric, as even in a network with very poor community structure it will be high for certain partitions into very small clusters that arise by chance). For both the label propagation and the Walktrap algorithms, the real network scores are not as close to the edge of the distribution of random scores, but they are still much better than the mean in all meaningful cases (the only exceptions are the internal density and edges inside, which are hugely dependent on cluster size and are therefore inadequate to compare partitions with a different number of clusters).</ns0:p><ns0:p>In the case of the karate club network (table S1.2), the label propagation algorithm gets the closest results to the ground truth clustering, and this is reflected in most scores being better than those of other methods. This does not apply to the modularity, though, which is always higher for Louvain and spin-glass, which produce identical clusters (this is to be expected, because Louvain is a method based on modularity optimization).</ns0:p><ns0:p>On the Forex graph (table S1.3), we can see that both the leading eigenvector and Walktrap algorithms produce almost identical results splitting the network into two clusters, while the spin-glass algorithm splits it into three and Louvain into four.
The scores which are based on external connectivity give better results to the Walktrap and leading eigenvector, while the spin-glass partition has a slightly better clustering coefficient and better modularity (with Louvain having very similar values in those two scores).</ns0:p><ns0:p>It is also important to disregard the results of the scoring functions whenever the algorithms fail to distinguish any communities and either group the whole network together or separate each element into its own cluster (such as the label propagation algorithm on the Forex network, seen in table S1.3). In this case, the scores which are based on external connectivity will be optimum, as the cut c_S of the partition is 0, but that of course does not give any information at all. In addition, the normalized cut and conductance may not even be well defined in this case, as it is possible to have a division by 0 for some of the clusters.</ns0:p><ns0:p>As for the news on corporations graph (table S1.4), the results, and in particular the number of clusters, vary greatly between algorithms (from 82 clusters for the Walktrap to only 2 for the label propagation).</ns0:p><ns0:p>While the label propagation algorithm scores well on some measures due to successfully splitting off very weakly connected components of the network, others such as the clustering coefficient or internal density are very low. Louvain and spin-glass have very similar scores across most measures and seem to be the best, though leading eigenvector does have better conductance and normalized cut. In this case the high variation in the number of clusters across algorithms that still score highly could suggest that there is not a single predominant community structure in the network.</ns0:p><ns0:p>In the Enron graph (table S1.5) Louvain also produces the best results for most scores, particularly in conductance and normalized cut, and it significantly surpasses all other algorithms while having larger clusters, with the only exception of label propagation, which partitions the network into much smaller clusters. The spin-glass algorithm stands out as having by far the worst results across all scores, even though its number of clusters (12) is the same as in leading eigenvector and similar to Louvain.</ns0:p><ns0:p>For the social network (table S1.6), the Louvain, leading eigenvector and label propagation algorithms produce the same number of clusters (with spin-glass being also very close), which allows an unbiased comparison of scores. In this case, leading eigenvector has better results for almost all scores, except for clustering coefficient and modularity, for which Louvain is again the best algorithm. This huge disparity may be explained by the fact that modularity compares edge weights to a null model that considers the degrees of their incident vertices, and does not only discriminate between internal and external edges (as most of the scoring functions do).</ns0:p><ns0:p>Overall the Louvain algorithm seems to be the best at finding significant clusters, performing consistently well on a variety of weighted networks of very different nature.
It is worth noting, though, that there are some limitations to it (and to all modularity based methods in general) in terms of the resolution limit <ns0:ref type='bibr' target='#b15'>(Fortunato and Barthélemy, 2007)</ns0:ref>, which can appear when there are small communities in large networks; there are methods to address it, such as the use of a resolution parameter <ns0:ref type='bibr' target='#b1'>(Arenas et al., 2008)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Cluster Stability</ns0:head><ns0:p>Using the non-parametric bootstrap method described in section 2.3, we resample the networks 999 times (R = 999), apply the clustering algorithms to them, and compare the results to the original clustering with the metrics from section 2.5. Stable clusterings are expected to persist through the process, giving small mean</ns0:p></ns0:div>
<ns0:div><ns0:p>values of the variation of information, and high (close to 1) values of the normalized reduced mutual information and the Rand index. The results of the same method applied to the randomized versions of each network (see section 2.2) are also included, to have reference values for the stability of networks where there is no community structure. If the values of the clustering similarity measures for the original and randomized networks happened to be close together, that would suggest that the chosen algorithm produces a very unstable clustering on the network.</ns0:p><ns0:p>We observe in table S1.7 that for the stochastic block model example graph, all algorithms except for spin-glass produce very stable clusters, which is consistent with the fact that we chose parameters that give it a very strong community structure. Meanwhile, clustering algorithms applied to the Zachary and Forex networks (tables S1.8 and S1.9) produce clusters which are not as stable, but still much better than their baseline randomized counterparts. Note that the stability values for the label propagation algorithm in the Forex network (table S1.9) should be ignored, as in that instance the output is a single cluster (see table S1.3), which does not give any information. It is clear that while it works on less dense networks, the label propagation algorithm is not useful for complete weighted networks, and it fails to give results that are at all meaningful.</ns0:p><ns0:p>On the news on corporations graph (table S1.10), spin-glass is again the most unstable algorithm, with results for the RMI and ARI (which are both close to 0) that suggest that the clusters of the original network and all the resampled ones are completely unrelated. In this case the label propagation algorithm is the most stable, while the rest of the algorithms are not as good. This might be in part explained by the fact that its clusters are much bigger than in other networks, which allows them to remain strongly connected after small perturbations.</ns0:p><ns0:p>Finally, we observe that algorithms on the Enron graph (table S1.11) produce the most unstable clusters of all that were tested, which would suggest that the network does not have a single prevalent clustering structure that can be consistently detected, at least in the weighted graph configuration that we tested.</ns0:p><ns0:p>As a general remark on stability observed across all experiments, the spin-glass algorithm is the most unstable on the networks we tested, which are a diverse representation of different kinds of weighted networks.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>CONCLUSIONS</ns0:head><ns0:p>We have successfully observed how the community scoring functions, combined with the switching model, can easily help distinguish networks which have a community structure from others that do not.</ns0:p><ns0:p>A combination of a network and a clustering algorithm can be said to produce significant clusters when their scores stand out from the distribution of scores produced by the same algorithm on the collection of randomized graphs produced by the switching model. The experiments conducted on the stochastic block model networks of varying community strength support this hypothesis. This will be useful when working with networks for which there is little information available, and one wants to determine whether the results obtained from any given clustering algorithm do reflect an actual community structure or are simply given by chance.</ns0:p><ns0:p>We recommend avoiding the scoring functions that can be heavily influenced by variables like the number of clusters or their size (like internal density, which favours smaller clusters), because the information they provide is hard to interpret in a systematic manner. In comparison, functions that combine internal and external connectivity, like conductance or normalized cut, seem more robust. However, we observed a tendency of these measures to favour partitions into fewer, bigger clusters, which makes it difficult to compare partitions with a different number of parts. In contrast, both modularity and clustering coefficient do not seem to be so dependent on the number of communities in the partition, which is a relevant advantage.</ns0:p><ns0:p>We remark that our approach consists of a global analysis of the partitions, but it is possible to perform similar evaluations based on individual scores of each cluster. In this case, some of the scoring functions that have not proved very useful might provide a more meaningful insight into the local structure of the partition, as it is possible to have both strong and weak clusters in the same network.</ns0:p><ns0:p>Additionally, the use of the switching model to generate randomized graphs provides a valuable point of reference, especially when we do not have much information on the structure of the network. The methods proposed here to test cluster significance can also be used with any other scoring functions, which could even be customized depending on the characteristics that one might want to prioritize in any
<ns0:div><ns0:p>given network clustering, such as giving more emphasis to internal connectivity, or external connectivity, or scores that naturally favour larger or smaller clusters.</ns0:p><ns0:p>On a more particular note, and according to our analyses, the Louvain algorithm, and to a lesser extent the Walktrap algorithm, seem to be the most stable while producing significant clusterings, as specified by our scoring functions and across all networks considered. This reaffirms Louvain as one of the state-of-the-art clustering algorithms.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>APPENDIX</ns0:head><ns0:p>5.1 Computational complexity of scoring functions</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1.1'>Weighted clustering coefficient</ns0:head><ns0:p>Let Γ be the number of connected triplets in the graph and γ the number of closed triplets (i.e. 3 times the number of triangles). As before, Γ(t) and γ(t) are their respective values when only edges with weight greater than or equal to t are considered. Then, the clustering coefficient or transitivity is defined as:</ns0:p><ns0:formula xml:id='formula_27'>C = \frac{1}{w} \int_{t \ge 0} \frac{\gamma(t)}{\Gamma(t)}\, dt.   (23)</ns0:formula><ns0:p>This is an integral of a step function that takes a finite number of values (bounded by the number of different edge weights), which we will compute as follows:</ns0:p><ns0:p>1. Construct a hash table of all edges with their corresponding weights, to be able to check whether there is an edge between any two vertices (and obtain its weight) in constant time. Complexity: O(m).</ns0:p><ns0:p>2. Construct a hash table for each vertex containing all its neighbors. This can be done by iterating once over the edges and updating the corresponding tables at each step, and will be used to iterate over the connected triplets incident to each vertex. Complexity: O(m).</ns0:p><ns0:p>3. Construct a sorted list containing the edge weights at which each connected triplet or triangle appears (i.e. the minimum edge weight of that triangle or triplet, since it is present in A^t exactly for thresholds t up to that value), with an associated variable for each indicating whether it corresponds to a triangle or a triplet. For this, we iterate over the connected triplets using the hash tables from step 2, and for each, we check whether it also forms a triangle by checking the hash table from step 1 (which allows each iteration to be done in constant amortized time). This step has complexity O(Γ log Γ), as the list has Γ + γ elements, and (Γ + γ) ∈ O(Γ).</ns0:p><ns0:p>4. We iterate over the list from step 3 and compute the cumulative counts of connected triplets and closed connected triplets still present (which correspond to Γ(t) and γ(t) for the successive values of t in the list). This gives us all values of γ(t)/Γ(t), from which we compute the integral (equation 23). This involves O(Γ) steps of constant complexity.</ns0:p><ns0:p>Therefore, the overall complexity of the algorithm is O(m + Γ log Γ). Because Γ is bounded by m², we can also express the complexity only in terms of m (which will then be O(m²)), but that bound is not tight in most graphs.</ns0:p></ns0:div>
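<ns0:p>Before optimizing, the integral can also be evaluated with a straightforward (slower) reference implementation, exploiting the fact that C_t only changes at the distinct edge weights; a sketch using igraph:</ns0:p>

library(igraph)

# Reference (unoptimized) evaluation of equation (23): C_t is a step function
# that is constant on each interval (t_{j-1}, t_j] between consecutive
# distinct edge weights.
weighted_transitivity <- function(g, bound = max(E(g)$weight)) {
  ts <- sort(unique(E(g)$weight))
  widths <- diff(c(0, ts))  # lengths of the intervals (t_{j-1}, t_j]
  Cs <- sapply(ts, function(t) {
    sub <- subgraph.edges(g, E(g)[E(g)$weight >= t], delete.vertices = FALSE)
    transitivity(sub, type = "global")
  })
  Cs[is.nan(Cs)] <- 0       # graphs with no connected triplets contribute 0
  sum(widths * Cs) / bound
}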
<ns0:div><ns0:head n='5.1.2'>Other scoring functions</ns0:head><ns0:p>Computing m̃, m̃_S, and c̃_S (for all values of S), as well as all vertex degrees and out degrees, has complexity O(m), as it can be done sequentially by reading the edge list and updating the appropriate values as necessary. This means that all scoring functions except for the clustering coefficient, which are derived from these values, can be computed very efficiently.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.2'>Methods for counting contingency tables</ns0:head><ns0:p>To compute the number of contingency tables with fixed margins needed to obtain the value of the reduced mutual information, we mainly use the analytical approximation suggested by <ns0:ref type='bibr' target='#b28'>Newman et al. (2020)</ns0:ref>, which works whenever the number of clusters is substantially smaller than the number of nodes. This works well in most of the cases we study, except for the News graph when clustered with the Walktrap algorithm, which produces many single node clusters. For this case, we use a hybrid approach, combining the analytical approximation for the clusters with more than one element, and then extending it to the full contingency table with the Markov chain Monte Carlo method described by Diaconis and Gangolli (1995). This estimates the size of the set by defining a nested sequence of subsets and obtaining the ratio between the size of each one of them and its predecessor with a Monte Carlo approximation.</ns0:p><ns0:p>Our solution consists of sorting and rearranging the rows and columns of the original contingency table so that smaller elements sit at the top left part of the table. Then, we use the analytical approximation on the submatrix formed by rows and columns with sums strictly greater than one (which will sit on the bottom right corner). This will be the size of the first subset of the chain, and the rest are estimated successively with the Markov chain Monte Carlo method.</ns0:p><ns0:p>This method works well on the contingency tables generated by the Walktrap clustering on our News graph, unlike the analytical approximation alone, which is inaccurate, or the Monte Carlo method alone, which is much slower. However, if the RMI is to be used to compare partitions of very large graphs, establishing some general criteria to determine the largest subset that can be analytically estimated with enough accuracy might be needed, with the goal of minimizing the need for costly Monte Carlo approximations. This topic has a lot of potential for future work, which we hope to address in our clustAnalytics package.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head>Figures 1 and 2.</ns0:head><ns0:label>1, 2</ns0:label><ns0:figDesc>Figure 1. Normalized size, variance and variation of information for the Louvain clustering after applying the proposed algorithm on the Forex graph. Horizontal axis is on logarithmic scale.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Scores of the weighted stochastic block model as a function of the parameter λ, for each of the algorithms.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 4.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. VI distance between the ground truth clustering and the result of each of the algorithms for the weighted stochastic block model (WSBM), as a function of the parameter λ.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Community scoring functions f(S) for weighted and unweighted networks.</ns0:figDesc><ns0:table>
                      unweighted f(S)                                 weighted f(S)
↑ Internal density    m_S / (n_S(n_S−1)/2)                            m̃_S / (n_S(n_S−1)/2)
↑ Edges Inside        m_S                                             m̃_S
↑ Average Degree      2m_S / n_S                                      2m̃_S / n_S
↓ Expansion           c_S / n_S                                       c̃_S / n_S
↓ Cut Ratio           c_S / (n_S(n−n_S))                              c̃_S / (n_S(n−n_S))
↓ Conductance         c_S / (2m_S + c_S)                              c̃_S / (2m̃_S + c̃_S)
↓ Normalized Cut      c_S/(2m_S+c_S) + c_S/(2(m−m_S)+c_S)             c̃_S/(2m̃_S+c̃_S) + c̃_S/(2(m̃−m̃_S)+c̃_S)
↓ Maximum ODF         max_{u∈S} |{(u,v)∈E : v∉S}| / d(u)              max_{u∈S} ∑_{v∉S} w_uv / d̃(u)
↓ Average ODF         (1/n_S) ∑_{u∈S} |{(u,v)∈E : v∉S}| / d(u)        (1/n_S) ∑_{u∈S} ∑_{v∉S} w_uv / d̃(u)
</ns0:table><ns0:note>¹ For every variable or function defined over the unweighted graph, we will use a '∼' to denote its weighted counterpart.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Contingency table of partitions P and P′, with labelings r and s.</ns0:figDesc><ns0:table>
<ns0:row><ns0:cell /><ns0:cell>P′_1</ns0:cell><ns0:cell>P′_2</ns0:cell><ns0:cell>...</ns0:cell><ns0:cell>P′_S</ns0:cell><ns0:cell>sum</ns0:cell></ns0:row>
<ns0:row><ns0:cell>P_1</ns0:cell><ns0:cell>c_11</ns0:cell><ns0:cell>c_12</ns0:cell><ns0:cell>...</ns0:cell><ns0:cell>c_1S</ns0:cell><ns0:cell>a_1</ns0:cell></ns0:row>
<ns0:row><ns0:cell>P_2</ns0:cell><ns0:cell>c_21</ns0:cell><ns0:cell>c_22</ns0:cell><ns0:cell>...</ns0:cell><ns0:cell>c_2S</ns0:cell><ns0:cell>a_2</ns0:cell></ns0:row>
<ns0:row><ns0:cell>...</ns0:cell></ns0:row>
<ns0:row><ns0:cell>P_R</ns0:cell><ns0:cell>c_R1</ns0:cell><ns0:cell>c_R2</ns0:cell><ns0:cell>...</ns0:cell><ns0:cell>c_RS</ns0:cell><ns0:cell>a_R</ns0:cell></ns0:row>
<ns0:row><ns0:cell>sum</ns0:cell><ns0:cell>b_1</ns0:cell><ns0:cell>b_2</ns0:cell><ns0:cell>...</ns0:cell><ns0:cell>b_S</ns0:cell><ns0:cell>n = ∑ c_ij</ns0:cell></ns0:row>
</ns0:table><ns0:note>
   ( 0.03λ  0.01   0.01   0.03  )
   ( 0.01   0.02λ  0.05   0.01  )
   ( 0.01   0.05   0.02λ  0.02  )
   ( 0.03   0.01   0.02   0.03λ )
With λ = 1 the network will be quite uniform, but as it increases, the high values in the diagonal compared to the rest of the matrix will result in a very strong community structure, which should be detected by the clustering algorithms.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head /><ns0:label /><ns0:figDesc>Stable clusterings are expected to persist through the process, giving small mean</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:note place='foot' n='2'>We assume the weight function w_uv is defined for every pair of vertices u, v of the weighted graph, with w_uv = 0 if there is no edge between them.</ns0:note>
<ns0:note place='foot' n='3'>To prevent confusion between the function d_S(·) and the median value d_m (which only depends on G) we will always refer to subgraphs of G with uppercase letters.</ns0:note>
</ns0:body>
" | "Minor Revision (2nd round)
A. Arratia and M. Renedo Mirambell
May 2021
Dear Editor,
First of all, we would like to thank you for managing our manuscript and
the reviewers for their constructive remarks. We completed the requested minor
revision of our manuscript entitled Clustering assessment in weighted networks,
Paper ID: CS-2021:02:57675, submitted to PeerJ Computer Science, on the basis
of the observations made by the reviewers.
Below you will find the details of our responses to each of the reviewers' observations. The reviewers' observations and our responses have been organized
as a series of numbered questions (Q) and answers (A).
Thanks again for all your feedback and support that helped us improve the
paper.
Best regards,
Argimiro Arratia and Martí Renedo Mirambell
1 Reviewer 2
(Q) the figures should be re-generated with high quality, and resolution, font
size, etc. Especially, the figure 1 and figure 2 are poor quality.
(A) Done. We redrew figures 1 and 2 with higher resolution, producing the
legends with more inter-line space, so now they can be read clearly.
2 Reviewer 3
(Q1) My biggest criticism about the article remains about the lack of evaluation
of how well a method performs.
(A1) In terms of evaluating clustering methods, the measures of significance
and stability can be used both to compare algorithms with each other (for example in case we wanted to determine if some new algorithm improves on existing
state-of-the-art methods), or on its own by testing it against the randomized
graphs, as shown in the article. Of course, there is no single definition of what
a good cluster is, so depending on the application some of the metrics provided
might be more relevant than others and can be prioritized.
(Q2) I think authors could use more formal of negative tenses in most part
of the text, instead of abbreviation
(A2) We have revised all English contractions/abbreviations and changed to
a formal text:
l 116: changed abbreviation “graph’s score” to “score of the graph”
l 197: doesn’t → does not
l 167, 236: didn’t → did not
l 167: wouldn’t → would not
l 274, 278, 505, 517, 537, 560, 570: doesn’t → does not
l 341, 578, 592, 599: don’t → do not
l 596: haven’t → have not
(Q3) Minor suggestion: in line 52, – “that measure some topological characteristics.” - can you specify the topological characteristics here to be more
precise?
(A3) Following that sentence in line 56 we have added: “Most of these topological characteristics focus on the relation between the external and internal
connectivity of clusters, density of edges and degree distributions. ” to clarify
the nature of these topological characteristics. Then each one of the scoring
functions named in Table 1 is a specification of the topological property the
function is addressing.
(Q4) Figure 2 – labels are messy. One on top of other.
(A4) This is fixed (see above answer to Reviewer 2).
" | Here is a paper. Please give your review comments after reading it. |
157 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Optimizing global connectivity in spatial networks, either through rewiring or adding edges, can increase the flow of information and increase the resilience of the network to failures. Yet, rewiring is not feasible for systems with fixed edges and optimizing global connectivity may not result in optimal local connectivity in systems where that is wanted.</ns0:p><ns0:p>We describe the local network connectivity optimization problem, where costly edges are added to a system with an established and fixed edge network to increase connectivity to a specific location, such as in transportation and telecommunication systems. Solutions to this problem maximize the number of nodes within a given distance to a focal node in the network while they minimize the number and length of additional connections. We compare several heuristics applied to random networks, including two novel planar random networks that are useful for spatial network simulation research, a real-world transportation case study, and a set of real-world social network data. Across network types, significant variation in the relationship between nodal characteristics and the optimal connections was observed. These characteristics, along with the computational costs of the search for optimal solutions, highlight the need for prescribing effective heuristics. We offer a novel formulation of the genetic algorithm, which outperforms existing techniques. We describe how this heuristic can be applied to other combinatorial and dynamic problems.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Spatial networks have become more popular as the interest in networks has spread into more fields and spatial data, and the computational power and methods to analyze it, have become more accessible. In terms of analysis, spatial network optimization has been at the forefront and focused on increasing network connectivity and information flow <ns0:ref type='bibr' target='#b50'>(Schrijver, 2002;</ns0:ref><ns0:ref type='bibr' target='#b53'>Wu et al., 2004)</ns0:ref>. Heuristics have been developed to rearrange existing networks or create new ones that optimize the topology of the network for synchronizability <ns0:ref type='bibr' target='#b33'>(Khafa and Jalili, 2019)</ns0:ref>. Several effective methods have also been developed to add new edges to a network that minimize the average shortest path distance <ns0:ref type='bibr' target='#b41'>(Meyerson and Tagiku, 2009)</ns0:ref>, minimize the network diameter <ns0:ref type='bibr' target='#b16'>(Demaine and Zadimoghaddam, 2010)</ns0:ref>, or maximize the network's centrality <ns0:ref type='bibr' target='#b30'>(Jiang et al., 2011)</ns0:ref> or connectivity <ns0:ref type='bibr' target='#b1'>(Alenazi et al., 2014)</ns0:ref>.</ns0:p><ns0:p>While the optimization of spatial networks' global characteristics has been well studied, optimizing local existing network connectivity around a specific node or location with the introduction of costly new edges has not been explored, and yet it is important in several domains. For example, increasing an existing network's connectivity around a focal node while minimizing the costs associated with the number and lengths of additional connections is essential in network layout planning for telecommunications and computer systems <ns0:ref type='bibr' target='#b48'>(Resende and Pardalos, 2006;</ns0:ref><ns0:ref type='bibr' target='#b17'>Donoso and Fabregat, 2007)</ns0:ref> and in increasing or slowing the spread of information or diseases in social networks <ns0:ref type='bibr' target='#b22'>(Gavrilets et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b19'>Eubank et al., 2004)</ns0:ref>. This local network connectivity problem is particularly important with transportation planning in urban environments, where the weights of the network edges can be physical distances or riderships and future street connections or transportation lines can be significantly costly and impact flow to established facilities. For example, planners can optimize thoroughfare connectivity around schools to foster student walking and biking while reducing busing costs <ns0:ref type='bibr' target='#b5'>(Auerbach et al., 2021)</ns0:ref> and increase accessibility and reduce patient travel time to health care facilities <ns0:ref type='bibr' target='#b9'>(Branas et al., 2005)</ns0:ref>.</ns0:p><ns0:p>The search for new edges that maximize connectivity to a focal node and minimize the costs of these new edges is not well understood, and this search for optimal solutions can become costly when networks are large and complex. To fill this knowledge gap, we compare a set of heuristics to optimize local network connectivity applied to real-world networks and randomly generated ones. These heuristics are drawn from the <ns0:ref type='bibr' target='#b42'>Mladenović et al. 
(2007)</ns0:ref> review of combinatorial heuristics and from location models that include a spatial component <ns0:ref type='bibr' target='#b10'>(Brimberg and Hodgson, 2011)</ns0:ref>. We also offer a genetic algorithm with a novel chromosome formulation where the genes are not properties of a specific variable but weights for the probability to move in a given dimension across the solution space.</ns0:p><ns0:p>These optimization heuristics are then applied to randomly generated networks that vary in complexity and size to evaluate their efficacy in finding the optimal new connections that maximize local connectivity. Included in this set of random graphs, we provide two novel formulations of random planar networks based on the Voronoi diagram and the Delaunay triangulation. To complement the random network analysis, the network connectivity optimization methods are also applied to two real-world case studies, one from urban transportation planning and another from social network analysis. In this study, we show that optimization heuristics are preferred for the analysis and practice due to the nonlinearity of the solution space and the optimal solution's dependence on nodal characteristics, such as distance to the focal node. The novel genetic algorithm outperformed the other heuristics as it was able to move away from suboptimal solutions and explore distant solutions more quickly. This is important as researchers and engineers are working with networks of growing complexity and size.</ns0:p><ns0:p>The organization of this paper is as follows. The next section describes the formulation of the connectivity problem in more detail, the local search methodology, and the optimization heuristics (see Appendix A for the specific pseudocode of the optimization algorithms). This is followed by a section that details the data used for the study, including descriptions of the random networks, the transportation case study street networks, and the social network data. Results of the heuristics applied to the random networks and the case studies are then presented. The paper concludes with a detailed discussion of these heuristic results, the further implications of these techniques for urban transportation planning, and future work for this avenue of research.</ns0:p></ns0:div>
<ns0:div><ns0:head>METHODS AND DATA</ns0:head></ns0:div>
<ns0:div><ns0:head>Formulation of the Network Connectivity Problem</ns0:head><ns0:p>For the description of the optimization methodology the following nomenclature will be used (see Table 1). In connectivity optimization, network nodes are first segmented and assigned to 'close' and 'distant' sets by a chosen threshold distance D from the network's focal node F. The number of nodes ν in a network is N, and nodes are separated into two sets based on their shortest network path distances to the focal node, d(ν, F). The nodes that are within this distance are assigned to the 'close' set, N^C ⊂ N, i.e., ν ∈ N^C if d(ν, F) ≤ D, and F ∈ N^C (see Figure 1(A)). The nodes that are outside the threshold shortest network path distance to the focal node, D, are assigned to the 'distant' set, N^D ⊂ N, i.e., ν ∈ N^D if d(ν, F) > D.</ns0:p><ns0:p>When a new connection is added to the network, the shortest path distance from each distant node to the focal node is recalculated. If there are any distant nodes that are now within the threshold distance to the focal node, they are assigned to the new set N^C_{i,j}. For example, if a new connection is established between distant node i and close node j, then k ∈ N^C_{i,j} if k ∈ N^D and d(k, i) + d(i, j) + d(j, F) ≤ D (Figure 1(C)). The benefit of this new connection is B(i, j) and the cost associated with the new connection is C(i, j). The optimal solution is the solution with the greatest benefit, or number of new nodes now within the
distance to the focal node, which can be expressed as the bi-objective function<ns0:formula xml:id='formula_3'>O* = max B(i, j) + min C(i, j),  s.t.  ∑_i ν_i = 1 (i ∈ N^C),  ∑_j ν_j = 1 (j ∈ N^D)<ns0:label>(1)</ns0:label></ns0:formula>where benefits dominate costs. For example, if B(i, j) = B(m, n) and C(i, j) < C(m, n), then the optimal additional edge is between i and j. For nondominated solutions, we select the solution that minimizes the distance to the so-called ideal point. The ideal point represents the solution that simultaneously maximizes the benefit and minimizes the cost. For the analysis in this paper the formulation of the objective function is as follows. For a new edge between i and j, the number of nodes in the set N^C_{i,j} is the benefit of this new connection, B(i, j) = |N^C_{i,j}|, and the cost of the new connection is the length of the edge, C(i, j) = d(i, j). Therefore,<ns0:formula xml:id='formula_4'>O* = max |N^C_{i,j}| + min d(i, j),  s.t.  ∑_i ν_i = 1 (i ∈ N^C),  ∑_j ν_j = 1 (j ∈ N^D)<ns0:label>(2)</ns0:label></ns0:formula>Most of the heuristics presented below are dependent on the number of iterations (t) and terminate when the solutions converge, O_t = O_{t−1}, or the solution does not improve, O_t < O_{t−1}.</ns0:p></ns0:div>
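To make the objective concrete, here is a minimal sketch (ours, not the authors' Appendix A pseudocode) that evaluates the benefit B(i, j) = |N^C_{i,j}| and the cost C(i, j) = d(i, j) of a single candidate edge with networkx; the supplied `length` stands in for the new-edge distance d(i, j):

```python
import networkx as nx

def evaluate_candidate(G, F, D, i, j, length):
    """Benefit/cost of adding edge (i, j): i is a distant node, j a close node,
    and `length` is the cost d(i, j) of the proposed connection."""
    dist = nx.single_source_dijkstra_path_length(G, F, weight="weight")
    close = {v for v, d in dist.items() if d <= D}
    H = G.copy()
    H.add_edge(i, j, weight=length)
    new_dist = nx.single_source_dijkstra_path_length(H, F, weight="weight")
    newly_close = {v for v, d in new_dist.items() if d <= D} - close
    return len(newly_close), length  # B(i, j), C(i, j)

G = nx.Graph()
G.add_weighted_edges_from([("F", "a", 1.0), ("a", "b", 1.5), ("b", "c", 2.0)])
print(evaluate_candidate(G, "F", 2.0, "c", "a", 0.5))  # node c becomes close
```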
<ns0:div><ns0:head>Local Search Methodology</ns0:head><ns0:p>For a network of size N the number of new connections to evaluate has an upper bound of N²/4, therefore heuristics may be employed to identify (nearly) optimal solutions more quickly than an exhaustive search as networks get larger. These optimization algorithms require a search space to explore, and using nodal characteristics we create such a multidimensional solution space (see Table 2). These nodal characteristics are explored to find the critical network properties for connectivity optimization and their impact on the performance in finding the optimal solution. As several nodes in a network can potentially have the same nodal characteristic values, the local searches include a random shuffling routine. In detail, the following individual node characteristics were used to create the dimensions of the solution space: (i) distance to the focal node, (ii) degree centrality, (iii) closeness centrality, (iv) betweenness centrality, (v) eigenvector centrality, (vi) pagerank centrality, (vii) weighted clustering coefficient, and (viii) the neighbor nodes directly connected to the node. Nodes were sorted by their distance to the focal node, and moving in this solution dimension may result in lower connectivity length costs but may not maximize the number of nodes ultimately connected to the focal node. In contrast, sorting nodes by their centrality, i.e., the importance of the node, could result in maximizing the number of nodes within the specified distance to the focal node but with the possibility of higher connectivity length costs compared to selecting nodes by other characteristics. To avoid these two extreme possibilities, several commonly used measures of centrality are explored: degree, the number of edges incident to a node; closeness centrality, the average length of the shortest path between the node and all other nodes in the network <ns0:ref type='bibr' target='#b8'>(Bavelas, 1950)</ns0:ref>; betweenness centrality, the frequency of a node included in the shortest paths between all other node pairs <ns0:ref type='bibr' target='#b21'>(Freeman, 1977)</ns0:ref>; eigenvector centrality, which is a relative sorting of nodes such that nodes with high values are connected to other nodes with high values <ns0:ref type='bibr' target='#b44'>(Newman, 2008)</ns0:ref>; and pagerank centrality, a variant of eigenvector centrality that sorts nodes based on their probability of being connected to a randomly selected node and which is commonly used in web-page rankings <ns0:ref type='bibr' target='#b11'>(Brin and Page, 1998)</ns0:ref>. The weighted clustering coefficient of a node is the count of the triplets in the neighborhood of the node, accounting for the weights of the edges, divided by the maximum possible number of triplets that could occur <ns0:ref type='bibr' target='#b7'>(Barrat et al., 2004)</ns0:ref>. The weights used here are the spatial distances between nodes, w_ij = d(i, j). The nodes that were directly connected to the node under evaluation were also used.</ns0:p></ns0:div>
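As one illustration (ours; it assumes the edge attribute "weight" holds the spatial distances, and the solver parameters are arbitrary), the eight search dimensions can be computed with networkx as follows:

```python
import networkx as nx

def nodal_characteristics(G, F):
    """Characteristics used as dimensions of the local search space."""
    dist = nx.single_source_dijkstra_path_length(G, F, weight="weight")
    return {
        "distance_to_focal": dict(dist),
        "degree": nx.degree_centrality(G),
        "closeness": nx.closeness_centrality(G, distance="weight"),
        "betweenness": nx.betweenness_centrality(G, weight="weight"),
        "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
        "pagerank": nx.pagerank(G),
        "clustering": nx.clustering(G, weight="weight"),
        "neighbors": {v: set(G.neighbors(v)) for v in G},
    }
```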
<ns0:div><ns0:head>Network Connectivity Optimization Heuristics</ns0:head><ns0:p>The following techniques were implemented for the network connectivity optimization study given their extensive use in optimization: hill climbing with random restart <ns0:ref type='bibr' target='#b49'>(Russell and Norvig, 2004)</ns0:ref>; stochastic hill climbing <ns0:ref type='bibr' target='#b24'>(Greiner, 1992)</ns0:ref>; hill climbing with a variable neighborhood search <ns0:ref type='bibr' target='#b43'>(Mladenović and Hansen, 1997)</ns0:ref>; simulated annealing, which has a history of applications in graph problems <ns0:ref type='bibr' target='#b36'>(Kirkpatrick et al., 1983;</ns0:ref><ns0:ref type='bibr' target='#b31'>Johnson et al., 1989;</ns0:ref><ns0:ref type='bibr' target='#b35'>Kirkpatrick, 1984)</ns0:ref>; and genetic algorithms, which have been successfully used for combinatorial optimization <ns0:ref type='bibr' target='#b3'>(Anderson and Ferris, 1994;</ns0:ref><ns0:ref type='bibr' target='#b28'>Jaramillo et al., 2002)</ns0:ref>. A Tabu heuristic was not employed as it has been observed to be an inferior method for multi-objective optimization problems compared to simulated annealing and genetic algorithms <ns0:ref type='bibr' target='#b23'>(Golden and Skiscim, 1986;</ns0:ref><ns0:ref type='bibr' target='#b34'>Kim et al., 2016)</ns0:ref>. Parameter selection was simplified for easy comparison of the methods (see the Supplemental Information for the algorithms). To ensure that the heuristics did not converge on suboptimal solutions due to the initial starting values, random restart, i.e., randomly selecting initial nodes to avoid local optima and running the routine until the optimal solution is found, was used.</ns0:p><ns0:p>Figure 1. Diagram of the sequence of the network connectivity optimization problem. The close nodes that are within a threshold network distance (orange dashed circle) from the focal node (black square) are colored green, distant nodes that could be within the threshold network distance with additional edges are colored red, and the gray distant nodes are outside the threshold distance regardless of any additional connections. Figure (A) is an example graph, (B) shows the same graph with the optimal new connection that maximizes the number of additional nodes within the threshold network distance and minimizes the length of the new connection, and the inset (C) highlights this optimal connection, between nodes i and j.</ns0:p></ns0:div>
<ns0:div><ns0:head>Exhaustive search (ES).</ns0:head><ns0:p>The exhaustive search calculates the solution for every pair of distant and close nodes for a network (see Algorithm 1 in the Supplemental Information). While this approach ensures that the optimal edges are found, as the number of nodes increases and therefore the number of possible connections between close and distant nodes increases, it can become computationally expensive and time-consuming to implement. Since the results of ES provide the optimal solution, the time it takes to find and
evaluate all solutions is used to benchmark the other heuristics.</ns0:p><ns0:p>Hill climbing (HC). The solution space was observed to be hilly from the exhaustive search results, so several modifications were introduced to the hill climbing technique to avoid getting stuck in suboptimal solutions (Algorithm 2 in the Supplemental Information). A stochastic hill climbing (HCS) routine, an advanced search method based on HC, is also explored where the selection of nodes for the next iteration is randomly picked with<ns0:formula xml:id='formula_6'>probability(i, j) = O(i, j) / ∑_{(m,n)} O(m, n),<ns0:label>(3)</ns0:label></ns0:formula>which terminates when an improved solution is no longer found (Algorithm 3 in the Supplemental Information). A hill climbing algorithm is coupled with a variable neighborhood (HCVN) where the size of the neighborhood starts with the nearest neighbors (η = 1) and is updated as follows:<ns0:formula xml:id='formula_7'>η = 1 if O_t > O_{t−1};  η = η + 1 if O_t ≤ O_{t−1},<ns0:label>(4)</ns0:label></ns0:formula>and the HCVN method terminates after η_max is reached (Algorithm 4 in the Supplemental Information).</ns0:p><ns0:p>Simulated annealing (SA). As a meta-heuristic approach, the simulated annealing method randomly selects an initial solution from the solution space to avoid entrapment in a local optimum. At each iteration, the heuristic evaluates the neighboring solutions and, if it does not find an improved solution, it moves to a new solution with the following probability:<ns0:formula xml:id='formula_8'>probability(i, j) = exp(−(O_{t−1} − O(i, j)) / t).<ns0:label>(5)</ns0:label></ns0:formula>The distance of the move decreases with the number of iterations until a better solution is no longer found (Algorithm 5 in the Supplemental Information).</ns0:p><ns0:p>Genetic algorithm (GA). The genetic algorithm begins with a population of P randomly selected solutions with a set of chromosomes composed of genes which represent the weights of selecting a neighbor and are all initialized to unity (Algorithm 6 in the Supplemental Information). During each iteration of the method, solution scores (fitnesses) are computed by<ns0:formula xml:id='formula_9'>f(i, j) = O(i, j) / ∑_{(m,n)} O(m, n),<ns0:label>(6)</ns0:label></ns0:formula>and a new generation of solutions is selected based on the following probability condition<ns0:formula xml:id='formula_10'>probability(i, j) = (s · f(i, j) + (1 − s)) / ∑_{(m,n)} (s · f(m, n) + (1 − s)),<ns0:label>(7)</ns0:label></ns0:formula>where s is the selection coefficient. Weak selection, s ≪ 1, is used to ensure that random mutations impact solution frequency. Crossover is conducted by alternating the weights for the offspring from each parent, also known as cycle crossover <ns0:ref type='bibr' target='#b45'>(Oliver et al., 1987)</ns0:ref>. Mutations are introduced at a low rate µ ≪ 1 for each gene and increase the nodal characteristic selection weight by one. 
The probability that characteristic m is used to find a neighbor for node i is given by<ns0:formula xml:id='formula_12'>probability(characteristic m) = gene(i, m) / (∑_k gene(i, k) / K),<ns0:label>(8)</ns0:label></ns0:formula>where K is the total number of nodal characteristics. This formulation ensures that the nodal characteristics that improve the solution increase in weight, which results in a greater probability that they will be selected for neighborhood exploration, and ultimately reduces the size of the neighborhood search. Among these methods, the genetic algorithm presented here introduces a novel chromosome formulation where the genes are not properties of a specific variable but weights for the probability to move in a given direction in the solution space. This allows the method to occasionally explore different nodal characteristics (at mutation rate µ) while conducting a local neighborhood search.</ns0:p></ns0:div>
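A minimal sketch (ours; the gene keys, characteristic names, and mutation handling are illustrative only, and selection and crossover are omitted) of the weight-based characteristic choice behind Eq. (8), with the low-rate mutation that increments a gene by one:

```python
import random

K_CHARACTERISTICS = ["distance", "degree", "closeness", "betweenness",
                     "eigenvector", "pagerank", "clustering", "neighbors"]

def pick_characteristic(genes, node, mu=0.01):
    """Choose which nodal characteristic to follow when moving from `node`,
    with probability proportional to the gene weights, then apply a rare
    mutation that increases one weight by one (rate mu per gene)."""
    weights = [genes[(node, k)] for k in K_CHARACTERISTICS]
    choice = random.choices(K_CHARACTERISTICS, weights=weights, k=1)[0]
    for k in K_CHARACTERISTICS:
        if random.random() < mu:
            genes[(node, k)] += 1
    return choice

genes = {("v1", k): 1 for k in K_CHARACTERISTICS}  # all weights start at unity
print(pick_characteristic(genes, "v1"))
```

Because characteristics that led to improved solutions accumulate weight across generations, the search increasingly follows the dimensions that matter for the given network while retaining a nonzero chance of exploring the others.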
<ns0:div><ns0:head>Simulated Data: Random Networks</ns0:head><ns0:p>To test the efficacy of these optimization heuristics in finding the optimal new network connections, they were applied to randomly generated networks that vary in complexity and size. Six types of random graph networks were generated to analyze the efficacy of the optimization heuristics for systems with different topologies which are generally representative of naturally occurring and built systems: (1) Erdös-Rényi networks, (2) Watts-Strogatz networks, (3) Barabási and Albert networks, (4) Klemm and Eguílez networks, (5) Delaunay triangulation networks, and (6) Voronoi diagrams. Erdös-Rényi random networks are constructed by randomly creating connections between pairs of nodes with a probability <ns0:ref type='bibr' target='#b18'>(Erdös and Rényi, 1959)</ns0:ref>. These networks, even though they have random connections, consistently have short average path lengths and irregular connections, both of which are commonly found in natural systems. The Watts-Strogatz networks also have random connections but the networks also form clusters, another feature commonly found in real-world networks <ns0:ref type='bibr' target='#b52'>(Watts and Strogatz, 1998)</ns0:ref>. The Barabási-Albert model produces random structures with a small number of highly connected nodes, 'hubs', which are observed in numerous types of networks <ns0:ref type='bibr' target='#b6'>(Barabási and Albert, 1999;</ns0:ref><ns0:ref type='bibr' target='#b0'>Albert and Barabási, 2002)</ns0:ref>. Klemm and Eguílez networks have random connections, clusters, and hubs <ns0:ref type='bibr' target='#b37'>(Klemm and Eguílez, 2002)</ns0:ref>. See Supplemental Information Figure S.1 (A)-(D) and <ns0:ref type='bibr' target='#b47'>Prettejohn et al. (2011)</ns0:ref> for the algorithms used to generate these networks.</ns0:p><ns0:p>We also introduce two novel types of random planar network versions of the Voronoi diagram and the Delaunay triangulation (Supplemental Information Figure S.1 (E) and (F)). Planarity is particularly important in many fields, and networks generated from Voronoi diagrams and Delaunay triangles have been used in spatial health epidemiology <ns0:ref type='bibr' target='#b32'>(Johnson, 2007)</ns0:ref>, transportation flow problems <ns0:ref type='bibr' target='#b51'>(Steffen and Seyfried, 2010;</ns0:ref><ns0:ref type='bibr' target='#b46'>Pablo-Martì and Sánchez, 2017)</ns0:ref>, terrain surface modeling <ns0:ref type='bibr' target='#b20'>(Floriani et al., 1985)</ns0:ref>, telecommunications <ns0:ref type='bibr' target='#b40'>(Meguerdichian et al., 2001)</ns0:ref>, computer networks design <ns0:ref type='bibr' target='#b38'>(Liebeherr and Nahas, 2001)</ns0:ref>, and hazard avoidance systems in autonomous vehicles <ns0:ref type='bibr' target='#b4'>(Anderson et al., 2012)</ns0:ref>. Delaunay triangulation maximizes the minimum angles between three nodes to generate planar graphs with consistent network characteristics, while Voronoi diagrams, the dual of a Delaunay triangulation, are composed of points and cells such that each cell is closer to its point than any other point <ns0:ref type='bibr' target='#b14'>(Delaunay, 1934)</ns0:ref>. To modify these networks, edges are removed randomly based on their distance from the focal node with probability<ns0:formula xml:id='formula_15'>p_R · max(d(i, F), d(j, F)) / max_k d(k, F),<ns0:label>(9)</ns0:label></ns0:formula>where p_R is the removal probability, weighted by the normalized edge distance from the focal node.</ns0:p><ns0:p>When edges are randomly removed from the connected Delaunay network or Voronoi network, with weights given by node distance from a focal node, these networks display some of the properties similarly found in the networks mentioned above, such as complexity and randomness. 
Yet, these networks have the added component of being planar and having edge weights that can be framed as physical distances.</ns0:p><ns0:p>To compare the efficacy of different optimization methods for different network topologies, identifying the best set of parameters is critical. Parameter values were selected for each type of random network to ensure network complexity (Supplemental Information Table S.6 summarizes the parameters which were used in the analysis). Variation in network size was also explored, and the most connected node in each network was selected as the focal node. Uniformly randomly generated edge weights in [0,1] were used for the network distances and the threshold distance was set to ensure that half of the nodes were initially within the distance to the focal node. The costs and benefits were normalized using the ranges from the exhaustive search routine as a benchmark to compare the results from the different optimization methods.</ns0:p></ns0:div>
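A minimal sketch (ours) of the Delaunay variant: build the triangulation, then remove each edge with the distance-weighted probability of Eq. (9). Euclidean distances to the focal node stand in here for d(i, F), and all parameter values are arbitrary:

```python
import numpy as np
import networkx as nx
from scipy.spatial import Delaunay

def delaunay_random_network(points, focal, p_R, rng):
    """Delaunay triangulation of `points` with edges dropped at probability
    p_R weighted by normalized distance from the focal node (Eq. 9)."""
    tri = Delaunay(points)
    G = nx.Graph()
    for simplex in tri.simplices:               # each triangle contributes 3 edges
        for a, b in [(0, 1), (1, 2), (0, 2)]:
            i, j = simplex[a], simplex[b]
            G.add_edge(i, j, weight=float(np.linalg.norm(points[i] - points[j])))
    d = {v: float(np.linalg.norm(points[v] - points[focal])) for v in G}
    d_max = max(d.values())
    for i, j in list(G.edges()):
        if rng.random() < p_R * max(d[i], d[j]) / d_max:
            G.remove_edge(i, j)
    return G

rng = np.random.default_rng(0)
pts = rng.random((50, 2))
G = delaunay_random_network(pts, focal=0, p_R=0.3, rng=rng)
```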
<ns0:div><ns0:head>Empirical Data: Transportation Case Study</ns0:head><ns0:p>To complement the random network analysis, the network connectivity optimization methods were applied to a study of urban transportation planning. Network connectivity optimization methods were used to evaluate the potential costs and benefits of increased thoroughfare connectivity for student walking or biking to school. It is assumed that expanding the connectivity around a school would allow for more households, and students, to be included within the walking distance to the school. If more students actively commute to school, this reduces the busing costs for the school system and increases the health and academic achievement of the students (Centers for Disease Control and Prevention, 2010). Yet, streets have associated land, construction, and maintenance costs that are primarily dependent on their length.</ns0:p><ns0:p>Networks composed of street edges and residence nodes around several schools from a representative US school system were used for the analysis. Ten suburban and rural schools from Knox County, TN, were
selected for the analysis, including seven elementary and three middle, that would benefit the most from increased thoroughfare connectivity, i.e., had the most students within the Euclidean walking distance but not the network distance to the school (characteristics of these schools are provided in the Supplemental Information Table S.3). Urban schools were not used since the street connectivity around the schools was significantly high and the benefit from additional thoroughfares would be low. Residential parcels and street networks were provided by the Knoxville-Knox County Metropolitan Planning Commission and the residential parcels were converted to nodes and placed on the nearest street edge. The residences within 1 mile and 1.5 miles, for the elementary schools and the middle schools respectively, are considered close nodes while the nodes outside of these distances were classified as distant nodes (see Figure 2). The school networks do not generally display the characteristics of complex networks: they had low average degree, large path lengths, and were not efficient, yet they did have power distributions of connectivity with few intersections having a large number of street connections (see Supplemental Information Table S.3). The networks were evaluated with each optimization method to maximize the number of close residences connected to the school and minimize the distance of the new thoroughfare. The costs and benefits of these street connections were normalized using the ranges from the exhaustive search routine as a benchmark to compare the results from the different optimization methods.</ns0:p></ns0:div>
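A minimal sketch (ours) of the residence classification step; it simplifies the paper's procedure by snapping residences to the nearest street-network node rather than the nearest street edge, and assumes edges carry a hypothetical "length" attribute in miles:

```python
import numpy as np
import networkx as nx

def classify_residences(streets, school, positions, residences, walk_limit):
    """Snap each residence to its nearest street-network node and label it
    'close' if the network walking distance to the school is within the limit."""
    nodes = list(streets.nodes())
    xy = np.array([positions[v] for v in nodes])
    dist = nx.single_source_dijkstra_path_length(streets, school, weight="length")
    labels = {}
    for r, p in residences.items():
        nearest = nodes[int(np.argmin(np.linalg.norm(xy - p, axis=1)))]
        labels[r] = "close" if dist.get(nearest, np.inf) <= walk_limit else "distant"
    return labels

streets = nx.Graph()
streets.add_edge("s", "a", length=0.4); streets.add_edge("a", "b", length=0.7)
pos = {"s": np.array([0.0, 0.0]), "a": np.array([0.4, 0.0]), "b": np.array([1.1, 0.0])}
homes = {"h1": np.array([0.45, 0.05]), "h2": np.array([1.0, 0.1])}
print(classify_residences(streets, "s", pos, homes, walk_limit=1.0))
```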
<ns0:div><ns0:head>Empirical Data: Social Network Analysis</ns0:head><ns0:p>The topology of a network and the management of its system can improve information flow and have been shown to help counter the negative effects of social and environmental crises <ns0:ref type='bibr' target='#b26'>(Helbing et al., 2015)</ns0:ref>. Specifically, increased social network connectivity can reduce the time it takes information to spread among its members. For example, quickly notifying those nearby and unaware of an active shooter event can save lives. Using a set of social network data coupled with location data, we evaluated the heuristics to optimize social network connectivity for those nearby a specific location.</ns0:p><ns0:p>The data set used for this analysis was from Gowalla, a location-based social networking website where users share their locations by checking in, and this data set was collected and published by <ns0:ref type='bibr' target='#b13'>Cho et al. (2011)</ns0:ref>. This social network is undirected and consists of 196,591 nodes (members) and 950,327 edges (social connections). There is a total of 6,442,890 check-ins of these members over the period of February 2009 to October 2010 (network characteristics are provided in the Supplemental Information Table S.4). For the analysis we simulated 100 crisis incidents at a highly populated urban place, Grand
Central Terminal in New York City (NY). Grand Central Terminal was selected as the location of the simulated events as it is a major transportation hub located in the center of the city that serves over a million commuters and visitors daily. New York City is also a major metropolis with tens of thousands of Gowalla users present in the data set, and the city has a history of incidents, such as terror attacks. Ten dates were selected at random, and for each date ten times were randomly selected between 1200 and 1800 local to simulate a crisis event (see Figure 3).</ns0:p><ns0:p>The social network problem was formulated such that an event occurred at the location (Grand Central Station) and members who were within 0.5 miles of the event should be notified. The members within Grand Central Station are automatically notified of the event whereas those outside can be notified through their online social network if a member in their network is aware. The event is considered to be serious enough that those members aware of it will share the information on the social network. New connections are evaluated based on the number of additional nearby members who become aware of the event.</ns0:p></ns0:div>
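A minimal sketch (ours) of scoring one candidate social tie under this formulation: because every aware member shares, awareness fills each connected component that contains an initially aware member, and the benefit is the number of nearby members who become newly aware:

```python
import networkx as nx

def newly_aware(social, aware, nearby, new_edge):
    """Nearby members who become aware after adding `new_edge`, assuming
    every aware member shares the information with all of their ties."""
    H = social.copy()
    H.add_edge(*new_edge)
    reached = set()
    for u in aware:
        if u in H and u not in reached:
            reached |= nx.node_connected_component(H, u)
    return (reached & set(nearby)) - set(aware)

social = nx.Graph([("a", "b"), ("c", "d")])
benefit = len(newly_aware(social, aware={"a"}, nearby={"b", "c", "d"},
                          new_edge=("b", "c")))  # c and d become aware
print(benefit)
```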
<ns0:div><ns0:head>PERFORMANCE OF THE ALGORITHMS</ns0:head><ns0:p>Several findings are worth noting regarding the performance of the heuristic algorithms used in the analysis. First, there were consistent nonlinear relationships between the nodal characteristics and the quality of the solutions for each type of random network and the school networks (see Figure 4). There was also significant variation in which nodal characteristics were correlated with the quality of the solution across networks (see Table 3). Among those, the distance between the close node and the focal node and the distance between the distant node and the close node were most often highly correlated with the quality of the solution across networks. The centrality measures were inconsistently related to the solution quality for the random networks yet were related to the optimal solutions for the social networks.</ns0:p><ns0:p>The results of the termination times and the optimal solution deviations from the optimization heuristics applied to the random networks are summarized in Figure 5 and Supplemental Information Figures S.1 and S.2. The hill climbing method was consistently faster for all of the networks, yet had the largest cost and benefit deviations. Simulated annealing and the genetic algorithm had similar termination times, but the genetic algorithm was consistently superior to all of the other methods in approaching the optimal solution. The results from the application of the optimization heuristics applied to the ten school networks are shown in Figure 5 (E) and (F). The times to termination for each heuristic according to</ns0:p></ns0:div>
<ns0:div><ns0:head>DISCUSSION, FUTURE WORK, AND LIMITATIONS</ns0:head><ns0:p>The local network connectivity problem introduced in this study is relevant to a wide range of applications and is nontrivial, as the number of potential solutions can become large even for small systems. This class of combinatorial optimization problem highlights the difficulty in determining local search routines a priori. When the exhaustive search routine was applied to random networks and the real-world networks, the optimal solutions were found to be related to nodal characteristics, which makes finding optimal solutions complex. Therefore, the heuristics employed to reduce the computational costs utilized nodal characteristics to search for solutions. Yet, these nodal characteristics were nonlinearly related to the solutions. Given the example networks, it should be noted that distance to the focal node was consistently related to the quality of the solution, as this lowers connectivity length costs, while centrality, though only intermittently correlated with solution quality, provides greater benefit through more connections.</ns0:p><ns0:p>Aside from the distance characteristic, these nodal characteristics also varied in their correlation (sign and magnitude) with solution quality for different types of networks. This makes it difficult to exclude or prioritize specific nodal characteristics for local network connectivity optimization heuristics. This could arise from the four following issues: (a) the curse of dimensionality, i.e., large sparse subspaces in the solution space; (b) the nodal characteristics are highly correlated with each other; (c) outliers; and (d) the nodal characteristics are heterogeneous across the network. Results from the street networks in the transportation case study found that the clustering coefficient was a poor measure due to the lack of triplets in the networks.</ns0:p><ns0:p>The optimization heuristics save computational time but vary considerably in their ability to find (near) optimal solutions. The stochastic hill climbing search was not effective due to the large neighborhood search space explored. In our experiment, the number of solutions checked at each iteration is > 300, which resulted in a skewed probability distribution of objective values favoring the selection of low values. This degraded the efficiency of the method, resulting in the selection of poor solutions. The variable neighborhood search method was similarly not reliable because of the significantly large neighborhood search space (the number of possible solutions explored at a given iteration could be > 5,000), and had intermediate results with cost and benefit deviations. The simulated annealing heuristic consistently took longer to converge than the other optimization methods due to the exploration of suboptimal solutions prior to moving towards better solutions, yet it was able to converge to values close to the optimal solution.</ns0:p><ns0:p>The computational costs and the variance in the importance of nodal characteristics for the random networks and real-world systems highlight the need for a heuristic that is able to quickly and effectively explore the solution space without getting stuck in a local optimum. The genetic algorithm provided in this work offers a solution to these issues and outperformed the other algorithms in terms of consistently higher solution precision and accuracy. The genetic algorithm is able to dynamically reduce the size of the neighborhood search space and the set of variables to analyze. This reduction in the local solution search space allows the genetic algorithm presented here to converge on solutions near the optimal in a timely fashion. The heuristic is also able to compare solutions from distant search spaces with nonzero probability, thereby avoiding local optima. The experiments indicate the power of biologically inspired algorithms to effectively explore multidimensional spaces (commonly found in natural systems) and their potential use in a wide variety of disciplines, including the specific applications for planning and crisis management presented above. 
The combinatorial optimization techniques employed here to identify and evaluate new street connections can also complement the optimization approaches used for other transportation planning problems, such as greenway planning <ns0:ref type='bibr' target='#b39'>(Linehan et al., 1995)</ns0:ref>, bus stop locations <ns0:ref type='bibr' target='#b27'>(Ibeas et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b15'>Delmelle et al., 2012)</ns0:ref>, and health care accessibility <ns0:ref type='bibr' target='#b25'>(Gu et al., 2010)</ns0:ref>.</ns0:p><ns0:p>There are several research directions from these proposed methods. Application of these methods and heuristics should be tested on multi-level networks, such as telecommunication systems, higher dimensional real-world networks (e.g., transportation networks with elevation), directed networks, and additional planar random networks (e.g., Gabriel graphs). Different distance measures to the focal node, such as the Hamming distance, could also be evaluated for different applications, and other real-world examples should be used for analysis. The methods presented here simplified the costs and did not account for many real-world barriers that may restrict optimal new connections found through the heuristics. For example, the transportation case study did not include legal considerations, such as right-of-way, or physical barriers, such as highways or rivers. Furthermore, the methods presented here do not evaluate whether the new connections intersect existing edges, as in existing transportation networks, and attempts to incorporate such a feature resulted in unrealistic computational times.</ns0:p></ns0:div>
<ns0:div><ns0:head>Local search selection criteria</ns0:head><ns0:p>
distance from focal node: d(i, F)
degree centrality: C^D_i = ∑_j A(i, j)
closeness centrality: C^C_i = 1 / ∑_j d(i, j)
betweenness centrality: C^B_i = ∑_{j≠i≠k} σ_jk(i) / σ_jk
eigenvector centrality: C^E_i = (1/λ) ∑_j A(i, j) x_j
pagerank centrality: C^P_i = α_P ∑_j [A(i, j) x_j / ∑_i A(i, j)] + (1 − α_P)/N
weighted clustering coefficient: c^w_i = (1 / ((C^D_i − 1) ∑_j a_ij w_ij)) ∑_{j,h} ((w_ij + w_ih)/2) a_ij a_ih a_jh
</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Figure2. Results of the transportation case study used for the analysis. A network of streets and residences around a school is shown in (A) and with the optimal new walking connection in (B). The red nodes represent the distant residences, i.e., the residences within the 1-mile Euclidean walking distance to the school but not the 1-mile street network walking distance, the green nodes are the close residences within the street network school walking distance, and the black square represents the school. The orange line is the optimal new walking connection that maximizes the number of additional residences (orange nodes) and minimizes the length of the new connection.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure3. Results of the Gowalla social network case study used for the analysis. For simplicity the network shown is the users with a check-in within 1 hour prior to the event and nearby (within 0.5-mile) the event location, Grand Central Station (NYC). (A) the network prior during the event and (B) with the optimal connection. The red nodes represent the users within 0.5-mile who are unaware of the event, the green nodes are the users aware of the event, the black line represents. The orange line is the optimal new social network connection that maximizes the number of additional nearby users aware of the event.</ns0:figDesc><ns0:graphic coords='9,141.73,219.35,205.73,205.68' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>FiguresFigure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figures S.1 and S.2. The hill climbing method was consistently faster for all of the networks, yet had the largest cost and benefit deviations. Simulated annealing and the genetic algorithm had similar termination times, but the genetic algorithm was consistently superior to all of the other methods in approaching the optimal solution. The results from the application of the optimization heuristics applied to the ten school networks are shown in Figure5(E) and (F). The times to termination for each heuristic according to</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2020:07:51376:1:1:NEW 16 May 2021) Manuscript to be reviewed Computer Science and magnitude) with solution quality for different types of networks. This makes it difficult to exclude or prioritize specific nodal characteristics for local network connectivity optimization heuristics. This could arise from the four following issues: (a) the curse of dimensionality, i.e., large sparse subspaces in the solution space; (b) the nodal characteristics are highly correlated with each other; (c) outliers; and (d) the nodal characteristics are heterogeneous across the network. Results from the street networks in the transportation case-study found that the clustering coefficient was a poor measure due to the lack of triplets in the networks.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Initial network size (Barabási and Albert graphs and Klemm and Eguílez graphs) m Degree of new nodes (Barabási and Albert graphs) p S node selection probability (Klemm and Eguílez graphs) p REdge removal probability (Delaunay and Voronoi random graphs) List of symbols and their definitions.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Symbol</ns0:cell><ns0:cell>Definition</ns0:cell></ns0:row><ns0:row><ns0:cell>ν</ns0:cell><ns0:cell>Network node</ns0:cell></ns0:row><ns0:row><ns0:cell>e</ns0:cell><ns0:cell>Network edge</ns0:cell></ns0:row><ns0:row><ns0:cell>N</ns0:cell><ns0:cell>Number of nodes in a given network, N = ∑ i ν i</ns0:cell></ns0:row><ns0:row><ns0:cell>A</ns0:cell><ns0:cell>Network adjacency matrix</ns0:cell></ns0:row><ns0:row><ns0:cell>a i j</ns0:cell><ns0:cell>Adjacency matrix element i j</ns0:cell></ns0:row><ns0:row><ns0:cell>F</ns0:cell><ns0:cell>Focal node</ns0:cell></ns0:row><ns0:row><ns0:cell>d(i, j)</ns0:cell><ns0:cell>Network distance between nodes i and j</ns0:cell></ns0:row><ns0:row><ns0:cell>D</ns0:cell><ns0:cell>Threshold distance from focal node</ns0:cell></ns0:row><ns0:row><ns0:cell>N C</ns0:cell><ns0:cell>Set of close nodes, N C ⊂ N</ns0:cell></ns0:row><ns0:row><ns0:cell>N D</ns0:cell><ns0:cell>Set of distant nodes, N D ⊂ N</ns0:cell></ns0:row><ns0:row><ns0:cell>N C i, j</ns0:cell><ns0:cell>Set of nodes that are now close after a new connection between nodes i and j</ns0:cell></ns0:row><ns0:row><ns0:cell>L F</ns0:cell><ns0:cell>Average path length to the focal node</ns0:cell></ns0:row><ns0:row><ns0:cell>C(i, j)</ns0:cell><ns0:cell>Cost of the new connection</ns0:cell></ns0:row><ns0:row><ns0:cell>B(i, j)</ns0:cell><ns0:cell>Benefit of the new connection</ns0:cell></ns0:row><ns0:row><ns0:cell>α</ns0:cell><ns0:cell>Cost weight</ns0:cell></ns0:row><ns0:row><ns0:cell>β</ns0:cell><ns0:cell>Benefit weight</ns0:cell></ns0:row><ns0:row><ns0:cell>t</ns0:cell><ns0:cell>Optimization iteration</ns0:cell></ns0:row><ns0:row><ns0:cell>O t</ns0:cell><ns0:cell>Optimal solution for iteration t</ns0:cell></ns0:row><ns0:row><ns0:cell>O *</ns0:cell><ns0:cell>Optimal solution</ns0:cell></ns0:row><ns0:row><ns0:cell>M</ns0:cell><ns0:cell>Set of long-term memory solutions</ns0:cell></ns0:row><ns0:row><ns0:cell>C D i C C i σ i j</ns0:cell><ns0:cell>Degree centrality of node i Closeness centrality of node i Shortest path between nodes i and j</ns0:cell></ns0:row><ns0:row><ns0:cell>σ jk (i)</ns0:cell><ns0:cell>Shortest path between nodes j and k that includes node i</ns0:cell></ns0:row><ns0:row><ns0:cell>C B i</ns0:cell><ns0:cell>Betweenness centrality of node i</ns0:cell></ns0:row><ns0:row><ns0:cell>λ</ns0:cell><ns0:cell>Eigenvalue</ns0:cell></ns0:row><ns0:row><ns0:cell>x i</ns0:cell><ns0:cell>Eigenvector</ns0:cell></ns0:row><ns0:row><ns0:cell>C E i</ns0:cell><ns0:cell>Eigenvector centrality of node i</ns0:cell></ns0:row><ns0:row><ns0:cell>α P</ns0:cell><ns0:cell>Attenuation factor</ns0:cell></ns0:row><ns0:row><ns0:cell>C P i</ns0:cell><ns0:cell>Pagerank centrality of node i</ns0:cell></ns0:row><ns0:row><ns0:cell>η</ns0:cell><ns0:cell>Variable neighborhood size</ns0:cell></ns0:row><ns0:row><ns0:cell>µ</ns0:cell><ns0:cell>Genetic algorithm mutation rate</ns0:cell></ns0:row><ns0:row><ns0:cell>s</ns0:cell><ns0:cell>Genetic algorithm selection coefficient</ns0:cell></ns0:row><ns0:row><ns0:cell>P</ns0:cell><ns0:cell>Population of solutions for the genetic 
algorithm</ns0:cell></ns0:row><ns0:row><ns0:cell>f (i, j)</ns0:cell><ns0:cell>Genetic algorithm fitness function</ns0:cell></ns0:row><ns0:row><ns0:cell>ε B</ns0:cell><ns0:cell>Benefit error from heuristic</ns0:cell></ns0:row><ns0:row><ns0:cell>ε C</ns0:cell><ns0:cell>Cost deviation from heuristic</ns0:cell></ns0:row><ns0:row><ns0:cell>p</ns0:cell><ns0:cell>Connection probability (Erdös-Rényi graphs and Klemm and Eguílez graphs)</ns0:cell></ns0:row><ns0:row><ns0:cell>p W</ns0:cell><ns0:cell>Rewiring probability (Watts-Strogatz graphs)</ns0:cell></ns0:row><ns0:row><ns0:cell>k L</ns0:cell><ns0:cell>Initial node degree (Watts-Strogatz graphs)</ns0:cell></ns0:row><ns0:row><ns0:cell>m 0</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>C D</ns0:cell><ns0:cell>Mean degree of a network</ns0:cell></ns0:row><ns0:row><ns0:cell>L</ns0:cell><ns0:cell>Average path length of a network</ns0:cell></ns0:row><ns0:row><ns0:cell>c w i</ns0:cell><ns0:cell>Weighted clustering coefficient for node i</ns0:cell></ns0:row><ns0:row><ns0:cell>w i j</ns0:cell><ns0:cell>Weight of connection between nodes i and j</ns0:cell></ns0:row><ns0:row><ns0:cell>C</ns0:cell><ns0:cell>Weighted clustering coefficient of a network</ns0:cell></ns0:row><ns0:row><ns0:cell>C r</ns0:cell><ns0:cell>Weighted clustering coefficient of a completely random network</ns0:cell></ns0:row><ns0:row><ns0:cell>L r</ns0:cell><ns0:cell>Average path length of a completely random network</ns0:cell></ns0:row><ns0:row><ns0:cell>γ</ns0:cell><ns0:cell>Power law exponent</ns0:cell></ns0:row><ns0:row><ns0:cell>P(n)</ns0:cell><ns0:cell>Degree distribution</ns0:cell></ns0:row><ns0:row><ns0:cell>E</ns0:cell><ns0:cell>Efficiency of a network</ns0:cell></ns0:row><ns0:row><ns0:cell>E r</ns0:cell><ns0:cell>Efficiency of a completely random network</ns0:cell></ns0:row><ns0:row><ns0:cell>E G</ns0:cell><ns0:cell>Global efficiency of a network</ns0:cell></ns0:row></ns0:table><ns0:note>14/16PeerJ Comput. Sci. reviewing PDF | (CS-2020:07:51376:1:1:NEW 16 May 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Node characteristics used for the neighborhood search and their formulations.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Mean correlation coefficients for the nodal characteristics and the solution benefits for the experimental networks. The three coefficients with the largest magnitude are highlighted in bold for each network type. (*) There was no variation in clustering coefficients as triplets were not common in the street networks.</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "1
2
3
4
Local network connectivity optimization: An
evaluation of heuristics applied to complex
spatial networks, a transportation case
study, and a spatial social network
5
Jeremy Auerbach1 and Hyun Kim2
6
1 Colorado
7
8
State University, Department of Environmental and Radiological Health
Sciences, Fort Collins, CO 80523, USA
2 University of Tennessee, Department of Geography, Knoxville, TN 37996, USA
10
Corresponding author:
Jeremy Auerbach1
11
Email address: jeremy.auerbach@colostate.edu
12
ABSTRACT
9
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
Optimizing global connectivity in spatial networks, either through rewiring or adding edges, can increase
the flow of information and increase the resilience of the network to failures. Yet, rewiring is not feasible
for systems with fixed edges and optimizing global connectivity may not result in optimal local connectivity
in systems where that is wanted. We describe the local network connectivity optimization problem,
where costly edges are added to a system with an established and fixed edge network to increase
connectivity to a specific location, such as in transportation and telecommunication systems. Solutions
to this problem maximize the number of nodes within a given distance to a focal node in the network
while they minimize the number and length of additional connections. We compare several heuristics
applied to random networks, including two novel planar random networks that are useful for spatial
network simulation research, a real-world transportation case study, and a set of real-world social
network data. Across network types, significant variation between nodal characteristics and the optimal
connections was observed. The characteristics along with the computational costs of the search for
optimal solutions highlights the need of prescribing effective heuristics. We offer a novel formulation of
the genetic algorithm, which outperforms existing techniques. We describe how this heuristic can be
applied to other combinatorial and dynamic problems.
INTRODUCTION
Spatial networks have become more popular as the interest in networks has spread into more fields and
spatial data, and the computational power and methods to analyze it, have become more accessible.
In terms of analysis, spatial network optimization has been at the forefront and focused on increasing
network connectivity and information flow (Schrijver, 2002; Wu et al., 2004). Heuristics have been
developed to rearrange existing networks or creating new ones that optimize the topology of the network
for synchronizability (Khafa and Jalili, 2019). Several effective methods have also been developed to
add new edges to a network that minimize the average shortest path distance (Meyerson and Tagiku,
2009), minimize the network diameter (Demaine and Zadimoghaddam, 2010), or maximize the network’s
centrality (Jiang et al., 2011) or connectivity (Alenazi et al., 2014).
While the global characteristics of spatial networks have been the focus of such optimization, optimizing local connectivity of an existing network around a specific node or location with the introduction of costly new edges has not been explored, and yet it is important in several domains. For example, increasing an existing network's
connectivity around a focal node while minimizing the costs associated with the number and lengths
of additional connections is essential in network layout planning for telecommunications and computer
systems (Resende and Pardalos, 2006; Donoso and Fabregat, 2007) and increasing or slowing the spread of
information or diseases in social networks (Gavrilets et al., 2016; Eubank et al., 2004). This local network
connectivity problem is particularly important with transportation planning in urban environments, where
the weights of the network edges can be physical distances or riderships and future street connections
or transportation lines can be significantly costly and impact flow to established facilities. For example,
planners can optimize thoroughfare connectivity around schools to foster student walking and biking
while reducing busing costs (Auerbach et al., 2021) and increase accessibility and patient travel time to
health care facilities (Branas et al., 2005).
The search for new edges that maximize connectivity to a focal node and minimize the costs of these
new edges is not well understood, and this search for optimal solutions can become costly when networks
are large and complex. To fill this knowledge gap, we compare a set of heuristics to optimize local
network connectivity applied to real-world networks and randomly generated ones. These heuristics are
drawn from the review of combinatorial heuristics by Mladenović et al. (2007) and from location models that
include a spatial component (Brimberg and Hodgson, 2011). We also offer a genetic algorithm with a
novel chromosome formulation where the genes are not properties of a specific variable but weights for
the probability to move in a given dimension across the solution space.
These optimization heuristics are then applied to randomly generated networks that vary in complexity
and size to evaluate their efficacy in finding the optimal new connections that maximize local connectivity.
Included in this set of random graphs we provide two novel formulations of random planar networks based
on the Voronoi diagram and the Delaunay triangulation. To complement the random network analysis, the
network connectivity optimization methods are also applied to two real-world case studies, one from urban
transportation planning and another from social network analysis. In this study, we show that optimization
heuristics are preferred for the analysis and practice due to the nonlinearity of the solution space and the
optimal solution’s dependence on nodal characteristics, such as distance to the focal node. The novel
genetic algorithm outperformed the other heuristics as it was able to move from suboptimal solutions
and explore distant solutions more quickly. This is important as researchers and engineers are working with networks of growing complexity and size.
The organization of this paper is as follows. The next section describes the formulation of the
connectivity problem in more detail, the local search methodology, and the optimization heuristics (see
Appendix A for the specific pseudocode of the optimization algorithms). This is followed by a section
that details the data used for the study including descriptions of the random networks, the transportation
case study street networks, and the social network data. Results of the heuristics applied to the random
networks and the case studies are then presented. The paper concludes with a detailed discussion of these
heuristic results, the further implications of these techniques for urban transportation planning, and future
work for this avenue of research.
METHODS AND DATA
Formulation of the Network Connectivity Problem
For the description of the optimization methodology the following nomenclature will be used (see Table 1). In connectivity optimization, network nodes are first segmented and assigned to 'close' and 'distant' sets by a chosen threshold distance D from the network's focal node F. The number of nodes ν in a network is N, and nodes are separated into two sets based on their shortest network path distances to the focal node, d(ν, F). The nodes that are within this distance are assigned to the 'close' set, N_C ⊂ N, i.e., ν ∈ N_C if d(ν, F) ≤ D and F ∈ N_C (see Figure 1 (A)). The nodes that are outside the threshold shortest network path distance to the focal node, D, are assigned to the 'distant' set, N_D ⊂ N, i.e., ν ∈ N_D if d(ν, F) > D. When a new connection is added to the network, the shortest path distance from each distant node to the focal node is recalculated. If there are any distant nodes that are now within the threshold distance to the focal node, they are assigned to the new set N^C_{i,j}. For example, if a new connection is established between distant node i and close node j, then k ∈ N^C_{i,j} if k ∈ N_D and d(k, i) + d(i, j) + d(j, F) ≤ D (Figure 1 (C)). The benefit of this new connection is B(i, j) and the cost associated with the new connection is C(i, j). The optimal solution is the solution with the greatest benefit, i.e., the number of new nodes now within the
distance to the focal node, which can be expressed as the bi-objective function

    O* = max(B(i, j)) + min(C(i, j))
    s.t.  Σ_i ν_i = 1,  i ∈ N_C
          Σ_j ν_j = 1,  j ∈ N_D        (1)
where benefits dominate costs. For example, if B(i, j) = B(m, n) and C(i, j) < C(m, n), then the optimal
additional edge is between i and j. For nondominated solutions, we select the solution that minimizes the
distance to the so-called ideal point. The ideal point represents the solution that simultaneously maximizes
the benefit and minimizes the cost. For the analysis in this paper the formulation of the objective function
is as follows. For a new edge between i and j, the number of nodes in the set N^C_{i,j} is the benefit of this new connection, B(i, j) = |N^C_{i,j}|, and the cost of the new connection is the length of the edge, C(i, j) = d(i, j).
Therefore,
    O* = max(|N^C_{i,j}|) + min(d(i, j))
    s.t.  Σ_i ν_i = 1,  i ∈ N_C
          Σ_j ν_j = 1,  j ∈ N_D        (2)
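To make the evaluation of a single candidate edge concrete, the sketch below (not the authors' code; it assumes a networkx graph whose edge attribute 'weight' holds distances and whose nodes carry assumed coordinate attributes 'x' and 'y') computes B(i, j) and C(i, j) for one (distant, close) pair:

```python
import networkx as nx

def evaluate_candidate(G, F, D, i, j):
    """Benefit |N^C_{i,j}| and cost d(i, j) of adding edge (i, j) to G."""
    dist = nx.single_source_dijkstra_path_length(G, F, weight='weight')
    close = {v for v, d in dist.items() if d <= D}
    distant = set(G) - close
    # the cost is the length of the new edge; here a Euclidean length
    # computed from the assumed node coordinates 'x' and 'y'
    cost = ((G.nodes[i]['x'] - G.nodes[j]['x']) ** 2 +
            (G.nodes[i]['y'] - G.nodes[j]['y']) ** 2) ** 0.5
    H = G.copy()
    H.add_edge(i, j, weight=cost)
    new_dist = nx.single_source_dijkstra_path_length(H, F, weight='weight')
    benefit = sum(1 for v in distant if new_dist.get(v, float('inf')) <= D)
    return benefit, cost
```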
Most of the heuristics presented below are dependent on the number of iterations (t) and terminate when the solutions converge, O_t = O_{t−1}, or the solution does not improve, O_t < O_{t−1}.
Local Search Methodology
For a network of size N, the number of new connections to evaluate has an upper bound of N²/4; therefore, heuristics may be employed to identify (nearly) optimal solutions more quickly than an exhaustive search as
networks get larger. These optimization algorithms require a search space to explore and using nodal
characteristics we create such a multidimensional solution space (see Table 2). These nodal characteristics
are explored to find the critical network properties for connectivity optimization and their impact on the
performance in finding the optimal solution. As several nodes in a network can potentially have the same
nodal characteristic values, the local searches include a random shuffling routine. In detail, the following
individual node characteristics were used to create the dimensions of the solution space: (i) distance to
the focal node, (ii) degree centrality, (iii) closeness centrality, (iv) betweenness centrality, (v) eigenvector
centrality, (vi) pagerank centrality, (vii) weighted clustering coefficient, and (viii) the neighbor nodes
directly connected to the node. Nodes were sorted by their distance to the focal node and moving in
this solution dimension may result in lower connectivity length costs but may not maximize the number
of nodes ultimately connected to the focal node. In contrast, sorting nodes by their centrality, i.e., the
importance of the node, could result in maximizing the number of nodes within the specified distance to
the focal node but with the possibility of higher connectivity length costs compared to selecting nodes
by other characteristics. To avoid these two extreme possibilities, several commonly used measures
of centrality are explored: degree, the number of edges incident to a node; closeness centrality, the
average length of the shortest path between the node and all other nodes in the network (Bavelas, 1950);
betweenness centrality, the frequency of a node included in the shortest paths between all other node pairs
(Freeman, 1977); eigenvector centrality, which is a relative sorting of nodes such that nodes with high
values are connected to other nodes with high values (Newman, 2008); and pagerank centrality, a variant
of eigenvector centrality that sorts nodes based on their probability of being connected to a randomly
selected node and which is commonly used in web-page rankings (Brin and Page, 1998). The weighted
clustering coefficient of a node is the count of the triplets in the neighborhood of the node and accounts
for the weights of the edges times the maximum possible number of triplets that could occur (Barrat et al.,
2004). The weights used here are the spatial distances between nodes wi j = d(i, j). The nodes that were
directly connected to the node under evaluation were also used.
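As a rough illustration, these characteristics can be assembled with networkx as below; this is only a sketch, and networkx's weighted clustering coefficient (the Onnela et al. variant) differs slightly from the Barrat et al. (2004) formulation used in the paper:

```python
import networkx as nx

def node_characteristics(G, F):
    """Dictionaries of the eight per-node search dimensions (a sketch)."""
    return {
        'distance_to_focal': nx.single_source_dijkstra_path_length(G, F, weight='weight'),
        'degree': dict(G.degree()),
        'closeness': nx.closeness_centrality(G, distance='weight'),
        'betweenness': nx.betweenness_centrality(G, weight='weight'),
        'eigenvector': nx.eigenvector_centrality(G, max_iter=1000),
        'pagerank': nx.pagerank(G),
        'clustering': nx.clustering(G, weight='weight'),  # Onnela variant
        'neighbors': {v: set(G.neighbors(v)) for v in G},
    }
```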
Network Connectivity Optimization Heuristics
The following techniques were implemented for the network connectivity optimization study based on their extensive use in optimization: hill climbing with random restart (Russell and Norvig, 2004); stochastic
[Figure 1 appears here; panel labels and legend omitted.]

Figure 1. Diagram of the sequence of the network connectivity optimization problem. The close nodes that are within a threshold network distance (orange dashed circle) from the focal node (black square) are colored green, distant nodes that could be within the threshold network distance with additional edges are colored red, and the gray distant nodes are outside the threshold distance regardless of any additional connections. Figure (A) is an example graph, (B) shows the same graph with the optimal new connection that maximizes the number of additional nodes within the threshold network distance and minimizes the length of the new connection, and the inset (C) highlights this optimal connection, between nodes i and j.
hill climbing (Greiner, 1992); hill climbing with a variable neighborhood search (Mladenović and Hansen,
1997); simulated annealing, which has a history of applications in graph problems (Kirkpatrick et al.,
1983; Johnson et al., 1989; Kirkpatrick, 1984); and genetic algorithms, which has been successfully used
for combinatorial optimization (Anderson and Ferris, 1994; Jaramillo et al., 2002). A Tabu heuristic was
not employed as it has been observed to be an inferior method for multi-objective optimization problems
compared to simulated annealing and genetic algorithms (Golden and Skiscim, 1986; Kim et al., 2016).
Parameter selection was simplified for easy comparison of the methods (see the Supplemental Information
for the algorithms). To ensure that the heuristics did not converge on suboptimal solutions due to the
initial starting values, random restart, i.e., randomly selecting initial nodes to avoid local optima and
running the routine until the optimal solution is found, was used.
Exhaustive search (ES). The exhaustive search calculates the solution for every pair of distant and close
nodes for a network (see Algorithm 1 in the Supplemental Information). While this approach ensures
that the optimal edges are found, as the number of nodes increases and therefore the number of possible
connections between close and distant nodes increases, it can become computationally expensive and time-consuming to implement. Since the results of ES provide the optimal solution, the time it takes to
find and evaluate all solutions is used to benchmark the other heuristics.
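A minimal sketch of the ES baseline, reusing the hypothetical evaluate_candidate helper from the earlier sketch; benefits dominate costs, with ties broken by the shorter connection:

```python
def exhaustive_search(G, F, D, close, distant):
    best_pair, best = None, (-1, float('inf'))  # (benefit, cost)
    for i in distant:
        for j in close:
            b, c = evaluate_candidate(G, F, D, i, j)
            if (b, -c) > (best[0], -best[1]):  # benefit first, then lower cost
                best_pair, best = (i, j), (b, c)
    return best_pair, best
```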
Hill climbing (HC). The solution space was observed to be hilly from the exhaustive search results, so
several modifications were introduced to the hill climbing technique to avoid getting stuck in suboptimal
solutions (Algorithm 2 in the Supplemental Information). A stochastic hill climbing (HCS) routine, an advanced search method based on HC, is also explored, where the solution for the next iteration is randomly picked with

    probability(i, j) = O(i, j) / Σ_{(m,n)} O(m, n),        (3)
which terminates when an improved solution is no longer found (Algorithm 3 in the Supplemental
Information). A hill climbing algorithm is coupled with a variable neighborhood (HCVN) where the size
of the neighborhood starts with the nearest neighbors (η = 1) and is updated as follows:
    η = { 1       if O_t > O_{t−1}
        { η + 1   if O_t ≤ O_{t−1}        (4)
and the HCVN method terminates after n_max is reached (Algorithm 4 in the Supplemental Information).
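A sketch of the two hill-climbing refinements described above, assuming an objective function that maps a candidate pair to O(i, j): roulette-wheel selection of the next solution (Eq. 3) and the variable-neighborhood update of Eq. (4):

```python
import random

def stochastic_step(neighbors, objective):
    # HCS: pick the next solution with probability O(i, j) / sum O (Eq. 3)
    scores = [objective(n) for n in neighbors]
    return random.choices(neighbors, weights=scores)[0]

def update_neighborhood(eta, O_t, O_prev):
    # HCVN (Eq. 4): reset to the nearest neighbors on improvement,
    # otherwise widen the neighborhood by one
    return 1 if O_t > O_prev else eta + 1
```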
Simulated annealing (SA). As a meta-heuristic approach, the simulated annealing method randomly selects an initial solution from the solution space to avoid entrapment in a local optimum. At each iteration, the heuristic evaluates the neighboring solutions and, if it does not find an improved solution, it moves to a new solution with the following probability:

    probability(i, j) = exp(−(O_{t−1} − O(i, j)) / t).        (5)
The distance of the move decreases with the number of iterations until a better solution is no longer found
(Algorithm 5 in the Supplemental Information).
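The acceptance rule of Eq. (5) can be sketched as follows (an illustrative reading, with improvements always accepted):

```python
import math
import random

def sa_accept(O_prev, O_candidate, t):
    if O_candidate > O_prev:       # always move to an improved solution
        return True
    # otherwise accept with probability exp(-(O_{t-1} - O(i, j)) / t), Eq. (5)
    return random.random() < math.exp(-(O_prev - O_candidate) / t)
```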
Genetic algorithm (GA). The genetic algorithm begins with a population of P randomly selected solutions
with a set of chromosomes composed of genes which represent the weights of selecting a neighbor and
are all initialized to unity (Algorithm 6 in the Supplemental Information). During each iteration of the
method, solution scores (fitnesses) are computed by
    f(i, j) = O(i, j) / Σ_{(m,n)} O(m, n),        (6)
and a new generation of solutions is selected based on the following probability condition

    probability(i, j) = (s · f(i, j) + (1 − s)) / Σ_{(m,n)} (s · f(m, n) + (1 − s)),        (7)
where s is the selection coefficient. Weak selection, s ≪ 1, is used to ensure that random mutations impact solution frequency. Crossover is conducted by alternating the weights for the offspring from each parent, also known as cycle crossover (Oliver et al., 1987). Mutations are introduced at a low rate µ ≪ 1 for each gene and increase the nodal characteristic selection weight by one. The probability that characteristic m is used to find a neighbor for node i is given by

    probability(characteristic m) = gene(i, m) / (Σ_k gene(i, k) / K),        (8)
where K is the total number of nodal characteristics. This formulation ensures that the nodal characteristics
that improve the solution increase in weight which results in a greater probability they will be selected
for neighborhood exploration, and ultimately reduces the size of the neighborhood search. Among these
methods, the genetic algorithm presented here introduces a novel chromosome formulation where the
genes are not properties of a specific variable but weights for the probability to move in a given direction
in the solution space. This allows the method to occasionally explore different nodal characteristics (at
mutation rate µ) while conducting a local neighborhood search.
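The chromosome can be sketched as a per-node weight vector over the K characteristics; the names below are illustrative, and sampling proportionally to the gene weights reproduces Eq. (8) up to normalization:

```python
import random

CHARACTERISTICS = ['distance', 'degree', 'closeness', 'betweenness',
                   'eigenvector', 'pagerank', 'clustering', 'neighbors']

def pick_characteristic(genes):
    # choose the dimension to move in, weighted by the node's genes (Eq. 8)
    return random.choices(CHARACTERISTICS,
                          weights=[genes[c] for c in CHARACTERISTICS])[0]

def mutate(genes, mu=0.01):
    # with low probability mu, increment a gene by one, so characteristics
    # that improve solutions accumulate selection weight
    for c in CHARACTERISTICS:
        if random.random() < mu:
            genes[c] += 1
    return genes
```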
Simulated Data: Random Networks
To test the efficacy of these optimization heuristics in finding the optimal new network connections
they were applied to randomly generated networks that vary in complexity and size. Several types of
random graph networks were generated to analyze the efficacy of the optimization heuristics for systems
with different topologies which are generally representative of naturally occurring and built systems: (1)
Erdös-Rényi networks, (2) Watts-Strogatz networks, (3) Barabási and Albert networks, (4) Klemm and
Eguílez networks, (5) Delaunay triangulation networks, and (6) Voronoi diagrams. Erdös-Rényi random
networks are constructed by randomly creating connections between pairs of nodes with a probability (Erdös and Rényi, 1959). These networks, even though they have random connections, consistently have short average path lengths and irregular connections, both of which are commonly found in natural systems. The Watts-Strogatz networks also have random connections, but they additionally form clusters, another
feature commonly found in real-world networks (Watts and Strogatz, 1998). The Barabási-Albert model
produces random structures with a small number of highly connected nodes, ’hubs’, which are observed in
numerous types of networks (Barabási and Albert, 1999; Albert and Barabási, 2002). Klemm and Eguílez networks have random connections, clusters, and hubs (Klemm and Eguílez, 2002). See Supplemental
Information Figure S.1 (A) – (D) and Prettejohn et al. (2011) for the algorithms used to generate these
networks.
We also introduce two novel types of random planar network versions of the Voronoi diagram and
the Delaunay triangulation (Supplemental Information Figure S.1 (E) and (F)). Planarity is particularly
important in many fields and networks generated from Voronoi diagrams and Delaunay triangles have been
used in spatial health epidemiology (Johnson, 2007), transportation flow problems (Steffen and Seyfried,
2010; Pablo-Martí and Sánchez, 2017), terrain surface modeling (Floriani et al., 1985), telecommunications (Meguerdichian et al., 2001), computer network design (Liebeherr and Nahas, 2001), and hazard
avoidance systems in autonomous vehicles (Anderson et al., 2012). Delaunay triangulation maximizes the
minimum angles between three nodes to generate planar graphs with consistent network characteristics
while Voronoi diagrams, the dual of a Delaunay triangulation, are composed of points and cells such that
each cell is closer to its point than any other point (Delaunay, 1934). To modify these networks, edges are removed randomly, weighted by their distance from the focal node, with probability
    p_R · max(d(i, F), d(j, F)) / max_k d(k, F),        (9)
where p_R is the removal probability, weighted by the normalized edge distance from the focal node.
When edges are randomly removed from the connected Delaunay network or Voronoi network, with
weights given by node distance from a focal node, these networks display some of the properties
found in the networks mentioned above, such as complexity and randomness. Yet, these networks have
the added component of being planar and having edge weights that can be framed as physical distances.
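A sketch of generating such a network (assuming scipy and networkx are available; this is not the authors' generator): build a Delaunay triangulation over random points and remove edges with the distance-weighted probability of Eq. (9):

```python
import math
import random
import networkx as nx
from scipy.spatial import Delaunay

def delaunay_random_network(points, F, p_R):
    """points: (n, 2) coordinates; F: focal node index; p_R: removal prob."""
    tri = Delaunay(points)
    G = nx.Graph()
    for a, b, c in tri.simplices:
        for u, v in ((a, b), (b, c), (a, c)):
            G.add_edge(int(u), int(v), weight=math.dist(points[u], points[v]))
    d = nx.single_source_dijkstra_path_length(G, F, weight='weight')
    d_max = max(d.values())
    for u, v in list(G.edges()):
        if random.random() < p_R * max(d[u], d[v]) / d_max:  # Eq. (9)
            G.remove_edge(u, v)
    return G
```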
To compare the efficacy of different optimization methods for different network topologies, identifying
the best set of parameters is critical. Parameter values were selected for each type of random network to
ensure network complexity (Supplemental Information Table S.6 summarizes the parameters which were
used in the analysis). Variation in network size was also explored and the most connected node in each
network was selected as the focal node. Uniformly randomly generated edge weights in [0,1] were used
for the network distances and the threshold distance was set to ensure that half of the nodes were initially
within the distance to the focal node. The costs and benefits were normalized using the ranges from the
exhaustive search routine as a benchmark to compare the results from the different optimization methods.
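This benchmarking step can be read as computing the deviations ε_B and ε_C of Table 1 against the exhaustive-search optimum; the exact scaling used by the authors is an assumption here:

```python
def deviations(b_heur, c_heur, b_opt, c_opt, b_range, c_range):
    eps_B = (b_opt - b_heur) / b_range  # benefit error from heuristic
    eps_C = (c_heur - c_opt) / c_range  # cost deviation from heuristic
    return eps_B, eps_C
```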
Empirical Data: Transportation Case Study
To complement the random network analysis, the network connectivity optimization methods were applied
to a study of urban transportation planning. Network connectivity optimization methods were used to
evaluate the potential costs and benefits of increased thoroughfare connectivity for student walking or
biking to school. It is assumed that expanding the connectivity around a school would allow for more
households, and students, to be included within the walking distance to the school. If more students
actively commute to school, this reduces the busing costs for the school system and increases the health
and academic achievement of the students (Centers for Disease Control and Prevention, 2010). Yet, streets
have associated land, construction, and maintenance costs that are primarily dependent on their length.
Networks composed of street edges and residence nodes around several schools from a representative
US school system were used for the analysis. Ten suburban and rural schools from Knox County, TN, were
selected for the analysis, including seven elementary and three middle, that would benefit the most from
increased thoroughfare connectivity, i.e., had the most students within the Euclidean walking distance but
not the network distance to the school (characteristics of these schools are provided in the Supplemental
Information Table S.3). Urban schools were not used since the street connectivity around the schools was
significantly high and the benefit from additional thoroughfares would be low. Residential parcels and street
networks were provided by the Knoxville-Knox County Metropolitan Planning Commission and the
residential parcels were converted to nodes and placed on the nearest street edge. The residences within 1
mile and 1.5 miles, for the elementary schools and the middle schools respectively, were considered close
nodes while the nodes outside of these distances were classified as distant nodes (see Figure 2). The
school networks do not generally display the characteristics of complex networks: they had low average degree, large path lengths, and were not efficient, yet they did have power-law distributions of connectivity
with few intersections having a large number of street connections (see Supplemental Information Table
S.3). The networks were evaluated with each optimization method to maximize the number of close
residences connected to the school and minimize the distance of the new thoroughfare. The costs and
benefits of these street connections were normalized using the ranges from the exhaustive search routine
as a benchmark to compare the results from the different optimization methods.
[Figure 2 appears here; legend: street; close residences; distant residences; optimal close residences; school.]

Figure 2. Results of the transportation case study used for the analysis. A network of streets and residences around a school is shown in (A) and with the optimal new walking connection in (B). The red nodes represent the distant residences, i.e., the residences within the 1-mile Euclidean walking distance to the school but not the 1-mile street network walking distance, the green nodes are the close residences within the street network school walking distance, and the black square represents the school. The orange line is the optimal new walking connection that maximizes the number of additional residences (orange nodes) and minimizes the length of the new connection.
Empirical Data: Social Network Analysis
The topology of a network and the management of its system can improve information flow and have
been shown to help counter the negative effects of social and environmental crises (Helbing et al., 2015).
Specifically, increased social network connectivity can reduce the time it takes for information to spread among its
members. For example, quickly notifying those nearby and unaware of an active shooter event can save
lives. Using a set of social network data coupled with location data, we evaluated the heuristics to optimize
social network connectivity for those nearby a specific location.
The data set used for this analysis was from Gowalla, a location-based social networking website
where users share their locations by checking-in, and this data set was collected and published by Cho
et al. (2011). This social network is undirected and consists of 196,591 nodes (members) and 950,327
edges (social connections). There is a total of 6,442,890 check-ins of these members over the period of
February 2009 to October 2010 (network characteristics are provided in the Supplemental Information
Table S.4). For the analysis we simulated 100 crisis incidents at a highly populated urban place, Grand
Central Terminal in New York City (NY). Grand Central Terminal was selected as the location of the
simulated events as it is a major transportation hub located in the center of the city that serves over a
million commuters and visitors daily. New York City is also a major metropolis with tens of thousands of
Gowalla users present in the data set and the city has a history of incidents, such as terror attacks. Ten
dates were selected at random and for each date ten times were randomly selected between 1200 and 1800
local time to simulate a crisis event (see Figure 3).
The social network problem was formulated such that an event occurred at the location (Grand Central
Station) and members who were within 0.5-miles of the event should be notified. The members within
Grand Central Station are automatically notified of the event whereas those outside can be notified through
their online social network if a member in their network is aware. The event is considered to be serious
enough that those members aware of it will share the information on the social network. New connections
are evaluated based on their number of additional nearby members who become aware of the event.
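Under the stated assumption that aware members always share, awareness reaches every member connected to the initially aware set, so a candidate connection can be scored as below (a sketch with assumed inputs, not the authors' code):

```python
import networkx as nx

def newly_aware(G_social, aware_seed, nearby, new_edge=None):
    """Count nearby members who become aware, beyond the initial seed."""
    H = G_social.copy()
    if new_edge is not None:
        H.add_edge(*new_edge)
    reached = set()
    for s in aware_seed:
        if s in H and s not in reached:
            reached |= nx.node_connected_component(H, s)
    return len((reached & nearby) - set(aware_seed))
```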
[Figure 3 appears here; legend: Grand Central Station; users aware of event; users unaware of event; social network edge; 0.5-mile buffer; optimal new connection.]

Figure 3. Results of the Gowalla social network case study used for the analysis. For simplicity, the network shown comprises the users with a check-in within 1 hour prior to the event and nearby (within 0.5 miles of) the event location, Grand Central Station (NYC). (A) shows the network during the event and (B) shows it with the optimal new connection. The red nodes represent the users within 0.5 miles who are unaware of the event, the green nodes are the users aware of the event, and the black lines represent social network edges. The orange line is the optimal new social network connection that maximizes the number of additional nearby users aware of the event.
PERFORMANCE OF THE ALGORITHMS
Several findings are worth noting regarding the performance of the heuristic algorithms used in the analysis.
First, there were consistent nonlinear relationships between the nodal characteristics and the quality of
the solutions for each type of random network and the school networks (see Figure 4). There was also
significant variation for which nodal characteristics were correlated with the quality of the solution across
networks (see Table 3). Among those, the distance between the close node and the focal node and the
distance between the distant node and the close node were most often highly correlated with the quality of
the solution across networks. The centrality measures were inconsistently related to the solution quality
for the random networks yet were related to the optimal solutions for the social networks.
The termination times and the deviations from the optimal solutions for the optimization heuristics applied to the random networks are summarized in Figure 5 and Supplemental Information
Figures S.1 and S.2. The hill climbing method was consistently faster for all of the networks, yet had the
largest cost and benefit deviations. Simulated annealing and the genetic algorithm had similar termination
times, but the genetic algorithm was consistently superior to all of the other methods in approaching the
optimal solution. The results from the application of the optimization heuristics applied to the ten school
networks are shown in Figure 5 (E) and (F). The times to termination for each heuristic according to
[Figure 4 appears here; panels (A)–(D) plot cost and benefit against the distance between close and focal nodes.]

Figure 4. The relationship between the distances of the close node and the focal node with the costs and benefits for each solution for different networks. Figure (A) shows the relationship for a Watts-Strogatz network with N = 500, (B) a Barabási and Albert network with N = 500, (C) a Delaunay network with N = 500, and (D) a suburban school transportation network (N ≈ 4000). Each point represents a connection between a distant and close node, where the cost is the standardized length of the connection and the benefit is the number of new nodes within the distance to the focal node or school.
network size consistently followed the following pattern: ES>SA>HCVN>GA>HCS>HC. The genetic
algorithm clearly outperformed the other heuristics, followed by simulated annealing, in terms of cost and
benefit deviations (see Figure 5 (B), (D), and (F)).
DISCUSSION, FUTURE WORK, AND LIMITATIONS
The local network connectivity problem introduced in this study is relevant to a wide range of applications
and is nontrivial as the number of potential solutions can become large even for small systems. This class
of combinatorial optimization problem highlights the difficulty in determining local search routines a
priori. When the exhaustive search routine was applied to random networks and the real-world networks,
the optimal solutions were found to be related to nodal characteristics, which makes finding optimal solutions complex. Therefore, the heuristics employed to reduce the computational costs utilized nodal characteristics to search for solutions. Yet, these nodal characteristics were nonlinearly related to the solutions. Given the example networks, it should be noted that distance to the focal node was consistently related to the quality of the solution, as this lowers connectivity length costs, while centrality, which provides greater benefit through more connections, was only intermittently correlated with solution quality.
Aside from the distance characteristic, these nodal characteristics also varied in their correlation (sign
and magnitude) with solution quality for different types of networks. This makes it difficult to exclude
or prioritize specific nodal characteristics for local network connectivity optimization heuristics. This
could arise from the four following issues: (a) the curse of dimensionality, i.e., large sparse subspaces in
the solution space; (b) the nodal characteristics are highly correlated with each other; (c) outliers; and
(d) the nodal characteristics are heterogeneous across the network. Results from the street networks in
the transportation case study found that the clustering coefficient was a poor measure due to the lack of
triplets in the networks.
The optimization heuristics save computational time but vary considerably in their ability to find (near)
optimal solutions. The stochastic hill climbing search was not effective due to the large neighborhood
search space explored. In our experiment, the number of solutions checked at each iteration was > 300, which resulted in a skewed probability distribution of objective values favoring the selection of low values.
This degraded the efficiency of the method resulting in the selection of poor solutions. The variable
neighborhood search method was similarly not reliable because of the significantly large neighborhood
search space (the number of possible solutions explored at a given iteration could be > 5,000), and had
intermediate results with cost and benefit deviations. The simulated annealing heuristic consistently took
longer to converge than the other optimization methods from the exploration of suboptimal solutions prior
to moving towards better solutions, yet it was able to converge to values close to the optimal solution.
The computational costs and the variance in the importance of nodal characteristics for the random
networks and real-world systems highlight the need for a heuristic that is able to quickly and effectively
explore the solution space without getting stuck in a local optimum. The genetic algorithm provided in this
work offers a solution to these issues and outperformed the other algorithms in terms of the consistently
higher solution precision and accuracy. The genetic algorithm is able to dynamically reduce the size
of the neighborhood search space and what variables to analyze. This reduction in the local solution
search space allows the genetic algorithm presented here to converge on solutions near the optimal in a
timely fashion. The heuristic is also able to compare solutions from distant search spaces with nonzero
probability, thereby avoiding local optima. The experiments indicate the power of biologically inspired
algorithms to effectively explore multidimensional spaces (commonly found in natural systems) and their
potential use in a wide variety of disciplines, including the specific applications for planning and crisis
management presented above. The combinatorial optimization techniques employed here to identify
and evaluate new street connections can also complement the optimization approaches used for other
transportation planning problems, such as greenway planning (Linehan et al., 1995), bus stop locations
(Ibeas et al., 2010; Delmelle et al., 2012), and health care accessibility (Gu et al., 2010).
There are several research directions from these proposed methods. Application of these methods and
heuristics can be tested on multi-level networks, such as telecommunication systems, higher dimensional
real-world networks (transportation networks with elevation), directed networks, and additional planar
random networks (e.g., Gabriel graphs). Different distance measures to the
focal node, such as the Hamming distance, could also be evaluated for different applications, and other
real world examples should be used for analysis. The methods presented here simplified the costs and
did not account for many real-world barriers that may restrict optimal new connections found through
the heuristics. For example, the transportation case study did not include legal considerations, such as
right-of-way, or physical barriers, such as highways or rivers. Furthermore, the methods presented here do
not evaluate whether the new connections intersect existing edges, as in existing transportation networks,
and attempts to incorporate such a feature resulted in unrealistic computational times.
ACKNOWLEDGMENTS
We would like to thank Alex Zendel (GIS Analyst at the Knoxville-Knox County Metropolitan Planning
Commission) for providing the street networks and residential data around the schools.
REFERENCES
Albert, R. and Barabási, A. (2002). Statistical mechanics of complex networks. Reviews of Modern
Physics, 74(1):47–97.
Alenazi, M. J. F., Çetinkaya, E. K., and Sterbenz, J. P. G. (2014). Cost-efficient algebraic connectivity
optimisation of backbone networks. Optical Switching and Networking, 14(2):107–116.
Anderson, E. J. and Ferris, M. C. (1994). Genetic algorithms for combinatorial optimization: The
assembly line balancing problem. ORSA Journal on Computing, 6:161–173.
Anderson, S. J., Karumanchi, S. B., and Iagnemma, K. (2012). Constraint-based planning and control. In
2012 IEEE Intelligent Vehicles Symposium, pages 383–388. IEEE.
Auerbach, J., Fitzhugh, E. C., and Zaviska, E. (2021). Impacts of small changes in thoroughfare
connectivity on the potential for student walking. Journal of Urban Planning and Development.
Barabási, A. and Albert, R. (1999). Emergence of scaling in random networks. Science, 286(5439):509–
512.
Barrat, A., Barthélemy, M., Pastor-Satorras, R., and Vespignani, A. (2004). The architecture of complex
weighted networks. Proceedings of the National Academy of Sciences USA, 101(11):3747–3752.
Bavelas, A. (1950). Communication patterns in task-oriented groups. The Journal of the Acoustical
Society of America, 22:725.
Branas, C. C., MacKenzie, E. J., Williams, J. C., Schwab, C. W., Teter, H. M., Flanigan, M. C., Blatt,
A. J., and ReVelle, C. S. (2005). Access to trauma centers in the United States. Journal of the American
Medical Association, 293(21):2626–2633.
Brimberg, J. and Hodgson, J. M. (2011). Heuristics for location models. In Eiselt, H. A. and Marianov,
V., editors, Foundations of Location Analysis, chapter 15, pages 335–355. Springer.
Brin, S. and Page, L. (1998). The anatomy of a large-scale hypertextual Web search engine. Computer
Networks and ISDN Systems, 30(1-7):107–117.
Centers for Disease Control and Prevention (2010). The association between school based physical
activity, including physical education, and academic performance. Technical report, U.S. Department
of Health and Human Services.
Cho, E., Myers, S. A., and Leskovec, J. (2011). Friendship and mobility: User movement in location-based social networks. Proceedings of the 17th ACM SIGKDD international conference on Knowledge
discovery and data mining, pages 1082–1090.
Delaunay, B. (1934). Sur la sphère vide. Bulletin de l’Académie des Sciences de l’URSS, Classe des
sciences mathématiques et naturelles, 6:793–800.
Delmelle, E., Shuping, L., and Murray, A. (2012). Identifying bus stop redundancy: A GIS-based spatial
optimization approach. Computers, Environment and Urban Systems, 36:445–455.
Demaine, E. D. and Zadimoghaddam, M. (2010). Minimizing the diameter of a network using shortcut
edges. In Proceedings of the 12th Scandinavian Symposium and Workshop on Algorithm Theory
(SWAT) (Lecture Notes in Computer Science), volume 6139, pages 420–431. Springer.
Donoso, Y. and Fabregat, R. (2007). Multi-Objective Optimization in Computer Networks Using Metaheuristics. Auerbach Publications.
Erdös, P. and Rényi, A. (1959). On random graphs. Publicationes Mathematica, 6:290–297.
Eubank, S., Guclu, H., Kumar, V. S. A., Marathe, M. V., Srinivasan, A., Toroczkai, Z., and Wang, N.
(2004). Modelling disease outbreaks in realistic urban social networks. Nature, 429:180–184.
Floriani, L. D., Falcidieno, B., and Pienovi, C. (1985). Delaunay-based representation of surfaces defined
over arbitrarily shaped domains. Computer Vision, Graphics, and Image Processing, 32(1):127–140.
Freeman, L. C. (1977). A set of measures of centrality based on betweenness. Sociometry, 40(1):35–41.
Gavrilets, S., Auerbach, J., and van Vugt, M. (2016). Convergence to consensus in heterogeneous groups
and the emergence of informal leadership. Scientific Reports, 6.
Golden, B. L. and Skiscim, C. C. (1986). Using simulated annealing to solve routing and location
problems. Naval Research Logistics Quarterly, 33(2):261–279.
Greiner, R. (1992). Probabilistic hill-climbing: Theory and applications. In Proceedings of the CSCSI-92,
pages 60–67. Morgan Kaufmann Publishers, Inc.
Gu, W., Wang, X., and McGregor, S. E. (2010). Optimization of preventive health care facility locations.
International Journal of Health Geographies, 9(17).
Helbing, D., Brockmann, D., Chadefaux, T., Donnay, K., Blanke, U., Woolley-Meza, O., Moussaid, M.,
Johansson, A., Krause, J., Schutte, S., and Perc, M. (2015). Saving human lives: What complexity
science and information systems can contribute. Journal of Statistical Physics, 158:735–781.
Ibeas, A., dell’Olio, L., Alonso, B., and Sainz, O. (2010). Optimizing bus stop spacing in urban areas.
Transportation Research Part E: Logistics and Transportation Review, 46:446–458.
Jaramillo, J. H., Bhadury, J., and Batta, R. (2002). On the use of genetic algorithms to solve location
problems. Computers and Operations Research, 29:761–779.
Jiang, Z., Liang, M., and Guo, D. (2011). Enhancing network performance by edge addition. International
Journal of Modern Physics C, 22(11):1211–1226.
Johnson, D. S., Aragon, C. R., McGeoch, L. A., and Schevon, C. (1989). Optimization by simulated
annealing: An experimental evaluation; Part 1, Graph partitioning. Operations Research, 37(6):865–
892.
Johnson, S. (2007). The Ghost Map: The Story of London’s Most Terrifying Epidemic-and How It
Changed Science, Cities, and the Modern World. Riverhead Books.
Khafa, N. A. and Jalili, M. (2019). Optimization of synchronizability in complex spatial networks.
Physica A: Statistical Mechanics and its Applications, 514(15):46–55.
Kim, K., Dean, D., Kim, H., and Chun, Y. (2016). Spatial optimization for regionalization problems with
spatial interaction: a heuristic approach. International Journal of Geographic Information Science,
30(3):451–473.
Kirkpatrick, S. (1984). Optimization by simulated annealing: Quantitative studies. Journal of Statistical
Physics, 34(5/6):975–986.
Kirkpatrick, S., Gelatt, C. D., and Vecchi, M. P. (1983). Optimization by simulated annealing. Science,
220:671–680.
Klemm, K. and Eguílez, V. M. (2002). Growing scale-free networks with small-world behavior. Physical
Review E, 65.
Liebeherr, J. and Nahas, M. (2001). Application-layer multicast with Delaunay triangulations. In
GLOBECOM’01. IEEE Global Telecommunications Conference, pages 1651–1655. IEEE.
Linehan, J., Gross, M., and Finn, J. (1995). Greenway planning: developing a landscape ecological
network approach. Landscape and Urban Planning, 33(1-3):179–193.
Meguerdichian, S., Koushanfar, F., Qu, G., and Potkonjak, M. (2001). Exposure in wireless Ad-Hoc
sensor networks. In Proceedings of the 7th Annual International Conference on Mobile Computing and
Networking, MobiCom ’01, pages 139–150. Association for Computing Machinery.
Meyerson, A. and Tagiku, B. (2009). Minimizing average shortest path distances via shortcut edge
addition. In Dinur, I., Jansen, K., Naor, J., and Rolim, J., editors, Approximation, Randomization, and
Combinatorial Optimization. Algorithms and Techniques, pages 272–285. Springer.
Mladenović, N., Brimberg, J., Hansen, P., and Moreno-Pérez, J. A. (2007). The p-median problem: A
survey of metaheuristic approaches. European Journal of Operational Research, 179:927–939.
Mladenović, N. and Hansen, P. (1997). Variable neighborhood search. Computers and Operations
Research, 24(11):1097–1100.
Newman, M. E. J. (2008). Mathematics of networks. In Blume, L. and Burlauf, S., editors, The New
Palgrave Encyclopedia of Economics. Palgrave Macmillan, Basingstoke, 2nd edition.
Oliver, I. M., Smith, D. J. D., and Holland, R. C. J. (1987). Study of permutation crossover operators on
the traveling salesman problem. In Proceedings of the Second International Conference on Genetic
Algorithms on Genetic algorithms and their application, pages 224–230. MIT.
Pablo-Martí, F. and Sánchez, A. (2017). Improving transportation networks: Effects of population
structure and decision making policies. Scientific Reports, 7.
Prettejohn, B., Berryman, M., and McDonnell, M. (2011). Methods for generating complex networks with
selected properties for simulations: a review and tutorial for neuroscientists. Frontiers in Computational
Neuroscience, 5.
Resende, M. G. C. and Pardalos, P. M., editors (2006). Handbook of Optimization in Telecommunications.
Springer.
Russell, S. J. and Norvig, P. (2004). Artificial Intelligence: A Modern Approach. Prentice Hall.
Schrijver, A. (2002). On the history of the transportation and maximum flow problems. Mathematical
Programming, 91(3):437–445.
Steffen, B. and Seyfried, A. (2010). Methods for measuring pedestrian density, flow, speed and direction
with minimal scatter. Physica A: Statistical Mechanics and its Applications, 389(9):1902–1910.
Watts, D. J. and Strogatz, S. H. (1998). Collective dynamics of “small-world” networks. Nature,
393:440–442.
Wu, F., Huberman, B. A., Adamic, L. A., and Tyler, J. R. (2004). Information flow in social groups.
Physica A: Statistical Mechanics and its Applications, 337(1-2):327–335.
[Figure 5 appears here; panels plot mean log termination time against network size, and cost against benefit deviations, for the heuristics ES, HC, HCS, HCVN, SA, and GA.]

Figure 5. The termination times (Figures A, C, and E) and the differences between the heuristic solution and the global optimal solution costs and benefits (Figures B, D, and F) for the heuristics applied to a sample of random networks and school networks: (A) and (B) Erdös-Rényi networks, (C) and (D) Delaunay networks, and (E) and (F) the ten school networks. For the random networks (Figures A and C) the termination times were averaged over 1000 networks for network sizes of 500, 1000, and 2000 nodes. The times were normalized by the exhaustive search time and log transformed. The random network cost and benefit differences (Figures B and D) were drawn from a sample of 100 networks with 1000 nodes. The costs were the total lengths of the new connections and the benefits were the number of additional nodes for the solution. The costs and benefits were normalized with the global optimal solution from the exhaustive search, and a longer connection length is a positive cost difference whereas a shorter connection is a negative cost difference. For visualization, the 95-percent confidence ellipses drawn from the Hotelling's T2 statistic were included. The heuristic acronyms are the following: exhaustive search (ES), hill climbing (HC), stochastic hill climbing (HCS), hill climbing with variable neighborhood (HCVN), simulated annealing (SA), and genetic algorithm (GA).
Symbol        Definition
ν             Network node
e             Network edge
N             Number of nodes in a given network, N = Σ_i ν_i
A             Network adjacency matrix
a_ij          Adjacency matrix element ij
F             Focal node
d(i, j)       Network distance between nodes i and j
D             Threshold distance from focal node
N_C           Set of close nodes, N_C ⊂ N
N_D           Set of distant nodes, N_D ⊂ N
N^C_{i,j}     Set of nodes that are now close after a new connection between nodes i and j
L_F           Average path length to the focal node
C(i, j)       Cost of the new connection
B(i, j)       Benefit of the new connection
α             Cost weight
β             Benefit weight
t             Optimization iteration
O_t           Optimal solution for iteration t
O*            Optimal solution
M             Set of long-term memory solutions
C^D_i         Degree centrality of node i
C^C_i         Closeness centrality of node i
σ_ij          Shortest path between nodes i and j
σ_jk(i)       Shortest path between nodes j and k that includes node i
C^B_i         Betweenness centrality of node i
λ             Eigenvalue
x_i           Eigenvector
C^E_i         Eigenvector centrality of node i
α_P           Attenuation factor
C^P_i         Pagerank centrality of node i
η             Variable neighborhood size
µ             Genetic algorithm mutation rate
s             Genetic algorithm selection coefficient
P             Population of solutions for the genetic algorithm
f(i, j)       Genetic algorithm fitness function
ε_B           Benefit error from heuristic
ε_C           Cost deviation from heuristic
p             Connection probability (Erdös-Rényi graphs and Klemm and Eguílez graphs)
p_W           Rewiring probability (Watts-Strogatz graphs)
k_L           Initial node degree (Watts-Strogatz graphs)
m_0           Initial network size (Barabási and Albert graphs and Klemm and Eguílez graphs)
m             Degree of new nodes (Barabási and Albert graphs)
p_S           Node selection probability (Klemm and Eguílez graphs)
p_R           Edge removal probability (Delaunay and Voronoi random graphs)
C_D           Mean degree of a network
L             Average path length of a network
c^w_i         Weighted clustering coefficient for node i
w_ij          Weight of connection between nodes i and j
C             Weighted clustering coefficient of a network
C_r           Weighted clustering coefficient of a completely random network
L_r           Average path length of a completely random network
γ             Power law exponent
P(n)          Degree distribution
E             Efficiency of a network
E_r           Efficiency of a completely random network
E_G           Global efficiency of a network

Table 1. List of symbols and their definitions.
Local search selection criteria

distance from focal node:         d(i, F)
degree centrality:                C^D_i = Σ_j A(i, j)
closeness centrality:             C^C_i = 1 / Σ_j d(i, j)
betweenness centrality:           C^B_i = Σ_{j≠i≠k} σ_jk(i) / σ_jk
eigenvector centrality:           C^E_i = (1/λ) Σ_j A(i, j) x_j
pagerank centrality:              C^P_i = α_P Σ_j A(i, j) x_j / Σ_i A(i, j) + (1 − α_P)/N
weighted clustering coefficient:  c^w_i = [1 / ((C^D_i − 1) Σ_j a_ij w_ij)] Σ_{j,h} ((w_ij + w_ih)/2) a_ij a_ih a_jh

Table 2. Node characteristics used for the neighborhood search and their formulations.
Nodal characteristics of the close node (j)
Network          d(j,F)   C^D_j   C^C_j   C^B_j   C^E_j   C^P_j   c^w_j
ER               -0.08    0.14    0.10    0.14    0.12    0.14    0.00
WS               -0.52    0.08    0.26    0.15    0.11    0.07   -0.03
BA               -0.37    0.05    0.05    0.05    0.05    0.05    0.02
KS               -0.48    0.09    0.15    0.04    0.09    0.10    0.00
DT               -0.28   -0.01    0.15   -0.02    0.01   -0.03    0.14
VD               -0.41    0.02    0.15    0.03    0.08   -0.01    0.03
Schools          -0.30    0.02    0.18    0.06   -0.01   -0.00    *
Social Network    0.11    0.09    0.09    0.05   -0.07   -0.08   -0.04

Nodal characteristics of the distant node (i)
Network          d(i,F)   C^D_i   C^C_i   C^B_i   C^E_i   C^P_i   c^w_i
ER               -0.46    0.17    0.15    0.19    0.21    0.17    0.08
WS               -0.18    0.03   -0.01    0.03    0.02    0.01   -0.01
BA               -0.12    0.08    0.07    0.07    0.07    0.06    0.03
KS               -0.09    0.05    0.00    0.03    0.06    0.07    0.03
DT               -0.07    0.29    0.35    0.20   -0.16    0.30    0.12
VD               -0.08    0.30    0.21    0.20   -0.01    0.22    0.12
Schools          -0.04    0.08    0.06    0.07   -0.03    0.01    *
Social Network   -0.07   -0.30    0.26   -0.09   -0.10    0.05   -0.08

Table 3. Mean correlation coefficients for the nodal characteristics and the solution benefits for the experimental networks. The three coefficients with the largest magnitude are highlighted in bold for each network type. (*) There was no variation in clustering coefficients as triplets were not common in the street networks.
" | Here is a paper. Please give your review comments after reading it. |
158 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Scientific Workflows (SWfs) have revolutionized how scientists in various domains of science conduct their experiments. The management of SWfs is performed by complex tools that provide support for workflow composition, monitoring, execution, capturing, and storage of the data generated during execution. In some cases, they also provide components to ease the visualization and analysis of the generated data. During the workflow's composition phase, programs must be selected to perform the activities defined in the workflow specification. These programs often require additional parameters that serve to adjust the program's behavior according to the experiment's goals. Consequently, workflows commonly have many parameters to be manually configured, encompassing even more than one hundred in many cases. Wrongly parameters' values choosing can lead to crash workflows executions or provide undesired results. As the execution of dataand compute-intensive workflows is commonly performed in a high-performance computing environment e.g., a cluster, a supercomputer, or a public cloud), an unsuccessful execution configures a waste of time and resources. In this article, we present FReeP -Feature Recommender from Preferences, a parameter value recommendation method that is designed to suggest values for workflow parameters, taking into account past user preferences. FReeP is based on Machine Learning techniques, particularly in Preference Learning. FReeP is composed of three algorithms, where two of them aim at recommending the value for one parameter at a time, and the third makes recommendations for n parameters at once. The experimental results obtained with provenance data from two broadly used workflows showed FReeP usefulness in the recommendation of values for one parameter. Furthermore, the results indicate the potential of FReeP to recommend values for n parameters in scientific workflows.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION 32</ns0:head><ns0:p>Scientific experiments are the basis for evolution in several areas of human knowledge ( <ns0:ref type='bibr'>de Oliveira 33 et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b14'>Mattoso et al., 2010b;</ns0:ref><ns0:ref type='bibr' target='#b39'>Hey and Trefethen, 2020;</ns0:ref><ns0:ref type='bibr' target='#b38'>Hey et al., 2012)</ns0:ref>. Based on observations 34 of open problems in their research areas, scientists formulate hypotheses to explain and solve those 35 problems <ns0:ref type='bibr' target='#b29'>(Gonc ¸alves and Porto, 2015)</ns0:ref>. Such hypothesis may be confirmed or refuted, and also can 36 lead to new hypotheses. For a long time, scientific experiments were manually conducted by scientists, 37 including instrumentation, configuration and management of the environment, annotation and analysis 38 of results. Despite the advances obtained with this approach, time and resources were wasted since a 39 small misconfiguration of the parameters of the experiment could compromise the whole experiment. The 40 analysis of errors in the results was also far from trivial (de <ns0:ref type='bibr'>Oliveira et al., 2019)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>41</ns0:head><ns0:p>The evolution in computer science field allowed for the development of technologies that provided and relationships (i.e., data dependencies) with other activities, according to the stages of the experiment <ns0:ref type='bibr' target='#b88'>(Yong Zhao, 2008)</ns0:ref>.</ns0:p><ns0:p>Several workflows commonly require the execution of multiple data-intensive operations as loading, transformation, and aggregation <ns0:ref type='bibr' target='#b14'>(Mattoso et al., 2010b)</ns0:ref>. Multiple computational paradigms can be used for the design and execution of workflows, e.g., shell and Python scripts <ns0:ref type='bibr' target='#b54'>(Marozzo et al., 2013)</ns0:ref>, Big Data frameworks (e.g., Hadoop and Spark) <ns0:ref type='bibr' target='#b34'>(Guedes et al., 2020b)</ns0:ref>, but they are usually managed by complex engines named Workflow Management Systems (WfMS). A key feature that a WfMS must address is the efficient and automatic management of parallel processing activities in High Performance Computing (HPC) environments <ns0:ref type='bibr' target='#b67'>(Ogasawara et al., 2011)</ns0:ref>. Besides managing the execution of the workflow in HPC environments, WfMSs are also responsible for capturing, structuring and recording metadata associated to all the data generated during the execution: input data, intermediate data, and the final results. These metadata is well-known as provenance <ns0:ref type='bibr' target='#b21'>(Freire et al., 2008)</ns0:ref>. Based on provenance data, it is possible to analyze the results obtained and to foster the reproducibility of the experiment, which is essential to prove the veracity of a produced result.</ns0:p><ns0:p>In this article, the concept of an experiment is seen as encompassing the concept of a workflow, and not as a synonym. A workflow may be seen as a controlled action of the experiment. Hence, the workflow is defined as one of the trials conducted in the context of an experiment. In each trial, the scientist needs to define the parameter values for each activity of the workflow. It is not unusual that a simple workflow has more than 100 parameters to set. Setting up these parameters may be simple for an expert, but not so simple for non-expert users. Although WfMSs represent a step forward by providing the necessary infrastructure to manage workflow executions, they provide a little help (or even no help at all) on defining parameter values for a specific workflow execution. A good parameters values tune in a workflow execution is crucial not only for the quality of the results but also influences if a workflow will execute or not (avoiding unnecessary execution crashes). A poor choice of parameters values can cause failures, which leads to a waste of execution time. Failures caused by poor choices of parameter values are even more severe when workflows are executing in HPC environments that follow a pay-as-you-go model, e.g., clouds, since they can increase the overall financial cost. This way, if the WfMS could 'learn' from previous successfully executions of the workflow and recommend parameter values for scientists, some failures could be avoided. This recommendation is especially useful for non-expert users. Let us take as an example a scenario where an expert user has modeled a workflow and executed several trials of the same workflow varying the parameter values. 
If a non-expert scientist wants to execute the same workflow with a new set of parameter values and input data, but does not know how to set the values of some of the parameters, one can benefit from the parameter values used in previous executions of the same (or a similar) workflow. The advantage is that the WfMS provenance data already contains the parameter values used in previous (successful) executions and can be a rich resource for recommendation. Thus, the hypothesis of this article is that, by adopting an approach that recommends parameter values for workflows in a WfMS, we can increase the probability that the execution of the workflow will be completed. As a consequence, the financial cost associated with execution failures is reduced.</ns0:p><ns0:p>In this article, we propose a method named FReeP -Feature Recommender From Preferences, which aims at recommending values for the parameters of workflow activities. The proposed approach is able to recommend parameter values in two ways: (i) a single parameter value at a time, and (ii) multiple parameter values at once. The proposed approach relies on user preferences, defined for a subset of the workflow parameters, together with the provenance of the workflow. It is essential to highlight that user preferences are fundamental to explore experiment variations in a scientific scenario. Furthermore, in our approach, user preferences help prune the search space and account for user restrictions, producing personalized recommendations. The idea of combining user preferences and provenance is novel and allows for producing personalized recommendations for scientists. FReeP is based on Machine Learning algorithms <ns0:ref type='bibr' target='#b60'>(Mitchell, 2015)</ns0:ref>, particularly Preference Learning <ns0:ref type='bibr' target='#b23'>(Fürnkranz and Hüllermeier, 2011)</ns0:ref>, and Recommender Systems <ns0:ref type='bibr' target='#b72'>(Ricci et al., 2011)</ns0:ref>. We evaluated FReeP using real workflow traces (considered as benchmarks): Montage <ns0:ref type='bibr' target='#b40'>(Hoffa et al., 2008)</ns0:ref>, from the astronomy domain, and SciPhy <ns0:ref type='bibr' target='#b64'>(Ocaña et al., 2011)</ns0:ref>, from the bioinformatics domain. The results indicate the potential of the proposed approach. This article is an extension of the conference paper 'FReeP: towards parameter recommendation in scientific workflows using preference learning' <ns0:ref type='bibr' target='#b75'>(Silva Junior et al., 2018)</ns0:ref>.</ns0:p><ns0:p>This article is organized in five sections besides this introduction. The Background section details the theoretical concepts used in the development of the proposal. The FReeP -Feature Recommender from Preferences section presents the algorithm developed for the problem of recommending parameter values using user preferences. The Experimental Evaluation section shows the results of the experimental evaluation of the approach in three different scenarios. Then, the Related Work section presents a literature review of papers that address recommendation applied to workflows and hyperparameter recommendation for Machine Learning models. Lastly, the Conclusion section brings conclusions about this article and points out future work.</ns0:p>
<ns0:div><ns0:head>BACKGROUND</ns0:head><ns0:p>This section presents key concepts for understanding the approach presented in this article to recommend values for workflow parameters based on users' preferences and previous executions. Initially, scientific experiments are explained. Then, the concepts related to Recommender Systems are presented, followed by the concept of Preference Learning. This section also brings an overview of Borda Count, a less common voting scheme that is used to decide which values to suggest to the user.</ns0:p></ns0:div>
<ns0:div><ns0:head>Scientific Experiment</ns0:head><ns0:p>A scientific experiment arises from the observation of some phenomenon and the questions raised from that observation. The next step is the formulation of hypotheses, aiming at developing possible answers to those questions. Then, it is necessary to test each hypothesis to verify whether a produced output is a possible solution. The whole process includes many iterations of refinement, consisting, for example, of testing the hypothesis under distinct conditions, until there are enough elements to support it.</ns0:p><ns0:p>The scientific experiment life-cycle proposed by <ns0:ref type='bibr' target='#b55'>Mattoso et al. (2010a)</ns0:ref> is divided into three major phases: composition, execution and analysis. The composition phase is where the experiment is designed and structured. Execution is the phase where all the necessary instrumentation for the accomplishment of the experiment must be completed; instrumentation means the definition of input data, the parameters to be used at each stage of the experiment, and monitoring mechanisms. Finally, the analysis phase is when the data generated by the composition and execution phases are studied to understand the obtained results. The approach presented in this article focuses on the execution phase.</ns0:p></ns0:div>
<ns0:div><ns0:head>Scientific Workflows</ns0:head><ns0:p>Scientific workflows have become a de facto standard for modeling in silico experiments <ns0:ref type='bibr' target='#b93'>(Zhou et al., 2018)</ns0:ref>. A workflow is an abstraction that represents the steps of an experiment and the dataflow through each of these steps. Formally, a workflow is a directed acyclic graph $W(A, Dep)$. The nodes $A = \{a_1, a_2, \ldots, a_n\}$ are the activities, and the edges $Dep$ represent the data dependencies among the activities in $A$. Thus, given $a_i \mid (1 \leq i \leq n)$, the set $P = \{p_1, p_2, \ldots, p_m\}$ represents the possible input parameters for activity $a_i$, which define the behavior of $a_i$. Therefore, a workflow can be represented as a graph where the vertices act as experiment steps and the edges are the relations, or the dataflow, between the steps.</ns0:p><ns0:p>A workflow can also be categorized according to the level of abstraction into conceptual or concrete. A conceptual workflow represents the highest level of abstraction, where the experiment is defined in terms of steps and the dataflow between them; this definition does not explain how each step of the experiment will execute. A concrete workflow is an abstraction where the activities are represented by the computer programs that will execute them. The execution of an activity of the workflow is called an activation <ns0:ref type='bibr'>(de Oliveira et al., 2010a)</ns0:ref>, and each activation invokes a program with its parameters defined. However, managing this execution, which involves setting the correct parameter values for each program and capturing the intermediate data and execution results, becomes a challenge. It was with this in mind, and with the help of the composition of the experiment in the workflow format, that Workflow Management Systems (WfMS), such as Kepler <ns0:ref type='bibr' target='#b2'>(Altintas et al., 2006)</ns0:ref>, Pegasus <ns0:ref type='bibr' target='#b17'>(Deelman et al., 2005)</ns0:ref> and SciCumulus <ns0:ref type='bibr'>(de Oliveira et al., 2010a)</ns0:ref>, emerged.</ns0:p><ns0:p>In particular, SciCumulus is a key component of the proposed approach, since it provides a framework for parallel workflows to benefit from FReeP. Also, the data used in the experiments presented in this article are retrieved from previous executions of several workflows in SciCumulus. It is worth noticing that other WfMSs, such as Pegasus and Kepler, could also benefit from FReeP as long as they provide the necessary provenance data for the recommendation. The SciCumulus architecture is modularized to foster maintainability and ease the development of new features. SciCumulus is open-source and can be obtained at https://github.com/UFFeScience/SciCumulus/. The system is developed using the MPI library (a de facto standard library specification for message passing), so SciCumulus is a distributed application: each SciCumulus module has multiple instances created on the machines of the distributed environment (which are different processes, each with multiple threads) that communicate by triggering functions for sending and receiving messages between these processes. According to <ns0:ref type='bibr' target='#b35'>Guerine et al. (2019)</ns0:ref>, SciCumulus has four main modules: (i) SCSetup, (ii) SCStarter, (iii) SCCore, and (iv) SCQP (SciCumulus Query Processor).</ns0:p></ns0:div>
The first step towards executing a workflow in SciCumulus is to define the workflow specification and the parameter values to be consumed. This is performed using the SCSetup module: the user has to inform the structure of the workflow, which programs are associated with which activities, etc. When the metadata related to the experiment is loaded into the SciCumulus database, the user can start executing the workflow. Since SciCumulus was developed focusing on supporting the execution of workflows in clouds, instantiating the environment was a top priority. The SCSetup module queries the provenance database to retrieve prospective provenance and creates the virtual machines (in the cloud) or reserves machines (in a cluster). The SCStarter copies and invokes an instance of SCCore on each machine of the environment; since SCCore is an MPI-based application, it runs on all machines simultaneously and follows a Master/Worker architecture (similar to Hadoop and Spark). The SCCore-Master ($SCCore_0$) schedules the activations for several workers, and each worker has a specific ID ($SCCore_1$, $SCCore_2$, etc.). When a worker is idle, it sends a message to $SCCore_0$ (Master) and requests more activations to execute. $SCCore_0$ defines at runtime the best activation to send, following a specific cost model. The SCQP component allows users to submit queries to the provenance database for runtime or post-mortem analysis. For more information about SciCumulus, please refer to <ns0:ref type='bibr' target='#b11'>(de Oliveira et al., 2012, 2010b;</ns0:ref><ns0:ref type='bibr' target='#b35'>Guerine et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b74'>Silva et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b32'>Guedes et al., 2020a;</ns0:ref><ns0:ref type='bibr' target='#b12'>de Oliveira et al., 2013)</ns0:ref>.</ns0:p></ns0:div>
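<ns0:p>To tie the formal definition of a workflow $W(A, Dep)$ to something executable, the sketch below (in Python, the language of the FReeP implementation described later) shows one possible representation of a parameterized workflow DAG. The structure, program names and field names are illustrative only; they do not correspond to SciCumulus's actual specification format.</ns0:p>

# Illustrative sketch of W(A, Dep): activities A, each with its parameter set P,
# and data dependencies Dep as directed edges (hypothetical names throughout).
workflow = {
    "activities": {
        "a1": {"program": "align",   "params": {"p1": None, "p2": None}},
        "a2": {"program": "convert", "params": {"p3": None}},
        "a3": {"program": "elect",   "params": {"p4": None}},
    },
    # Dep: a2 and a3 consume the output of a1
    "dependencies": [("a1", "a2"), ("a1", "a3")],
}

def activation(wf, activity, values):
    """An activation: invoke one activity's program with concrete parameter values."""
    wf["activities"][activity]["params"].update(values)
    prog = wf["activities"][activity]["program"]
    return f"run {prog} with {values}"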
<ns0:div><ns0:head>Provenance</ns0:head><ns0:p>A workflow activation has input data and generates intermediate and output data. The WfMS has to collect all metadata associated with the execution in order to foster reproducibility; this metadata is called provenance <ns0:ref type='bibr' target='#b21'>(Freire et al., 2008)</ns0:ref>. According to <ns0:ref type='bibr' target='#b27'>Goble (2002)</ns0:ref>, provenance must support data quality verification, path auditing, assignment verification, and information querying. The data quality check is related to verifying the reliability of workflow-generated data. The path audit is the ability to follow the steps taken at each stage of the experiment that generated a given result. The assignment verification is linked to the ability to know who is responsible for the generated data. Lastly, information querying is essential to analyze the data generated by the experiment's execution. Especially for workflows, provenance can be classified as prospective (p-prov) and retrospective (r-prov) <ns0:ref type='bibr' target='#b21'>(Freire et al., 2008)</ns0:ref>. p-prov represents the specification of the workflow that will be executed; it corresponds to the steps to be followed to achieve a result. r-prov is given by the executed activities and information about the environment used to produce a data product, consisting of a structured and detailed history of the execution of the workflow.</ns0:p><ns0:p>Provenance is fundamental for the analysis phase of the scientific experiment. It allows for verifying what caused an activation to fail or to generate an unexpected result or, in the case of success, what were the steps and parameters used to reach the result. Another advantage of provenance is the reproducibility of an experiment, which is essential for the validation of the results by third parties. Considering the benefits of provenance in scientific experiments, it was necessary to define a model for representing provenance <ns0:ref type='bibr' target='#b7'>(Bose et al., 2006)</ns0:ref>. The standard W3C model is PROV <ns0:ref type='bibr' target='#b25'>(Gil et al., 2013)</ns0:ref>, a generic data model based on three basic components and their links, the components being Entity, Agent and Activity. Provenance and the provenance data model are essential concepts here because FReeP relies on provenance to recommend parameter values; moreover, to extract provenance data for use in FReeP, it is necessary to understand the provenance data model used.</ns0:p></ns0:div>
<ns0:div><ns0:head>Recommender Systems</ns0:head><ns0:p>FReeP is a personalized Recommender System (RS) <ns0:ref type='bibr' target='#b71'>(Resnick and Varian, 1997)</ns0:ref> aiming at suggesting the most relevant parameter values for the user to perform a task, based on their preferences. There are three essential elements in the development of a recommender system: Users, Items, and Transactions. The Users are the target audience of the recommender system, with their characteristics and goals. Items are the recommendation objects, and Transactions are records that hold a tuple (user, interaction), where the interaction encompasses the actions that the user performed when using the recommender system. These interactions are generally user feedback, which may be interpreted as preferences.</ns0:p><ns0:p>A recommender task can be defined as: given the elements Items, Users and Transactions, find the most useful items for each user. According to <ns0:ref type='bibr' target='#b0'>Adomavicius and Tuzhilin (2005)</ns0:ref>, a recommender system must satisfy $\forall u \in U, i'_u = \arg\max_{i \in I} F(u, i)$, where $U$ represents the users, $I$ represents the items, and $F$ is a utility function that calculates the utility of an item $i \in I$ for a user $u \in U$. In case the tuple $(u, i)$ is not defined over the entire search space, the recommender system can extrapolate the function $F$.</ns0:p><ns0:p>The utility function varies according to the approach followed by the recommender system. Thus, recommender systems are commonly classified into Collaborative Filtering, Content-based, and Hybrid approaches. Collaborative Filtering recommends items based on the evaluations of users with similar profiles and comprises Memory-Based and Model-Based subtypes. The Model-Based subtype generates a hypothesis from the data and uses it to make recommendations instantly. Although widely adopted, Collaborative Filtering only uses collective information, limiting novel discoveries in scientific experiment procedures.</ns0:p><ns0:p>Content-based Recommender Systems recommend items similar to those the user has rated positively in the past. To determine the degree of similarity between items, this approach is highly dependent on extracting their characteristics, and each scenario needs the right item representation to give satisfactory results. In scientific experiments, it can be challenging to find an optimal item representation.</ns0:p><ns0:p>Finally, Hybrid Recommender Systems arise from an attempt to minimize the weaknesses that traditional recommendation techniques have when used individually, while aggregating the strengths of the techniques used together. There are several methods of combining recommendation techniques in a hybrid recommender system, including: Weighting, which provides a score for each recommendation item; Switching, which selects among different recommendation strategies; Mixing, which makes more than one recommendation at a time; Feature Combination, which puts together both Content-Based and Collaborative Filtering strategies; Cascade, which first filters the candidate items for the recommendation and then refines these candidates, looking for the best alternatives; and Feature Augmentation and Meta-Level, which chain a series of recommenders one after another <ns0:ref type='bibr' target='#b9'>(Burke, 2002)</ns0:ref>.</ns0:p><ns0:p>FReeP is a Cascade Hybrid Recommender System, because the content of user preferences is used to prune the search space, followed by a collaborative strategy to produce the final recommendations.</ns0:p></ns0:div>
<ns0:div><ns0:head>Preference Learning</ns0:head><ns0:p>User preferences play a crucial role in recommender systems <ns0:ref type='bibr' target='#b81'>(Viappiani and Boutilier, 2009)</ns0:ref>. From an Artificial Intelligence perspective, a preference is a problem restriction that allows for some degree of relaxation. According to <ns0:ref type='bibr' target='#b23'>Fürnkranz and Hüllermeier (2011)</ns0:ref>, a common way of representing preferences is through binary relationships. For example, a tuple $(x_i > x_j)$ would mean a preference for the value $i$ over $j$ for the attribute $x$.</ns0:p><ns0:p>The main task within the Preference Learning area is Learning to Rank, as it is commonly necessary to have an ordering of the preferences. The task is divided into three categories: Label Ranking <ns0:ref type='bibr' target='#b80'>(Vembu and Gärtner, 2011)</ns0:ref>, Instance Ranking <ns0:ref type='bibr' target='#b3'>(Bergeron et al., 2008)</ns0:ref> and Object Ranking <ns0:ref type='bibr' target='#b63'>(Nie et al., 2005)</ns0:ref>. In Label Ranking, a ranker produces an ordering of the set of classes of a problem for each instance of the problem. In cases where the classes of a problem are naturally ordered, the Instance Ranking task is more suitable, as it orders the instances of a problem according to their classes: instances belonging to the 'highest' classes precede instances that belong to the 'lower' classes. In Object Ranking, an instance is not related to a class; the objective is, given a subset of the total set of items, to produce a ranking of the objects in that subset, for example, the ranking of web pages by a search engine.</ns0:p><ns0:p>Pairwise Label Ranking <ns0:ref type='bibr' target='#b22'>(Fürnkranz and Hüllermeier, 2003;</ns0:ref><ns0:ref type='bibr' target='#b41'>Hüllermeier et al., 2008)</ns0:ref> (PLR) relates each instance with a preference of the type $a > b$, representing that $a$ is preferable to $b$. A binary classification task is then assembled where each example $(a, b)$ is annotated with 1 if $a$ is preferable over $b$ and 0 otherwise, and a classifier $M_{a,b}$ is trained over such a dataset to learn how to make preference predictions, returning 1 when it predicts that $a$ is preferable to $b$ and 0 otherwise. Instead of using a single classifier that makes predictions among $m$ classes, given a set $L$ of $m$ classes, there will be $m(m-1)/2$ binary classifiers, where a classifier $M_{i,j}$ only predicts between classes $i, j \in L$. The strategy defined by PLR then uses the prediction of each classifier as a vote, and a voting system defines an ordered list of preferences. Next, we give more details about how FReeP tackles the voting problem.</ns0:p></ns0:div>
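<ns0:p>As an illustration of the pairwise scheme just described, the following sketch (assuming scikit-learn, which the FReeP implementation uses; function names are ours, not FReeP's API) trains one binary classifier per label pair and ranks labels by counting the pairwise votes.</ns0:p>

# Minimal Pairwise Label Ranking sketch: m(m-1)/2 binary classifiers M[i,j],
# each voting on whether label i is preferable to label j.
from itertools import combinations
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def train_pairwise_rankers(X, y, labels):
    models = {}
    for i, j in combinations(labels, 2):
        mask = np.isin(y, [i, j])              # keep only examples of classes i, j
        if mask.sum() < 2:
            continue                           # not enough data for this pair
        m = KNeighborsClassifier(n_neighbors=min(3, int(mask.sum())))
        m.fit(X[mask], (y[mask] == i).astype(int))  # 1 means "i preferable to j"
        models[(i, j)] = m
    return models

def rank_labels(models, x, labels):
    votes = {label: 0 for label in labels}
    for (i, j), m in models.items():
        pred = m.predict(np.asarray(x).reshape(1, -1))[0]
        votes[i if pred == 1 else j] += 1      # each pairwise prediction is a vote
    return sorted(labels, key=votes.get, reverse=True)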
<ns0:div><ns0:head>Borda Count</ns0:head><ns0:p>Voting Theory <ns0:ref type='bibr' target='#b78'>(Taylor and Pacelli, 2008)</ns0:ref> is an area of Mathematics aimed at the study of voting systems. In an election between two elements, it is fair to follow the majority criterion, that is, the winning candidate is the one that has obtained more than half of the votes. However, elections involving more than two candidates require a more robust system. Preferential Voting <ns0:ref type='bibr' target='#b48'>(Karvonen, 2004)</ns0:ref> and Borda Count <ns0:ref type='bibr' target='#b20'>(Emerson, 2013)</ns0:ref> are two voting schemes for scenarios with more than two candidates. In Preferential Voting, voters produce a list from the most preferred to the least preferred candidate, and the elected candidate is the one most often chosen as the most preferred by voters.</ns0:p><ns0:p>Borda Count is a voting system in which voters draw up a list of candidates arranged according to their preference. Each position in a voter's preference list then receives a score: in a list of $n$ candidates, the candidate in the $i$-th position receives the score $n - i$. To determine the winner, the final score of a candidate is the sum of that candidate's scores over all voters, and the candidate with the highest score is elected.</ns0:p></ns0:div>
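<ns0:p>A small worked example of Borda Count with three hypothetical candidates and three voters, in Python:</ns0:p>

# Borda Count: with n candidates, position i (0-based) on a ballot scores n-1-i.
def borda_count(ballots):
    scores = {}
    for ballot in ballots:                     # ballot: most- to least-preferred
        n = len(ballot)
        for pos, candidate in enumerate(ballot):
            scores[candidate] = scores.get(candidate, 0) + (n - 1 - pos)
    return max(scores, key=scores.get), scores

winner, scores = borda_count([["a", "b", "c"], ["a", "c", "b"], ["b", "c", "a"]])
# scores == {"a": 4, "b": 3, "c": 2}, so "a" is elected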
<ns0:div><ns0:p>Borda Count has also been applied in the recommendation literature, for example in recommender systems based on Collaborative Filtering. Still using Borda Count, Tang and Tong (2016) propose BordaRank, a method that applies the Borda Count directly to the sparse matrix of evaluations, without predictions, to make a recommendation.</ns0:p><ns0:p>The way FReeP tackles the recommendation task is presented in three versions. In the first two versions, the algorithm aims at recommending a value for only one parameter at a time: while the naive version assumes that all parameters have a discrete domain, the enhanced second version extends the first one to deal with cases where a parameter has a continuous domain. The third version targets recommending values for $n > 1$ parameters at a time.</ns0:p></ns0:div>
<ns0:div><ns0:head>FREEP -FEATURE RECOMMENDER FROM PREFERENCES</ns0:head><ns0:p>Next, we start by presenting the naive version of the method, which makes the recommendation for a single parameter at a time. Then, we present the improved version, with enhancements that improve performance and allow working with parameters in the continuous domain. Finally, a generic version of the algorithm is presented, aimed at recommending values for multiple parameters at a time.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discrete Domain Parameter Value Recommendation</ns0:head><ns0:p>Given a provenance database $D$, a parameter $y \in Y$, where $Y$ is the workflow parameter set, and a set $P$ of preferences or restrictions defined by the user, where each $p_i \in P$ is a pair $(y_i, val_k)$, FReeP's one-parameter approach aims at solving the problem of recommending a value $r$ for $y$, so that the preferences $P$ together with the recommendation $r$ for $y$ maximize the chances of the workflow activation running to the end. Based on the user's preferences, it would be possible to query the provenance database from which the experiment came to retrieve records that could assist in the search for values of the parameters for which no preferences were defined. FReeP, however, is based on generating a model that generalizes the retrieved records to produce the recommendation.</ns0:p><ns0:p>The algorithm input data are: the target parameter for which the algorithm should make the recommendation, $y$; the user preferences set $P$, given as a list of key-value pairs, where the key is a workflow parameter and the value is the user's preference for that parameter; and the provenance database $D$.</ns0:p><ns0:p>The storage of provenance data for an experiment may vary from one WfMS to another. For example, SciCumulus, which uses a provenance representation derived from PROV, stores provenance in a relational database. Using the SciCumulus example, it is trivial for the user responsible for the experiment to elaborate a SQL query that returns the provenance data related to the parameters used in each activity in a key-value representation. The key-value representation can be easily stored in a csv file, which is the format expected for the provenance dataset in the FReeP implementation; thus, converting provenance data to the csv format is up to the user. Still regarding the provenance data, the records present in the algorithm input containing information about the parameters must relate only to executions that finished successfully, that is, where no failure caused the execution to be aborted. The inclusion of components to query and transform provenance data and to enforce the selection of parameters from successful executions would require implementations for each type of WfMS, which is out of the scope of this article.</ns0:p><ns0:p>The first step, partitions generation, computes the powerset of the parameters present in the user's preferences and uses the generated powerset as a partitioning ruleset. Figure <ns0:ref type='figure' target='#fig_8'>5</ns0:ref> shows an example of how this first step works, with some parameters from the SciPhy workflow.</ns0:p></ns0:div>
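<ns0:p>A minimal sketch of the partitions generation and horizontal filtering just described, assuming the provenance is a pandas DataFrame and the preferences are equality pairs (function names are ours, not FReeP's API):</ns0:p>

# Each non-empty subset of the preference parameters is a partitioning rule;
# the horizontal filter keeps the provenance records satisfying the rule.
from itertools import chain, combinations
import pandas as pd

def powerset(params):
    return chain.from_iterable(
        combinations(params, r) for r in range(1, len(params) + 1))

def partitions(provenance: pd.DataFrame, preferences: dict):
    for rule in powerset(sorted(preferences)):
        mask = pd.Series(True, index=provenance.index)
        for param in rule:
            mask &= provenance[param] == preferences[param]
        part = provenance[mask]                # horizontal filter
        if not part.empty:                     # empty partitions cannot train a model
            # a vertical filter (not shown) would then select the attribute columns
            yield rule, part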
<ns0:div><ns0:p>The Cold Start problem refers to the lack of initial conditions for an algorithm, specifically in recommender systems. This problem occurs, for example, when there are few users with a similar profile for the neighborhood definition, or when there are not enough item ratings. FReeP can also be affected by the Cold Start problem. If all preferences were used at once for partitioning the provenance data, in some cases the resulting partition would be empty, because some of the user's preferences may be absent from the provenance data. Generating multiple partitions with subsets of preferences therefore decreases the chance of obtaining only empty partitions. However, in the worst case, where none of the user's preferences are present in the workflow provenance, FReeP will not perform properly, thus failing to make any recommendation.</ns0:p><ns0:p>After the partitions are generated and the horizontal and vertical filters are applied, there is a filtered dataset that follows part of the user's preferences. This filtered provenance data is used to generate the Machine Learning models. All predictions produced by the recommend step, which runs within the iteration over the partitioning rules, are stored. The last step, elect recommendation, uses all of these predictions as votes to define which value should be recommended for the target parameter. When an algorithm instance is set up to return a classifier-type model in the hypothesis generation step, the most voted value is elected as the recommendation. On the other hand, when an algorithm instance is set up to return a ranker-type model in the hypothesis generation step, the strategy is Borda Count. The Borda Count strategy takes advantage of the list-of-lists form that the saved votes acquire when using the ranker model: the ranker prediction is a list and, since there are as many predictions as partitioning rules, the stored predictions form a list of lists.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discrete and Continuous Domain Parameter Value Recommendation</ns0:head><ns0:p>The naive version of FReeP allowed evaluating the algorithm's key ideas. The proposal showed relevant results in initial tests (presented in the next section), so efforts were focused on improving its performance and utility. In particular, the following problems were identified: 1) the user is restricted in how parameter preferences can be expressed; 2) categorical domain parameters used as the class variable (the parameter to recommend) are treated exactly as they appear in the input data; 3) the Machine Learning models used can only learn when the class (parameter) variable has a discrete domain; 4) all partitions generated by the powerset of the workflow parameters present in the user preferences are used as partitioning rules.</ns0:p><ns0:p>Regarding problem 1, in Algorithm 1 the user was limited to defining preferences with the equality operator which, depending on the user's preferences, is not enough; the enhanced version is designed to accept preferences expressed with relational operators beyond equality. Regarding problem 2, categorical values used as the class variable must be numerically encoded before training models such as Support Vector Machines (SVM) <ns0:ref type='bibr' target='#b84'>(Wang, 2005)</ns0:ref>. This pre-processing step was included in Algorithm 2 as the classes preprocessing step, which consists in exchanging each distinct categorical value for a distinct integer. Note that the encoding of the parameter used as the class variable in model generation is different from the encoding applied to the parameters used as attributes, represented by the preprocessing step.</ns0:p><ns0:p>Concerning problem 3, using classifiers to handle a continuous domain class variable degrades the results, because the numerical class variables are treated as categorical. For continuous numerical class variables, the suggested Machine Learning models are Regressors <ns0:ref type='bibr' target='#b62'>(Myers and Myers, 1990)</ns0:ref>. Thus, the enhanced FReeP checks the domain of the recommendation target parameter $y$, represented as the model select step in Algorithm 2.</ns0:p><ns0:p>To analyze problem 4, it is important to note that, after the One-Hot encoding of categorical attributes in the preprocessing step, the provenance database has a considerable increase in the number of attributes. Also, the parameters extracted from the user's preferences are encoded accordingly for the partitions generation step. In Algorithm 1, the powerset of partitioning rules is calculated over all attributes derived from the original parameters after One-Hot encoding. Using the powerset generated from the parameters present in the user's preferences as partitioning rules (in the partitions generation step) can be very costly, since it makes the algorithm's complexity exponential in the number of parameters present in the user's preferences. Alternatives to select the best partitioning rules and handle this exponential cost are represented in Algorithm 2 as the optimized partitions generation step.</ns0:p></ns0:div>
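<ns0:p>The sketch below illustrates, under our reading of the enhanced algorithm, the preprocessing step (One-Hot encoding of attributes), the classes preprocessing step (one distinct integer per categorical class value), and the model select step (a regressor for a continuous-domain target, a classifier otherwise). The domain check via dtype is our simplification; FReeP's actual check may differ.</ns0:p>

import pandas as pd
from pandas.api.types import is_numeric_dtype
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor

def preprocess(provenance: pd.DataFrame, target: str):
    # preprocessing: One-Hot encode the attribute columns
    X = pd.get_dummies(provenance.drop(columns=[target]))
    y = provenance[target]
    if not is_numeric_dtype(y):
        # classes preprocessing: exchange each categorical value for an integer
        y = y.astype("category").cat.codes
    return X, y

def model_select(provenance: pd.DataFrame, target: str):
    # model select: regressor for continuous domains, classifier for discrete ones
    if is_numeric_dtype(provenance[target]):
        return KNeighborsRegressor(n_neighbors=3)
    return KNeighborsClassifier(n_neighbors=3)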
The two strategies proposed here are based on Principal Component Analysis (PCA) <ns0:ref type='bibr' target='#b24'>(Garthwaite et al., 2002)</ns0:ref> and on the Analysis of Variance (ANOVA) <ns0:ref type='bibr' target='#b26'>(Girden, 1992)</ns0:ref> statistical metric. The strategy based on PCA consists of extracting $x$ principal components from the whole provenance database, $pca_D$, and, for each partition $pt \in partitions$, its principal components $pca^{i}_{pt}$. Then, the norms $\lVert pca_D - pca^{i}_{pt} \rVert$ are calculated, and the $n$ partitioning rules whose partitions generated the $pca^{i}_{pt}$ closest to $pca_D$ are selected. (The remainder of Algorithm 2 proceeds as in the naive version: for each selected partitioning rule, a model is trained and produces a vote via recommend(model, y), the votes are accumulated, and elect recommendation(votes) returns the final recommendation.)
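<ns0:p>A sketch of the PCA-based selection of partitioning rules under the reading above; the choice of the number of components $x$ and of the number of selected rules $n$ is ours, and component sign ambiguity is ignored for simplicity:</ns0:p>

# Keep the n rules whose partitions' principal components are closest (by norm)
# to the principal components of the whole provenance matrix D.
import numpy as np
from sklearn.decomposition import PCA

def select_rules_pca(D, parts, x_components=2, n_rules=5):
    pca_D = PCA(n_components=x_components).fit(D).components_
    scored = []
    for rule, part in parts:                   # parts: (rule, numeric matrix) pairs
        if part.shape[0] <= x_components:
            continue                           # too few rows to fit a PCA
        pca_pt = PCA(n_components=x_components).fit(part).components_
        scored.append((np.linalg.norm(pca_D - pca_pt), rule))
    scored.sort(key=lambda pair: pair[0])
    return [rule for _, rule in scored[:n_rules]]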
<ns0:div><ns0:head>Recommendation for n Parameters at a time</ns0:head><ns0:p>Algorithms 1 and 2 aim at producing a recommendation for a single parameter at a time. However, in a real usage scenario of scientific workflows, the WfMS will probably need to recommend more than one parameter at a time. A naive alternative to handle this problem is to execute Algorithm 2 for each of the target parameters, always adding the last recommendation to the user's preference set. This alternative assumes that the parameters to be recommended are independent random variables. One way to implement this strategy is by using a classifier chain <ns0:ref type='bibr' target='#b70'>(Read et al., 2011)</ns0:ref>.</ns0:p><ns0:p>Nevertheless, this naive approach neglects that the order in which the target parameters are processed during the algorithm's iterations can influence the produced recommendations. The influence is due to parameter dependencies that can exist between two (or more) workflow activities (e.g., two activities consume a parameter produced by a third activity of the workflow). In Figure <ns0:ref type='figure'>3</ns0:ref>, the circles represent the activities of the workflow, and activities 2 and 3 are preceded by activity 1 (e.g., they consume the output of activity 1). Using this example, it is possible that there is a dependency relationship between the parameters param2 and param3 and the parameter param1; in this case, the values of param2 and param3 can be influenced by the value of param1.</ns0:p><ns0:p>In order to deal with this problem, FReeP leverages the Classifier Chains Set <ns0:ref type='bibr' target='#b70'>(Read et al., 2011)</ns0:ref> concept. This technique allows for estimating the joint probability distribution of random variables based on a set of classifier chains. In this case, the random variables are the parameters whose values are to be recommended, and the joint probability distribution concerns the possible dependencies between these parameters. Classifier Chains and Classifier Chains Sets are techniques from the Multi-label Classification <ns0:ref type='bibr' target='#b79'>(Tsoumakas and Katakis, 2007)</ns0:ref> Machine Learning task.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_13'>8</ns0:ref> depicts an architecture overview of the proposed algorithm, named Generic FReeP, which recommends n parameters simultaneously. The architecture presented in Figure <ns0:ref type='figure' target='#fig_13'>8</ns0:ref> shows that the solution developed to make n parameter recommendations at a time is a packaging of the one-parameter FReeP algorithm. This final approach is divided into five steps: identification of the parameters for recommendation; generation of ordered sequences of these parameters; iteration over each generated sequence, adding each FReeP recommendation to the user preferences set; separation of the recommendations by parameter; and, finally, the choice of the recommended value for each target parameter. The formalization can be seen in Algorithm 3.</ns0:p><ns0:p>The first step, parameters extractor, extracts the workflow parameters that are not present in the user's preferences and will be the targets of the recommendations.</ns0:p></ns0:div>
Thus, all other parameters that are not in the user's preferences will receive recommended values. Lines 4 and 5 of the algorithm initialize the variable responsible for storing the different recommendations for each parameter during the algorithm's execution. Then, the list of all parameters to be recommended is used to generate different orderings of these parameters, indicated by the sequence generators step. For example, let $w$ be a workflow with four parameters and let $u$ be a user with preferences $pr_1$ and $pr_3$ for the parameters $p_1$ and $p_3$, respectively. The parameters to be recommended are $p_2$ and $p_4$; in this case, two possible orderings are $\{p_2, p_4\}$ and $\{p_4, p_2\}$. Note that the algorithm does not use all possible orderings; in fact, $N$ of the possible orderings are selected at random. The algorithm then iterates over each of the orderings generated by the sequence generators step, with a nested iteration over each parameter present in the current ordering. An intuitive explanation of the algorithm between lines 9 and 13 is that each parameter of the current sequence is used together with the user's preferences for its recommendation. When one of the ordering's parameters has been recommended, the recommendation is incorporated into the preferences set used in the recommendation of
the next parameter in the ordering. In this iteration, the recommendations are grouped by parameter to facilitate the election of the recommended value for each target parameter.</ns0:p><ns0:p>The step of iterating over the generated sequences, always adding the last recommendation to the set of preferences, realizes the Classifier Chains concept. To deal with the dependencies between workflow parameters that can influence a parameter value recommendation, the step that generates multiple sequences of parameters, combined with the Classifier Chains, realizes the Classifier Chains Set concept.</ns0:p><ns0:p>Finally, to choose the recommendation for each target parameter, a vote is taken on lines 15 and 16. The most voted procedure performs the majority election that defines the recommended value for each target parameter. This section presented the three algorithms that make up the FReeP approach to the parameter recommendation problem in workflows, covering the two main recommendation scenarios (a single parameter and multiple parameters at a time).</ns0:p></ns0:div>
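<ns0:p>A compact sketch of this outer loop is shown below. Here, freep_one_param stands in for Algorithm 2 and is assumed to be available; each random ordering acts as a classifier chain, feeding every recommendation back into the preferences, and a per-parameter majority vote elects the final values.</ns0:p>

import random
from collections import defaultdict

def generic_freep(targets, preferences, provenance, n_sequences=10):
    votes = defaultdict(list)
    for _ in range(n_sequences):
        order = random.sample(targets, len(targets))   # one random ordering
        prefs = dict(preferences)
        for param in order:
            # freep_one_param: the single-parameter recommender (Algorithm 2)
            value = freep_one_param(param, prefs, provenance)
            prefs[param] = value                       # chain: feed forward
            votes[param].append(value)
    # majority election per target parameter
    return {p: max(set(v), key=v.count) for p, v in votes.items()}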
<ns0:div><ns0:head>EXPERIMENTAL EVALUATION</ns0:head><ns0:p>This section presents the experimental evaluation of all versions of FReeP. First, we present the workflows used as case studies, namely SciPhy <ns0:ref type='bibr' target='#b64'>(Ocaña et al., 2011)</ns0:ref> and Montage <ns0:ref type='bibr' target='#b43'>(Jacob et al., 2009)</ns0:ref>. Following, we present the experimental and environment setups. Finally, we discuss the results.</ns0:p></ns0:div>
<ns0:div><ns0:head>Case Studies</ns0:head><ns0:p>In this article, we consider two workflows from the bioinformatics and astronomy domains, namely SciPhy <ns0:ref type='bibr' target='#b64'>(Ocaña et al., 2011)</ns0:ref> and Montage <ns0:ref type='bibr' target='#b43'>(Jacob et al., 2009)</ns0:ref>, respectively. SciPhy is a phylogenetic analysis workflow that generates phylogenetic trees (a tree-based representation of the evolutionary relationships among organisms) from input DNA, RNA and amino acid sequences. SciPhy has four major activities, as presented in Figure <ns0:ref type='figure' target='#fig_16'>9</ns0:ref>(a): (i) sequence alignment, (ii) alignment conversion, (iii) evolutionary model election and (iv) tree generation. SciPhy has been used in scientific gateways such as BioInfoPortal <ns0:ref type='bibr' target='#b66'>(Ocaña et al., 2020)</ns0:ref>. SciPhy is a CPU-intensive workflow, because many of its activities (especially the evolutionary model election) commonly execute for several hours, depending on the input data and the chosen execution environment.</ns0:p><ns0:p>Montage <ns0:ref type='bibr' target='#b43'>(Jacob et al., 2009)</ns0:ref> is a well-known astronomy workflow that assembles astronomical images into mosaics by using FITS (Flexible Image Transport System) files. Those files include a coordinate system and the image size, rotation, and WCS (World Coordinate System) map projection.</ns0:p></ns0:div>
<ns0:div><ns0:head>Experimental and Environment Setup</ns0:head><ns0:p>All FReeP algorithms presented in this article were implemented using the Python programming language. The FReeP implementation also benefits from Scikit-Learn <ns0:ref type='bibr' target='#b68'>(Pedregosa et al., 2011)</ns0:ref> to learn and evaluate the Machine Learning models; numpy <ns0:ref type='bibr' target='#b83'>(Walt et al., 2011)</ns0:ref>, a numerical data manipulation library; and pandas <ns0:ref type='bibr' target='#b59'>(McKinney, 2011)</ns0:ref>, which provides tabular data functionalities.</ns0:p><ns0:p>The machine on which the experiments were performed has a Celeron (R) Dual-Core T3300 @ 2.00GHz × 2 CPU, 4GB DDR2 RAM and a 132GB HDD. To measure recommendation performance when the parameter is categorical, precision and recall are used as metrics. Precision and recall are widely used for the quantitative assessment of recommender systems <ns0:ref type='bibr' target='#b37'>(Herlocker et al., 2004)</ns0:ref> <ns0:ref type='bibr' target='#b73'>(Schein et al., 2002)</ns0:ref>. Equation 1 defines precision and Equation 2 defines recall, following the recommender vocabulary, where $TR$ is the set of correct recommendations and $R$ is the set of all recommendations made. Intuitively, precision represents the fraction of the recommendations made that were appropriate, while recall represents the fraction of the appropriate recommendations that were actually made.</ns0:p><ns0:formula xml:id='formula_2'>$$precision = \frac{|TR \cap R|}{|R|} \quad (1)$$
$$recall = \frac{|TR \cap R|}{|TR|} \quad (2)$$
$$MSE = \frac{1}{n} \sum_{i=1}^{n} (RV_i - TV_i)^2 \quad (3)$$</ns0:formula><ns0:p>When the parameter to be recommended is numerical, the performance of FReeP is evaluated with the Mean Squared Error (MSE). The MSE formula is given by Equation 3, where $n$ is the number of recommendations, $RV_i$ is the $i$-th recommended value, and $TV_i$ is the corresponding true value.</ns0:p><ns0:p>In Figure <ns0:ref type='figure' target='#fig_18'>10</ns0:ref>, it is possible to check the correlation between the different attributes in the datasets. It is notable in both Figure <ns0:ref type='figure' target='#fig_18'>10a</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_18'>10b</ns0:ref> that the attributes (i.e., workflow parameters) present a weak correlation. All those statistics are relevant to understand the results obtained by the experiments performed with each version of the FReeP algorithm.</ns0:p></ns0:div>
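<ns0:p>For concreteness, Equations 1 to 3 can be computed as in the following sketch, using set semantics for precision and recall:</ns0:p>

import numpy as np

def precision(true_recs, recs):
    return len(set(true_recs) & set(recs)) / len(set(recs))       # Equation 1

def recall(true_recs, recs):
    return len(set(true_recs) & set(recs)) / len(set(true_recs))  # Equation 2

def mse(recommended, true_values):
    rv = np.asarray(recommended, dtype=float)
    tv = np.asarray(true_values, dtype=float)
    return float(np.mean((rv - tv) ** 2))                         # Equation 3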
<ns0:div><ns0:head>Discrete Domain Recommendation Evaluation</ns0:head><ns0:p>This experiment was modeled to evaluate the key concepts of FReeP using the naive version presented in Algorithm 1, which was developed to recommend one discrete-domain parameter at a time. The experiment aims at evaluating and comparing the performance of FReeP when its hypothesis generation step instantiates either a single classifier or a ranker. The ranker tested was implemented using the Pairwise Label Ranking technique, with the K Nearest Neighbors <ns0:ref type='bibr' target='#b49'>(Keller et al., 1985)</ns0:ref> classifier as its base model. The experiment follows the script below:</ns0:p><ns0:p>1. The algorithm is instantiated with the classifier or ranker and a recommendation target workflow parameter.</ns0:p><ns0:p>2. The provenance database is divided into k parts following a K-Fold Cross Validation procedure <ns0:ref type='bibr' target='#b50'>(Kohavi, 2001)</ns0:ref>. At each step, the procedure takes k − 1 parts to train the model and the remaining part to make the predictions. In this experiment, k = 5.</ns0:p><ns0:p>3. Each workflow parameter is used as the recommendation target parameter.</ns0:p><ns0:p>4. Each provenance record in the test data is used to retrieve the target parameter's real value.</ns0:p><ns0:p>5. The parameters that are not the recommendation target are used as preferences, with values from the current test record.</ns0:p><ns0:p>6. The algorithm then performs the recommendation, and both the result and the value present in the test record for the recommendation target parameter are stored.</ns0:p><ns0:p>7. Precision and recall values are calculated based on all K-Fold Cross Validation iterations.</ns0:p></ns0:div>
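<ns0:p>Steps 2 to 6 of this script can be sketched with scikit-learn's KFold as below; freep_one_param again stands in for the recommendation call, and the exact-match count corresponds to the pairs stored in step 6:</ns0:p>

from sklearn.model_selection import KFold

def experiment_one_target(provenance, target, k=5):
    hits, total = 0, 0
    for train_idx, test_idx in KFold(n_splits=k, shuffle=True).split(provenance):
        train = provenance.iloc[train_idx]
        test = provenance.iloc[test_idx]
        for _, record in test.iterrows():
            prefs = record.drop(target).to_dict()   # step 5: other values as preferences
            rec = freep_one_param(target, prefs, train)  # assumed recommender call
            hits += int(rec == record[target])      # step 6: compare with real value
            total += 1
    return hits / total                             # fraction of exact matches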
<ns0:div><ns0:head>Results</ns0:head><ns0:p>Experiment 1 results are presented and analyzed based on the values of precision and recall, in addition to the execution time. Figure <ns0:ref type='figure' target='#fig_20'>11</ns0:ref> shows the results of executing Algorithm 1 with the SciPhy provenance database, using both the classifier and the ranker. Only the KNN classifier with k = 3 gives a precision greater than 50%, and a high standard deviation is noticeable. Even with this unsatisfactory performance, Figure <ns0:ref type='figure' target='#fig_21'>12</ns0:ref> shows that the KNN classifier presented better results for recall than for precision, both in absolute values and in the standard deviation, which decreased slightly. In contrast, the ranker's recall was even worse than its precision and still presented a very high standard deviation. Figure <ns0:ref type='figure' target='#fig_22'>13</ns0:ref> shows the execution time, in seconds, to obtain the experiment's recommendations for SciPhy. The execution time of the ranker is much greater than the time spent by the classifier. This behavior can be explained by the fact that the technique used to generate the ranker creates multiple binary classifiers. Another point to note is that the standard deviation of the ranker's execution time is also very high. It is important to note that when FReeP uses KNN it is memory-based, since the data needs to be loaded into main memory for each recommendation.</ns0:p><ns0:p>Analyzing Figure <ns0:ref type='figure' target='#fig_23'>14</ns0:ref> (Montage), one can conclude that using k = 3 for the classifier and for the ranker produces relevant results. The precision in this case reached 80%, and the standard deviation was considerably smaller compared to the precision results with the SciPhy dataset in Figure <ns0:ref type='figure' target='#fig_20'>11</ns0:ref>. For k ∈ {5, 7}, the same behavior was observed, with results considerably below expectations.</ns0:p></ns0:div>
<ns0:div><ns0:p>Considering recall, Figure <ns0:ref type='figure' target='#fig_24'>15</ns0:ref> shows that the results for k = 3 were the best for both the classifier and the ranker, although in this case they did not reach 80% (though close to it). It can be noted that the standard deviation was smaller when compared to the standard deviations found for precision. One interesting point about the execution time of the experiment with Montage, presented in Figure <ns0:ref type='figure' target='#fig_25'>16</ns0:ref>, is that for k ∈ {3, 7} the ranker spent less time than the classifier. This behavior can be explained because the ranker, despite being generated by a process in which several classifiers are built, relies on binary classifiers, whereas the classifier used alone needs to handle all values of the class variable, in this case all parameter recommendation values, at once. However, it is also important to note that the standard deviation for the ranker is much higher than for the classifier. In general, the use of the ranker did not bring encouraging results: in all cases, the ranker's precision and recall were lower than those presented by the classifier, and the standard deviation of the ranker's execution time was also very high. Another point to be noted is that the best precision and recall results were obtained with the data from the Montage workflow. These results may be linked to the fact that the Montage dataset has more records than the SciPhy dataset.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discrete and Continuous Domain Recommendation Evaluation</ns0:head><ns0:p>Experiment 1 was modified to evaluate the performance of Algorithm 2, yielding Experiment 2. Algorithm 2 was executed with variations in the choice of classifiers and regressors, partitioning strategies, and the percentage of records retrieved from the provenance database. All values per algorithm parameter are presented in Table <ns0:ref type='table' target='#tab_11'>4</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>Experiment 2 results are presented using precision, recall, and execution time for the recommendation of categorical domain parameters, while recommendations for numerical domain parameters are evaluated using MSE and execution time. Based on the results obtained in Experiment 1, only classifiers were used as Machine Learning models in Experiment 2, i.e., we do not consider rankers.</ns0:p><ns0:p>The first observation when analyzing the precision data in Figure <ns0:ref type='figure' target='#fig_28'>17</ns0:ref> is that the ANOVA partitioning strategy obtained better results than PCA. The precision of the ANOVA partitioning strategy in absolute values is generally greater, and the variation in precision across the attributes considered for recommendation is lower than with the PCA strategy. The classifiers have very similar performance for all partition percentages under the ANOVA strategy. On the other hand, varying the percentage of elements per partition also produces a greater variation in the results between the different classifiers. The Multi Layer Perceptron (MLP) classifier, which was trained using Stochastic Gradient Descent <ns0:ref type='bibr' target='#b8'>(Bottou, 2010)</ns0:ref> with a single hidden layer, presents the worst results, except in the setup following the PCA partitioning strategy with a percentage of 70% of elements per partition. The MLP performance degradation may be related to the fact that the numerical attributes are not normalized before the algorithm's execution.</ns0:p><ns0:p>Experiment 2. Algorithm 2 Evaluation Script.</ns0:p><ns0:p>1. Algorithm 2 is instantiated with a classifier or regressor, a partitioning strategy, the percentage of data to be returned by the partitioning strategy, and a target workflow parameter.</ns0:p><ns0:p>2. The provenance database is divided using K-Fold Cross Validation, with k = 5.</ns0:p><ns0:p>3. Each provenance record in the test data is used to retrieve the target parameter's real value.</ns0:p><ns0:p>4. A random number x, between 2 and the number of parameters present in the provenance database, is chosen to simulate the number of preferences used in recommending the target parameter.</ns0:p><ns0:p>5. x parameters are chosen from the remaining test record to be used as preferences.</ns0:p><ns0:p>6. The algorithm performs the recommendation, and both the result and the test record value for the target parameter are stored.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_30'>19</ns0:ref> shows the average execution time, in seconds, during the experiment with categorical domain parameters in each setup used. The execution time of the ANOVA partitioning strategy was, on average, half the time used with the PCA partitioning strategy. The execution time using different classifiers for each attribute is also much smaller and more stable for the ANOVA strategy than for PCA, regardless of the element partition percentage.</ns0:p><ns0:p>Analyzing the precision, recall, and execution time data jointly, the ANOVA partitioning strategy showed the best recommendation performance for the categorical domain parameters of the SciPhy provenance database. Moreover, the percentage of elements per partition generated by the strategy has no significant impact on the results. Another interesting point is that a simpler classifier like KNN presented results very similar to those obtained by a more complex classifier like SVM.</ns0:p></ns0:div>
The execution time of the experiment with the Montage provenance database was much greater than with the SciPhy data; the explanation is the difference in database size. Another observation is that the ANOVA partitioning strategy produces the fastest recommendations, and the percentage of elements per partition generated by each partitioning strategy has no impact on the algorithm's performance. Finally, it was possible to notice that, for the data used, the more robust classifiers and regressors were in some cases outperformed by simpler models.</ns0:p></ns0:div>
<ns0:div><ns0:head>Generic FReeP Recommendation Evaluation</ns0:head><ns0:p>A third experiment was modeled to evaluate the performance of Algorithm 3. As in Experiment 2, different variations, following the values in Table <ns0:ref type='table' target='#tab_11'>4</ns0:ref>, were used in the algorithm's execution. Precision, recall, and MSE are also the metrics used to evaluate the recommendations made by each algorithm instance.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>The results shown here were obtained by fixing the parameter n = 10 in Experiment 3 and using only the SciPhy provenance database. Based on the Experiment 2 results, the ANOVA partitioning strategy was used, retrieving 50% of the elements from the provenance database. This choice is because the ANOVA partitioning strategy obtained the best results in the previous experiment; as the percentage of data retrieved by the strategy was not an impacting factor in the results, an intermediate percentage from the previous experiment was selected. In addition, only KNN, with k ∈ {5, 7}, and SVM were kept as Machine Learning models. Table <ns0:ref type='table' target='#tab_12'>5</ns0:ref> presents the results obtained with the Algorithm 3 instance variations; each row in the table represents an Algorithm 3 instance setup. The column that draws the most attention is Failures: in some cases, the algorithm was not able to produce the joint recommendation and therefore did not return any recommendation. It is important to remember that each algorithm setup was tested on a set of 10 records extracted randomly from the database. The random record selection process can select records whose parameter values are present only in the selected record; since, in this experiment, the selected examples are removed from the dataset, no other record remains that would allow the correct execution of the algorithm.</ns0:p><ns0:p>Analyzing the results in Table <ns0:ref type='table' target='#tab_12'>5</ns0:ref>, focusing on the Failures column and taking into account that 10 records were chosen for each setup, it is possible to verify that in most cases the algorithm was not able to make recommendations. However, considering only the recommendations made, the algorithm had satisfactory results for the precision and recall metrics. The values presented for the MSE metric were mostly satisfactory, differing only in the configurations of rows 4 and 7, both using the KNR regressor with k = 5. Another point to note is that the algorithm had more problems making recommendations when the SVM classifier was used. Furthermore, algorithm setups with more sophisticated Machine Learning models, such as SVM and SVR, do not add performance to the algorithm, specifically for the SciPhy provenance dataset used.</ns0:p></ns0:div>
<ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>Previous works in the literature have already relied on recommender systems to support scientific workflows. Moreover, hyperparameter tuning methods have goals similar to those of parameter recommendation. Hyperparameters are variables that cannot be estimated directly from data and, as a result, it is the user's task to explore and define their values. Hyperparameter Optimization (HPO) is a research area that emerged to assist users in adjusting the hyperparameters of Machine Learning models in a non-ad-hoc manner <ns0:ref type='bibr' target='#b87'>(Yang and Shami, 2020)</ns0:ref>. The well-defined processes resulting from research in the area may speed up the experimentation process and allow for reproducibility and fair comparison between models. Among the different HPO methods, we can mention Decision Theory, Bayesian Optimization, Multi-fidelity Optimization, and Metaheuristic Algorithms.</ns0:p><ns0:p>Among the Decision Theory methods, the most used are Grid Search <ns0:ref type='bibr' target='#b6'>(Bergstra et al., 2011)</ns0:ref> and Random Search <ns0:ref type='bibr' target='#b5'>(Bergstra and Bengio, 2012)</ns0:ref>. For both strategies, the user defines a list of values to be tried for each hyperparameter. In Grid Search, the search for optimal values tests the predefined values over the entire cartesian product. Random Search selects a sample of the hyperparameter combinations to improve the execution time of the whole process. While the exponential search space of Grid Search may be infeasible to explore completely, in Random Search there is the possibility that an optimal combination will not be explored. Also, a problem common to both approaches is that dependencies between hyperparameters are not taken into account. FReeP considers possible dependencies between parameters by following the concept of classifier chains.</ns0:p><ns0:p>The Bayesian Optimization <ns0:ref type='bibr' target='#b18'>(Eggensperger et al., 2013)</ns0:ref> method optimizes the exploration of the search space by using information from previously tested hyperparameters to prune the testing of non-promising combinations. Despite using a surrogate model, the Bayesian Optimization method still requires evaluations of the target model to direct the search for the optimal hyperparameters. In a scientific workflow scenario, it is very costly, from both the economic and runtime perspectives, to run an experiment merely to evaluate a combination of parameter values. FReeP does not require any new workflow execution to recommend which values to use, as it relies only on data from past executions.</ns0:p><ns0:p>Multi-fidelity Algorithms <ns0:ref type='bibr' target='#b91'>(Zhang et al., 2016)</ns0:ref> also have the premise of balancing the time spent searching for hyperparameters. This kind of algorithm is based on successively evaluating hyperparameters in a subset of the search space. Those strategies follow motivations similar to those of FReeP's partitions generation. However, in a scientific workflow scenario, Multi-fidelity algorithms still require workflow executions to evaluate the quality of a combination.</ns0:p><ns0:p>The Metaheuristic Algorithms <ns0:ref type='bibr' target='#b28'>(Gogna and Tayal, 2013)</ns0:ref>, based on the evolution of populations, use different forms of combining pre-existing populations in the hope of generating better populations at each generation.</ns0:p>
<ns0:p>In contrast, FReeP does not require any new execution of the workflow a priori to evaluate a recommendation given by the algorithm.</ns0:p><ns0:p>In general, the works that seek to assist scientists with some type of recommendation involving scientific workflows are focused on the composition phase. <ns0:ref type='bibr' target='#b93'>Zhou et al. (2018)</ns0:ref> use a graph-based clustering technique to recommend workflows that can be reused in the composition of a developing workflow.</ns0:p><ns0:p>De <ns0:ref type='bibr' target='#b16'>Oliveira et al. (2008)</ns0:ref> use workflow provenance to extract connection patterns between components in order to recommend new components for a workflow under composition. For each new component used in the workflow composition, new components are recommended. <ns0:ref type='bibr' target='#b36'>Halioui et al. (2016)</ns0:ref> use Natural Language Processing combined with specific ontologies in the field of Bioinformatics to extract concrete workflows from works in the literature. After the reconstruction of the concrete workflows, tool combination patterns, their parameters, and the input data used in these workflows are extracted. All this extracted data can be used as assistance for composing new workflows that solve problems related to the mined workflows.</ns0:p><ns0:p>Also concerned with assistance during the workflow composition phase, <ns0:ref type='bibr' target='#b61'>Mohan et al. (2015)</ns0:ref> propose the use of Folksonomy <ns0:ref type='bibr' target='#b30'>(Gruber, 2007)</ns0:ref> to enrich the data used for the recommendation of workflows similar to a workflow under development. A workflow design tool was developed that allows freely specified tags to be attached to each component, making it possible to base the recommendation not only on the workflow syntax but also on component semantics. <ns0:ref type='bibr' target='#b76'>Soomro et al. (2015)</ns0:ref> use domain ontologies as a knowledge base to incorporate semantics into the recommendation process. A hybrid recommender system was developed using ontologies to improve the well-known recommendation strategy based on pattern extraction from other workflows. <ns0:ref type='bibr' target='#b89'>Zeng et al. (2011)</ns0:ref> use the data and control dependencies between activities, stored in the workflow provenance, to build a causality table and a weights table. Subsequently, a Petri net <ns0:ref type='bibr' target='#b92'>(Zhou and Venkatesh, 1999</ns0:ref>) is used to recommend other components for the workflow composition.</ns0:p><ns0:p>In the context of helping less experienced users in the use of scientific workflows, <ns0:ref type='bibr' target='#b86'>Wickramarachchi et al. (2018)</ns0:ref> and <ns0:ref type='bibr' target='#b53'>Mallawaarachchi et al. (2018)</ns0:ref> present experiments showing that the use of the BioWorkflow SWfMS <ns0:ref type='bibr' target='#b85'>(Welivita et al., 2018)</ns0:ref> is effective in increasing student engagement and learning in Bioinformatics.</ns0:p></ns0:div>
<ns0:div><ns0:p>Some works propose recommendation approaches that assist less experienced users in the analysis of unknown domains, as is the case of <ns0:ref type='bibr' target='#b44'>Kanchana et al. (2016)</ns0:ref> and <ns0:ref type='bibr' target='#b46'>Kanchana et al. (2017)</ns0:ref>, where a chart recommendation system was developed and evolved based on the use of metadata from any domain's data.</ns0:p><ns0:p>The system uses Machine Learning and Rule-based components that are refined with user feedback on the usefulness of the recommended charts.</ns0:p><ns0:p>The works that use recommender system methods to support the scientific process are closely linked to the experiment's composition phase. The execution phase, where there is a need to adjust parameters, still lacks alternatives. This work proposes a hybrid recommendation algorithm capable of recommending values for one or n parameters of a scientific workflow, taking into account the user's preferences.</ns0:p></ns0:div>
<ns0:div><ns0:head>FINAL REMARKS</ns0:head><ns0:p>The precision and recall results obtained from the experiments suggest that FReeP is useful in recommending missing parameter values, decreasing the probability that failures will abort scientific experiments performed in High-Performance Computing environments. These results show a high degree of reliability, especially in the recommendation for one workflow parameter, due to the number of experimental iterations performed to obtain the evaluations. The low availability of data for the experiments of the recommendation for n parameters impacts the reliability of the results obtained in this scenario. However, the results presented for the n parameters recommendation show that the approach is promising.</ns0:p><ns0:p>FReeP has a number of characteristics that point to its contribution to saving runtime and financial resources when executing scientific experiments. First, FReeP can be executed on standard hardware, such as that used in the experiments presented in this article, without the need for an HPC environment. Besides, FReeP does not require any further execution of the scientific workflow to assess the recommendation's quality, as it uses provenance data. This characteristic of not requiring an instance of the scientific experiment to be executed is a major difference and advantage compared with the Hyperparameter Optimization strategies widely used in Machine Learning model tuning.</ns0:p><ns0:p>In FReeP, all training data are collected and each tuple represents a different execution of the workflow. This data gathering process can nevertheless be time-consuming. However, it is expected that the recommendation process will be performed once, while a series of executions of the same workflow is repeated a significant number of times (varying the known parameters). In addition, many research groups already have a database containing the provenance <ns0:ref type='bibr' target='#b21'>(Freire et al., 2008)</ns0:ref> that can be used to recommend parameter values for non-expert users, i.e., the scientists will not need to effectively execute the workflow to train the model since provenance data is already available. Public provenance repositories such as ProvStore 3 <ns0:ref type='bibr' target='#b42'>(Huynh and Moreau, 2015)</ns0:ref> can be used as input for FReeP. For example, ProvStore contains 1,136 documents (each one associated with a workflow execution) of several different real workflows uploaded by research groups around the world.</ns0:p><ns0:p>From the runtime perspective, when using the ANOVA partitioning strategy in the experimental evaluation with the provenance data from the SciPhy workflow, the average time spent on the recommendations is only about 4 minutes. In comparison, the average execution time of the SciPhy workflow extracted from the provenance data used is about 17 hours and 32 minutes. Still considering the ANOVA partitioning strategy, in the experimental evaluation with the Montage workflow provenance data, the average time spent on the recommendation is about 1 hour and 30 minutes.
In contrast, the average execution time of a Montage workflow experiment extracted from the provenance data used was about 2 hours and 3 minutes.</ns0:p><ns0:p>Although the lower ratio between the experiment's execution time and the recommendation time is more evident when analyzing the data from the SciPhy workflow, it is essential to emphasize that more robust hardware is not necessary to execute the recommendation process. Furthermore, future improvements in FReeP include employing parallelism techniques to further decrease the recommendation time.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>The scientific process involves observing phenomena from different areas, formulating hypotheses, and testing and refining them. Arguably, this is an arduous job for the scientist in charge of the process. With the advances in computational resources, there is a growing concern about helping scientists in scientific</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>published in the Proceedings of the 2018 Brazilian Symposium on Databases (SBBD). This extended version provides new empirical evidence regarding several workflow case studies as well as a broader discussion on related work and experiments.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Example of the votes that each candidate received, in the voters' preference order.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2 depicts an example of Borda Count. There are four candidates: A, B, C and D, and five vote ballots. The lines in each ballot represent the preference positions occupied by each candidate. As there are four candidates, the candidate preferred by a voter receives three points. The score for candidate D is computed as follows: 1 voter elected candidate D as the most preferred candidate, giving 1 * 3 = 3 points; 2 voters elected candidate D as the second most preferred candidate, giving 2 * 2 = 4 points; 2 voters elected candidate D as the third most preferred candidate, giving 2 * 1 = 2 points; and 0 voters elected candidate D as the least preferred, giving 0 * 0 = 0 points. Finally, candidate D's total score = 3 + 4 + 2 + 0 = 9. Voting algorithms are used together with recommender systems to choose which items the users have liked best to make a good recommendation. <ns0:ref type='bibr' target='#b69'>Rani et al. (2017)</ns0:ref> proposed a recommendation algorithm based on clustering and a voting schema that, after clustering and selecting the target user's cluster, uses the Borda Count to select the most popular items in the cluster to be recommended. Similarly, <ns0:ref type='bibr' target='#b51'>Lestari et al. (2018)</ns0:ref> compare Borda Count and the Copeland Score Al-Sharrah (2010) in a recommendation</ns0:figDesc></ns0:figure>
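The worked computation above can be checked with a few lines of code. The ballots below are a minimal sketch chosen to be consistent with the example (one first-place, two second-place, and two third-place votes for D); the exact ballots depicted in Figure 2 may differ.

```python
# Borda Count sketch: with four candidates, first place is worth 3 points,
# second place 2, third place 1, and last place 0.
ballots = [
    ["D", "A", "B", "C"],
    ["A", "D", "B", "C"],
    ["A", "D", "C", "B"],
    ["B", "C", "D", "A"],
    ["C", "B", "D", "A"],
]

def borda_count(ballots):
    n_candidates = len(ballots[0])
    scores = {}
    for ballot in ballots:
        for position, candidate in enumerate(ballot):
            scores[candidate] = scores.get(candidate, 0) + (n_candidates - 1 - position)
    return scores

print(borda_count(ballots)["D"])  # 9, matching the worked example in the text
```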
<ns0:figure xml:id='fig_5'><ns0:head>Figure 3 Figure 3 .</ns0:head><ns0:label>33</ns0:label><ns0:figDesc>Figure 3 depicts a synthetic workflow, where one can see four activities represented by colored circles, where activities 1, 2, and 3 have one parameter each. To execute the workflow, it is required to define values for parameters 1, 2, and 3. Given a scenario where a user has not defined values for all parameters, FReeP aims at helping the user to define values for the missing parameters. For this, FReeP divides the problem into two sub-tasks: 1) recommendation for only one parameter at a time; 2) recommendation for n parameters at once. The second task is more challenging than the first, as parameters of different activities may present some data dependencies.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 presents an architecture overview of FReeP's naive version. The algorithm receives as input the provenance database, a target workflow, and user preferences. User preferences are also input because this article assumes that the user already has a subset of parameters for which values to use have already been defined. In this naive version, the user preferences are only allowed in the form a = b, where a is a parameter and b is a desired value for a.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>FReePFigure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. FReeP Architecture Overview.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Example of FReeP's Partitioning Rules Generation for the SciPhy provenance dataset using the user's preferences.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. FReeP's Vertical Filter step.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>With a model created, we can use it to recommend the value for the target parameter. This step is represented in FReeP as the recommendation step, and the recommendation of parameter y is made from the user's preferences. It is important to emphasize that the model's training data may contain parameters for which the user did not specify any preference. In this case, an attribute of the instance submitted to the hypothesis does not have a defined value. To clarify the problem, let PW be the set of all workflow parameters; PP the workflow parameters for which preference values have been defined; PA the parameters present in the partition rules of an iteration over the partitioning rules; and PV = (PW − PP) ∪ (PP ∩ PA) ∪ {y}. There may be parameters p ∈ PV | p ∉ PP, and for those parameters p there are no values defined a priori. To handle this problem, the average values present in the provenance data are used to fill in the numerical attributes' values, and the most frequent values in the provenance data are used for the categorical attributes.</ns0:figDesc></ns0:figure>
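The fill-in rule described above (column mean for numerical parameters, most frequent value for categorical ones) can be sketched as follows, assuming the provenance data is held in a pandas DataFrame; the function name and the dict-based instance representation are illustrative assumptions, not FReeP's actual interface.

```python
import pandas as pd

def fill_unspecified(provenance: pd.DataFrame, instance: dict) -> dict:
    # Parameters p in PV without a user preference receive a default value
    # derived from the provenance data, as described above.
    filled = dict(instance)
    for column in provenance.columns:
        if column in filled:
            continue  # the user already stated a preference for this parameter
        if pd.api.types.is_numeric_dtype(provenance[column]):
            filled[column] = provenance[column].mean()
        else:
            filled[column] = provenance[column].mode().iloc[0]
    return filled
```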
<ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>The Enhanced FReeP gives the user access to the relational operators ==, >, >=, <, <= and != to define his/her preferences. In addition, two logical operators are also supported in setting preferences: | and &. Preferences combining the supported operators are also allowed, for example: (a > 10)|(a < 5). However, by allowing users to define their preferences in this way, we create a problem when setting up the instances for the recommendation step. As seen, PW represents the set of all workflow parameters; PP the workflow parameters for which preference values have been set; PA the parameters present in the partitioning rules of an iteration over the partition rules; and PV = (PW − PP) ∪ (PP ∩ PA) ∪ {y}. Thus, there may be parameters p ∈ PV | p ∉ PP, and for those parameters p, there are no values defined a priori. This enhanced version of the proposal allows the user's preferences to be expressed in a more relaxed way, demanding that the instances used in the recommendation step be created from a range (or set of values). To handle this issue, all possible instances from the combinations of preference values are generated. In case the preference is related to a numerical domain parameter and is defined in terms of a values range, like a ≤ 10.5, FReeP uses all values present in the source provenance database that follow the preference restriction. It is important to note that, for both numerical and categorical parameters, the combinations of possible values are those present in the provenance database that respect the user's preferences. Then, predictions are made for a set of instances using the model learned during the training phase. Regarding problem 2, provenance databases, in general, present attributes with numerical and categorical domains. It is FReeP's responsibility to convert categorical values into a numerical representation due to restrictions related to the nature of the training algorithms of the Machine Learning models, e.g.</ns0:figDesc></ns0:figure>
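A minimal sketch of the candidate-instance generation just described: each relaxed preference keeps only the values observed in the provenance database that satisfy it, and candidate instances are the combinations of the surviving values. Representing preferences as Python predicates is an illustrative assumption; FReeP's implementation may encode them differently.

```python
import itertools
import pandas as pd

def candidate_instances(provenance: pd.DataFrame, preferences: dict):
    # preferences maps a parameter name to a predicate, e.g.
    # {"a": lambda v: v <= 10.5} for the range preference a <= 10.5.
    allowed = {
        column: sorted({v for v in provenance[column] if predicate(v)})
        for column, predicate in preferences.items()
    }
    # Only values present in the provenance database, and respecting the
    # user's preferences, are combined into candidate instances.
    for values in itertools.product(*allowed.values()):
        yield dict(zip(allowed, values))
```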
<ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>partitions pt such that pca_D − pca_pt resulted in the lowest calculated values. Note that both x and n are parameters defined when executing the algorithm. In summary, the PCA strategy will select the partitions whose extracted principal components are the closest to the principal components of the original provenance dataset. The ANOVA strategy seeks the n partitioning rules that best represent D, selecting those that generate partitions where the data variance is closest to the variance of the data in D. In short, the original data variance and the data variance of each partition are calculated using the ANOVA metric, and then the partitions with the variance most similar to that of the original provenance data are selected. Here, the n rules are defined in terms of the data percentage required to represent the entire data set, and that parameter must also be defined at algorithm execution. Using the PCA or ANOVA partitioning strategies means that the partitioning rules used by FReeP can be reduced, depending on the associated parameters that need to be defined. Algorithm 2 Enhanced FReeP. Require: y: recommendation target parameter; P: {(param, val) | param is a workflow parameter, val is the preference value for param}; D: {{(param_1_1, val_1_1), ..., (param_l_1, val_l_1), ..., (param_m_l, val_m_l)} | l is the number of workflow parameters, m is the provenance dataset length}. 1: procedure FREEP(y, P, D) ... 5: for partition ∈ partitions do 6: data ← horizontal_filter(D′, partition) 7: data ← vertical_filter(data, partition) 8: model_type ← model_select(data, y) 9: data′ ← preprocessing(data) 10: model ← hypothesis_generation(data′, y, model_type) 11:</ns0:figDesc></ns0:figure>
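The ANOVA partition selection can be approximated by the sketch below, which keeps the partitions whose variance is closest to that of the full dataset D. The plain per-column squared difference used here is a simplified stand-in for the ANOVA metric mentioned in the text, and interpreting the percentage as a fraction of the candidate partitions is an assumption of this sketch.

```python
import pandas as pd

def select_partitions(D: pd.DataFrame, partitions: list, percentage: float) -> list:
    # Rank candidate partitions by how close their per-column variance is to
    # the variance of the full provenance dataset D.
    reference = D.var(numeric_only=True)

    def distance(partition: pd.DataFrame) -> float:
        return float(((partition.var(numeric_only=True) - reference) ** 2).sum())

    ranked = sorted(partitions, key=distance)
    n = max(1, int(len(partitions) * percentage))
    return ranked[:n]
```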
<ns0:figure xml:id='fig_13'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Generic FReeP architecture overview</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2020:11:55050:1:2:NEW 3 May 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head /><ns0:label /><ns0:figDesc>Figure 9(b) shows the Montage activities: (i) ListFITS, which extracts compressed FITS files, (ii) Projection, which maps the astronomical positions into a Euclidean plane, (iii) SelectProjections, which joins the planes into a single mosaic file, and (iv) CreateIncorrectedMosaic, which creates an overlapping mosaic as an image. Programs (v) CalculateOverlap, (vi) ExtractDifferences, (vii) CalculateDifferences, (viii) FitPlane, and (ix) CreateMosaic refine the image into the final mosaic. Montage is a data-intensive workflow, since a single execution of Montage can produce several GBs of data.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. The abstract specification of (a) SciPhy and (b) Montage</ns0:figDesc><ns0:graphic coords='16,203.77,63.78,289.50,206.79' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head /><ns0:label /><ns0:figDesc>dataset attributes. Also, in the Montage dataset, the crota2 attribute (a float value that represents an image rotation on the sky) has the largest values range and the largest standard deviation. The dec (an optional float value that represents the Dec for region statistics) and crval2 (a float value that represents the Axis 2 sky reference value in the Montage workflow) attributes have close statistics and are the attributes with the smallest data range and the smallest standard deviation in the Montage data.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_18'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. Datasets Attributes Correlation matrices.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_19'><ns0:head /><ns0:label /><ns0:figDesc>as the classifier of this ranker implementation. The k parameter of the K Nearest Neighbors classifier was set to 3, 5, 7 for both the ranker and the classifier. The choice of k ∈ {3, 5, 7} is because small datasets are used, and thus k values greater than 7 do not return any neighbors in the experiments. Experiment 1. Algorithm 1 Evaluation Script.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_20'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11. Precision results with SciPhy data.</ns0:figDesc><ns0:graphic coords='18,141.73,443.93,132.35,116.28' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_21'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12. Recall results with SciPhy data.</ns0:figDesc><ns0:graphic coords='18,282.35,444.25,132.34,115.64' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_22'><ns0:head>Figure 13 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Figure 13. Experiment recommendation execution time with SciPhy data.</ns0:figDesc><ns0:graphic coords='18,422.96,437.53,132.35,117.12' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_23'><ns0:head>Figure 14 .</ns0:head><ns0:label>14</ns0:label><ns0:figDesc>Figure 14. Precision results with Montage data.</ns0:figDesc><ns0:graphic coords='19,141.73,191.44,132.34,119.23' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_24'><ns0:head>Figure 15 .</ns0:head><ns0:label>15</ns0:label><ns0:figDesc>Figure 15. Recall results with Montage data.</ns0:figDesc><ns0:graphic coords='19,282.35,189.34,132.34,123.43' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_25'><ns0:head>Figure 16 .</ns0:head><ns0:label>16</ns0:label><ns0:figDesc>Figure 16. Experiment recommendation execution time with Montage data.</ns0:figDesc><ns0:graphic coords='19,422.96,184.18,132.34,121.78' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_27'><ns0:head /><ns0:label /><ns0:figDesc>7. Precision and recall (or MSE) values are calculated based on all K-Fold Cross-Validation iterations.</ns0:figDesc></ns0:figure>
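The aggregation over all K-Fold Cross-Validation iterations mentioned in step 7 can be sketched with scikit-learn as follows; KNeighborsClassifier stands in for whichever model a given FReeP setup uses, and the feature matrix is assumed to be numeric (i.e., already One-Hot encoded).

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import precision_score, recall_score

def kfold_metrics(X: np.ndarray, y: np.ndarray, k_neighbors: int = 3, n_splits: int = 5):
    precisions, recalls = [], []
    for train_idx, test_idx in KFold(n_splits=n_splits, shuffle=True,
                                     random_state=0).split(X):
        model = KNeighborsClassifier(n_neighbors=k_neighbors)
        model.fit(X[train_idx], y[train_idx])
        predicted = model.predict(X[test_idx])
        precisions.append(precision_score(y[test_idx], predicted,
                                          average="macro", zero_division=0))
        recalls.append(recall_score(y[test_idx], predicted,
                                    average="macro", zero_division=0))
    # Final metrics are the averages over all folds.
    return float(np.mean(precisions)), float(np.mean(recalls))
```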
<ns0:figure xml:id='fig_28'><ns0:head>Figure 17 .</ns0:head><ns0:label>17</ns0:label><ns0:figDesc>Figure 17. Precision results with Sciphy data.</ns0:figDesc><ns0:graphic coords='20,141.73,337.35,132.35,123.85' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_29'><ns0:head>Figure 18 .</ns0:head><ns0:label>18</ns0:label><ns0:figDesc>Figure 18. Recall results with Sciphy data.</ns0:figDesc><ns0:graphic coords='20,282.35,338.77,132.35,121.01' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_30'><ns0:head>Figure 19 .</ns0:head><ns0:label>19</ns0:label><ns0:figDesc>Figure 19. Experiment recommendation execution time with Sciphy data.</ns0:figDesc><ns0:graphic coords='20,422.96,333.00,132.34,120.61' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_31'><ns0:head>Figure</ns0:head><ns0:label /><ns0:figDesc>Figure 20a brings the results obtained for the numerical domain parameter of the SciPhy provenance database. The data shows zero MSE in all cases, except when the Multi-Layer Perceptron is used for the regression. This result can be explained by the small database and the few distinct values for each</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_32'><ns0:head>Figure 20 .</ns0:head><ns0:label>20</ns0:label><ns0:figDesc>Figure 20. MSE results and recommendation execution time with Sciphy data.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_33'><ns0:head>Figure 21 .</ns0:head><ns0:label>21</ns0:label><ns0:figDesc>Figure 21. Precision results with Montage data.</ns0:figDesc><ns0:graphic coords='21,141.73,471.09,132.35,120.05' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_34'><ns0:head>Figure 22 .</ns0:head><ns0:label>22</ns0:label><ns0:figDesc>Figure 22. Recall results with Montage data.</ns0:figDesc><ns0:graphic coords='21,282.35,471.36,132.34,119.51' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_35'><ns0:head>Figure 23 .</ns0:head><ns0:label>23</ns0:label><ns0:figDesc>Figure 23. Experiment recommendation execution time with Montage data.</ns0:figDesc><ns0:graphic coords='21,422.96,462.29,132.35,125.69' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_36'><ns0:head>Figure 24 .</ns0:head><ns0:label>24</ns0:label><ns0:figDesc>Figure 24. MSE results and recommendation execution time with Montage data.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>Machine Learning models have numerical and categorical domain parameters. However, traditional Machine Learning models generally work with numerical data because the generation of these models, in most cases, involves many numerical calculations. Therefore, it is necessary to encode the categorical parameters into a numerical representation. The technique used here to encode categorical domain parameters into a numerical representation is One-Hot encoding <ns0:ref type='bibr' target='#b10'>(Coates and Ng, 2011)</ns0:ref>. This technique consists of creating a new binary attribute (that is, the domain of this new attribute is 0 or 1) for each distinct attribute value present in the dataset.</ns0:figDesc><ns0:table /><ns0:note>The encoded provenance data allows building Machine Learning models to make predictions for the target parameter in the hypothesis generation step. The generated model has the parameter y as the class variable, and the other parameters present in the vertical filter step output data are the attributes used to generalize the hypothesis. The model can be a classifier, where the model's prediction is a single recommendation value, or a ranker, where its prediction is an ordered list of values, from the value most suitable for the recommendation to the least suitable.</ns0:note></ns0:figure>
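A minimal sketch of One-Hot encoding with pandas: each distinct value of a categorical parameter becomes a new binary attribute, as described above. The column names are illustrative and do not reproduce the actual SciPhy schema.

```python
import pandas as pd

provenance = pd.DataFrame({
    "num_aligns": [10, 25, 10],                   # numerical, kept as-is
    "model1": ["WAG+G+F", "JTT+G+F", "WAG+G+F"],  # categorical, encoded
})
encoded = pd.get_dummies(provenance, columns=["model1"])
print(encoded.columns.tolist())
# ['num_aligns', 'model1_JTT+G+F', 'model1_WAG+G+F']
```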
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Dataset characteristics. Table 1 summarizes the main characteristics of the datasets. The Total Records column shows the number of past executions of each workflow. Each dataset record can be used as an example for generating Machine Learning models during the algorithm's execution. As seen, the SciPhy dataset is relatively small compared to Montage. The Total Attributes column shows how many activity parameters are considered in each workflow execution. Both workflows have the same number of categorical domain parameters,</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Total Records</ns0:cell><ns0:cell>Total Attributes</ns0:cell><ns0:cell>Categorical Attributes</ns0:cell><ns0:cell>Numerical Attributes</ns0:cell></ns0:row><ns0:row><ns0:cell>Sciphy</ns0:cell><ns0:cell>376</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>Montage</ns0:cell><ns0:cell>1565</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>6</ns0:cell></ns0:row></ns0:table><ns0:note>as presented in the Categorical Attributes column. Montage has more numeric domain parameters than SciPhy, as shown in the Numerical Attributes column. 1 https://confluence.pegasus.isi.edu/display/pegasus/WorkflowGenerator 2 Data sources are available at http://irsa.ipac.caltech.edu.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>SciPhy dataset statistics.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Parameter</ns0:cell><ns0:cell>Minimum Value</ns0:cell><ns0:cell>Maximum Value</ns0:cell><ns0:cell>Standard Deviation</ns0:cell></ns0:row><ns0:row><ns0:cell>cntr</ns0:cell><ns0:cell>0.00</ns0:cell><ns0:cell>134.00</ns0:cell><ns0:cell>35.34</ns0:cell></ns0:row><ns0:row><ns0:cell>ra</ns0:cell><ns0:cell>83.12</ns0:cell><ns0:cell>323.90</ns0:cell><ns0:cell>91.13</ns0:cell></ns0:row><ns0:row><ns0:cell>dec</ns0:cell><ns0:cell>-27.17</ns0:cell><ns0:cell>28.85</ns0:cell><ns0:cell>17.90</ns0:cell></ns0:row><ns0:row><ns0:cell>crval1</ns0:cell><ns0:cell>83.12</ns0:cell><ns0:cell>323.90</ns0:cell><ns0:cell>91.13</ns0:cell></ns0:row><ns0:row><ns0:cell>crval2</ns0:cell><ns0:cell>-27.17</ns0:cell><ns0:cell>28.85</ns0:cell><ns0:cell>17.90</ns0:cell></ns0:row><ns0:row><ns0:cell>crota2</ns0:cell><ns0:cell>0.00</ns0:cell><ns0:cell>360.00</ns0:cell><ns0:cell>178.64</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Montage dataset statistics. Statistics on the SciPhy numerical attributes are shown in Table 2. This table presents the minimum and maximum values of each attribute, in addition to the standard deviation. The attribute prob1 (the probability that a given evolutive relationship is valid) has the highest standard deviation, and its range of values is the largest among all attributes. The prob2 attribute (the probability that a given evolutive relationship is valid) has both a range of values and a standard deviation similar to prob1. The standard deviation of the values of num aligns (the total number of alignments in a given data file) is very small, while the attribute length (the maximum sequence length in a specific data file) has a high standard deviation, considering its values range.</ns0:figDesc><ns0:table /><ns0:note>The Montage numerical attributes, shown in Table 3, in most cases have a smaller standard deviation than the SciPhy ones. On average, Montage attributes also have a smaller values range than SciPhy</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head /><ns0:label /><ns0:figDesc>(a) Montage dataset attributes Correlation Matrix; (b) Sciphy dataset attributes Correlation Matrix. (The attribute-name axis labels and correlation color-scale tick values are omitted here; the full matrices are rendered in Figure 10.)</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Algorithm 2 values per parameter used in Experiment 2</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Experiment 3 results with Sciphy dataset</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:note place='foot' n='3'>https://openprovenance.org/store/</ns0:note>
</ns0:body>
" | "Reply to the Reviewers
Re: Manuscript ID 55050
“Provenance and Machine Learning-based Recommendation of Parameter Values in Scientific Workflows”
Daniel Silva Junior, Esther Pacitti, Aline Paes, Daniel de Oliveira
PeerJ Computer Science
Thank you very much for sending the report on the Decision of our manuscript ID 55050. We would like
to thank the reviewers for their very careful reading of this manuscript and constructive suggestions, which
helped us to further improve the quality of our manuscript.
We revised the manuscript and we believe that this new version fulfills the remarks of the reviewers.
Again, we highlight the major changes in the revised manuscript. In this letter, we present our specific
responses to the points raised by the referees. For each suggestion, our response follows below (reviewers’
comments are in italics).
Yours Sincerely,
Prof. Aline Paes, on behalf of the authors
Corresponding author: alinepaes@ic.uff.br.
Reviewer #1, comment #1
It would be better to include a Discussion section, that compares the proposed methodology and results
with the existing related studies. This will help to justify the validity and the novelty of the proposed study.
Our response #1.1
First of all, we would like to thank you for reading, reviewing, and giving rich feedback about our work.
We appreciate your suggestion. We improved the Related Work section by including a comparison with
methods for Hyper-parameter Optimization for Machine Learning starting in line 870. We also added a
Discussion section (named “Final Remarks”) starting on line 943 where we summarize the contributions of
the article and also highlight differences from other existing approaches.
Reviewer #1, comment #2
Few typos: Equation (4) contains another numbering (1)
Our response #1.2
Thank you for pointing this out, the indicated typo was revised.
Reviewer #1, comment #3
Name out the complete names of the parameter abbreviations using in the tables, with in the text
description. (eg. Table 3, Table 4 - cnty, crval1)
Our response #1.3
Thank you for pointing this out. We have added the explanations of the parameters in the experimental
evaluation section. In the following, we describe each one of the parameters:
1. crval1: a float value that represents Axis 1 sky reference value in Montage workflow;
2. crval2: a float value that represents Axis 2 sky reference value in Montage workflow;
3. ra1: a float value that represents the RA of image corner 1;
4. crota2: a float value that represents an image rotation on sky in Montage workflow;
5. dec: an optional float value that represents Dec for region statistics;
6. prob1: probability of a given evolutive relationship is valid;
7. prob2: probability of a given evolutive relationship is valid;
8. num aligns: total number of alignments in a given data file;
9. length: maximum sequence length in a specific data file.
We represented the parameters as acronyms since we extracted these data as they are represented in the
provenance database of the SciCumulus Scientific Workflow Management System. For more details about
the parameters, please refer to the papers where each of the workflows was proposed.
Reviewer #1, comment #4
Since the paper addresses a technology enhanced workflow management system, it would
be better to consider latest related studies when comparing with the existing work.
Since
the paper will be published in 2021, better to consider latest related work/ applications
(may be 5 years back) For example, user feedback based rule-based recommendation systems were addressed in following references.
DOI: https://doi.org/10.1145/3018009.3018027 DOI:
https://doi.org/10.1109/MERCon.2017.7980467 Some of the technology based workflow system studies in different domains can be found in DOI: https://doi.org/10.1109/TNB.2018.2837122 DOI:
https://doi.org/10.3991/ijet.v13i12.8608 DOI: https://doi.org/10.1109/TALE.2018.8615134
Our response #1.4
We really appreciate your comment and we improved the Related Work section with the suggestions, as can
be seen from line 870.
Reviewer #2, comment #1
The focus is given on the precision and recall of the recommended parameters. Yet, the possibly reduced
overhead between non-recommendation based tuning and recommendation based is not discussed. The
recommendation task is just an automatic test and error process; it will finally find the ”right parameters”
for the workflow activities (precision and recall). But what about resources consumption? What about
explicability and tracking of the choice of parameters? How can these benefits of recommendation be
measured?
Our response #2.1
We are glad for your comments and for the opportunity to receive such useful feedback. And this first
comment is a very good one! Concerning the resource consumption, although we provide the recommendation
algorithm execution time, we missed giving the average workflow execution times to show a clear advantage
of using FReeP or not. We explore this aspect in the new section named “Final Remarks” starting on line
943. Regarding the statement that “The recommendation task is just an automatic test and error process”,
we would like to highlight that the proposed approach implements strategies to prune the search space and
uses Machine Learning models that generalize from data. In contrast, an automatic trial-and-error process would
cause an exponential search problem. We added a comparison between our approach and Hyperparameter
Optimization for Machine Learning models in the Related Work section starting on line 870 that helps
clarify why our approach is not an automatic trial-and-error process.
Reviewer #2, comment #2
The related work is too short and superficial. What about existing work that addresses parameters
tuning for ML algorithms, of course, this is not a recommendation process. Still, the automatic search of
”right” parameters behind is related to the addressed recommendation task.
Our response #2.2
Thank you for your comment. We agree that the related work should bring more previous works, and we
added several new references concerning parameter tuning in the Machine Learning domain. Still, in general,
the parameter tuning process does not use provenance data to guide the search.
Reviewer #2, comment #3
The paper also lacks concrete examples of scientific workflows (e.g., a use case) across the ”theory”
descriptions, particularly in the experiments section. The experiment does not show a concrete scientific
workflow that might include provenance data from different activities that might use different algorithms.
The parameters of a given algorithm might depend on tunning the parameters of another activity’s parameters in the workflow. In this sense, I find the experiments too ideal and too synthetic.
Our response #2.3
Thank you for your comment. In fact, we use provenance data from real executions of two workflows:
SciPhy (from the bioinformatics domain) and Montage (from the astronomy domain). We added a new subsection
named “Case Studies” to provide more details about the workflows chosen as case studies. Figure 12
shows the parameter correlation in SciPhy and Montage. Although we agree that some activity parameter
configurations may depend on the configurations of previous activities in the workflow, this was not the case in
the executed experiments.
Reviewer #2, comment #4
It seems that scientific workflows might use and combine different such algorithms across their tasks; the
question is whether tasks are independent or whether the recommendation might change when analysing all
the tasks of a workflow. Are the user preferences enough input for dealing with the recommendation? Or do
the characteristics of data impact the parameters’ choice and, therefore, on the possible ways of addressing
recommendation? Should recommendation not be interactive and user preferences are chosen or pondered
by a scientist to guide recommendation?
Our response #2.4
You are correct: scientific workflows, in general, use different algorithms, and workflow tasks can
be dependent or not. Our proposal handles two scenarios: (1) when only one parameter needs to be
recommended and (2) when n parameters must be recommended at once. To address the second scenario
and consider the possible dependencies between data from different tasks, the proposal generates several
ordering sequences of parameters and uses classifier chains, as shown in the section “Recommendation for n
Parameters at a time” from line 559. In addition, we also agree that the input data can have an impact
on the parameter choice. However, to consider the data influence, we have chosen some parameters that are
dependent on the input data. In the SciPhy workflow, the parameter num aligns is calculated by parsing the
input data files and counting the number of aligned sequences. The same rationale applies to the length
parameter, which is also calculated based on the input data files.
Reviewer #2, comment #5
A motivation example of a scientific workflow can illustrate the problem and the proposed recommendation solutions. Of course, the audience targeted by the paper is one that knows what a scientific workflow is
yet there are types and families of such workflows that might use different analytics algorithms with other
parameters tuning challenges. It is essential to clarify and put in perspective this aspect in the explanations
and the description of the global recommendation provided by FReeP.
Our response #2.5
Thanks for this observation. We move the workflow example figure in “Recommendation for n Parameters
at a time” subsection to “FREEP - FEATURE RECOMMENDER FROM PREFERENCES” section in line
336. We also added an explanation to contextualize.
Reviewer #2, comment #6
Background. Sometimes the introduction of concepts focuses on a specific system or approach. For a
background section, I suggest a more objective approach, either explaining why are chosen references enough
for introducing concepts or selecting more references and synthesising different definitions and perspectives
to build the section. The background or another section can give a taxonomy of scientific workflows and those
that can best benefit specific parameters’ recommendation. If parameters recommendation regards specific
families of ML and AI, it must be clarified. I believe that different types of algorithms lead to additional
provenance data and introduce other recommendation challenges. This type of discussion is missing in the
paper.
Our response #2.6
Thank you for pointing this out. We agree that different algorithms may produce additional provenance
data. However, since we are “denormalizing” the provenance data (creating a single table with all parameters
of the workflow) as input, if new parameters should be considered we just have to add new attributes to
this table. In this sense, many provenance databases are available, such as ProvStore. Workflows that are
executed repeatedly can benefit from the proposed approach. Regarding the use of AI and ML families in parameter
recommendation, we do not want to state specific options for this case. First of all, we do not have another
approach against which to make a fair comparison. From a more general perspective, we present in the background section
information about the existence of other recommendation techniques.
Reviewer #2, comment #7
The paper is motivated by the fact that scientific workflow tuning requires running the workflows several
times, which is time and resources consuming. Yet, the remainder of the paper and particularly experiments
do not longer respond to this kind of ”research questions”. If the paper’s purpose is not intended to answer
these questions and do not motivate it with them or precise the ”research questions” or aspects of the
problem that you answer in it. In this sense, I also believe that precision and recall assess the recommended
parameters by the three proposed strategies. It seems that the recommender automates the test-error
process when designing a scientific workflow, so it is not surprising that it works. The question is how much
overhead it creates, and whether it consumes fewer resources than repeating the execution of a workflow to
tune parameters.
Our response #2.7
We believe that our response #2.1 addresses this as well.
Reviewer #2, comment #8
Related work needs to be completed; it is too short and too general.
Our response #2.8
Please refer to our response # 2.2.
Reviewer #3, comment #1
The authors go into fine-grained detail on a number of topics, such as ”SciCumulus” (lines 149-173)
and collaborative filtering (lines 242-306) without providing any explanation for how these topics relate to
their own work. For example, the authors devote over a page of their manuscript to explaining collaborative
filtering, but don’t explain how it relates to their method. The authors mention that this manuscript is a
submission of a previously published conference paper which contained less background. If they feel that this
level of detail is necessary to contextualize their contribution, they should make explicit references to how
their model differs from the background. For example, ”Collaborative filtering is a common recommendation
paradigm... [ 2 sentences of detail]. We do not use this method because X,Y,Z,. Instead, we contribute
A,B,C”
Our response #3.1
We want to thank you for taking the time to read our work and provide useful feedback on it.
We appreciate this comment and point out that we give some details about SciCumulus because the
provenance data used in the experiments was retrieved from it. We made this more explicit in line 151. Regarding
the Collaborative Filtering subsection, we trimmed out information that was not relevant to understanding the
essential concepts. We also clarified how our approach diverges from or is similar to the background content, as shown
in lines 231, 241, and 262.
Reviewer #3, comment #2
Figures 5,6,7,8,9,10 were very helpful and effectively communicated the method. Figure 11 should be
improve (axes labels are too small, both heatmaps should use the same color scale). I recommend that the
authors report their results to fewer significant figures in Tables 3 and 4.
Our response #3.2
Thanks for this, but Figure 11 already uses the same color scale, and we think that although the axis labels
may appear small, with zoom all labels can be read without resolution loss. We reduced the number of digits in the results
in Tables 3 and 4.
Reviewer #3, comment #3
The example in Table 1 is confusing. It might be easier to show four or five sample ballots, rather than
the number of times a given ordering occured.
Our response #3.3
Thanks for the feedback; we improved this by adding Figure 5 with sample ballots, as suggested.
Reviewer #3, comment #4
The authors provide a link to a GitHub repository with their code, but I can’t find the provenance
datasets.
Our response #3.4
Thanks for the observation, we added the data in the repository.
Reviewer #3, comment #5
After reading the paper I am still confused about the core research question. I understand that the
authors intend to recommend appropriate hyperparameters in scientific workflows, but to what end? Is the
intent to recommend parameters that don’t cause the workflow to crash (as line 82 suggests) or parameters
that fit the users’ prior preferences (as the experiments and name of the method imply)? Without this
clarification I can’t evaluate how meaningful the research question is.
Our response #3.5
Thank you for this comment. As mentioned in line 80, our objective is to recommend parameter values
that will not cause workflow executions to crash, due to the costs involved. User preferences are an essential
component in our approach, helping to prune the search space and to take user restrictions into account, which
makes the recommendations personalized. We made this clearer on line 90.
Reviewer #3, comment #6
The authors should add more detail about how their data was collected. Are all of these runs from
the same user, or multiple users? If they are from the same user, then the paper loses some claim to
generalizability, since there is no evidence that the model is robust to parameters drawn from different user
distributions. If they are from different users, then were validations folds split by user, or are samples from
the sample user distributed across folds?
Our response #3.6
Thank you for pointing this out. In the case of SciPhy, the executions were performed by 3 different
users (one expert and 2 undergraduate students). In the case of Montage, the executions in SciCumulus
were performed by an undergraduate student and the ones downloaded from the Workflow Generator site
were performed by experts. We added an explanation about the types of users in the “Dataset” subsection.
Regarding the validation folds, we did not distinguish types of users when building them, which means that
fold generation was random. We think that the essential point for generalization here is data diversity.
Reviewer #3, comment #7
The conclusions could be better organized. I’d like to see a summary table or figure that lists the
experiments the authors conducted and the key results. As is it’s prohibitively difficult to scan the paper.
Our response #3.7
Thank you for this comment; we improved this section.
Reviewer #3, comment #8
Clarify the intent of your paper, and how it’s supported by your experiments.
Our response #3.8
Please refer to our response # 3.5.
Reviewer #3, comment #9
Dramatically trim down your background, or better place it in the context of your paper.
Our response #3.9
Please refer to our response # 3.1.
Reviewer #3, comment #10
Better organize and highlight your results so that they can be digested in a quick read.
Our response #3.10
Please refer to our response # 3.7.
" | Here is a paper. Please give your review comments after reading it. |
159 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Scientific Workflows (SWfs) have revolutionized how scientists in various domains of science conduct their experiments. The management of SWfs is performed by complex tools that provide support for workflow composition, monitoring, execution, and the capture and storage of the data generated during execution. In some cases, they also provide components to ease the visualization and analysis of the generated data. During the workflow's composition phase, programs must be selected to perform the activities defined in the workflow specification. These programs often require additional parameters that serve to adjust the program's behavior according to the experiment's goals. Consequently, workflows commonly have many parameters to be manually configured, encompassing even more than one hundred in many cases. Wrongly choosing parameter values can crash workflow executions or produce undesired results. As the execution of data- and compute-intensive workflows is commonly performed in a high-performance computing environment (e.g., a cluster, a supercomputer, or a public cloud), an unsuccessful execution represents a waste of time and resources. In this article, we present FReeP -Feature Recommender from Preferences, a parameter value recommendation method that is designed to suggest values for workflow parameters, taking into account past user preferences. FReeP is based on Machine Learning techniques, particularly Preference Learning. FReeP is composed of three algorithms, where two of them aim at recommending the value for one parameter at a time, and the third makes recommendations for n parameters at once. The experimental results obtained with provenance data from two broadly used workflows showed FReeP's usefulness in the recommendation of values for one parameter. Furthermore, the results indicate the potential of FReeP to recommend values for n parameters in scientific workflows.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Scientific experiments are the basis for evolution in several areas of human knowledge (<ns0:ref type='bibr'>de Oliveira et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b13'>Mattoso et al., 2010b;</ns0:ref><ns0:ref type='bibr' target='#b38'>Hey and Trefethen, 2020;</ns0:ref><ns0:ref type='bibr' target='#b37'>Hey et al., 2012)</ns0:ref>. Based on observations of open problems in their research areas, scientists formulate hypotheses to explain and solve those problems <ns0:ref type='bibr' target='#b29'>(Gonçalves and Porto, 2015)</ns0:ref>. Such hypotheses may be confirmed or refuted, and can also lead to new hypotheses. For a long time, scientific experiments were manually conducted by scientists, including instrumentation, configuration and management of the environment, and annotation and analysis of results. Despite the advances obtained with this approach, time and resources were wasted, since a small misconfiguration of the parameters of the experiment could compromise the whole experiment. The analysis of errors in the results was also far from trivial (de <ns0:ref type='bibr'>Oliveira et al., 2019)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:p>The evolution in the computer science field allowed for the development of technologies that provided and relationships (i.e., data dependencies) with other activities, according to the stages of the experiment <ns0:ref type='bibr' target='#b87'>(Yong Zhao, 2008)</ns0:ref>.</ns0:p><ns0:p>Several workflows commonly require the execution of multiple data-intensive operations, such as loading, transformation, and aggregation <ns0:ref type='bibr' target='#b13'>(Mattoso et al., 2010b)</ns0:ref>. Multiple computational paradigms can be used for the design and execution of workflows, e.g., shell and Python scripts <ns0:ref type='bibr' target='#b54'>(Marozzo et al., 2013)</ns0:ref> and Big Data frameworks (e.g., Hadoop and Spark) <ns0:ref type='bibr' target='#b33'>(Guedes et al., 2020b)</ns0:ref>, but they are usually managed by complex engines named Workflow Management Systems (WfMS). A key feature that a WfMS must address is the efficient and automatic management of parallel processing activities in High Performance Computing (HPC) environments <ns0:ref type='bibr' target='#b66'>(Ogasawara et al., 2011)</ns0:ref>. Besides managing the execution of the workflow in HPC environments, WfMSs are also responsible for capturing, structuring, and recording the metadata associated with all the data generated during the execution: input data, intermediate data, and the final results. This metadata is well known as provenance <ns0:ref type='bibr' target='#b21'>(Freire et al., 2008)</ns0:ref>. Based on provenance data, it is possible to analyze the results obtained and to foster the reproducibility of the experiment, which is essential to prove the veracity of a produced result.</ns0:p><ns0:p>In this article, the concept of an experiment is seen as encompassing the concept of a workflow, and not as a synonym. A workflow may be seen as a controlled action of the experiment. Hence, the workflow is defined as one of the trials conducted in the context of an experiment. In each trial, the scientist needs to define the parameter values for each activity of the workflow. It is not unusual for a simple workflow to have more than 100 parameters to set. Setting up these parameters may be simple for an expert, but not so simple for non-expert users. Although WfMSs represent a step forward by providing the necessary infrastructure to manage workflow executions, they provide little help (or even no help at all) in defining parameter values for a specific workflow execution. A good tuning of parameter values in a workflow execution is crucial not only for the quality of the results but also influences whether a workflow will execute or not (avoiding unnecessary execution crashes). A poor choice of parameter values can cause failures, which leads to a waste of execution time. Failures caused by poor choices of parameter values are even more severe when workflows are executed in HPC environments that follow a pay-as-you-go model, e.g., clouds, since they can increase the overall financial cost. This way, if the WfMS could 'learn' from previous successful executions of the workflow and recommend parameter values for scientists, some failures could be avoided. This recommendation is especially useful for non-expert users. Let us take as an example a scenario where an expert user has modeled a workflow and executed several trials of the same workflow varying the parameter values. 
If a non-expert scientist wants to execute the same workflow with a new set of parameter values and input data, but does not know how to set the values of some of the parameters, he or she can benefit from the parameter values used in previous executions of the same (or a similar) workflow. The advantage is that the WfMS provenance data already contains the parameter values used in previous (successful) executions and can be a rich resource for recommendation. Thus, the hypothesis of this article is that by adopting an approach that recommends parameter values for workflows in a WfMS, we can increase the probability that the execution of the workflow will be completed. As a consequence, the financial cost associated with execution failures is reduced.

In this article, we propose a method named FReeP - Feature Recommender From Preferences, which aims at recommending values for parameters of workflow activities. The proposed approach is able to recommend parameter values in two ways: (i) a single parameter value at a time, and (ii) multiple parameter values at once. The proposed approach relies on user preferences, defined for a subset of workflow parameters, together with the provenance of the workflow. It is essential to highlight that user preferences are fundamental for exploring experiment variations in a scientific scenario. Furthermore, user preferences help prune the search space and consider user restrictions, making personalized recommendations. The idea of combining user preferences and provenance is novel and allows for producing a personalized recommendation for scientists. FReeP is based on Machine Learning algorithms (Mitchell, 2015), particularly Preference Learning (Fürnkranz and Hüllermeier, 2011), and Recommender Systems (Ricci et al., 2011). We evaluated FReeP using real workflow traces (considered as benchmarks): Montage (Hoffa et al., 2008) from the astronomy domain and SciPhy (Ocaña et al., 2011) from the bioinformatics domain. Results indicate the potential of the proposed approach. This article is an extension of the conference paper 'FReeP: towards parameter recommendation in scientific workflows using preference learning' (Silva Junior et al., 2018).

This article is organized into five sections besides this introduction. The Background section details the theoretical concepts used in the proposal development. The FReeP - Feature Recommender from Preferences section presents the algorithm developed for the problem of parameter value recommendation using user preferences. The Experimental Evaluation section shows the results of the experimental evaluation of the approach in three different scenarios. Then, the Related Work section presents a literature review of papers that address recommendation applied to workflows and hyperparameter recommendation for Machine Learning models. Lastly, the Conclusion section presents conclusions and points out future work.
BACKGROUND

This section presents key concepts for understanding the approach presented in this article to recommend values for parameters in workflows based on users' preferences and previous executions. First, scientific experiments are explained. Next, the concepts related to Recommender Systems are presented, followed by the concept of Preference Learning. This section also gives an overview of Borda Count, a less common voting scheme that is used to decide which values to suggest to the user.
Scientific Experiment

A scientific experiment arises from the observation of some phenomenon and the questions raised from that observation. The next step is the formulation of hypotheses aiming at developing possible answers to those questions. Then, it is necessary to test each hypothesis to verify whether an output produced is a possible solution. The whole process includes many iterations of refinement, consisting, for example, of testing the hypothesis under distinct conditions, until there are enough elements to support it.

The scientific experiment life-cycle proposed by Mattoso et al. (2010a) is divided into three major phases: composition, execution and analysis. The composition phase is where the experiment is designed and structured. Execution is the phase where all the necessary instrumentation for the accomplishment of the experiment must be finished. Instrumentation means the definition of input data, parameters to be used at each stage of the experiment, and monitoring mechanisms. Finally, the analysis phase is when the data generated by the composition and execution phases are studied to understand the obtained results. The approach presented in this article focuses on the execution phase.
Scientific Workflows

Scientific workflows have become a de facto standard for modeling in silico experiments (Zhou et al., 2018). A workflow is an abstraction that represents the steps of an experiment and the dataflow through each of these steps. A workflow is formally defined as a directed acyclic graph W(A, Dep). The nodes A = {a_1, a_2, ..., a_n} are the activities and the edges Dep represent the data dependencies among activities in A. Thus, for each a_i (1 ≤ i ≤ n), the set P = {p_1, p_2, ..., p_m} represents the possible input parameters of a_i, which define the behavior of a_i. Therefore, a workflow can be represented as a graph where the vertices act as experiment steps and the edges are the relations, or the dataflow, between the steps.

A workflow can also be categorized according to the level of abstraction into conceptual or concrete. A conceptual workflow represents the highest level of abstraction, where the experiment is defined in terms of steps and the dataflow between them. This definition does not explain how each step of the experiment will execute. The concrete workflow is an abstraction where the activities are represented by the computer programs that will execute them. The execution of an activity of the workflow is called an activation (de Oliveira et al., 2010a), and each activation invokes a program that has its parameters defined. However, managing this execution, which involves setting the correct parameter values for each program and capturing the intermediate data and execution results, becomes a challenge. It was with this in mind, and with the help of the composition of the experiment in the workflow format, that Workflow Management Systems (WfMS), such as Kepler (Altintas et al., 2006), Pegasus (Deelman et al., 2005) and SciCumulus (de Oliveira et al., 2010a), emerged.

In particular, SciCumulus is a key component of the proposed approach, since it provides a framework for parallel workflows to benefit from FReeP. Also, the data used in the experiments presented in this article are retrieved from previous executions of several workflows in SciCumulus. It is worth noticing that other WfMSs such as Pegasus and Kepler could also benefit from FReeP as long as they provide the necessary provenance data for recommendation. The SciCumulus architecture is modularized to foster maintainability and ease the development of new features. SciCumulus is open-source and can be obtained at https://github.com/UFFeScience/SciCumulus/. The system is developed using the MPI library (a de facto standard library specification for message passing), so SciCumulus is a distributed application, i.e., each SciCumulus module has multiple instances created on the machines of the distributed environment (different processes, each with multiple threads) that communicate by triggering functions for sending and receiving messages between these processes. According to Guerine et al. (2019), SciCumulus has four main modules: (i) SCSetup, (ii) SCStarter, (iii) SCCore, and (iv) SCQP (SciCumulus Query Processor).
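To make the formal definition above concrete, the fragment below is a minimal Python sketch of one possible in-memory representation of a workflow W(A, Dep) as a directed acyclic graph. The activity names and parameters are purely illustrative and are not taken from any specific workflow.

    # Minimal sketch of a workflow W(A, Dep): activities A with parameter sets P,
    # and Dep as (producer, consumer) edges. Names and parameters are hypothetical.
    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    @dataclass
    class Activity:
        name: str
        params: Dict[str, object] = field(default_factory=dict)  # P = {p_1, ..., p_m}

    @dataclass
    class Workflow:
        activities: Dict[str, Activity] = field(default_factory=dict)  # nodes A
        deps: List[Tuple[str, str]] = field(default_factory=list)      # edges Dep

        def add(self, activity, depends_on=()):
            self.activities[activity.name] = activity
            for parent in depends_on:
                self.deps.append((parent, activity.name))

    w = Workflow()
    w.add(Activity("align", {"num_aligns": 4}))
    w.add(Activity("convert", {"format": "phylip"}), depends_on=["align"])
    w.add(Activity("elect_model", {"criterion": "AIC"}), depends_on=["convert"])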
The first step towards executing a workflow in SciCumulus is to define the workflow specification and the parameter values to be consumed. This is performed using the SCSetup module. The user has to inform the structure of the workflow, which programs are associated with which activities, etc. When the metadata related to the experiment is loaded into the SciCumulus database, the user can start executing the workflow. Since SciCumulus was developed focusing on supporting the execution of workflows in clouds, instantiating the environment was a top priority. The SCSetup module queries the provenance database to retrieve prospective provenance and creates the virtual machines (in the cloud) or reserves machines (in a cluster). The SCStarter copies and invokes an instance of SCCore on each machine of the environment; since SCCore is an MPI-based application, it runs on all machines simultaneously and follows a Master/Worker architecture (similar to Hadoop and Spark). The SCCore-Master (SCCore_0) schedules the activations for several workers, and each worker has a specific ID (SCCore_1, SCCore_2, etc.). When a worker is idle, it sends a message to SCCore_0 (Master) and requests more activations to execute. SCCore_0 defines at runtime the best activation to send, following a specific cost model. The SCQP component allows users to submit queries to the provenance database for runtime or post-mortem analysis. For more information about SciCumulus please refer to (de Oliveira et al., 2012, 2010b; Guerine et al., 2019; Silva et al., 2020; Guedes et al., 2020a; de Oliveira et al., 2013).
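The idle-worker message exchange described above follows a classic MPI Master/Worker pattern. The sketch below illustrates that pattern with mpi4py under simplifying assumptions; it is not SciCumulus code, and the activation contents are made up.

    # Hypothetical Master/Worker sketch with mpi4py (not SciCumulus code).
    # Run with, e.g.: mpiexec -n 4 python master_worker.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    TASK, IDLE, STOP = 0, 1, 2  # message tags

    if rank == 0:  # the master role: schedules activations on request
        activations = [{"activity": "align", "input": f"seq_{i}.fasta"} for i in range(10)]
        stopped = 0
        while stopped < comm.Get_size() - 1:
            status = MPI.Status()
            comm.recv(source=MPI.ANY_SOURCE, tag=IDLE, status=status)  # idle worker
            worker = status.Get_source()
            if activations:
                comm.send(activations.pop(), dest=worker, tag=TASK)
            else:
                comm.send(None, dest=worker, tag=STOP)
                stopped += 1
    else:  # workers request activations until the master runs out
        while True:
            comm.send(rank, dest=0, tag=IDLE)
            status = MPI.Status()
            task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
            if status.Get_tag() == STOP:
                break
            # ... execute the activation (invoke the program with its parameters) ...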
Provenance

A workflow activation has input data, and generates intermediate and output data. The WfMS has to collect all metadata associated with the execution in order to foster reproducibility. This metadata is called provenance (Freire et al., 2008). According to Goble (2002), provenance must support data quality verification, path audit, assignment verification, and information querying. The data quality check is related to verifying the reliability of the data generated by the workflow. Path audit is the ability to follow the steps taken at each stage of the experiment that generated a given result. Assignment verification is linked to the ability to know who is responsible for the generated data. Lastly, an information query is essential to analyze the data generated by the experiment's execution. Especially for workflows, provenance can be classified as prospective (p-prov) and retrospective (r-prov) (Freire et al., 2008). p-prov represents the specification of the workflow that will be executed; it corresponds to the steps to be followed to achieve a result. r-prov is given by the executed activities and information about the environment used to produce a data product, consisting of a structured and detailed history of the execution of the workflow.

Provenance is fundamental for the analysis phase of a scientific experiment. It allows for verifying what caused an activation to fail or to generate an unexpected result, or, in the case of success, what were the steps and parameters used to reach the result. Another advantage of provenance is the reproducibility of an experiment, which is essential for the validation of the results obtained by third parties. Considering the benefits of provenance in scientific experiments, it was necessary to define a model for representing provenance (Bose et al., 2006). The standard W3C model is PROV (Gil et al., 2013). PROV is a generic data model based on three basic components and their links: Entity, Agent and Activity. The provenance data model is an essential concept here because FReeP relies on provenance to recommend parameter values; moreover, to extract provenance data for use in FReeP, it is necessary to understand the provenance data model used.

Recommender Systems

In a recommender system, the items are the objects to be recommended, the users are those who receive the recommendations, and an interaction encompasses the actions that the user performed when using the recommender system. These interactions are generally user feedback, which may be interpreted as their preferences.

A recommender task can be defined as: given the elements Items, Users and Transactions, find the most useful items for each user. According to Adomavicius and Tuzhilin (2005), a recommender system must satisfy the equation ∀u ∈ U, i′_u = arg max_{i ∈ I} F(u, i), where U represents the users, I represents the items, and F is a utility function that calculates the utility of an item i ∈ I for a user u ∈ U. In case the tuple (u, i) is not defined over the entire search space, the recommender system can extrapolate the function F.

The utility function varies according to the approach followed by the recommender system. Thus, recommender systems are commonly classified into Collaborative Filtering, Content-Based, and Hybrid approaches. Collaborative Filtering makes recommendations based on the evaluations of users with similar profiles and comes in Memory-Based and Model-Based subtypes. The Model-Based subtype generates a hypothesis from the data and uses it to make recommendations instantly.
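Before turning to the individual strategies, note that the utility-based definition above can be read operationally: for each user, pick the item of maximum utility. The toy fragment below, with a made-up utility table, illustrates this.

    # Toy illustration of i'_u = argmax_{i in I} F(u, i); the utilities are made up.
    users = ["u1", "u2"]
    items = ["i1", "i2", "i3"]
    F = {("u1", "i1"): 0.2, ("u1", "i2"): 0.9, ("u1", "i3"): 0.5,
         ("u2", "i1"): 0.7, ("u2", "i2"): 0.1, ("u2", "i3"): 0.4}

    best = {u: max(items, key=lambda i: F[(u, i)]) for u in users}
    print(best)  # {'u1': 'i2', 'u2': 'i1'}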
Although widely adopted, Collaborative Filtering only uses collective information, limiting novel discoveries in scientific experiment procedures.

Content-Based Recommender Systems recommend items similar to those the user has already rated positively in the past. To determine the degree of similarity between items, this approach is highly dependent on extracting their characteristics. However, each scenario needs the right item representation to give satisfactory results, and in scientific experiments it can be challenging to find an optimal item representation.

Finally, Hybrid Recommender Systems arise from an attempt to minimize the weaknesses that the traditional recommendation techniques have when used individually. Also, it is expected that a hybrid strategy can aggregate the strengths of the techniques used together. There are several methods of combining recommendation techniques when creating a hybrid recommender system, including: Weighting, which provides a score for each recommendation item; Switching, which allows for selecting different types of recommendation strategies; Mixing, to make more than one recommendation at a time; Feature Combination, to put together both Content-Based and Collaborative Filtering strategies; Cascade, which first filters the candidate items for the recommendation and then refines these candidates, looking for the best alternatives; and Feature Augmentation and Meta-Level, which chain a series of recommendations one after another (Burke, 2002).

FReeP is a Cascade Hybrid Recommender System because the content of user preferences is used to prune the search space, followed by a collaborative strategy to give the final recommendations.
Preference Learning

User preferences play a crucial role in recommender systems (Viappiani and Boutilier, 2009). From an Artificial Intelligence perspective, a preference is a problem restriction that allows for some degree of relaxation.
As discussed by Fürnkranz and Hüllermeier (2011), a common way of representing preferences is through binary relations. For example, a tuple (x_i > x_j) means a preference for the value i over the value j for the attribute x.

The main task within the Preference Learning area is Learning to Rank, as it is commonly necessary to obtain an ordering of the preferences. The task is divided into three categories: Label Ranking (Vembu and Gärtner, 2011), Instance Ranking (Bergeron et al., 2008) and Object Ranking (Nie et al., 2005). In Label Ranking, a ranker produces an ordering of the set of classes of a problem for each instance of the problem. In cases where the classes of a problem are naturally ordered, the Instance Ranking task is more suitable, as it orders the instances of a problem according to their classes: instances belonging to the 'highest' classes precede instances that belong to the 'lower' classes. In Object Ranking, an instance is not related to a class; the objective is, given a subset of items from the total set of items, to produce a ranking of the objects in that subset, for example, the ranking of web pages by a search engine.

Pairwise Label Ranking (PLR) (Fürnkranz and Hüllermeier, 2003; Hüllermeier et al., 2008) relates each instance to a preference of the form a > b, representing that a is preferable to b. A binary classification task is then assembled, where each example (a, b) is annotated with 1 if a is preferable to b, and 0 otherwise. A classifier M_{a,b} is trained over such a dataset to learn how to make the preference predictions, returning 1 to predict that a is preferable to b and 0 otherwise. Instead of using a single classifier that makes predictions among m classes, given a set L of m classes there will be m(m − 1)/2 binary classifiers, where a classifier M_{i,j} only predicts between classes i, j ∈ L. The strategy defined by PLR then uses the prediction of each classifier as a vote, and a voting system defines an ordered list of preferences. Next, we give more details about how FReeP tackles the voting problem.
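A minimal sketch of PLR as just described: one binary classifier per pair of labels, whose predictions are counted as votes to produce an ordered preference list. The data is synthetic and the choice of KNN as the base learner is only illustrative.

    # Pairwise Label Ranking sketch: m(m-1)/2 binary classifiers vote on the label order.
    # Synthetic data; the base learner is illustrative.
    from itertools import combinations
    from collections import Counter
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]] * 5, dtype=float)
    y = np.array(["a", "b", "c", "a"] * 5)  # class labels L = {a, b, c}

    models = {}
    for i, j in combinations(sorted(set(y)), 2):  # one classifier M_ij per label pair
        mask = (y == i) | (y == j)
        clf = KNeighborsClassifier(n_neighbors=3)
        clf.fit(X[mask], (y[mask] == i).astype(int))  # 1 means "i preferable to j"
        models[(i, j)] = clf

    def rank(x):
        votes = Counter({label: 0 for label in set(y)})
        for (i, j), clf in models.items():
            winner = i if clf.predict([x])[0] == 1 else j
            votes[winner] += 1
        return [label for label, _ in votes.most_common()]  # ordered preference list

    print(rank([0.9, 0.1]))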
Borda Count

Voting Theory (Taylor and Pacelli, 2008) is an area of Mathematics aimed at the study of voting systems. In an election between two elements, it is fair to follow the majority criterion, that is, the winning candidate is the one that has obtained more than half of the votes. However, elections involving more than two candidates require a more robust system. Preferential Voting (Karvonen, 2004) and Borda Count (Emerson, 2013) are two voting schemes for scenarios where there are more than two candidates. In Preferential Voting, voters elicit a list from the most preferred to the least preferred candidate. The elected candidate is the one most often chosen as the most preferred by the voters.

Borda Count is a voting system in which voters draw up a list of candidates arranged according to their preference. Each position in the preference list then receives a score: in a list of n candidates, the candidate in the i-th position on the list receives the score n − i. To determine the winner, the final score of a candidate is the sum of that candidate's scores over all voters, and the candidate with the highest score is elected.
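A short worked sketch of Borda Count as defined above; the ballots are hypothetical (the candidate names merely evoke alignment tools), with first place scoring n − 1 and last place scoring 0.

    # Borda Count sketch: the candidate in the i-th position (1-based) of a ballot
    # with n candidates scores n - i. Ballots are hypothetical.
    from collections import defaultdict

    ballots = [
        ["mafft", "clustalw", "muscle"],
        ["clustalw", "mafft", "muscle"],
        ["mafft", "muscle", "clustalw"],
    ]

    scores = defaultdict(int)
    for ballot in ballots:
        n = len(ballot)
        for i, candidate in enumerate(ballot, start=1):
            scores[candidate] += n - i

    winner = max(scores, key=scores.get)
    print(dict(scores), "->", winner)  # mafft wins with score 5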
Borda Count has been used in the literature to build recommender systems based on Collaborative Filtering. Still using Borda Count, Tang and Tong (2016) propose BordaRank, a method that consists of applying the Borda Count method directly to the sparse matrix of evaluations, without predictions, to make a recommendation.

The way FReeP tackles the recommendation task is presented in three versions. In the first two versions, the algorithm aims at recommending a value for only one parameter at a time: while the naive version assumes that all parameters have a discrete domain, the enhanced second version extends the first one to deal with cases where a parameter has a continuous domain. The third version aims at recommending values for n > 1 parameters at a time.
FREEP - FEATURE RECOMMENDER FROM PREFERENCES

Next, we start by presenting the naive version of the method, which makes the recommendation for a single parameter at a time. Then, we present the improved version, with enhancements that improve performance and allow for working with parameters in the continuous domain. Finally, a generic version of the algorithm is presented, aiming at recommending values for multiple parameters at a time.
Discrete Domain Parameter Value Recommendation

Given a provenance database D, a parameter y ∈ Y, where Y is the set of workflow parameters, and a set P of preferences or restrictions defined by the user, where each p_i ∈ P is a pair (y_i, val_k), FReeP's one-parameter approach aims at recommending a value r for y such that the preferences P, together with the recommendation r for y, maximize the chances of the workflow activation running to completion. Based on the user's preferences, it would be possible to query the provenance database from which the experiment came to retrieve records that could assist in the search for values of the parameters for which no preferences were defined. However, FReeP is instead based on generating a model that generalizes the recommendation beyond the records stored in the provenance database.

The algorithm input data are: the target parameter for which the algorithm should make the recommendation, y; the set of user preferences P, given as a list of key-value pairs, where the key is a workflow parameter and the value is the user's preference for that parameter; and the provenance database, D.

The storage of provenance data for an experiment may vary from one WfMS to another. For example, SciCumulus, which uses a provenance representation derived from PROV, stores provenance in a relational database. Using the SciCumulus example, it is trivial for the user responsible for the experiment to elaborate a SQL query that returns the provenance data related to the parameters used in each activity in a key-value representation. The key-value representation can be easily stored in a csv file, which is the format expected for the provenance dataset by the FReeP implementation. Thus, converting provenance data to the csv format is up to the user. Still regarding the provenance data, the records present in the algorithm input containing information about the parameters must relate only to executions that concluded successfully, that is, with no failure that aborted the execution. The inclusion of components to query and transform provenance data and to enforce the selection of parameters from successful executions would require implementations for each type of WfMS, which is out of the scope of this article.

The initial step, partitions generation, builds the set of partitioning rules based on the user's preferences. Initially, the parameters in the preference set P are used to generate a powerset. This first step returns all non-empty subsets of the preference parameters, each of which is used as a partitioning rule. Then, FReeP initializes an iteration over the partitioning rules generated by the previous step. The iteration begins by selecting only the records that follow the user's preferences contained in the current rule set, named in the algorithm as the horizontal filter. Figure 6 uses the partitions presented in Figure 5 to show how the horizontal filter step works.
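A sketch of the partitions generation and horizontal filter steps under the assumptions just stated: the provenance is a csv file loaded with pandas, and preferences are equality pairs (as in the naive version). The file name, column names and preference values are hypothetical.

    # Sketch of partitions generation (powerset of the preference parameters) and of
    # the horizontal filter (keep records matching the current rule set).
    from itertools import chain, combinations
    import pandas as pd

    D = pd.read_csv("provenance.csv")        # successful executions only
    P = {"num_aligns": 4, "model": "WAG"}    # user preferences (equality operator)

    def powerset(keys):
        keys = list(keys)
        return chain.from_iterable(combinations(keys, r) for r in range(1, len(keys) + 1))

    for rule in powerset(P):                 # partitions generation
        partition = D
        for param in rule:                   # horizontal filter
            partition = partition[partition[param] == P[param]]
        if partition.empty:                  # skip partitions with no matching records
            continue
        # ... vertical filter, hypothesis generation and recommend steps follow ...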
The Cold Start problem is related to the lack of initial conditions for an algorithm, specifically in recommender systems. This problem occurs, for example, when there are few users with a similar profile for the neighborhood definition, or when there is a lack of ratings for enough items. FReeP can also be affected by the Cold Start problem. If all preferences were used at once to partition the provenance data, in some cases the resulting partition would be empty, because some of the user's preferences may be absent from the provenance data. Therefore, generating multiple partitions with subsets of preferences decreases the chance of obtaining only empty partitions. However, in the worst case, where none of the user's preferences are present in the workflow provenance, FReeP will not perform properly, thus failing to make any recommendation.

After the partitions generation and the horizontal and vertical filters, there is a filtered data set that follows part of the user's preferences. This filtered provenance data is used to generate the Machine Learning model in the hypothesis generation step.

All predictions generated by the recommend step, which is within the iteration over the partitioning rules, are stored. The last algorithm step, elect recommendation, uses all of these predictions as votes to define which value should be recommended for the target parameter. When an algorithm instance is set up to return a classifier-type model in the hypothesis generation step, the most voted value is elected as the recommendation. On the other hand, when an algorithm instance is set up to return a ranker-type model in the hypothesis generation step, the strategy is Borda Count. The use of the Borda Count strategy takes advantage of the list-of-lists form that the saved votes acquire when using the ranker model. This list-of-lists format occurs because each ranker prediction is a list, and since there are as many predictions as partitioning rules, the stored predictions take the form of a list of lists.
Discrete and Continuous Domain Parameter Value Recommendation

The naive version of FReeP allowed evaluating the algorithm's proposal. The proposal showed relevant results after initial tests (presented in the next section), so efforts were focused on improving its performance and utility. In particular, the following problems were identified: 1) the user is restricted in how parameter preferences can be defined; 2) categorical-domain parameters, when used as the class variable (the parameter to recommend), are treated exactly as they appear in the input data; 3) the Machine Learning models used can only learn when the class (parameter) variable has a discrete domain; 4) all partitions generated by the powerset of the workflow parameters present in the user preferences are used as partitioning rules for the algorithm.

Regarding problem 1, in Algorithm 1 the user was limited to defining preferences with the equality operator, which, depending on the user's preferences, is not enough; the enhanced version relaxes this restriction. Regarding problem 2, categorical values used as the class variable must be numerically encoded for some Machine Learning models, such as Support Vector Machines (SVM) (Wang, 2005). This pre-processing was included in Algorithm 2 as the classes preprocessing step, which consists in exchanging each distinct categorical value for a distinct integer. Note that the encoding of the parameter used as the class variable in the model generation is different from the encoding applied to the parameters used as attributes, represented by the preprocessing step.

Concerning problem 3, using classifiers to handle a continuous-domain class variable degrades the performance results, because the numerical class variables are treated as categorical. For continuous numerical-domain class variables, the suggested Machine Learning models are regressors (Myers and Myers, 1990). Thus, Enhanced FReeP checks the domain of the recommendation target parameter y, represented as the model select step in Algorithm 2.

To analyze problem 4, it is important to note that after converting categorical attributes with One-Hot encoding in the preprocessing step, the provenance database has a considerable increase in the number of attributes. Also, after the categorical attributes encoding, the parameters extracted from the user's preferences are also encoded for the partitions generation step. In Algorithm 1, the powerset of partitioning rules is calculated over all attributes derived from the original parameters after One-Hot encoding. If FReeP uses the powerset generated from the parameters present in the user's preference set as partitioning rules (in the partitions generation step), it can be very costly: using the powerset makes the complexity of the algorithm exponential in the number of parameters present in the user's preference set. Alternatives to select the best partitioning rules and handle the exponential cost are represented in Algorithm 2 as the optimized partitions generation step.
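The preprocessing, classes preprocessing and model select steps can be sketched as follows: One-Hot encoding for categorical attributes, integer encoding for a categorical target, and a classifier or a regressor chosen according to the target's domain. This is only an illustration of the steps named above, not the exact FReeP code, and the base models are illustrative.

    # Sketch of the preprocessing, classes preprocessing and model select steps.
    import pandas as pd
    from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor

    def prepare_and_pick_model(partition: pd.DataFrame, target: str):
        X = pd.get_dummies(partition.drop(columns=[target]))  # One-Hot encode attributes
        y = partition[target]
        if y.dtype == object:                 # categorical target: encode classes as integers
            y, classes = pd.factorize(y)
            model = KNeighborsClassifier(n_neighbors=3)
        else:                                 # continuous target: use a regressor
            classes = None
            model = KNeighborsRegressor(n_neighbors=3)
        model.fit(X, y)
        return model, list(X.columns), classes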
The two strategies proposed here are based on Principal Component Analysis (PCA) (Garthwaite et al., 2002) and on the Analysis of Variance (ANOVA) (Girden, 1992) statistical metric. The strategy based on PCA consists of extracting x principal components from the whole provenance database, pca_D, and, for each pt ∈ partitions, extracting pca_pt, the principal components of partition pt. Then, the norms ||pca_D − pca_pt|| are calculated, and the n partitioning rules whose partitions generated the principal components closest to pca_D are selected. (Algorithm 2 then concludes with the same final steps as the naive version: vote ← recommend(model, y); votes ← votes ∪ {vote}; recommendation ← elect_recommendation(votes); return recommendation.)
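The PCA-based strategy can be sketched as below: extract principal components from the whole numeric provenance data and from each candidate partition, then keep the n rules whose components are closest in norm to those of D. The number of components and the input shapes are assumptions for illustration.

    # Sketch of the PCA-based optimized partitions generation step: select the n rules
    # whose partitions yield principal components closest (in norm) to those of D.
    import numpy as np
    from sklearn.decomposition import PCA

    def closest_partitions(D_num, partitions, n_rules=3, n_components=2):
        # D_num: numeric matrix of the whole provenance; partitions: [(rule, matrix), ...]
        pca_D = PCA(n_components=n_components).fit(D_num).components_
        distances = []
        for rule, part in partitions:
            if part.shape[0] <= n_components:   # too few records to extract components
                continue
            pca_pt = PCA(n_components=n_components).fit(part).components_
            distances.append((np.linalg.norm(pca_D - pca_pt), rule))
        distances.sort(key=lambda pair: pair[0])
        return [rule for _, rule in distances[:n_rules]]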
Recommendation for n Parameters at a time

Algorithms 1 and 2 aim at producing a recommendation for a single parameter at a time. However, in a real usage scenario of scientific workflows, the WfMS will probably need to recommend more than one parameter at a time. A naive alternative to handle this problem is to execute Algorithm 2 for each of the target parameters, always adding the last recommendation to the user's preference set. This alternative assumes that the parameters to be recommended are independent random variables. One way to implement this strategy is by using a classifier chain (Read et al., 2011).

Nevertheless, this naive approach neglects that the order in which the target parameters are used during the algorithm iterations can influence the produced recommendations. The influence is due to parameter dependencies that can exist between two (or more) workflow activities (e.g., two activities consume a parameter produced by a third activity of the workflow). In Figure 3, the circles represent the activities of a workflow; activities 2 and 3 are preceded by activity 1 (e.g., they consume the output of activity 1). Using this example, it is possible that there is a dependency relationship between the parameters param2 and param3 and the parameter param1. In this case, the values of the parameters param2 and param3 can be influenced by the value of parameter param1.

In order to deal with this problem, FReeP leverages the Classifiers Chains Set (Read et al., 2011) concept. This technique allows for estimating the joint probability distribution of random variables based on a set of classifier chains. In this case, the random variables are the parameters for which values are to be recommended, and the joint probability distribution concerns the possible dependencies between these parameters. Classifiers Chains and Classifiers Chains Sets are techniques from the Multi-label Classification (Tsoumakas and Katakis, 2007) Machine Learning task.

Figure 8 depicts an architecture overview of the proposed algorithm, named Generic FReeP, which recommends n parameters simultaneously. The architecture presented in Figure 8 shows that the solution developed to make n parameter recommendations at a time is a wrapper around the single-parameter FReeP algorithm. This final approach is divided into five steps: identification of the parameters for the recommendation, generation of ordered sequences of these parameters, iteration over each of the generated sequences with the addition of each recommendation from FReeP to the user preference set, separation of the recommendations by parameter, and finally the choice of the recommended value for each target parameter. The formalization can be seen in Algorithm 3.

The first step, parameters extractor, extracts the workflow parameters that are not present in the user's preferences and will be the targets of the recommendations.
Thus, all other parameters that are not in the user's preferences will have values recommended. Lines 4 and 5 of the algorithm comprise the initialization of the variable responsible for storing the different recommendations for each parameter during the algorithm execution. Then, the list of all parameters that will be recommended is used to generate different orderings of these parameters, indicated by the sequence generators step. For example, let w be a workflow with four parameters and let u be a user with preferences pr_1 and pr_3 for the parameters p_1 and p_3, respectively. The parameters to be recommended are p_2 and p_4; in this case, two possible orderings are (p_2, p_4) and (p_4, p_2). Note that not all possible orderings are used by the algorithm; in fact, N of the possible orderings are selected at random.

Then, the algorithm initializes an iteration over each of the orderings generated by the sequence generators step, with another nested iteration over each parameter present in the current ordering. An intuitive explanation of the algorithm between lines 9 and 13 is that each parameter of the current sequence is used together with the user's preferences for its recommendation. At the end of the recommendation of one of the ordering's parameters, the recommendation is incorporated into the preference set used in the recommendation of
the next ordering parameter. In this iteration, the recommendations are grouped by parameter to facilitate the election of the recommended value for each target parameter.

The step of iterating over the generated sequences, always adding the last recommendation to the set of preferences, realizes the Classifiers Chains concept. To deal with the dependencies between workflow parameters that can influence a parameter value recommendation, the step that generates multiple sequences of parameters, combined with the Classifiers Chains, realizes the Classifiers Chains Set concept.

Finally, to choose the recommendation for each target parameter, a vote is taken on lines 15 and 16. The most voted procedure performs the majority election that defines the recommended value for each target parameter. This section presented the three algorithms that constitute the FReeP approach for the parameter recommendation problem in workflows. The proposals covered the two main scenarios for parameter value recommendation (a single parameter and multiple parameters at a time).
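Before moving to the experimental evaluation, the loop just described can be sketched as below, wrapping a single-parameter recommender in chained calls over N random orderings, with a majority vote at the end; `recommend_one` is a hypothetical stub standing in for Algorithm 2.

    # Sketch of Generic FReeP: chain single-parameter recommendations over N random
    # orderings of the target parameters, then elect each value by majority vote.
    # `recommend_one` is a hypothetical stand-in for Algorithm 2.
    import random
    from collections import Counter

    def recommend_many(recommend_one, all_params, preferences, provenance, n_orders=10):
        targets = [p for p in all_params if p not in preferences]
        votes = {p: [] for p in targets}
        for _ in range(n_orders):                 # Classifiers Chains Set style
            order = random.sample(targets, len(targets))
            prefs = dict(preferences)
            for param in order:                   # chain: feed each recommendation...
                value = recommend_one(param, prefs, provenance)
                prefs[param] = value              # ...into the next parameter's preferences
                votes[param].append(value)
        return {p: Counter(vs).most_common(1)[0][0] for p, vs in votes.items()}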
EXPERIMENTAL EVALUATION

This section presents the experimental evaluation of all versions of FReeP. First, we present the workflows used as case studies, namely SciPhy (Ocaña et al., 2011) and Montage (Jacob et al., 2009). Next, we present the experimental and environment setups. Finally, we discuss the results.
Case Studies

In this article, we consider two workflows from the bioinformatics and astronomy domains, namely SciPhy (Ocaña et al., 2011) and Montage (Jacob et al., 2009), respectively. SciPhy is a phylogenetic analysis workflow that generates phylogenetic trees (a tree-based representation of the evolutionary relationships among organisms) from input DNA, RNA and amino acid sequences. SciPhy has four major activities, as presented in Figure 9(a): (i) sequence alignment, (ii) alignment conversion, (iii) evolutionary model election and (iv) tree generation. SciPhy has been used in scientific gateways such as BioInfoPortal (Ocaña et al., 2020). SciPhy is a CPU-intensive workflow, because many of its activities (especially the evolutionary model election) commonly execute for several hours, depending on the input data and the chosen execution environment.

Montage (Jacob et al., 2009) is a well-known astronomy workflow that assembles astronomical images into mosaics by using FITS (Flexible Image Transport System) files. Those files include a coordinate system and the image size, rotation, and WCS (World Coordinate System) map projection.
Experimental and Environment Setup

All FReeP algorithms presented in this article were implemented using the Python programming language. The FReeP implementation also benefits from Scikit-Learn (Pedregosa et al., 2011) to train and evaluate the Machine Learning models; numpy (Walt et al., 2011), a numerical data manipulation library; and pandas (McKinney, 2011), which provides tabular data functionalities.

The machine on which the experiments were performed has a Celeron (R) Dual-Core T3300 @ 2.00GHz × 2 CPU, 4GB DDR2 RAM and a 132GB HDD. To measure recommendation performance when the parameter is categorical, precision and recall are used as metrics. Precision and recall are widely used for the quantitative assessment of recommender systems (Herlocker et al., 2004; Schein et al., 2002). Equation 1 defines precision and Equation 2 defines recall, following the recommender vocabulary, where TR is the set of correct recommendations and R is the set of all recommendations made. An intuitive explanation is that precision represents the fraction of the recommendations made that were appropriate, while recall represents the fraction of the appropriate recommendations that were made.

precision = |TR ∩ R| / |R|   (1)

recall = |TR ∩ R| / |TR|   (2)

MSE = (1/n) Σ_{i=1}^{n} (RV_i − TV_i)^2   (3)

When the parameter to be recommended is numerical, the performance of FReeP is evaluated with the Mean Squared Error (MSE). The MSE formula is given by Equation 3, where n is the number of recommendations, RV is the recommended value and TV is the true value.

In Figure 10, it is possible to check the correlation between the different attributes in the datasets. It is notable in both Figure 10a and Figure 10b that the attributes (i.e., workflow parameters) present a weak correlation. All these statistics are relevant to understand the results obtained by the experiments performed with each version of the FReeP algorithm.
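Equations 1-3 translate directly into code. The sketch below assumes recommendations are collected as sets of (parameter, value) pairs and numeric predictions as parallel lists; the sample values are made up.

    # Precision and recall (Equations 1 and 2) over recommendation sets, and MSE (Equation 3).
    def precision(TR, R):
        return len(TR & R) / len(R) if R else 0.0

    def recall(TR, R):
        return len(TR & R) / len(TR) if TR else 0.0

    def mse(recommended, true):
        return sum((rv - tv) ** 2 for rv, tv in zip(recommended, true)) / len(recommended)

    TR = {("model", "WAG"), ("num_aligns", 4)}   # correct recommendations (hypothetical)
    R = {("model", "WAG"), ("num_aligns", 8)}    # recommendations made
    print(precision(TR, R), recall(TR, R))        # 0.5 0.5
    print(mse([1.0, 2.0], [1.5, 2.0]))            # 0.125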
Discrete Domain Recommendation Evaluation

This experiment was modeled to evaluate the key concepts of FReeP using the naive version presented in Algorithm 1, which was developed to recommend one discrete-domain parameter at a time. The experiment aims at evaluating and comparing the performance of FReeP when its hypothesis generation step instantiates either a single classifier or a ranker. The ranker tested as a model was implemented using the Pairwise Label Ranking technique, and the K Nearest Neighbors (KNN) (Keller et al., 1985) classifier is used as the base model. Experiment 1 proceeds as follows (a sketch of the procedure appears after the list):

1. The algorithm is instantiated with the classifier or ranker and a recommendation target workflow parameter.
2. The provenance database is divided into k parts to follow a K-Fold Cross Validation procedure (Kohavi, 2001). At each step, the procedure takes k − 1 parts to train the model and the 1 remaining part to make the predictions. In this experiment, k = 5.
3. Each workflow parameter is used as the recommendation target parameter.
4. Each provenance record in the test data is used to retrieve the real value of the target parameter.
5. The parameters that are not the recommendation target are used as preferences, with the values from the current test record.
6. Then, the algorithm performs the recommendation, and both the result and the value present in the test record for the recommendation target parameter are stored.
7. Precision and recall values are calculated based on all K-Fold Cross Validation iterations.
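The procedure above can be sketched with scikit-learn's KFold. Here `recommend_one` is a deliberately trivial stub (it just returns the most frequent training value) standing in for Algorithm 1, and the csv file name is an assumption.

    # Sketch of the Experiment 1 procedure with 5-fold cross validation.
    import pandas as pd
    from sklearn.model_selection import KFold

    def recommend_one(target, prefs, train):
        return train[target].mode().iloc[0]       # stub: most frequent training value

    D = pd.read_csv("provenance.csv")
    hits, total = 0, 0
    for train_idx, test_idx in KFold(n_splits=5, shuffle=True).split(D):
        train, test = D.iloc[train_idx], D.iloc[test_idx]
        for _, record in test.iterrows():
            for target in D.columns:              # each parameter as recommendation target
                prefs = record.drop(target).to_dict()  # remaining values act as preferences
                value = recommend_one(target, prefs, train)
                hits += int(value == record[target])
                total += 1
    print("fraction of exact recommendations:", hits / total)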
Results

The results of Experiment 1 are presented and analyzed based on the values of precision and recall, in addition to the execution time. Figure 11 shows the execution of Algorithm 1 with the SciPhy provenance database, using both the classifier and the ranker. Only the KNN classifier with k = 3 gives a precision greater than 50%, and a high standard deviation is noticeable. Even with this unsatisfactory performance, Figure 12 shows that the KNN classifier presented better recall results than precision results, both in absolute values and in the standard deviation, which decreased slightly. In contrast, the ranker's recall was even worse than its precision results and still presented a very high standard deviation. Figure 13 shows the execution time, in seconds, needed to obtain the recommendations of the experiment for SciPhy. The execution time of the ranker is much more significant when compared to the time spent by the classifier. This behavior can be explained by the fact that the technique used to generate the ranker creates multiple binary classifiers. Another point to note is that the standard deviation of the ranker's execution time is also very high. It is important to note that when FReeP uses KNN it is memory-based, since the data needs to be loaded into main memory for each recommendation.

Analyzing Figure 14 (Montage), one can conclude that using k = 3 both for the classifier and for the ranker produces relevant results. The precision in this case reached 80%, and the standard deviation was considerably smaller compared to the precision results with the SciPhy dataset in Figure 11. For k ∈ {5, 7}, the same behavior was observed, with results considerably below expectations.
Considering the recall, Figure 15 shows that the results for k = 3 were the best for both the classifier and the ranker, although in this case they did not reach 80% (though they came close). It can be noted that the standard deviation was smaller when compared to the standard deviations found for precision. One interesting point about the execution time of the experiment with Montage, presented in Figure 16, is that for k ∈ {3, 7} the ranker spent less time than the classifier. This behavior can be explained because the ranker, despite being generated by a process where several classifiers are built, relies on binary classifiers; when used alone, the classifier needs to handle all values of the class variable, in this case the parameter recommendation values, at once. However, it is also important to note that the standard deviation for the ranker is much higher than for the classifier. In general, it was possible to notice that the use of the ranker did not bring encouraging results: in all cases, the ranker's precision and recall were lower than those presented by the classifier, and the standard deviation of the ranker's execution time was also very high. Another point to be noted is that the best precision and recall results were obtained with the data from the Montage workflow. These results may be linked to the fact that the Montage dataset has more records than the SciPhy dataset.
Discrete and Continuous Domain Recommendation Evaluation

Experiment 1 was modified to evaluate the performance of Algorithm 2, yielding Experiment 2. Algorithm 2 was executed with variations in the choice of classifiers and regressors, partitioning strategies, and the percentage of records retrieved from the provenance database. All values per algorithm parameter are presented in Table 4.
Results

The results of Experiment 2 are presented using precision, recall, and execution time for categorical-domain parameter recommendations, while numerical-domain parameter recommendations are evaluated using the MSE and the execution time. Based on the results obtained in Experiment 1, only classifiers were used as Machine Learning models in Experiment 2, i.e., we do not consider rankers.

The first observation when analyzing the precision data in Figure 17 is that the ANOVA partitioning strategy obtained better results than PCA. The precision of the ANOVA partitioning strategy is generally greater in absolute values, and the variation in precision for each attribute considered for recommendation is lower than with the PCA strategy. The classifiers have very similar performance for all percentages of partitions in the ANOVA strategy. On the other hand, the variation in the percentage of elements per partition also reflects a more significant variation in the results between the different classifiers. The Multi Layer Perceptron (MLP) classifier, which was trained using Stochastic Gradient Descent (Bottou, 2010) with a single hidden layer, presents the worst results, except in the setup that follows the PCA partitioning strategy with a percentage of 70% of the elements in the partitioning. The degradation of the MLP model's performance may be related to the fact that the numerical attributes are not normalized before the algorithm execution.

Experiment 2. Algorithm 2 Evaluation Script.

1. Algorithm 2 is instantiated with a classifier or regressor, a partitioning strategy, the percentage of data to be returned by the partitioning strategy, and a target workflow parameter.
2. The provenance database is divided using K-Fold Cross Validation, with k = 5.
3. Each provenance record in the test data is used to retrieve the target parameter's real value.
4. A random number x between 2 and the number of parameters present in the provenance database is chosen to simulate the number of preferences used in recommending the target parameter.
5. x parameters are chosen from the remaining test record to be used as preferences.
6. The algorithm performs the recommendation, and both the result and the test record value for the target parameter are stored.

The recall results in Figure 18 were very similar to the precision results in absolute values. One difference is the generally smaller variation of the recall results for each attribute used in the recommendation experiment. The Multi Layer Perceptron classifier presented a behavior similar to the precision results, with a degradation in the setup that includes ANOVA partitioning with 70% of the elements in the partitioning.

Figure 19 shows the average execution time in seconds during the experiment with categorical-domain parameters in each setup used. The execution time of the ANOVA partitioning strategy was, on average, half the time used with the PCA partitioning strategy.
The execution time using different classifiers for each attribute is also much smaller and more stable for the ANOVA strategy than for PCA, regardless of the element partition percentage.

Analyzing the precision, recall, and execution time data jointly, the ANOVA partitioning strategy showed the best recommendation performance for the categorical-domain parameters of the SciPhy provenance database. Furthermore, the percentage of elements per partition generated by the strategy has no significant impact on the results. Another interesting point is that a simpler classifier like KNN presented results very similar to those obtained by a more complex classifier like SVM. The execution time of the experiment with the Montage provenance database was much greater than the time needed with the data from the SciPhy workflow; the explanation is the difference in database size. Another observation is that the ANOVA partitioning strategy produces the fastest recommendations, and the percentage of elements in the partitioning generated by each partitioning strategy has no impact on the algorithm's performance. Finally, it was possible to notice that the more robust classifiers and regressors had their performance exceeded by simpler models in some cases for the data used.
Generic FReeP Recommendation Evaluation

A third experiment was modeled to evaluate the performance of Algorithm 3. As in Experiment 2, different variations, following the values in Table 4, were used in the algorithm execution. Precision, recall, and MSE are also the metrics used to evaluate the recommendations made by each algorithm instance.
Results

The results shown here were obtained by fixing the parameter n = 10 in Experiment 3 and using only the SciPhy provenance database. Based on the results of Experiment 2, it was decided to use the ANOVA partitioning strategy, recovering 50% of the elements from the provenance database. This choice was made because the ANOVA partitioning strategy was the one that obtained the best results in the previous experiment; as the percentage of data recovered by the strategy was not an impacting factor in the results, an intermediate percentage used in the previous experiment was selected. In addition, only KNN, with k ∈ {5, 7}, and SVM were kept as Machine Learning models.

Table 5 presents the results obtained with the variations of the Algorithm 3 instances. Each row in the table represents an Algorithm 3 instance setup. The column that draws the most attention is Failures. What happens is that, in some cases, the algorithm was not able to carry out the joint recommendation and therefore did not return any recommendation. It is important to remember that each algorithm setup was tested on a set of 10 records extracted randomly from the database. The random record selection process can select records whose parameter values are present only in the selected record. In this experiment, the selected examples are removed from the dataset, and therefore there is no other record that allows the correct execution of the algorithm.

Analyzing the results in Table 5, focusing on the column Failures and taking into account that 10 records were chosen for each setup, it is possible to verify that in most cases the algorithm was not able to make recommendations. However, considering only the recommendations made, it can be seen that the algorithm had satisfactory results for the precision and recall metrics. The values presented for the MSE metric were mostly satisfactory, differing only in the configurations of lines 4 and 7, both using the KNR regressor with k = 5. Another point to note is that the algorithm had more problems making recommendations when the SVM classifier was used. Furthermore, it is possible to note that algorithm setups with more sophisticated Machine Learning models such as SVM and SVR do not add performance to the algorithm, specifically for the SciPhy provenance dataset used.
RELATED WORK

Previous works in the literature have already relied on recommender systems to support scientific workflows. Moreover, hyperparameter tuning methods have goals similar to parameter recommendation. Hyperparameters are variables that cannot be estimated directly from data and, as a result, it is the user's task to explore and define their values. Hyperparameter Optimization (HPO) is a research area that emerged to assist users in adjusting the hyperparameters of Machine Learning models in a non-ad-hoc manner (Yang and Shami, 2020). The well-defined processes resulting from research in the area may speed up the experimentation process and allow for reproducibility and fair comparison between models. Among the different methods of HPO, we can mention Decision Theory, Bayesian Optimization, Multi-fidelity Optimization, and Metaheuristic Algorithms.

Among the Decision Theory methods, the most used are Grid Search (Bergstra et al., 2011) and Random Search (Bergstra and Bengio, 2012). For both strategies, the user defines a list of values to be experimented with for each hyperparameter. In Grid Search, the search for optimal values is performed by experimenting with the entire cartesian product of the predefined values. Random Search selects a sample of the hyperparameter combinations to improve the execution time of the whole process. While the exponential search space of Grid Search may be impossible to explore completely, in Random Search there is the possibility that an optimal combination will not be explored. Also, a problem common to both approaches is that the dependencies between the hyperparameters are not taken into account. FReeP considers the possible dependencies between parameters by following the concept of classifier chains.

The Bayesian Optimization (Eggensperger et al., 2013) method optimizes the search space exploration by using information from the previously tested hyperparameters to prune the testing of non-promising combinations. Despite using a surrogate model, the Bayesian Optimization method still requires evaluations of the target model to direct the search for the optimal hyperparameters. In a scientific workflow scenario, it is very costly from the economic and runtime perspectives to run an experiment, even more so only to evaluate a combination of parameter values. FReeP does not require any new workflow execution to recommend which values to use, as it uses only data from past executions.

Multi-fidelity Algorithms (Zhang et al., 2016) also have the premise of balancing the time spent searching for hyperparameters. This kind of algorithm is based on successively evaluating hyperparameters in a subset of the search space. Those strategies follow motivations similar to FReeP's partitions generation. However, in a scientific workflow scenario, Multi-fidelity algorithms still require workflow executions to evaluate the quality of a combination.

The Metaheuristic Algorithms (Gogna and Tayal, 2013), based on the evolution of populations, use different forms of combination of pre-existing populations in the hope of generating better populations at each generation.
For hyperparameter tuning, the candidate hyperparameter combinations play the role of the population.</ns0:p><ns0:p>Still, FReeP does not require any new execution of the workflow a priori to evaluate a recommendation given by the algorithm.</ns0:p><ns0:p>In general, the works that seek to assist scientists with some type of recommendation involving scientific workflows are focused on the composition phase. <ns0:ref type='bibr' target='#b92'>Zhou et al. (2018)</ns0:ref> use a graph-based clustering technique to recommend workflows that can be reused in the composition of a developing workflow.</ns0:p><ns0:p>De <ns0:ref type='bibr' target='#b16'>Oliveira et al. (2008)</ns0:ref> use workflow provenance to extract connection patterns between components in order to recommend new components for a workflow under composition. For each new component used in the composition of the workflow, further components are recommended. <ns0:ref type='bibr' target='#b35'>Halioui et al. (2016)</ns0:ref> use Natural Language Processing combined with specific ontologies in the field of Bioinformatics to extract concrete workflows from works in the literature. After the reconstruction of the concrete workflows, tool combination patterns, their parameters, and the input data used in these workflows are extracted. All these extracted data can be used as assistance for composing new workflows that solve problems related to the mined ones.</ns0:p><ns0:p>Still concerned with assistance during the workflow composition phase, <ns0:ref type='bibr' target='#b60'>Mohan et al. (2015)</ns0:ref> propose the use of Folksonomy <ns0:ref type='bibr' target='#b30'>(Gruber, 2007)</ns0:ref> to enrich the data used for the recommendation of other workflows similar to a workflow under development. A workflow design tool was developed that allows free specification tags to be attached to each component, making it possible to base the recommendation strategy not only on the workflow syntax, but also on component semantics. <ns0:ref type='bibr' target='#b76'>Soomro et al. (2015)</ns0:ref> use domain ontologies as a knowledge base to incorporate semantics into the recommendation process. A hybrid recommender system was developed using ontologies to improve the well-known recommendation strategy based on the extraction of patterns from other workflows. <ns0:ref type='bibr' target='#b88'>Zeng et al. (2011)</ns0:ref> use data and control dependencies between activities, stored in the workflow provenance, to build a causality table and a weight table. Subsequently, a Petri net <ns0:ref type='bibr' target='#b91'>(Zhou and Venkatesh, 1999</ns0:ref>) is used to recommend other components for the workflow composition.</ns0:p><ns0:p>In the context of helping less experienced users in the use of scientific workflows, <ns0:ref type='bibr' target='#b85'>Wickramarachchi et al. (2018)</ns0:ref> and <ns0:ref type='bibr' target='#b53'>Mallawaarachchi et al. (2018)</ns0:ref> present experiments showing that the use of the SWfMS BioWorkflow <ns0:ref type='bibr' target='#b84'>(Welivita et al., 2018)</ns0:ref> is effective in increasing student engagement and learning in Bioinformatics.</ns0:p></ns0:div>
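The Grid Search and Random Search strategies discussed above can be contrasted with a short, illustrative Python sketch. The objective function below is a toy stand-in; in a workflow setting, each evaluation would require an expensive execution, which is precisely the cost FReeP avoids by relying on past provenance.

```python
import itertools
import random

space = {"k": [3, 5, 7], "kernel": ["linear", "rbf"], "C": [0.1, 1.0, 10.0]}

def evaluate(config):
    # Toy objective standing in for a costly workflow/model evaluation.
    return -abs(config["k"] - 5) - abs(config["C"] - 1.0)

# Grid Search: exhaustively test the entire Cartesian product of predefined values.
grid = [dict(zip(space, combo)) for combo in itertools.product(*space.values())]
best_grid = max(grid, key=evaluate)

# Random Search: sample a fixed budget of combinations; the optimum may be missed.
rng = random.Random(0)
sampled = [{name: rng.choice(vals) for name, vals in space.items()} for _ in range(5)]
best_random = max(sampled, key=evaluate)

print("grid:", best_grid, "| random:", best_random)
```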
<ns0:div><ns0:p>Some works propose recommendation approaches that assist less experienced users in the analysis of unknown domains, as is the case of <ns0:ref type='bibr' target='#b44'>Kanchana et al. (2016)</ns0:ref> and <ns0:ref type='bibr' target='#b46'>Kanchana et al. (2017)</ns0:ref>, where a chart recommendation system was developed and evolved based on the use of metadata from data of any domain.</ns0:p><ns0:p>The system uses Machine Learning and rule-based components that are refined with user feedback on the usefulness of the recommended charts.</ns0:p><ns0:p>Most of the approaches that use recommender system methods to support the scientific process are closely linked to the experiment's composition phase. The execution phase, where there is a need to adjust parameters, still lacks alternatives. Table <ns0:ref type='table' target='#tab_12'>6</ns0:ref> compares the related work with the FReeP approach. In Table <ns0:ref type='table' target='#tab_12'>6</ns0:ref> we show the name of the approach (column Approach), whether it is focused on a specific domain or is generic (column Domain), whether it prunes the search space or considers the entire search space (column Search Space), whether the approach considers dependencies among parameters (column Considers Dependencies), whether it requires a new execution of the workflow or the application (column Requires Execution), and in which phase of the experiment life-cycle the approach is executed (column Life-cycle Phase). If there is no information about an analyzed characteristic in the paper, we mark it as N/A (Not Available) in Table <ns0:ref type='table' target='#tab_12'>6</ns0:ref>. This work proposes a hybrid recommendation algorithm capable of making value recommendations for one or multiple parameters of a scientific workflow, taking into account the user's preferences.</ns0:p></ns0:div>
<ns0:div><ns0:head>FINAL REMARKS</ns0:head><ns0:p>The precision and recall results obtained from the experiments suggest that FReeP is useful in recommending missing parameter values, decreasing the probability that failures will abort scientific experiments performed in High-Performance Computing environments. These results show a high degree of reliability, especially in the recommendation for one workflow parameter, due to the number of experimental iterations performed to obtain the evaluations. The low availability of data for the experiments on the recommendation for n parameters impacts the reliability of the results obtained in this scenario. However, the results presented for the n-parameter recommendation show that the approach is promising.</ns0:p><ns0:p>FReeP has a number of characteristics that contribute to saving runtime and financial resources when executing scientific experiments. First, FReeP can be executed on standard hardware, such as that used in the experiments presented in this article, without the need for an HPC environment. Besides, FReeP does not require any further execution of the scientific workflow to assess the recommendation's quality, as it uses provenance data. This characteristic of not requiring an instance of the scientific experiment to be performed is the main difference from, and advantage over, the Hyperparameter Optimization strategies widely used for tuning Machine Learning models.</ns0:p><ns0:p>In FReeP, all training data are collected and each tuple represents a different execution of the workflow. This data gathering process can nevertheless be time-consuming. However, it is expected that the recommendation process will be performed once, while the execution of the same workflow is repeated a significant number of times (varying the known parameters). In addition, many research groups already maintain a database containing the provenance <ns0:ref type='bibr' target='#b21'>(Freire et al., 2008)</ns0:ref> that can be used to recommend parameter values for non-expert users, i.e., the scientists will not need to actually execute the workflow to train the model since provenance data is already available. Public provenance repositories such as ProvStore 3 <ns0:ref type='bibr' target='#b41'>(Huynh and Moreau, 2015)</ns0:ref> can be used as input for FReeP. For example, ProvStore contains 1,136 documents (each one associated with a workflow execution) of several different real workflows uploaded by research groups around the world.</ns0:p><ns0:p>From the perspective of runtime, when using the ANOVA partitioning strategy in the experimental evaluation with the provenance data from the SciPhy workflow, the average time spent on the recommendations is only about 4 minutes. In comparison, the average execution time of the SciPhy workflow, extracted from the provenance data used, is about 17 hours and 32 minutes. Still considering the ANOVA partitioning strategy, in the experimental evaluation with the Montage workflow provenance data, the average time spent on the recommendation is about 1 hour and 30 minutes. 
In contrast, the average execution time of a Montage workflow experiment, extracted from the provenance data used, was about 2 hours and 3 minutes.</ns0:p><ns0:p>Although the favorable ratio between the experiment's execution time and the recommendation time is more evident when analyzing the data from the SciPhy workflow, it is essential to emphasize that more robust hardware is not necessary to execute the recommendation process. Yet, future improvements in FReeP include employing parallelism techniques to further decrease the recommendation time.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>The scientific process involves observing phenomena from different areas, formulating hypotheses, testing, and refining them. Arguably, this is an arduous job for the scientist in charge of the process. With the advances in computational resources, there is a growing interest in helping scientists in scientific experimentation. A significant step towards a more robust aid was the adoption of scientific workflows as a model for representing scientific experiments, and of Scientific Workflow Management Systems to support the management of experiment executions.</ns0:p><ns0:p>Computational execution of the experiments represented as scientific workflows relies on the use of computer programs that play the role of each stage of the experiment. In addition to input data, these programs often need additional configuration parameters to be adjusted to simulate the experiment's conditions. The scientist responsible for the experiment ends up developing an intuition about the sets of parameters that lead to satisfactory results. However, another scientist who runs the same experiment will not have the same experience, which may lead him/her to define a set of parameters that will not result in a successful experiment.</ns0:p><ns0:p>Several proposals in the literature have aimed at supporting the composition phase of the experiments, but recommending parameter values for the experiment execution phase is still an open field. This article presented FReeP: Feature Recommender From Preferences, an algorithm for recommending values for parameters in scientific workflows considering the user's preferences. The goal was to allow a new user to express their preferences of values for a subset of workflow parameters and to recommend values for the parameters that had no preference defined. FReeP has three versions, all of them relying on Machine Learning techniques. Two approaches focus on the value recommendation for one parameter at a time.</ns0:p><ns0:p>The third instance addresses recommending values for all the other parameters of a workflow for which a user preference was not defined.</ns0:p><ns0:p>The proposed algorithm proved to be useful for recommending one parameter, indicating a path for the recommendation of n parameters. Nevertheless, there are some limitations. FReeP, as a memory-based algorithm, faces scalability issues, as its implementation can consume a lot of computational resources.</ns0:p><ns0:p>Yet, the recommendations of FReeP are limited by the existence of examples in the provenance dataset. This means that the algorithm cannot make any 'default' recommendation if there are no examples for the algorithm's execution, nor recommend values that are not present in the provenance dataset. Also, the recommendation algorithm may have a longer processing time than the experiment itself. Another point is that all the instances have the same weight during the recommendation process. The algorithm does not consider the expertise of the user who performed a previous execution to adjust an example's weight. Still,</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>published in the Proceedings of the 2018 Brazilian Symposium on Databases (SBBD). This extended version provides new empirical evidence regarding several
workflow case studies, as well as a broader discussion on related work and experiments.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Example of the votes each candidate received, in the voters' order of preference.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2 depicts an example of the Borda Count. There are four candidates: A, B, C and D, and five vote ballots. The lines in each ballot represent the preference positions occupied by each candidate. As there are four candidates, the candidate preferred by a voter receives three points. The score for candidate D is computed as follows: 1 voter elected candidate D as the preferred candidate, yielding 1 * 3 = 3 points; 2 voters elected candidate D as the second most preferred candidate, yielding 2 * 2 = 4 points; 2 voters elected candidate D as the third most preferred candidate, yielding 2 * 1 = 2 points; and 0 voters elected candidate D as the least preferred, yielding 0 * 0 = 0 points. Finally, candidate D's total score = 3 + 4 + 2 + 0 = 9. Voting algorithms are used together with recommender systems to choose the items the users have liked best, in order to make a good recommendation. <ns0:ref type='bibr' target='#b68'>Rani et al. (2017)</ns0:ref> proposed a recommendation algorithm based on clustering and a voting schema that, after clustering and selecting the target user's cluster, uses the Borda Count to select the most popular items in the cluster to be recommended. Similarly, <ns0:ref type='bibr' target='#b51'>Lestari et al. (2018)</ns0:ref> compares Borda Count and the Copeland Score Al-Sharrah (2010) in a recommendation</ns0:figDesc></ns0:figure>
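The Borda Count scoring just described is compact enough to sketch directly. The following minimal Python example uses hypothetical ballots arranged so that candidate D is ranked first once, second twice, and third twice, reproducing the worked score of 9.

```python
from collections import defaultdict

def borda_count(ballots):
    """With m candidates, the candidate at 0-based position i earns m-1-i points."""
    scores = defaultdict(int)
    for ballot in ballots:
        m = len(ballot)
        for position, candidate in enumerate(ballot):
            scores[candidate] += m - 1 - position
    return dict(scores)

ballots = [
    ["D", "A", "B", "C"],  # D first:  3 points
    ["A", "D", "B", "C"],  # D second: 2 points
    ["B", "D", "C", "A"],  # D second: 2 points
    ["A", "B", "D", "C"],  # D third:  1 point
    ["C", "B", "D", "A"],  # D third:  1 point
]
print(borda_count(ballots)["D"])  # 9
```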
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3 Figure 3 .</ns0:head><ns0:label>33</ns0:label><ns0:figDesc>Figure 3 depicts a synthetic workflow, where one can see four activities represented by colored circles, where activities 1, 2, and 3 have one parameter each. To execute the workflow, it is required to define values for parameters 1, 2, and 3. Given a scenario where a user has not defined values for all parameters, FReeP aims at helping the user to define values for the missing parameters. For this, FReeP divides the problem into two sub-tasks: 1) recommendation for only one parameter at a time; 2) recommendation for n parameters at once. The second task is more challenging than the first, as parameters of different activities may present some data dependencies.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 presents an architecture overview of FReeP's naive version. The algorithm receives as input the provenance database, a target workflow, and the user preferences. User preferences are also an input because this article assumes that the user already has a subset of parameters for which values to use have already been defined. In this naive version, the user preferences are only allowed in the form a = b, where a is a parameter, and b is a desired value for a.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. FReeP Architecture Overview.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Example of FReeP's Partitioning Rules Generation for Sciphy provenance dataset using user's preferences.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>With a model created, we can use it to recommend the value for the target parameter. This step is represented in FReeP as the recommendation step, and the recommendation of parameter y is made from the user's preferences. It is important to emphasize that the model's training data may contain parameters for which the user did not specify any preference. In this case, an attribute of the instance submitted to the hypothesis does not have a defined value. To clarify the problem, let PW be the set of all workflow parameters; PP the workflow parameters for which preference values have been defined; PA the parameters present in the partition rules of an iteration over the partitioning rules; and PV = (PW − PP) ∪ (PP ∩ PA) ∪ {y}. There may be parameters p ∈ PV such that p ∉ PP, and for those parameters p there are no values defined a priori. To handle this problem, the average values present in the provenance data are used to fill in the numerical attributes' values, and the most frequent values in the provenance data are used for the categorical attributes.</ns0:figDesc></ns0:figure>
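The mean/mode filling rule just described can be sketched in a few lines of pandas. This is a minimal illustration with made-up column names, not FReeP's actual implementation.

```python
import pandas as pd

def fill_missing(instance, provenance):
    """Fill undefined attributes: numeric columns get the provenance mean,
    categorical columns get the most frequent provenance value."""
    filled = dict(instance)
    for col in provenance.columns:
        if filled.get(col) is None:
            if pd.api.types.is_numeric_dtype(provenance[col]):
                filled[col] = provenance[col].mean()
            else:
                filled[col] = provenance[col].mode().iloc[0]
    return filled

provenance = pd.DataFrame({
    "num_aligns": [10, 12, 14],
    "model1": ["WAG", "WAG", "JTT"],
})
print(fill_missing({"num_aligns": None, "model1": None}, provenance))
# {'num_aligns': 12.0, 'model1': 'WAG'}
```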
<ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>The Enhanced FReeP allows the user to employ the relational operators ==, >, >=, <, <= and != to define his/her preferences. In addition, two logical operators are also supported when setting preferences: | and &. Preferences combining the supported operators are also allowed, for example: (a > 10) | (a < 5). However, by allowing users to define their preferences in this way, we create a problem when setting up the instances for the recommendation step. As seen, PW represents the set of all workflow parameters; PP are the workflow parameters for which preference values have been set; PA the parameters present in the partitioning rules of an iteration over the partition rules; and PV = (PW − PP) ∪ (PP ∩ PA) ∪ {y}. Thus, there may be parameters p ∈ PV such that p ∉ PP, and for those parameters p, there are no values defined a priori. This enhanced version of the proposal allows the user's preferences to be expressed in a more relaxed way, which demands creating the instances used in the recommendation step from preferences that include a range (or set) of values. To handle this issue, all possible instances are generated from the combinations of preference values. In case the preference is related to a numerical domain parameter and is defined in terms of a values range, like a ≤ 10.5, FReeP uses all values present in the source provenance database that follow the preference restriction. It is important to note that, for both numerical and categorical parameters, the combinations of possible values are those present in the provenance database that respect the user's preferences. Then, predictions are made for a set of instances using the model learned during the training phase. Regarding problem 2, the provenance database, in general, presents attributes with numerical and categorical domains. It is FReeP's responsibility to convert categorical values into a numerical representation, due to restrictions related to the nature of the training algorithms of the Machine Learning models, e.g.</ns0:figDesc></ns0:figure>
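The range-style preference handling described above amounts to filtering the values observed in past executions through the preference predicate. A minimal sketch, with illustrative names rather than FReeP's actual API:

```python
import operator

OPS = {"==": operator.eq, "!=": operator.ne, ">": operator.gt,
       ">=": operator.ge, "<": operator.lt, "<=": operator.le}

def candidate_values(provenance_values, op, threshold):
    """Keep only the values seen in past executions that satisfy the preference."""
    return sorted({v for v in provenance_values if OPS[op](v, threshold)})

observed = [8.0, 9.5, 10.5, 12.0, 15.0]       # one parameter's values in past runs
print(candidate_values(observed, "<=", 10.5))  # [8.0, 9.5, 10.5]
```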
<ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>pt such that pca_D − pca_pt resulted in the lowest calculated values. Note that both x and n are parameters defined when executing the algorithm. In summary, the PCA strategy will select the partitions whose extracted principal components are the closest to the principal components of the original provenance dataset. The ANOVA strategy seeks the n partitioning rules that best represent D, selecting those that generate partitions where the data variance is closest to the variance of D. In short, the variance of the original data and the variance of each partition are calculated using the ANOVA metric, and then the partitions with the variance most similar to that of the original provenance data are selected. Here, the n rules are defined in terms of the data percentage required to represent the entire data set, and that parameter must also be defined at algorithm execution. Using the PCA or ANOVA partitioning strategies means that the partitioning rules used by FReeP can be reduced, depending on the associated parameters that need to be defined. Algorithm 2 Enhanced FReeP. Require: y: recommendation target parameter; P: {(param, val) | param is a workflow parameter, val is the preference value for param}; D: the provenance dataset, where l is the number of workflow parameters and m is the number of records. 1: procedure FREEP(y, P, D) … for each partition ∈ partitions do 6: data ← horizontal_filter(D′, partition) 7: data ← vertical_filter(data, partition) 8: model_type ← model_select(data, y) 9: data′ ← preprocessing(data) 10: model ← hypothesis_generation(data′, y, model_type) 11:</ns0:figDesc></ns0:figure>
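A minimal sketch of the ANOVA-style selection just described: rank candidate partitions by how close their variance is to the variance of the full provenance data, and keep the best n. The function and variable names are illustrative.

```python
import numpy as np

def select_partitions(full_data, partitions, n):
    """Keep the n partitions whose variance is closest to the full data's variance."""
    full_var = np.var(full_data)
    ranked = sorted(partitions, key=lambda part: abs(np.var(part) - full_var))
    return ranked[:n]

data = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
parts = [data[:3], data[2:5], data[1:]]
for p in select_partitions(data, parts, n=2):
    print(p)
```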
<ns0:figure xml:id='fig_11'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Generic FReeP architecture overview</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>Figure 9(b) shows the Montage activities: (i) ListFITS, which extracts compressed FITS files, (ii) Projection, which maps the astronomical positions into a Euclidean plane, (iii) SelectProjections, which joins the planes into a single mosaic file, and (iv) CreateIncorrectedMosaic, which creates an overlapping mosaic as an image. Programs (v) CalculateOverlap, (vi) ExtractDifferences, (vii) CalculateDifferences, (viii) FitPlane, and (ix) CreateMosaic refine the image into the final mosaic. Montage is a data-intensive workflow, since a single execution of Montage can produce several GBs of data.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. The abstract specification of (a) SciPhy and (b) Montage</ns0:figDesc><ns0:graphic coords='16,203.77,63.78,289.50,206.79' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc>dataset attributes. Also, in the Montage dataset, the crota2 attribute (a float value that represents an image rotation on the sky) has the largest value range and the largest standard deviation. The dec (an optional float value that represents Dec for region statistics) and crval2 (a float value that represents the Axis 2 sky reference value in the Montage workflow) attributes have close statistics and are the attributes with the smallest data range and the smallest standard deviation in the Montage data.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. Datasets Attributes Correlation matrices.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head /><ns0:label /><ns0:figDesc>as the classifier of this ranker implementation. The k parameter of the K Nearest Neighbors classifier was set to 3, 5, and 7 for both the ranker and the classifier. The choice of k ∈ {3, 5, 7} is because small datasets are used, and thus k values greater than 7 do not return any neighbors in the experiments. Experiment 1. Algorithm 1 Evaluation Script.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11. Precision results with SciPhy data.</ns0:figDesc><ns0:graphic coords='18,141.73,443.93,132.35,116.28' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_18'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12. Recall results with SciPhy data.</ns0:figDesc><ns0:graphic coords='18,282.35,444.25,132.34,115.64' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_19'><ns0:head>Figure 13 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Figure 13. Experiment recommendation execution time with SciPhy data.</ns0:figDesc><ns0:graphic coords='18,422.96,437.53,132.35,117.12' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_20'><ns0:head>Figure 14 .</ns0:head><ns0:label>14</ns0:label><ns0:figDesc>Figure 14. Precision results with Montage data.</ns0:figDesc><ns0:graphic coords='19,141.73,191.44,132.34,119.23' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_21'><ns0:head>Figure 15 .</ns0:head><ns0:label>15</ns0:label><ns0:figDesc>Figure 15. Recall results with Montage data.</ns0:figDesc><ns0:graphic coords='19,282.35,189.34,132.34,123.43' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_22'><ns0:head>Figure 16 .</ns0:head><ns0:label>16</ns0:label><ns0:figDesc>Figure 16. Experiment recommendation execution time with Montage data.</ns0:figDesc><ns0:graphic coords='19,422.96,184.18,132.34,121.78' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_24'><ns0:head /><ns0:label /><ns0:figDesc>7. Precision and recall, or MSE values are calculated based on all K-Fold Cross Validation iterations.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_25'><ns0:head>Figure 17 .</ns0:head><ns0:label>17</ns0:label><ns0:figDesc>Figure 17. Precision results with Sciphy data.</ns0:figDesc><ns0:graphic coords='20,141.73,337.35,132.35,123.85' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_26'><ns0:head>Figure 18 .</ns0:head><ns0:label>18</ns0:label><ns0:figDesc>Figure 18. Recall results with Sciphy data.</ns0:figDesc><ns0:graphic coords='20,282.35,338.77,132.35,121.01' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_27'><ns0:head>Figure 19 .</ns0:head><ns0:label>19</ns0:label><ns0:figDesc>Figure 19. Experiment recommendation execution time with Sciphy data.</ns0:figDesc><ns0:graphic coords='20,422.96,333.00,132.34,120.61' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_28'><ns0:head>Figure</ns0:head><ns0:label /><ns0:figDesc>Figure 20a presents the results obtained for the numerical domain parameter of the SciPhy provenance database. The data show zero MSE in all cases, except for the use of the Multi Layer Perceptron in the regression. This result can be explained by the small database and the few different values for each</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_29'><ns0:head>Figure 20 .</ns0:head><ns0:label>20</ns0:label><ns0:figDesc>Figure 20. MSE results and recommendation execution time with Sciphy data.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_30'><ns0:head>Figure 21 .</ns0:head><ns0:label>21</ns0:label><ns0:figDesc>Figure 21. Precision results with Montage data.</ns0:figDesc><ns0:graphic coords='21,141.73,471.09,132.35,120.05' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_31'><ns0:head>Figure 22 .</ns0:head><ns0:label>22</ns0:label><ns0:figDesc>Figure 22. Recall results with Montage data.</ns0:figDesc><ns0:graphic coords='21,282.35,471.36,132.34,119.51' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_32'><ns0:head>Figure 23 .</ns0:head><ns0:label>23</ns0:label><ns0:figDesc>Figure 23. Experiment recommendation execution time with Montage data.</ns0:figDesc><ns0:graphic coords='21,422.96,462.29,132.35,125.69' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_33'><ns0:head>Figure 24 .</ns0:head><ns0:label>24</ns0:label><ns0:figDesc>Figure 24. MSE results and recommendation execution time with Montage data.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>Learning models have numerical and categorical domain parameters. However, traditional Machine Learning models generally work with numerical data, because the generation of these models, in most cases, involves many numerical calculations. Therefore, it is necessary to codify these categorical parameters into a numerical representation. The technique used here to encode categorical domain parameters into a numerical representation is One-Hot encoding <ns0:ref type='bibr' target='#b9'>(Coates and Ng, 2011)</ns0:ref>. This technique consists of creating a new binary attribute, that is, an attribute whose domain is 0 or 1, for each different attribute value present in the dataset.</ns0:figDesc><ns0:table /><ns0:note>The encoded provenance data allows building Machine Learning models to make predictions for the target parameter in the hypothesis generation step. The generated model has the parameter y as the class variable, and the other parameters present in the vertical filter step output data are the attributes used to generalize the hypothesis. The model can be a classifier, where the model's prediction is a single recommendation value, or a ranker, where its prediction is an ordered list of values, from the value most suitable for the recommendation to the least suitable.</ns0:note></ns0:figure>
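As a concrete illustration of the One-Hot encoding step, the short pandas sketch below turns a categorical provenance column into binary attributes; the column names are made up for the example.

```python
import pandas as pd

provenance = pd.DataFrame({
    "num_aligns": [10, 12, 10],
    "model1": ["WAG", "JTT", "WAG"],
})
# One new 0/1 column per distinct categorical value.
encoded = pd.get_dummies(provenance, columns=["model1"], dtype=int)
print(encoded)
#    num_aligns  model1_JTT  model1_WAG
# 0          10           0           1
# 1          12           1           0
# 2          10           0           1
```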
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Dataset characteristics.Table1summarizes the main characteristics of the datasets. The Total Records column shows the number of past executions of each workflow. Each dataset record can be used as an example for generating Machine Learning models during the algorithm's execution. As seen, the SciPhy dataset is relatively small compared to Montage. The column Total Attributes shows how many activity parameters are considered in each workflow execution. Both workflows have the same number of categorical domain parameters,</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Total Records</ns0:cell><ns0:cell>Total Attributes</ns0:cell><ns0:cell>Categorical Attributes</ns0:cell><ns0:cell>Numerical Attributes</ns0:cell></ns0:row><ns0:row><ns0:cell>Sciphy</ns0:cell><ns0:cell>376</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>Montage</ns0:cell><ns0:cell>1565</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>6</ns0:cell></ns0:row></ns0:table><ns0:note>as presented in the column Categorical Attributes. Montage has more numeric domain parameters than SciPhy, as shown in the Numerical Attributes column.1 https://confluence.pegasus.isi.edu/display/pegasus/WorkflowGenerator 2 Data sources are available at http://irsa.ipac.caltech.edu.15/30PeerJ Comput. Sci. reviewing PDF | (CS-2020:11:55050:2:0:NEW 14 May 2021)Manuscript to be reviewed Computer Science</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Montage dataset statistics.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Parameter</ns0:cell><ns0:cell>Minimum Value</ns0:cell><ns0:cell>Maximum Value</ns0:cell><ns0:cell>Standard Deviation</ns0:cell></ns0:row><ns0:row><ns0:cell>cntr</ns0:cell><ns0:cell>0.00</ns0:cell><ns0:cell>134.00</ns0:cell><ns0:cell>35.34</ns0:cell></ns0:row><ns0:row><ns0:cell>ra</ns0:cell><ns0:cell>83.12</ns0:cell><ns0:cell>323.90</ns0:cell><ns0:cell>91.13</ns0:cell></ns0:row><ns0:row><ns0:cell>dec</ns0:cell><ns0:cell>-27.17</ns0:cell><ns0:cell>28.85</ns0:cell><ns0:cell>17.90</ns0:cell></ns0:row><ns0:row><ns0:cell>crval1</ns0:cell><ns0:cell>83.12</ns0:cell><ns0:cell>323.90</ns0:cell><ns0:cell>91.13</ns0:cell></ns0:row><ns0:row><ns0:cell>crval2</ns0:cell><ns0:cell>-27.17</ns0:cell><ns0:cell>28.85</ns0:cell><ns0:cell>17.90</ns0:cell></ns0:row><ns0:row><ns0:cell>crota2</ns0:cell><ns0:cell>0.00</ns0:cell><ns0:cell>360.00</ns0:cell><ns0:cell>178.64</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>SciPhy dataset statistics. Statistics on the SciPhy numerical attributes are shown in Table 2. This table presents the minimum and maximum values of each attribute, in addition to the standard deviation. The attribute prob1 (the probability that a given evolutionary relationship is valid) has the highest standard deviation, and its range of values is the largest among all attributes. The prob2 attribute (also the probability that a given evolutionary relationship is valid) has both a range of values and a standard deviation similar to prob1. The standard deviation of the values of num_aligns (the total number of alignments in a given data file) is very small, while the attribute length (the maximum sequence length in a specific data file) has a high standard deviation, considering its value range.</ns0:figDesc><ns0:table /><ns0:note>The Montage numerical attributes, shown in Table 3, in most cases have a smaller standard deviation than the SciPhy ones. On average, Montage attributes also have a smaller value range than SciPhy</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head /><ns0:label /><ns0:figDesc>(a) Montage dataset attributes Correlation Matrix. (b) Sciphy dataset attributes Correlation Matrix.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Algorithm 2 values per parameter used in Experiment 2</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Experiment 3 results with Sciphy dataset</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Comparison between FReeP and related work.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:note place='foot' n='3'>https://openprovenance.org/store/</ns0:note>
</ns0:body>
" | "Reply to the Reviewers
Re: Manuscript ID 55050
“Provenance and Machine Learning-based Recommendation of Parameter Values in Scientific Workflows”
Daniel Silva Junior, Esther Pacitti, Aline Paes, Daniel de Oliveira
PeerJ Computer Science
Thank you very much for sending the report on the decision of our manuscript ID 55050. We would like
to thank the reviewer for the very careful reading of this manuscript and the constructive suggestion to further
improve the quality of our manuscript.
We revised the manuscript and we believe that this new version addresses the remarks of the reviewer.
As before, we highlight the major changes in the revised manuscript. In this letter, we provide our specific
responses to the points raised by the referee. For each suggestion, our response follows below (the reviewer's
comments are in italics).
Yours Sincerely,
Prof. Aline Paes, on behalf of the authors
Corresponding author: alinepaes@ic.uff.br.
May 14, 2021
Reviewer #1, comment #1
It would be good, if the authors include some comparison tables (summary tables) of the related work,
otherwise, it may become difficult to flow the textual description clearly.
Our response #1.1
First of all, we would like to thank you for reading, reviewing, and giving rich feedback on our work.
We appreciate your suggestion. We have improved the Related Work section by including Table 6 between
lines 890 and 891, which presents a summary table comparing the characteristics of our approach with
those of the mentioned related works.
" | Here is a paper. Please give your review comments after reading it. |
161 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Annotation of bioassay protocols using semantic web vocabulary is a way to make experiment descriptions machine-readable. Protocols are communicated using concise scientific English, which precludes most kinds of analysis by software algorithms. Given the availability of a sufficiently expressive ontology, some or all of the pertinent information can be captured by asserting a series of facts, expressed as semantic web triples (subject, predicate, object). With appropriate annotation, assays can be searched, clustered, tagged and evaluated in a multitude of ways, analogous to other segments of drug discovery informatics. The BioAssay Ontology (BAO) has been previously designed for this express purpose, and provides a layered hierarchy of meaningful terms which can be linked to.</ns0:p><ns0:p>Currently the biggest challenge is the issue of content creation: scientists cannot be expected to use the BAO effectively without having access to software tools that make it straightforward to use the vocabulary in a canonical way. We have sought to remove this barrier by: (1) defining a bioassay template data model; (2) creating a software tool for experts to create or modify templates to suit their needs; and (3) designing a common assay template (CAT) to leverage the most value from the BAO terms. The CAT was carefully assembled by biologists in order to find a balance between the maximum amount of information captured vs. low degrees of freedom in order to keep the user experience as simple as possible. The data format that we use for describing templates and corresponding annotations is the native format of the semantic web (RDF triples), and we demonstrate some of the ways that generated content can be meaningfully queried using the SPARQL language. We have made all of these materials available as open source (http://github.com/cdd/bioassay-template), in order to encourage community input and use within diverse projects, including but not limited to our own commercial electronic lab notebook products.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>One of the major problems currently being faced by biologists charged with the task of performing experimental assays on pharmaceutically interesting molecules is the information burden involved with handling collections of assay descriptions. Individual laboratories may carry out hundreds or even thousands of screening experiments each year. Each of these experiments involves a protocol, and any two experiments may be identical, similar, or completely different. The typical practice for describing bioassay protocols, for both external communication and internal record keeping, is to use concise scientific English, which is the most universally human readable method of communication, assuming the recipient is familiar with the relevant jargon.</ns0:p><ns0:p>Unfortunately this method is not scalable. Even given the availability of an expert, it is often quite difficult and time-consuming to read two assay description paragraphs and provide a metric for the degree to which two protocols differ. There are many workflow scenarios where comparison of protocols is necessary, e.g. searching through a collection of previous experiments, or making a judgment call as to whether two batches of small molecule measurements are comparable. Attempting to use software to assist with such tasks, when the substrate is unconstrained text, results in solutions that are crude at best. While these issues with scalability could be described as a relatively minor nuisance in a small laboratory, the field of drug discovery has lately been undergoing a renaissance of open data. <ns0:ref type='bibr' target='#b0'>1,</ns0:ref><ns0:ref type='bibr' target='#b1'>2,</ns0:ref><ns0:ref type='bibr' target='#b2'>3,</ns0:ref><ns0:ref type='bibr' target='#b3'>4</ns0:ref> Services such as PubChem provide a truly massive resource; <ns0:ref type='bibr' target='#b4'>5</ns0:ref> PubChem alone provides more than a million unique bioassay descriptions, and is growing rapidly. <ns0:ref type='bibr' target='#b6'>6,</ns0:ref><ns0:ref type='bibr' target='#b7'>7</ns0:ref> Such data are supplemented by carefully curated resources like ChEMBL, <ns0:ref type='bibr' target='#b8'>8</ns0:ref> which are much smaller but have strict quality control mechanisms in place. What these services have in common is that their bioassay protocols have very little machine-readable content. In many cases, information about the target, and the kind and units of the measurements, have been abstracted out and represented in a marked up format, but all of the remaining particulars of the protocol are ensconced within English grammar, if at all.</ns0:p><ns0:p>In order to address this problem, the BioAssay Ontology (BAO) was devised. <ns0:ref type='bibr' target='#b9'>9,</ns0:ref><ns0:ref type='bibr' target='#b10'>10,</ns0:ref><ns0:ref type='bibr' target='#b11'>11</ns0:ref> The BAO, which includes relevant components from other ontologies, is a semantic web vocabulary that contains thousands of terms for biological assay screening concepts, arranged in a series of layered class hierarchies. The BAO is extensive and detailed, and easily extensible. The vocabulary is sufficiently expressive to be used for describing biological assays in a systematic way, yet it has seen limited use. 
Influential projects such as PubChem, <ns0:ref type='bibr' target='#b13'>12</ns0:ref> ChEMBL, <ns0:ref type='bibr' target='#b14'>13</ns0:ref> BARD <ns0:ref type='bibr' target='#b15'>14</ns0:ref> and OpenPHACTS 15 make use of the ontology, but the level of description in each is shallow, using only a small fraction of the terms.</ns0:p><ns0:p>There are a number of factors holding back scientists from using the BAO and related ontologies to describe their assays in detail, with perhaps the most substantial being the lack of software that makes the annotation process fast and convenient. Because it is based on the semantic web, BAO concepts are expressed as triples, of the form [subject, predicate, object]. There are no hard rules about how this is applied, which is a characteristic of the semantic web, and is both an asset and a liability. The simplest way to consider annotating a particular feature of an assay, e.g. the biological process, is to compose a triple of a form such as [assay ID, biological process, viral genome replication]. Each of these 3 fields is a uniform resource identifier (URI), which points to a globally unique object with established meaning. In this case, assay ID would correspond to an identifier that the user has created for the assay description; biological process corresponds to a specific property in the BAO that is used to link assays and the biological process that is being affected; and viral genome replication refers to a class in the BAO, which identifies a specific instance of a biological process, which is in turn inherited from a sequence of increasingly general classes, and may also be linked to any other node within the greater semantic web, such as the extensive Gene Ontology (GO). <ns0:ref type='bibr' target='#b18'>16</ns0:ref></ns0:p><ns0:p>In principle, screening biologists can use the properties and classes from the BAO to annotate their assays intelligently in a machine readable format that is compatible with the universe of the semantic web. If large numbers of assays were sufficiently annotated, biologists and other drug discovery scientists could perform advanced searches and filtering that would enable better interpretation of results, enhanced building of machine-learning models, and uncovering of experimental artifacts. Despite the clear benefits of semantic annotation, the BAO remains largely unused, the primary reason being its lack of accessibility. The BAO and its linked dependencies are large, and can be expected to keep growing as they are extended to capture more biological concepts. For an interactive view onto these terms, the site http://bioportal.bioontology.org/ontologies/BAO should be used to peruse the hierarchy. 17 Figure <ns0:ref type='figure' target='#fig_5'>1</ns0:ref> shows two snapshots of part of the BAO hierarchy, using the BioPortal resource. The classes (Figure <ns0:ref type='figure' target='#fig_17'>1a</ns0:ref>) that make up the ontology contain the bulk of the terms and provide most of the expressive value, while the properties (Figure <ns0:ref type='figure' target='#fig_17'>1b</ns0:ref>) are used to provide context. The class hierarchy is in places many levels deep, and although it is arranged in a logical pattern, it is nonetheless necessary to be familiar with the entire layout in order to meaningfully annotate an assay protocol. 
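As a concrete sketch of the [assay ID, biological process, viral genome replication] triple described above, the following Python snippet uses rdflib to assert an annotation of this shape. The human-readable local names are placeholders for the example; a real annotation would use the numeric BAO term identifiers for these concepts.

```python
from rdflib import Graph, Namespace, URIRef

BAO = Namespace("http://www.bioassayontology.org/bao#")
g = Graph()
g.bind("bao", BAO)

# [assay ID, biological process, viral genome replication] as an RDF triple.
assay = URIRef("http://example.org/assay/1")  # identifier created by the user
g.add((assay, BAO["hasBiologicalProcess"], BAO["viralGenomeReplication"]))

print(g.serialize(format="turtle"))
```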
Even an expert biologist familiar with the entire ontology would be presented with multiple degrees of freedom for deciding how to annotate a protocol; this is a fundamental problem for machine readability, which requires uniform consistency.</ns0:p><ns0:p>In our previous work we addressed the end-user problem, and invented technology that applies to the scenario when a user is presented with plain English text, and is charged with the task of selecting the appropriate semantic annotations. Our solution involved a hybrid approach that combined natural language processing with machine learning based on training data, with an intuitive interface that helps the user select the correct annotations, leaving the final choice in the hands of the scientist. <ns0:ref type='bibr' target='#b19'>18</ns0:ref> During this process we found that the challenge that we were unable to fully overcome was the burden of creating new training data. The BAO vocabulary defines more than 2500 classes, in addition to properties and terms from other ontologies, all of which can be expected to grow as the BAO is increasingly used for more biological content.</ns0:p><ns0:p>Considering each term as it applies to a given assay requires a high level of expertise of the BAO itself. For example, the NIH's Molecular Libraries Program's bioassay database, known as the BARD, employed dedicated research staff to annotate more than two thousand assays. <ns0:ref type='bibr' target='#b20'>19</ns0:ref> The absence of clear and straightforward guidance as to which terms to use under what circumstances is preventing adoption of the BAO by drug discovery scientists. For our model building efforts, we made use of a training data set made up of 1066 PubChem bioassays that each had more than a hundred terms associated with them, <ns0:ref type='bibr' target='#b23'>20,</ns0:ref><ns0:ref type='bibr' target='#b24'>21</ns0:ref> although not all of the annotations were able to be matched to ontology terms. For purposes of creating additional training data, we experienced considerable difficulty finding what we considered to be canonical annotations for any given assay.</ns0:p><ns0:p>The BAO is essentially a vocabulary that is capable of describing many assay properties, but it lacks instructions on its use. This is an issue that we have undertaken to solve, and in this article we describe our approach to providing this critical missing component.</ns0:p><ns0:p>We describe a data model called the BioAssay Template (BAT), which consists of a small number of terms which are organized to describe how the BAO and linked ontologies should be used to describe a particular kind of bioassay. A template is essentially a gateway to the overall ontology, which divides the assay annotation process into a fixed hierarchy of assignments, each of which has a prescribed list of values, which are cherry-picked from the overall ontology.</ns0:p><ns0:p>The BAT vocabulary can be used to create any number of templates, which can be customized to suit the task at hand. As a starting point, we have created what we refer to as the common assay template (CAT). CAT is an annotation recipe that is intended to capture the major properties that most biologists need to describe their assays and that enables most drug discovery scientists to have a basic understanding of an assay and its results.</ns0:p><ns0:p>A condensed summary of this template is shown in Figure <ns0:ref type='figure' target='#fig_6'>2</ns0:ref>. 
Unlike the class hierarchy of the BAO, the tree structure of the CAT is flat. While the data model allows groups and subgroups, our current template errs on the side of simplicity, and includes just 16 assignments. A template can be customized as necessary, and once it is ready, it can be used to define the way in which assays are annotated. The data model is designed to enable software to compose a user interface: presenting each of the categories, and making use of the selected values as the options that are made available to the user. It is essentially a way to restrict and simplify the large scope of the BAO, reduce the degrees of freedom, and remove ambiguity. Having curated the assignments and values so that the lists consist of the minimum number of relevant possibilities, each of them decorated by a meaningful label and a more detailed description, it becomes possible to design a user experience that is suitable for a scientist who is an expert in the field, but does not necessarily know anything about semantic web concepts.</ns0:p><ns0:p>In order to explore this approach, we have created a software package called the BioAssay Schema Editor, which is open source and available via GitHub. It is written using Java 8, and runs on the major desktop platforms (Windows, Mac & Linux). The software implements the data model that we describe in this article.</ns0:p><ns0:p>Our priorities for this work are to: (1) establish a data model for bioassay templates; (2) create an intuitive software package for editing these templates and using them to annotate real data; and (3) collaboratively establish a CAT for general purpose use. We have put a considerable amount of effort into the user interface for editing templates, even though we expect only a small fraction of biologists will ever be directly involved in editing them. We have also invested significant effort towards developing a one-size-fits-most template, the CAT. Our goal with the CAT was to enable capture of ~80% of the most commonly used terms, and present them in a logical and concise way, so that a large proportion of users will be able to use it as-is to add a significant amount of value to their protocol data. In addition, the CAT can act as a starting point for modification if scientists would like to tailor the template.</ns0:p><ns0:p>Scientists working in research groups that routinely make use of terms that are not included in the CAT can elect to start with an existing template and add the missing assignments and values, and also delete whole groups of content that do not apply to their research. A research group may accumulate a collection of task-specific templates, allowing their scientists to pick the most appropriate one. By ensuring that the editor software is easy to use, runs on all platforms, and is open source, we hope to ensure that this option is quite practical for any research group with access to basic information technology expertise. We intend to encourage the community to make use of these resources, both as standalone tools and interoperating with the electronic lab notebook software that we are presently designing.</ns0:p><ns0:p>One of the implicit advantages of using semantic web technology as the underlying data format (triples), and a well established set of reference terms (the BAO and various linked ontologies), is that even if two scientists are annotating assays with different templates, it is highly likely that many or most of the terms will overlap, even if the templates were created from scratch. 
Since the final deliverable for an annotated assay is expressed in the native format of the semantic web, the output can be subjected to the entire universe of software designed to work with RDF triple stores. <ns0:ref type='bibr' target='#b25'>22</ns0:ref> As more assays are annotated, the scope and power of queries and informatics approaches for enhancing drug discovery projects are similarly increased. With a large corpus of annotated assays available, scientists will be able to make better use of prior work for understanding structure-activity relationships, uncovering experimental artifacts, building machine-learning models, and reducing duplicated efforts.</ns0:p></ns0:div>
<ns0:div><ns0:head>Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Data Model</ns0:head><ns0:p>The semantic description of templates and annotations uses a small number of additional URIs, each of which has the root stem http://bioassayontology.org/bat, and is denoted using the Turtle-style 23 abbreviated prefix 'bat'.</ns0:p><ns0:p>The hierarchical model for describing a template is shown in Figure <ns0:ref type='figure' target='#fig_7'>3</ns0:ref>. Parent:child relationships denoted by an arrow indicate one-to-many relationships, while the properties listed in the boxes underneath the nodes are one-to-one relationships. A template definition begins with the root, which is distinguished by being of type bat:BioAssayTemplate. The root is also of type bat:Group, and has some number of child nodes, which are themselves either assignments or subgroups.</ns0:p><ns0:p>An assignment node has several scalar properties, including label and description, and it also refers to a property resource. These are typically mapped to URI resources found within the BAO (e.g. http://www.bioassayontology.org/bao#BAO_0000205, label: 'has assay format'). Each assignment has some number of values associated with it, and these make up the list of available options. Each value is primarily identified by the resource that it maps to, which is typically found in the BAO (e.g. http://www.bioassayontology.org/bao#BAO_0000219, label: 'cell based format'). Besides the label and description, which are customizable within the template data model, the reference URI has its own implied class hierarchy (e.g. 'cell based format' is a subclass of 'assay format'), which is not encoded in the template data model, but is inferred once it is paired with the BAO and its linked ontologies.</ns0:p><ns0:p>The schema for annotation of assays is shown in Figure <ns0:ref type='figure' target='#fig_8'>4</ns0:ref>. The assay is given a distinct URI, and is associated with several properties such as label and description. The template is recorded, as is an optional reference to the origin of the assay (which may be a semantic web resource, or a DOI link to a journal article). The free-text description of the assay can also be recorded using the hasParagraph predicate.</ns0:p><ns0:p>The assay is associated with some number of annotations, which are primarily linked to assignments within the corresponding template. For annotations that assert a URI link, the hasValue predicate typically corresponds to one of the available values that was prescribed for the assignment in the template definition, and generally refers to a term defined in the BAO, though custom references can be used -or the annotation may be specified using the hasLiteral predicate instead, which means that the user has entered data in a different form, typically text or a numeric value. The hasProperty predicate is generally copied from the corresponding assignment.</ns0:p><ns0:p>When annotating an assay, each assignment may be used any number of times, i.e. zero instances means that it has been left blank, while asserting two or more triples means that all of the values apply. The relationship between assays and annotations has no nesting: the intrinsic group/sub-group structure of any particular annotation can be inferred from the template, since the usesTemplate and isAssignment predicates refer to the origins in the template.</ns0:p></ns0:div>
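To make the data model concrete, the following Python sketch uses rdflib to build a tiny template and one annotated assay following Figures 3 and 4. The predicates hasProperty, hasValue, usesTemplate, isAssignment and hasParagraph are named in the text; the predicates linking the root to its assignments and the assay to its annotations (hasAssignment, hasAnnotation) are assumptions made for this example, as is the '#' suffix on the bat root stem.

```python
from rdflib import RDF, Graph, Literal, Namespace, URIRef

BAT = Namespace("http://bioassayontology.org/bat#")
BAO = Namespace("http://www.bioassayontology.org/bao#")
g = Graph()
g.bind("bat", BAT)
g.bind("bao", BAO)

# Template: a root group with a single 'assay format' assignment (Figure 3).
template = URIRef("http://example.org/template/cat")
assign = URIRef("http://example.org/template/cat/assayFormat")
g.add((template, RDF.type, BAT.BioAssayTemplate))
g.add((template, RDF.type, BAT.Group))
g.add((template, BAT.hasAssignment, assign))       # assumed linking predicate
g.add((assign, BAT.hasProperty, BAO.BAO_0000205))  # 'has assay format'
g.add((assign, BAT.hasValue, BAO.BAO_0000219))     # option: 'cell based format'

# Annotated assay referring back to the template (Figure 4).
assay = URIRef("http://example.org/assay/1")
annot = URIRef("http://example.org/assay/1/annotation/1")
g.add((assay, BAT.usesTemplate, template))
g.add((assay, BAT.hasParagraph, Literal("Cell-based reporter assay ...")))
g.add((assay, BAT.hasAnnotation, annot))           # assumed linking predicate
g.add((annot, BAT.isAssignment, assign))
g.add((annot, BAT.hasProperty, BAO.BAO_0000205))
g.add((annot, BAT.hasValue, BAO.BAO_0000219))

print(g.serialize(format="turtle"))
```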
<ns0:div><ns0:head>Software</ns0:head><ns0:p>The BioAssay Schema Editor is available from GitHub (https://github.com/cdd/bioassaytemplate) and may be used under the terms of the GNU Public License 2.0. <ns0:ref type='bibr' target='#b27'>24</ns0:ref> The code is written using Java 8, and the user interface is based on JavaFX. Semantic web functionality is implemented by incorporating the Apache Jena library. <ns0:ref type='bibr' target='#b28'>25</ns0:ref> The project includes a snapshot of the BioAssay Ontology <ns0:ref type='bibr' target='#b29'>26</ns0:ref> and some of the linked ontologies, as well as the latest version of the common assay template schema. It should be assumed that the project will continue to evolve well after the publication date of this article.</ns0:p><ns0:p>The application operates on a datafile referred to as a schema, which is represented as a collection of triples (in Turtle format, with the extension .ttl). A schema is expected to include a single template, for which the root node is of type bat:BioAssayTemplate, and may optionally contain any number of assays that have been (or will be) annotated using that same template. Triples are used as the serialization format so that the editable files can be used as-is by a triple store, and become a part of the semantic web with no further modification.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_9'>5</ns0:ref> shows the main window for the application, which has loaded a contemporary version of the common assay template (CAT), and has several accompanying assays awaiting annotation. The components that make up the template are shown as a hierarchy on the left hand side of the panel. Selecting any of the groups or assignments causes the detail view on the right to be filled in with the corresponding content.</ns0:p><ns0:p>Adding, deleting, renaming etc. of groups, assignments and values is fairly mundane, and follows standard desktop user interface design patterns. Selecting URI values for properties and values requires a more specific interface, and is accomplished by summarizing the BAO vocabulary, which is loaded into the application at the beginning. Resources can be selected using a dialog box that can present the list of options in a flat list, with an optional search box for restricting the list (Figure <ns0:ref type='figure' target='#fig_23'>6a</ns0:ref>) or by using the hierarchy view that shows the position in the BAO ontology (Figure <ns0:ref type='figure' target='#fig_23'>6b</ns0:ref>). The dialog box can also be used to add multiple values at once, which is particularly convenient when a branch of the BAO encompasses multiple terms that are all valid options. When a resource is selected, its label and description are imported from the BAO into the template: these values can be edited after the fact, but by default they are the same as in the underlying vocabulary.</ns0:p><ns0:p>The primary role of the schema editor is to provide a convenient way to edit templates, but in support of this goal, it also provides an interface to use the template to annotate assays. The interface can be used for generating training data (e.g. for model generation), but it is mainly intended as a way to 'test drive' the current template. Because the annotation process is directly derived from the template, having the two editing processes side by side is advantageous when the template is being designed. 
For example, the operator can begin annotating an assay, and if a value is missing from one of the assignments, or a new kind of assignment turns out to be necessary, this can be added to the template within the same editing session. Figure <ns0:ref type='figure' target='#fig_25'>7a</ns0:ref> shows an example of an assay that has been annotated. The detail view has a placeholder for description text, which is particularly useful when the content has been imported from some external source, and the annotations are being made by converting the protocol text into semantic annotations. Clicking on any of the annotation buttons brings up a panel of options (Figure <ns0:ref type='figure' target='#fig_25'>7b</ns0:ref>) that represent the prescribed values for the assignment. Each of the assignments can be left blank, annotated once, or given multiple values. The ideal use case is when the value (or values) occurs within the list of prescribed values, but since the data model allows any URI, the user interface also allows the user to insert a custom URI. In cases where no URI is listed in the template (e.g. a concept that does not have an established URI), it is possible to add plain text for any of the assignment annotations. While this has no meaning from a machine-learning point of view, it can serve as a convenient placeholder for terms that will be invented in the future.</ns0:p></ns0:div>
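<ns0:p>The result of such an editing session reduces to a handful of triples. A sketch of one annotated assay is shown below; the assay URI is invented, and bat:hasAnnotation is an assumed name for the assay-to-annotation link (the data model above fixes usesTemplate, isAssignment, hasProperty, hasValue, hasLiteral and hasParagraph, but not that predicate). The same assignment appears twice, illustrating both a URI-based value and the plain-text fallback.</ns0:p>
@prefix bat: <http://www.bioassayontology.org/bat#> .
@prefix bao: <http://www.bioassayontology.org/bao#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

<http://www.bioassayontology.org/bas#ExampleAssay>          # invented URI
    rdfs:label "example assay" ;
    bat:usesTemplate <http://www.bioassayontology.org/bas#ExampleTemplate> ;
    bat:hasParagraph "Original free-text protocol description." ;
    bat:hasAnnotation [                                      # predicate name assumed
        bat:isAssignment <http://www.bioassayontology.org/bas#ExampleAssayFormat> ;
        bat:hasProperty bao:BAO_0000205 ;
        bat:hasValue bao:BAO_0000219                         # URI-based value
    ] , [
        bat:isAssignment <http://www.bioassayontology.org/bas#ExampleAssayFormat> ;
        bat:hasProperty bao:BAO_0000205 ;
        bat:hasLiteral "plain-text placeholder value"        # literal fallback
    ] .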
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Templates</ns0:head><ns0:p>We set out to create a common assay template (CAT) that includes the basic details essential to defining any bioassay: assay type, format, target and biology, results and pharmacology, and other details. The CAT was developed with the opposing goals of identifying assignments that (1) would be limited in number, in order not to be overly burdensome, vs. (2) would comprehensively cover the majority of the information contained in written descriptions of bioassays. We also considered the type of information that would be utilized by an end user attempting to search, filter, and aggregate assays by their bioassay annotations. For example, details such as the assay footprint (plate type), assay kit, and detection instrument were included because they may be useful terms for identifying experimental artifacts. Biological process and other target-related information were included to enable aggregating results across similar drug discovery projects for model-building and other applications. Finally, we limited assignments to those where the BAO offered sufficient options for possible values. Since the goal of the project is to generate machine-readable assay annotations, we avoided assignments where BAO terms were not available, such as those characterizing in vivo assays, and especially assignments whose values would be very specific for each assay, such as negative and positive controls. These areas will be addressed in the future once the underlying vocabulary (BAO or otherwise) is sufficiently developed to expand the domain. Similarly, the CAT falls short of capturing detailed protocol steps. In its present incarnation, it cannot be considered a complete replacement for the text that is typically used to describe an assay, though we do intend to pursue this level of detail in future work. For the present, we are primarily concerned with utilizing the rich vocabulary within the BAO to achieve maximum impact with minimum additional burden on the end user workflow.</ns0:p><ns0:p>To develop the CAT, we used the following process: first, biologists independently considered each of the terms available in the BAO and prioritized assignments for the CAT. Each assignment was associated with a number of possible values based on the BAO hierarchy. Then, quantitative and qualitative approaches were used to determine if the prioritized assignments included in the CAT were sufficient to fully describe most assays. For the quantitative approach, we assessed the set of 1066 PubChem bioassays <ns0:ref type='bibr' target='#b30'>27</ns0:ref> that were previously annotated by hand by BAO experts. <ns0:ref type='bibr' target='#b31'>28</ns0:ref> In that exercise, the BAO experts aimed to fully annotate each assay, capturing all applicable information for more than a hundred different categories or terms. If there was not an applicable value, the assignment or category was left blank. We analyzed the use of the BAO terms to assess the utility and comprehensiveness of the assignments included in the CAT compared to the remaining terms. We found that the 16 CAT assignments were annotated in 81% of the 1066 PubChem assays, compared to 33% for the remaining terms. We also found that 95% of the values for CAT assignments were BAO terms rather than literal or non-URI based terms, compared to 63% in the remaining categories. 
These results suggested that the CAT includes assignments that are both relevant to the majority of assays as represented in PubChem and well covered by the BAO.</ns0:p><ns0:p>For an in-depth qualitative assessment of the CAT, biologists annotated a wide variety of assays, encompassing different assay types (e.g., cell viability, enzyme activity, binding, and ADMET), assay formats (e.g., cell-based, biochemical, microsome, organism, tissue, etc.), and assay design methods (e.g., ATP quantitation, cell number, immunoassays, gene expression, radioligand binding, etc.), as summarized in Table <ns0:ref type='table' target='#tab_3'>1</ns0:ref>. We found that in many cases, both from assay descriptions available from PubChem and from in-house screening assay descriptions, the CAT captured much of the relevant information. For example, annotating an assay for cell viability (PubChem ID 427) shows that all but two of the 16 CAT assignments are readily annotated from the short descriptive information provided (Figure <ns0:ref type='figure' target='#fig_12'>8</ns0:ref>). 'Target' is left blank, as it is not applicable (this assay aims solely to identify cytotoxic compounds); 'Detection Instrument' was not noted. Similarly, as shown in Figure <ns0:ref type='figure'>9</ns0:ref>, all applicable CAT assignments (15 of the 16) are annotated from the description of a competitive binding assay (PubChem ID 440). Figure <ns0:ref type='figure'>9</ns0:ref> also illustrates that multiple values can be annotated for a single assignment, enabling content from complex assays to be captured. Together, these two examples highlight that both cell-based and biochemical assays can be extremely well-suited to annotation using the CAT.</ns0:p><ns0:p>However, there were some cases where the CAT was less effective in capturing important information. For example, 14 of the 16 CAT assignments could be annotated for PubChem ID 488847, some with multiple values; however, the 'big picture' view of this rather complex primary assay is not as readily apparent from its 'CAT profile' as from a single sentence in the description (Figure <ns0:ref type='figure' target='#fig_14'>10</ns0:ref>). In addition, this PubChem record had extensive technical details such as reagent components, liquid handling volumes and instruments, times of incubation and plate processing steps, which could be important for identifying matching assays or interpreting the results. Another example of a poor fit for the CAT, as noted earlier, is in vivo assays. These are largely beyond the scope of this effort, which is currently constrained to terms defined by the BAO: key parameters such as route of administration, dose, dose units, and type of model (e.g. xenograft, disease) are not well represented. These and other limitations will be addressed in the future by adding or extending the underlying ontologies.</ns0:p><ns0:p>Finally, as noted earlier, we designed the CAT to be a 'one-size-fits-most' template. A summary of assignments for the complete set of assays annotated in the course of developing the CAT shows we have achieved this (Table <ns0:ref type='table' target='#tab_3'>1</ns0:ref>). One consequence of this 'one-size-fits-most' strategy is that certain attributes (such as those highlighted in green or red in Figures <ns0:ref type='figure' target='#fig_26'>8 and 9</ns0:ref>) have been omitted. 
Depending on one's perspective, these types of data (such as positive and negative controls, data processing/normalization steps, relevant disease indication, and specific protocol details such as pre-incubation of compounds with the target, or the time or temperature of an assay) could be viewed as essential. We decided to exclude this type of information from the CAT because it appears only irregularly in bioassay descriptions, is not well covered by the BAO, or is incompatible with the current data model. Expanding into this area is an opportunity for future development, and it should be noted that the CAT may be used as a starting point for templates that provide a set of assignment options customized for subcategories of assays, or even specific projects. We believe the next immediate step should be to apply our CAT to a large (>10,000) set of assays, both to facilitate new meta-analyses and to identify potential gaps in annotation revealed by such studies.</ns0:p></ns0:div>
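<ns0:p>Annotation gaps of the kind tallied in Table 1 can be surfaced mechanically once a corpus is in a triple store (see the Analysis section below). A sketch of such a query counts how often each assignment of the common assay template is actually used; assignments with a count of zero would be candidates for removal or for vocabulary expansion:</ns0:p>
PREFIX bat: <http://www.bioassayontology.org/bat#>
SELECT ?assn (COUNT(?annot) AS ?numUses)
{
  <http://www.bioassayontology.org/bas#CommonAssayTemplate> bat:hasAssignment ?assn .
  OPTIONAL { ?annot bat:isAssignment ?assn . }
}
GROUP BY ?assn
ORDER BY ?numUses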
<ns0:div><ns0:head>PubChem</ns0:head><ns0:p>Possibly the most voluminous source of openly accessible bioassay data can be found on PubChem, which hosts more than 1.1 million assay records at the time of publication, and is growing rapidly. These are individually associated with the chemical structures of the compounds for which the measurements were made. Each of the assays is decorated with several descriptive fields that are essentially plain text, and which are populated by contributors during the upload process, or in some cases by an import script transferring data from other sources. While many of the entries contain a significant amount of detail, the phrasing style and level of detail vary considerably, often erring on the side of too little or too much information about the assay protocol.</ns0:p><ns0:p>Nonetheless, the PubChem assay collection represents one of the best and most convenient sources of data for annotation purposes, and for this reason we have added a feature to the BioAssay Template editor that explicitly searches for PubChem records, as shown in Figure <ns0:ref type='figure' target='#fig_15'>11</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:p>The dialog box allows the user to type in a PubChem Assay ID number, or to hit the button labelled Random, which picks an arbitrary assay from the entire collection, and fills in the corresponding text and URI of origin. While a large proportion of assays loaded into PubChem contain only sparse tags about the data source, or the abstract of the corresponding publication, there are a significant number of records that contain lengthy descriptions of the assay. The dialog box provides an opportunity for the user to tidy up the text (e.g. removing irrelevant content) prior to importing it into the schema. The content is then added to the list of assays being annotated within the schema model, whereby the origin is recorded as a link to the assay, and the text is associated using the hasParagraph predicate. Once the text is augmented with annotations using the current template, it becomes a useful entry for training data. This is one of our main strategies for generating a corpus of data for machine-learning purposes, which will ultimately find its way into a user-friendly ELN for bioassay annotation.</ns0:p></ns0:div>
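<ns0:p>After import, and before any annotation has been added, a PubChem-derived entry amounts to just a few triples. A sketch is shown below; the local assay URI is invented, and bat:hasOrigin is an assumed name for the predicate that records the origin link:</ns0:p>
@prefix bat: <http://www.bioassayontology.org/bat#> .

<http://www.bioassayontology.org/bas#ImportedAssay>     # invented local URI
    bat:usesTemplate <http://www.bioassayontology.org/bas#CommonAssayTemplate> ;
    bat:hasOrigin <https://pubchem.ncbi.nlm.nih.gov/bioassay/440> ;    # predicate name assumed
    bat:hasParagraph "Tidied-up description text imported from PubChem." .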
<ns0:div><ns0:head>Analysis</ns0:head><ns0:p>Because the data model we describe is based on semantic web triples, and the file format used by the BioAssay Schema Editor is made up of triples (in Turtle format), any templates and assay annotations can be loaded directly into a triple store database and queried using SPARQL. Content can be hosted on private servers for local use, or it can be exposed to the greater web of connected data. The supplementary information (Section 1) describes a configuration script for the open source Apache Jena Fuseki server, which can be used to load the BioAssay Ontology, its related ontologies, and some number of files saved with the BioAssay Schema Editor, which can then be served up as read-only content.</ns0:p><ns0:p>Once the content is available via a SPARQL endpoint, there are a number of boilerplate queries that can be used to extract summary and specific information. Fetching a list of all bioassay templates can be accomplished using the following query:</ns0:p>
PREFIX bat: <http://www.bioassayontology.org/bat#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?template ?label ?descr
WHERE
{
  ?template a bat:BioAssayTemplate ;
    rdfs:label ?label .
  OPTIONAL {?template bat:hasDescription ?descr .}
}
<ns0:p>The above query identifies any resource that is tagged as having the BioAssayTemplate type. Obtaining information about the assignments that are associated with a template can be done by looking for resources of type Group that are associated with it. Obtaining a summary list of assignments that are attached to the top level (i.e. not within a subgroup) can be accomplished with a query similar to the following (using the same prefixes as above), which explicitly references the common assay template:</ns0:p>
SELECT ?assn ?label ?descr ?property ?numValues
{
  <http://www.bioassayontology.org/bas#CommonAssayTemplate>
    bat:hasAssignment ?assn .
  ?assn a bat:Assignment ;
    rdfs:label ?label ;
    bat:hasProperty ?property .
  OPTIONAL {?assn bat:hasDescription ?descr .}
  {
    SELECT ?assn (COUNT(?value) AS ?numValues) WHERE
    {
      ?assn bat:hasValue ?value .
    }
    GROUP BY ?assn
  }
}
ORDER BY ?label
<ns0:p>Similarly, assignments with one level of nesting can be obtained with a slightly longer query, which explicitly inserts a subgroup in between the template and the assignment:</ns0:p>
SELECT ?group ?glabel ?assn ?label ?descr ?property ?numValues
{
  <http://www.bioassayontology.org/bas#CommonAssayTemplate>
    bat:hasGroup ?group .
  ?group a bat:Group ;
    rdfs:label ?glabel ;
    bat:hasAssignment ?assn .
  ?assn a bat:Assignment ;
    rdfs:label ?label ;
    bat:hasProperty ?property .
  OPTIONAL {?assn bat:hasDescription ?descr .}
  {
    SELECT ?assn (COUNT(?value) AS ?numValues) WHERE
    {
      ?assn bat:hasValue ?value .
    }
    GROUP BY ?assn
  }
}
ORDER BY ?glabel ?label
<ns0:p>To query for information about the prescribed values for an assignment (in this case the bioassay assignment from the common assay template), the following query can be used:</ns0:p>
SELECT ?property ?value ?label
{
  <http://www.bioassayontology.org/bas#Bioassay>
    bat:hasProperty ?property ;
    bat:hasValue ?value .
  OPTIONAL {?value rdfs:label ?label .}
}
<ns0:p>The query specifically pulls out the property field, which is typically a link into the BAO property terms, and the value field, which is typically a link into the BAO classes. Pursuing either of these resources provides a wealth of implicit information, partly from the hierarchical nature of the BAO terms, and the unlimited opportunities for these terms to be linked to other semantic resources.</ns0:p><ns0:p>To obtain a list of assays that have been annotated using one of the templates, a query of the following form can be used:</ns0:p>
SELECT ?assay ?assn ?property ?value
{
  ?assay bat:usesTemplate <http://www.bioassayontology.org/bas#CommonAssayTemplate> ;
    bat:hasAnnotation ?annot .
  ?annot bat:isAssignment ?assn ;
    bat:hasProperty ?property ;
    bat:hasValue ?value .
}
<ns0:p>Because annotations are directly attached to an assay description, hierarchical information about the nature of the assignment can be obtained by further investigating the template definition of the assignment (?assn) or either of the linked BAO terms (?property and ?value).</ns0:p></ns0:div>
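<ns0:p>Because the BAO is loaded into the same store, its class hierarchy can be traversed with a SPARQL 1.1 property path. For instance, the following sketch finds every assay annotated with 'cell based format' (BAO_0000219) or any subclass thereof; as before, bat:hasAnnotation is an assumed predicate name:</ns0:p>
PREFIX bat: <http://www.bioassayontology.org/bat#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?assay ?value
{
  ?assay bat:hasAnnotation ?annot .
  ?annot bat:hasValue ?value .
  ?value rdfs:subClassOf* <http://www.bioassayontology.org/bao#BAO_0000219> .
}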
<ns0:div><ns0:head>Conclusion</ns0:head><ns0:p>We have developed a data model and interactive tool that can be used to narrow the degrees of freedom from the BioAssay Ontology (BAO) and its linked dependencies. This has been done in order to facilitate content creation activities, so that semantic annotation of assay protocols can be carried out by a domain expert with no corresponding expertise in the underlying ontology. We have provided a proof of concept tool that creates a user interface based on the template data model, and made this available to the community as open source.</ns0:p><ns0:p>The data model that we have created follows a simplistic pattern, where elementary facts can be asserted. By leveraging the implied value of the underlying ontology, a small collection of a dozen or so such annotations provides a significant amount of machine-readable context about the assay. While insufficient to completely define an assay protocol experiment, this stands in contrast to the standard practice of providing essentially zero machine-readable information (i.e. plain English text with quasi-standardized jargon).</ns0:p><ns0:p>We have made available the common assay template (CAT), which was designed by biologists with the objective of leveraging the BAO to provide the largest amount of useful, relevant, machine-readable information with the fewest number of additional data points needing to be captured by the originating scientist. The CAT is expected to be useful for a wide variety of sorting, filtering, and data aggregating tasks that drug discovery scientists need to be able to carry out on a large scale, but currently cannot due to the absence of machine-readable annotations.</ns0:p><ns0:p>The CAT prioritizes 16 assignments that biologists consider most central to describing their assays and reporting assay results. Annotations for these assignments will enable biologists to ask complex queries. For example, one could ask if there are systematic differences in cell-based versus biochemical-based assays for a certain target class, such as kinases. One could determine if a certain assay set-up, such as 96-well plates using a spectrophotometer, was likely to have a higher hit rate. Similarly, one could identify if a certain compound or class of compounds is active in multiple assays, and if those assays assess similar biological processes or if the activity is likely to be an artifact.</ns0:p><ns0:p>By focusing on 16 assignments out of more than a hundred options available in the BAO, the CAT is meant to impose a minimal burden on annotating scientists. Our goal is to make annotating assays simple and easy so that the practice may be generally adopted. Templates are malleable, and scientists can easily include other assignments.</ns0:p><ns0:p>One critical type of information that is not included in the current framework is protocol steps, which would be essential for directly comparing two assays. In the future, it would be useful if this information were machine-readable. However, semantic technology using a simplistic data model like the BAT cannot capture sequences of information. Capturing procedural or protocol steps would require the development of a more complex data model. 
Under the current system, we imagine that queries using annotations from the CAT will allow scientists to hone in on similar assays, but for the moment, experts will still need to read the full assay descriptions to make decisions about combining different assays' data sets.</ns0:p><ns0:p>We have carried out this work in the context of a much larger scope, which is to provide scientists with tools to easily annotate bioassays and other related experiments in a way that is complete and machine-readable. Given that the standard industry practice does not involve adding any machine-readable data to assay protocols, and that there are currently no widely available tools to do so with a user experience that is sufficiently painless for mass adoption, we have taken an incremental approach. This additional work has been done so that we can continue with our previous work that was focused on using machine learning techniques to accelerate manual assignment of assays. <ns0:ref type='bibr' target='#b17'>15</ns0:ref> Our immediate follow-up goals are to make use of the CAT to gather a large corpus of training data, both from active users of CDD Vault, and from existing repositories such as PubChem. This training data will be used to ensure that our enterprise ELN tools will be supported by machine learning technology as soon as they are unveiled.</ns0:p><ns0:p>We are also pursuing options for extending the BioAssay Template (BAT) data model so that it is capable of capturing more sophisticated information about assays, e.g. linking to other ontologies to cover more types of assays; adding terminology for capturing quantities; addition of indefinite numbers of preparation steps; dependent assignment types, etc. One critical step when we enable connecting with other ontologies will be the ability to link the 'Target' to a unique identifier such as a gene ID or UniProt ID. Each unique target identifier can be associated with a rich array of corresponding GO terms, of which a subset are mapped into the default selection of BAO classes. This will enable comparison of assays based on specific targets and related biological processes or molecular functions. While our first objective is horizontal scaling, i.e. ensuring that all assay protocols have semantic annotations that make a large portion of the content machine-readable, pursuing vertical scaling is also of great interest, i.e. making it possible for the semantic annotations to replace the need for English text. <ns0:ref type='bibr' target='#b32'>29</ns0:ref> This brings about some exciting possibilities beyond just improvement of searching and matching, such as uploading protocols to robotic assay machinery, or making the publication process multi-lingual, thus alleviating a considerable burden for non-native English speakers. Pursuing this goal will require significant additions to the BAO itself, as well as making increased use of borrowed terms from other ontologies.</ns0:p><ns0:p>The technology that we have described in this article has been created for the purpose of improving the electronic lab notebook (ELN) technology that is offered by Collaborative Drug Discovery, Inc. (CDD), and we have begun work on a web-based interface for using templates such as the CAT for annotating assay protocols. <ns0:ref type='bibr' target='#b33'>30</ns0:ref> We have disclosed all of the underlying methods, data and open source code because we welcome participation by anyone and everyone. While CDD is a privately held for-profit company, it is our firm belief that improvement to this particular aspect of scientific research is a positive-sum game, and we have more to gain by sharing than by keeping our technology entirely proprietary.</ns0:p></ns0:div>
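<ns0:p>The comparative questions raised in the conclusion (for example, cell-based versus biochemical assays for a given target class) also map onto short queries. A sketch of one such query is given below; the kinase class URI is a placeholder rather than a real BAO term, and bat:hasAnnotation is again an assumed predicate name:</ns0:p>
PREFIX bat: <http://www.bioassayontology.org/bat#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?format (COUNT(DISTINCT ?assay) AS ?numAssays)
{
  ?assay bat:hasAnnotation ?fmt, ?tgt .
  ?fmt bat:hasProperty <http://www.bioassayontology.org/bao#BAO_0000205> ;   # 'has assay format'
       bat:hasValue ?format .
  ?tgt bat:hasValue ?target .
  ?target rdfs:subClassOf* <http://www.bioassayontology.org/bao#KinaseTargetClass> .  # placeholder URI
}
GROUP BY ?format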
<ns0:div><ns0:head>Supporting Materials</ns0:head><ns0:p>The BioAssay Schema Editor is publicly available from GitHub (https://github.com/cdd/bioassaytemplate). The source code for the application is available under the terms of the GNU Public License (GPL) v2, which requires that derived works must also be similarly open. The underlying semantic data model for the template and assay annotation, as well as the common assay template (CAT), are public domain: they are not copyrighted, and no restrictions are placed on their use. The BioAssay Ontology (BAO) is available from the corresponding site (http://bioassayontology.org/bioassayontology) under the Creative Commons Attribution License v3.</ns0:p></ns0:div>
<ns0:div><ns0:head>Tables</ns0:head><ns0:note type='other'>Figure Captions</ns0:note></ns0:div>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1: A selection of the BioAssay Ontology hierarchy, visualized using BioPortal (http://bioportal.bioontology.org): (a) classes and (b) properties.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2: An overview of the common assay template (CAT) at the time of publication.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3: BioAssay Template data model, which is used to describe a template.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: Data model for annotated assays, which is used to apply a template to a specific assay.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5: A snapshot of the BioAssay Schema Editor. On the left hand side the current template is shown at the top (with its hierarchy of groups and assignments), and any assays currently in progress shown underneath. The panel on the right shows the details for an assignment -assay format -and the prescribed values that are associated with it.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Example of PubChem Assay text ideally suited for annotation with the CAT. Left: Text from description in PubChem Assay ID 427: yellow = information captured in CAT, green = information not captured but possible for a future version (e.g., controls, data processing), red = information beyond the scope of BAO (technical details). Right: CAT assignments in BioAssay Schema Editor.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Example of PubChem Assay text ideally suited for annotation with the CAT. Left: Text from description in PubChem Assay ID 440: yellow = information captured in CAT, pink = information added as 'literal' values (i.e., too specific to exist as a BAO entry, but deemed valuable), green = information not captured but possible for a future version (e.g., controls, data processing), red = information beyond the scope of BAO (technical details). Right: CAT assignments in BioAssay Schema Editor. Annotations added as 'literal' values are highlighted yellow and contained in single quotes. Note that multiple values for a single CAT assignment can be annotated (target biological process, assay mode of action, assay screening campaign stage, perturbagen type).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. Example of an assay partially suited for annotation with the CAT. Left: Text from description in PubChem Assay ID 488847: yellow = information captured in CAT, pink = information added as 'literal' values (i.e., too specific to exist as a BAO entry, but deemed valuable), green = information not captured but possible for a future version (e.g., controls, labels of target and ligand, assay quality data (Z')), red = information beyond the scope of BAO (technical details). Right: CAT values assigned in the BioAssay Schema Editor capture key parameters of the assay yet do not capture the complexity of the assay articulated in the single sentence (arrow): 'a flow cytometry protein interaction assay to screen for compounds that compete with RNA binding to GRK2'.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure 11 :</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11: Dialog box for random lookup of assays from PubChem.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_23'><ns0:head>Figure 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6: A snapshot of the two main tabs used for locating a value in the BioAssay Ontology. The left hand side (a) shows the list view, which is flat, while the right hand side (b) shows the values in context of the actual hierarchy of the underlying ontology.</ns0:figDesc><ns0:graphic coords='27,42.52,255.37,525.00,246.75' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_25'><ns0:head>Figure 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7: A snapshot of the annotation interface that is available within the template editor (a). The current template can be applied to specific assays within the same overall user interface, which is a convenient way to evaluate its suitability. Selecting any of the assignments brings up a dialog box presenting all of the prescribed values (b).</ns0:figDesc><ns0:graphic coords='28,42.52,280.87,525.00,246.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 1. Representation of Common Assay Template in Sample Assay Set</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>CAT Assignment</ns0:cell><ns0:cell>Test Assays (of 43) With at Least 1 Value</ns0:cell><ns0:cell># of Unique Values Annotated</ns0:cell></ns0:row><ns0:row><ns0:cell>bioassay type</ns0:cell><ns0:cell>43 (100%)</ns0:cell><ns0:cell>24 of 88</ns0:cell></ns0:row><ns0:row><ns0:cell>assay format</ns0:cell><ns0:cell>43 (100%)</ns0:cell><ns0:cell>6 of 19</ns0:cell></ns0:row><ns0:row><ns0:cell>assay design method</ns0:cell><ns0:cell>43 (100%)</ns0:cell><ns0:cell>20 of 76</ns0:cell></ns0:row><ns0:row><ns0:cell>assay cell line</ns0:cell><ns0:cell>24 (55.8%)</ns0:cell><ns0:cell>15 of 95</ns0:cell></ns0:row><ns0:row><ns0:cell>organism</ns0:cell><ns0:cell>41 (95.3%)</ns0:cell><ns0:cell>11 of 65</ns0:cell></ns0:row><ns0:row><ns0:cell>biological process</ns0:cell><ns0:cell>40 (93.0%)</ns0:cell><ns0:cell>28 of 54</ns0:cell></ns0:row><ns0:row><ns0:cell>target</ns0:cell><ns0:cell>32 (74.4%)</ns0:cell><ns0:cell>13 of 38</ns0:cell></ns0:row><ns0:row><ns0:cell>assay mode of action</ns0:cell><ns0:cell>43 (100%)</ns0:cell><ns0:cell>8 of 13</ns0:cell></ns0:row><ns0:row><ns0:cell>result</ns0:cell><ns0:cell>41 (100%)</ns0:cell><ns0:cell>16 of 94</ns0:cell></ns0:row><ns0:row><ns0:cell>result unit of measurement</ns0:cell><ns0:cell>32 (74.4%)</ns0:cell><ns0:cell>6 of 56</ns0:cell></ns0:row><ns0:row><ns0:cell>assay screening campaign stage</ns0:cell><ns0:cell>40 (93.0%)</ns0:cell><ns0:cell>8 of 23</ns0:cell></ns0:row><ns0:row><ns0:cell>assay footprint</ns0:cell><ns0:cell>36 (83.7%)</ns0:cell><ns0:cell>5 of 20</ns0:cell></ns0:row><ns0:row><ns0:cell>assay kit</ns0:cell><ns0:cell>9 (20.9%)</ns0:cell><ns0:cell>5 of 93</ns0:cell></ns0:row><ns0:row><ns0:cell>physical detection method</ns0:cell><ns0:cell>42 (97.7%)</ns0:cell><ns0:cell>11 of 51</ns0:cell></ns0:row><ns0:row><ns0:cell>detection instrument</ns0:cell><ns0:cell>26 (60.5%)</ns0:cell><ns0:cell>9 of 97</ns0:cell></ns0:row><ns0:row><ns0:cell>perturbagen type</ns0:cell><ns0:cell>20 (46.5%)</ns0:cell><ns0:cell>3 of 9</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Response to Reviewer Comments
Reviewer 1 (Nicole Vasilevsky)
Basic reporting
This paper describes the use of the BioAssay Ontology for describing screening assays. The authors make good points about the difficulty of using and applying biomedical ontologies in scientific workflows. To address this, they developed a software tool which contains a common assay template and is modifiable to suit users' needs. This type of software is valuable for researchers and will help in structuring scientific data and ensuring reproducibility.
Figures 1 and 2 are cumbersome to view. I think these could be removed and the authors could point to BioPortal or the OWL file to convey this information to the reader.
We agree, and have opted to replace these figures with snapshots from BioPortal, and direct readers to use the site to explore further.
The screenshots are low resolution.
They match the resolution of the screen and cannot be improved.
Experimental design
For your bioassay template, you need to include detailed instructions on your GitHub page or in the paper on how to access and use the CAT, it was not obvious to me how to open and use the software, and I don't find it very intuitive to use.
We have updated the front page of the GitHub site accordingly, which should serve as an adequate starting point. The easiest way to get started is to download the BioAssayTemplate.jar and the schema, which we have now explained. Furthermore we have added a comment to the end of the manuscript mentioning recent work, which provides a web-based assay annotation tool that makes use of the schema output from this project, and is accessible from http://bioassayexpress.com. The software is in a very early stage of development, and is not the primary subject of this manuscript, but it is worthy of mention as future-work that has already produced tangible results.
Validity of the findings
The goal of this software application is to enable researchers to capture semantic data about their research findings. However, there needs to be detailed instructions on how to use this software tool, as most basic science researchers are not familiar with how to access software via GitHub. I think this could be addressed in the paper and/or the ReadMe file in GitHub.
As stated above, we have updated the GitHub documentation accordingly. As also mentioned, we have introduced a sneak preview of the web-based tool that is a followup to the project, which should hopefully satisfy the reviewers of our long term intentions.
Comments for the author
Add URL in abstract
Line 34: Should define what you mean by screening biologists
Line 38: Do you mean in publications or in the lab notebook, or both?
Line 58: Add link to BAO url
Line 92: The OWL file can also be viewed in Protege
We have applied all the above corrections, as requested.
Line 103: BioPortal says you have 3340 classes
This is consistent with our claim, which we indicate as a minimum. The number varies depending on how you count imported and equivalent classes.
Line 41: Please clarify here - you are referring to the Materials and Methods section of a publication?
We are referring to the part of a document that includes the assay protocol description, which is generally the materials and methods, though not always. The Bioassay Ontology (and Express) is applicable to the experiment procedure describing a bioassay, wherever it is located. This may be different for papers, databases, internal applications, etc.
Line 106: what is the BARD effort?
The BARD effort is the NIH Molecular Libraries Program bioassay database. This definition has been added to the text and the original manuscript provides a reference. The NIH has their own website on the project too: http://bard.nih.gov.
Line 143: Provide a URL for the GitHub site
The GitHub URL is provided on page 6 and page 13 and we have now added it to the abstract.
Line 254: Should be 'detailed', not 'detail' in 'The detail view has a placeholder...'
'Detail' in this case is used as a noun rather than an adjective.
General comments:
In the BAO, there is a typo in the comment in the class: BAO_0003110 HMS, I think it should be 'Harvard' instead of 'Harward'. Seems like this should be an 'alternative term' instead of a comment, as well.
Where can users make term requests or submit issues to the BAO?
This raises an important issue which we are well aware of, but have not yet proposed a definitive solution to. We have thus far opted to use the BAO as-is, rather than forking the ontology with our own version. In the long term this will not be sufficient, since the ontology needs to evolve and grow. We are working with the creators of the BAO, and of various other related ontologies, to make sure this is resolved as part of future products. We discuss the need to extend the BAO itself in the page 13 in the Conclusion section. Improving the BAO will be an aspect of future work which goes beyond the scope of this paper, and is related more broadly to the specific representation of information on the Internet.
Reviewer 2 (Stephan Schürer)
Basic reporting
This report is well written and addresses an important bottleneck in the use of domain ontologies to annotate biological data. The manuscript is well organized with a clear introduction, methods, results and conclusions. Figures are adequate (some content needs to be improved / corrected). The developed BAT, CAT and software code with examples are made available open source.
Experimental design
The development of annotation software tools is an important area of research and development in the larger area of Semantic Web Technologies and Linked Data, in particular in biomedical basic research, where complex experiments, technologies and datatypes are commonplace. The authors report a simple approach to enable scientists who perform experiments and have little or no knowledge of ontologies to make at least some basic use of the BioAssay Ontology to annotate biological assays (here primarily biochemical and cell-based assays).
Validity of the findings
The methods and approach are straightforward, although I have not reviewed the code. I am very familiar with the BAO (disclosed above) and can therefore judge that the selection of the major categories and their implementation are useful and cover basic annotations of straightforward assays. It does not address a number of more complex annotations, as the authors themselves indicate; these could be added in later as the tools evolve. BAO also provides the terminology and modeling to relate assays to each other (for example to describe a screening campaign), for example confirmatory, counter, alternate confirmatory, etc; this is an important consideration outside the current solution. It could be mentioned. For reference, we have explained such assay relations and how they can be used to aggregate data in PMID 24078711.
Comments for the author
While the content appears overwhelmingly correct, it could be pointed out that the majority of the over one million assays in PubChem are imported from other resources. This is relevant, because these are not strictly assays, but (for example in the case of ChEMBL records) curated descriptions from the literature; they would typically describe the final results / outcome of a study, but not each step along the way (e.g. the screening campaign). While it does make sense to annotate these records using formal semantics, BAO makes it possible to describe screening assays technologically and to some extent operationally, e.g. screening pipelines, multiplexes and parallelized assays. The best examples to point out in PubChem are therefore the ones generated in the NIH-funded Molecular Libraries Program, specifically the MLSCN and MLPCN programs.
We have added a note to this effect, namely that the majority of the PubChem assays are not detailed experimental protocol descriptions, and provided some guidance as to how to select assays based on the contributor. We have had the opportunity to study the content in more detail since originally submitting the manuscript, and have a better understanding of what is contained within. That being said, the project is not in any way inherently dependent on the state of PubChem assays, though we have benefitted greatly from this resource.
In the introduction the complexity of BAO is pointed out and exemplified with figures 1 and 2. Figure 1 appears to include many object properties that are not part of BAO and that are also not used in any assay descriptions in BAO. Many of those appear to be RO and BFO 2.0 relations; some others appear to be made up, e.g. 'has other'.
We have replaced both of these figures. We have amended the text to make it clear that the BAO includes components from other ontologies. We have continued to use 'BAO' as a synonym for 'BAO + borrowed terms from other vocabularies', thus we do not think that this is a major impediment to understanding the article.
BAO imports a module of relationships from RO / BFO, which can be viewed in an ontology editor. This has to be corrected, and I would also recommend indicating the source of the relations (BAO, RO, OBI, etc.), because this is an important feature of compatibility. The same should be done for figure 2 (class hierarchy). BAO imports modules from many other ontologies (GO, ChEBI, CLO, EFI, DO, etc.) and this should be mentioned; perhaps where it is suggested that GO could be used to further annotate biological processes (74-79).
In the time taken to review the manuscript, we have worked with the GO, and have made some minor additions to the conclusion, with regard to our intention to expand the scope of the BAO. This has been noted.
Reviewer 3 (Larisa Soldatova)
Basic reporting
The manuscript presents a common assay template (CAT) based on the previously developed ontology BAO and software (the BioAssay Editor) to ease the annotation of bioassays with formally defined BAO labels.
While the presented work on the CAT is solid, with the potential users involved in the development of the CAT, there is insufficient information on how the user requirements for the developed software were identified and how the Editor was evaluated.
The overall logic of the submitted manuscript is not as crisp as one would expect:
- The authors state that there is a problem (“the major problem”) – a comparison of protocols.
- They claim that BAO can solve this problem – “in order to address this problem, the BAO was devised.”
- “the BAO remains largely unused”.
- The work presented in the manuscript will make it easier to use BAO for the annotation of bioassays.
My concerns are:
- In my view, BAO is not sufficient to solve the problem of accurate comparison of the protocols. The granularity of the representation of experiments is not detailed enough, with some information essential for the comparison still missing from BAO. It can contribute to the solution of the problem, but it has not been evaluated to what degree. The authors need to make this point clear (since it is about the main problem discussed in the manuscript): provide convincing evidence that bioassay protocols annotated with BAO terms can be accurately identified as “identical, similar, or completely different”, or that BAO offers a partial solution to the problem, i.e. that BAO annotation can help with such identification.
- The manuscript gives an impression (possibly unintentional) that the main drive for the reported work is to make BAO more usable. Instead the motivation should be to solve the stated problem (or contribute to its solution).
- The conclusions of the manuscript do not refer to the stated problem, how the reported results solved the problem. Instead the conclusion section reports on the solution of another (even if related) problem.
The suggestion is to make the flow of the manuscript more logically coherent with a problem clearly stated, solutions to this problem described and evaluated, and conclusions about the considered problem made.
The reviewer's posited alternate explanation is in fact the correct one, and we believe we have made this clear in the text, as it is referenced in the first paragraph of the Results section. We are well aware that the BAO has limitations, and we are exploring this in our follow-up efforts, which we discuss. The project we are describing in this manuscript really is a first, prerequisite step to make the BAO more usable - this is a major roadblock, which is holding back all efforts to leverage the ontology for its intended purpose. The template builder is the key to making BAO (and possibly other scientific ontologies) editable and extensible. We are documenting our efforts to solve the usability problem. We clarify our approach with the following logic:
• In order for analysis to be done, one needs quality content;
• in order to create quality content, one needs tools to enable scientists to produce it;
• in order to create these tools, one needs to have a clear roadmap covering both the content to create and how to create it;
• in order to capture this, one needs a way to express the guidelines for content creation. This is the tool which we describe in the manuscript.
Thus we agree with the reviewer about the large challenge, this reinforces the value of the approach, this publication, and the most recently released software tools.
In the 2 months since we submitted the paper, we have begun plans to undertake a major extension to the amount of experimental detail that can be captured by this method, far beyond the moderate level of detail that we are currently implementing. It would be premature to announce this in a current article, and so we have mentioned it only in passing in the conclusion, however the template editor and online tools will be continuously updated.
Experimental design
The process for the development of the CAT is well described. The 16 CAT assignments were annotated in 81% of the 1066 PubChem assays.
Unfortunately there is insufficient description of what methodology was used for the software development and of how the quality and usability of the Editor were evaluated.
There is nothing unusual or noteworthy about our software development methodology: we used off-the-shelf tools to write the code. The library dependencies are referenced in the text. Evaluation of the editor is via informal use by biologists exercising it to achieve the intended purpose (creation and refinement of the Common Assay Template), which is an ongoing process. We also state that the project is not only open source, but is also being actively maintained.
Validity of the findings
The findings are valid and of value to the research community. However it has not been estimated how the findings contribute to the solution of the stated problem.
This criticism is accurate, but as we state clearly in the introduction, the overall problem has a series of linearly dependent steps. In this manuscript we describe our solution to the very first one, which is to remove the initial roadblock which prevents scientists from being able to generate quality content. We believe this milestone is of critical importance, and the absence of an appropriate solution has been holding back the field for some years. Moreover, we agree that now that this first problem has been addressed, as with any new scientific or technical development of interest, it immediately suggests multiple other questions, problems and opportunities. By making the tools open source, these can be addressed both by us and/or by others, in the spirit of the semantic web.
Comments for the author
The reported work is important and deserves to be published, but it needs to be better presented.
Agreed. We focused foremost on ensuring we were doing important and useful work that deserves to be published. And now with the help of the feedback from the reviewers it is better presented. It is timely to publish this work, because there are multiple possible follow-on studies (by us and/or others) that this will enable.
" | Here is a paper. Please give your review comments after reading it. |
162 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Energy is at the basis of any social or economic development. Fossil energy is the most used energy source in the world due to the cheap building cost of the power plants. In 2017, fossil fuels generated 64.5% of the world's electricity. Since, on the one hand, these plants produce a large amount of carbon dioxide, which drives climate change, and, on the other hand, the stock of existing world fossil resources is continuously decreasing, safer and more widely available energy sources should be considered. Hence, for human well-being and for a green environment, these fossil plants should be switched to cleaner ones.</ns0:p><ns0:p>Renewable energy resources have begun to be used as an alternative. These resources have many advantages such as sustainability and environmental protection.</ns0:p><ns0:p>Nevertheless, they require higher investment costs, and the reliability of planted systems is in most cases not sufficient to ensure a continuous supply of energy for all in needy regions, because most of these resources are climate dependent. The main contributions of this research are i) to propose a natural formalisation of the renewable energy distribution problem based on COP (Constraint Optimisation Problem) that takes into consideration all the constraints related to this problem; ii) to propose a novel multi-agent dynamic (A-RESS for Agent based Renewable Energy Sharing System) to solve this problem. The proposed system was implemented and the obtained results show its efficiency and performance in terms of produced, consumed and lost energy.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Fossil fuels are the most used resources, mainly to power homes and cars; in 2017, fossil fuels generated 64.5% of the world's electricity. For a country, it is convenient to use coal, oil or gas energy sources to meet its energy needs, but these fuel sources are often limited. In addition, the burning of fossil fuels sends greenhouse gases into the atmosphere, polluting air, soil and water while trapping the heat of the sun and contributing to global warming. Other traditional energies use biomass, which designates the organic waste that can become a source of energy after combustion (wood energy), anaerobic digestion (bio-gas) and other chemical transformations (bio-fuel). However, a biomass plant operates in a very similar way to a fossil power plant. These pollutant fuel plants have serious consequences on the environment and on humans. Hence, even with an unlimited stock of fossil fuels, it is better to use renewable energy for the sake of humans and the environment.</ns0:p><ns0:p>Renewable energy is an inexhaustible source of energy because it is constantly renewed by natural processes. Renewable energy is derived from natural phenomena, mainly the sun (radiation), the moon (tide) and the earth (geothermal). There is also energy generated from water and wind. All these renewable sources are called new energies. Renewable energy technologies transform these natural sources into several forms of usable energy, most often electricity but also heat, chemical and mechanical energies. Renewable energy technologies are called 'green' or 'clean' because they pollute little or nothing of the environment. The use of renewable energy hybrid power plants allows any country to develop its energy independence and security. Nevertheless, these resources are weather and location dependent, leading to intermittency and randomness in their use for energy production. Hence, hybrid power plant production fluctuates independently from demand, yielding an energy excess for some power plants while others cannot satisfy their minimal needs.</ns0:p><ns0:p>Since the production of renewable energy is expensive, due to the high cost of both the installation of renewable energy power plants and the storage devices, the optimal solution is to make the most of the produced quantity of this energy by maximizing its use and consequently minimizing its loss. Hence, the main goal of our work is to propose a novel dynamic, distributed and smart system (A-RESS for Agent based Renewable Energy Sharing System) that i) maximises the use of produced renewable energy (consequently minimizing its loss) by sharing the excess of energy between countries, and ii) minimises the production cost (production, storage and transportation) by investment recovery.</ns0:p><ns0:p>Several research efforts have been proposed in the literature to enhance, generalise and mainly optimise the use of renewable energy. However, most of them i) dealt with sharing energy at a local level, while our goal is to deal with global levels, for different environments and under several constraints, ii) focused only on how to find the best plan for locating and using distributed generators (DGs), iii) are based on storing the energy excess for later use despite its high cost, iv) did not consider the dynamic aspect of these resources and v) did not consider the advantage of energy sharing among plants/countries. The main contributions of this research are i) to propose a natural formalisation of the renewable energy distribution problem based on COP (Constraint Optimisation Problem) that takes into consideration all the constraints related to this problem; ii) to propose a novel multi-agent dynamic to solve this problem.</ns0:p><ns0:p>In this paper, we first present several research efforts dealing with enhancing the use of renewable energy. Then, we describe the energy sharing problem, followed by an illustration of our proposed COP formalisation for this problem. The multi-agent dynamic of the proposed solver is given in section 4. Finally, we describe the different scenarios of our experimentation, followed by an explanation of the obtained results. </ns0:p></ns0:div>
<ns0:div><ns0:head>NOMENCLATURE</ns0:head></ns0:div>
<ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>Several research efforts have been devoted to enhancing the use of renewable energy as an alternative solution, mainly for electricity and heat demands. Existing research can be classified into three groups.</ns0:p><ns0:p>The first group concerns efforts that consider the problem of locating DGs (Distributed Generators), whether these generators are dedicated to renewable or non-renewable energy. The second group presents works dealing with the planning of Renewable Energy Sources (RES). The third group concerns the optimization of renewable energy sharing at a local level, within a building or among buildings of a same region.</ns0:p><ns0:p>In the first group, A. <ns0:ref type='bibr' target='#b0'>Piccolo (2009)</ns0:ref> considered the issue of satisfying demand growth and distribution network security. They indicated the potential of DGs in offering an alternative approach to utilities compared to centralised generation, as DGs provide many benefits for DNOs (Distribution Network Operators). The main goal of this work is to evaluate the impact of network investment deferral, if recognized to DNOs, on DG expansion. The proposed approach allows consideration of variable energy sources in addition to deterministic ones. As for M.A. <ns0:ref type='bibr' target='#b16'>Senol and Ari (2016)</ns0:ref>, they suggested the use of a control method based on swarm intelligence (SI). Their goal is to increase the robustness of the 3-phase DMC (Direct Matrix Converter) control system. A DMC is a type of AC-to-AC power converter with relatively small circuit dimensions, which provides it with substantial advantages. The idea consists of generating optimal switching states by using a swarm optimization algorithm and applying them to the power switches of the DMC. As for M. <ns0:ref type='bibr' target='#b15'>Kumawat and Bansal. (2017)</ns0:ref>, the authors also proposed the use of the swarm intelligence meta-heuristic to find the optimal energy production plan of DGs that minimises energy loss. The particle-swarm-optimization meta-heuristic has been used to determine the optimal size and allocation of the DGs to fulfill consumer demands and reduce power losses.</ns0:p><ns0:p>As for the second group of research, which considers the problem of selecting the best energy source or combination of sources for economic and environmental impact, T. <ns0:ref type='bibr'>Niknam (2009)</ns0:ref> … waste heat. Their objective is to improve the management of the different levels of Distributed Energy Resources (DERs), with the underlying resources and control points, by breaking the whole system into micro-grids. Hence, the clustered sources and controlled loads can operate in parallel to the grid. This grid resource can be disconnected from the utility during events, but may also be intentionally disconnected when the quality of power from the grid falls below a certain threshold. The authors A. <ns0:ref type='bibr'>Omu and A. Boies (2013)</ns0:ref> studied the economic and environmental impact of the use of a distributed energy generating system (including renewable energy) compared to a centralised one. Therefore, they proposed a Mixed Linear Programming (MLP) model for the design of a system that meets the electricity and heating needs of a cluster of buildings. As for C.O. <ns0:ref type='bibr'>Incekara (2019)</ns0:ref>, they proposed a fuzzy Multi-Objective Linear Programming (MOLP) model to obtain Turkey's energy mix in 2035. They used fuzzy techniques to obtain the energy objectives. The authors X. Xu and al. (2020) presented a new two-stage game-theoretic framework for residential PV panels planning. First, they use Stackelberg game theory to model a stochastic bi-level energy sharing problem. In this stage, they proposed a descent search algorithm-based solution method to obtain the optimal installation capacity of residential PV panels. Then, they developed an Optimal Power Flow (OPF) model to optimally allocate residential PV panels with minimum expected active power loss.</ns0:p><ns0:p>For the third group of research, M. Rafik and al. (2019) proposed a new Micro Smart Grid by Software-Defined Network (MSGSDN) architecture. This architecture adopts the Software-Defined Network (SDN) approach and uses Internet of Things (IoT) sensors to allow an efficient and intelligent distribution of the electrical energy obtained from several sources over many buildings. The MSGSDN architecture deals with several constraints: the preference of energy source, the type of the devices of the building, the real-time consumption of equipment and its security. As for I. Aldaouab and Ordóñez (2019), they tried to optimize the energy exchange between two prosumers. Their optimisation problem is defined by a Model Predictive Control (MPC) framework based on future behaviour prediction algorithms. The two prosumers have the same structure: load, energy supply, battery storage and connections to other power sources. The power flow transfer is done only between the two prosumers, through the peer-to-peer transactions block. The authors A. Azizi and al. (2019) proposed an autonomous and decentralized power sharing and energy management approach for PV and battery based DC microgrids without utilizing a supervisory and communication system. As for R. Carli (2019), they presented a decentralized control strategy for the scheduling of the electrical energy activities of a microgrid. The microgrid is composed of smart homes connected to a distributor and exchanging renewable energy produced by individually owned distributed energy sources. The authors assume that each smart home can both buy/sell energy from/to the grid taking into account time-varying nonlinear pricing signals. The authors A. Prasad (2019) modeled the work environment as a multi-agent environment where each agent represents a building, and they proposed a Deep Reinforcement Learning (DRL) solution to optimize energy sharing between different buildings. The intelligent agent learns the suitable behaviours to share energy in order to reach a nearly zero energy status. As regards K. Kusakana (2020), they proposed a peer-to-peer energy sharing model. The advantage of their model is to minimise the operating cost of the two prosumers by maximising the use of the power from the renewable energy sources and minimizing the use of the electrical supply energy under time-of-use rates. M.E. Haque and al. (2020) proposed a distributed approach to solve the problem of optimizing energy management between different houses (houses without solar photovoltaic or battery, and houses with solar photovoltaic as well as batteries). The proposed approach allows different houses to make decisions without sharing any information with the central transactive energy management system. The proposed approach is efficient in terms of optimal use of resources and efficient sharing of energy between different houses in a microgrid. As for <ns0:ref type='bibr' target='#b24'>Sh. Cui and al. (2020a)</ns0:ref>, they presented a peer-to-peer energy sharing framework for numerous community prosumers. Two strategies are proposed in this research, an intercommunity and an intracommunity energy-sharing one, for the day-ahead and relative energy-sharing schedules respectively. As regards <ns0:ref type='bibr' target='#b25'>Sh. Cui and al. (2020b)</ns0:ref>, the authors suggested a new and fair peer-to-peer scheme for a community of energy buildings. First, a Generalized Nash Equilibrium (GNE) of the game is displayed independently of the energy sharing payments. Then a Cost Reduction Ratio Distribution (CRRD) model is used to fix the energy sharing payments for the buildings. While A. Giordano and al. (2021) presented a two-stage approach that allows sharing renewable energy within a district by using a set of prosumers and an aggregator that supervises the energy exchanges between the prosumers and with the grid. The use of the two-stage approach maximizes the revenues deriving from the provision and the sale of energy. As for K. <ns0:ref type='bibr'>Kusakana (2021)</ns0:ref>, they developed an optimal energy management model between commercial prosumers and residential consumers. This model allows minimizing the cost of the energy consumed. The proposed model has been evaluated in a microgrid using a peer-to-peer energy sharing scheme operating under a Time-of-Use tariff.</ns0:p><ns0:p>All these efforts seek the optimization of the planning and use of energy (including renewable energy) using different techniques. Nevertheless, the main goal of most of them is to find the best location (using meta-heuristics) of their DGs according to demands and load clusters, meaning to decide whether or not to establish a renewable energy system in a given place and which renewable energy source or combination of sources is the best choice. Planning of an energy system is to select the best alternative among the different renewable energy systems R. <ns0:ref type='bibr'>Banos and al. (2011)</ns0:ref>. Other research was devoted to finding solutions for storing the extra clean energy obtained, for later use, despite the cost and the underlying energy loss. Researches that treat the energy sharing problem studied the case of energy sharing in a limited and static space: between buildings belonging to the same region or within the same building. None of them deals with establishing a dynamic and naturally distributed energy sharing system that i) considers all the constraints related to this problem, mainly physical distances, ii) aims to maximise the local and global satisfaction of several plants/countries subject to production fluctuation and dependent on weather conditions that may lead a supplier to become a consumer, and iii) minimises local and global energy losses.</ns0:p></ns0:div>
<ns0:div><ns0:head>NEW COP FORMALIZATION FOR THE RENEWABLE ENERGY DISTRIBUTION PROBLEM</ns0:head></ns0:div>
<ns0:div><ns0:head>Description of the problem</ns0:head><ns0:p>Our aim is to build an undirected and dynamic graph connecting the N hybrid power plants. The graph is dynamic, since a power plant can be added/deleted at any time, and it is undirected because any power plant can switch from an energy supplier to an energy consumer and vice versa. The graph is incomplete because it is useless to connect power plants which are at large distances. Each hybrid power plant X_i, i ∈ {1, ..., N}, at a region R_i, maintains important information such as:</ns0:p><ns0:p>• The estimated quantity of renewable energy that can be produced, Prod(X_i), per day,</ns0:p><ns0:p>• The remaining quantity of renewable energy in batteries, Stock(X_i), per day,</ns0:p><ns0:p>• The estimated quantity of needed energy, Needs(X_i), for the region under several conditions, i.e. current season, temperature, etc.,</ns0:p><ns0:p>• The quantity of energy to keep in reserve, Reserve(X_i), for any risk.</ns0:p><ns0:p>The main objective of our system is to determine the daily quantity of energy to be provided by each supplier X_i to its consumer neighbors X_j. In order to guarantee high performance and accuracy of the proposed A-RESS system, we propose to integrate an artificial neural network (ANN) model, or another supervised learning model, to estimate both the quantity of renewable energy that can be produced according to the climate conditions and the quantity of needed energy per region, based on previous experiences. In this paper we focus only on the formal representation of the energy sharing problem and the agent-based solving process; therefore, we will assume that the required information above is provided. The prediction of the produced and needed quantities of energy will be treated in future work.</ns0:p><ns0:p>The transfer of renewable energy can be done at a local level, meaning between plants of the same country; in this case the energy will be offered. The energy can also be sold when the transfer involves plants of different countries. In this work, we focus on the international scale. To simplify the proposed system, we assume that each country contains one single power plant X_i. A transfer cable is provided between two power plants X_i and X_j only if the distance between these two plants is not large; X_i and X_j are then considered as neighbors, i.e. X_j ∈ Ng(X_i). Our goal is to optimise the shared quantity of renewable energy while minimizing the cost of transportation and maintaining a high level of transportation reliability. The cost of the transmission line depends on the quantity of energy to be transported and the distance between the two power plants to interconnect. Note that each transfer process will obviously lead to some loss of energy.</ns0:p><ns0:p>The decision of any power plant on whether to acquire some quantity of renewable energy or to produce</ns0:p></ns0:div>
<ns0:div><ns0:p>the same needed quantity of non-renewable energy is based on the basic price of the plant's installation, the amortization period, the transfer cable installation, etc. In addition, any X_i can deliver or store energy if and only if both its needs and its risk reserve are satisfied. To reduce the loss of energy, the quantity received by X_i must not exceed its needs/risk reserve.</ns0:p></ns0:div>
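<ns0:p>To make the neighborhood notion concrete, the following minimal Python sketch (ours, for illustration only; the distances, plant names and the function build_neighbors are invented, not part of A-RESS) derives the undirected neighbor graph from pairwise distances and a maximum link distance D:</ns0:p>

```python
# Illustrative sketch (ours): build the undirected neighborhood graph
# from pairwise distances; two plants are neighbors only when they are
# closer than the maximum link distance D.

def build_neighbors(distances, D):
    """distances: dict {(i, j): km} with one entry per pair of plants."""
    neighbors = {}
    for (i, j), d in distances.items():
        neighbors.setdefault(i, set())
        neighbors.setdefault(j, set())
        if d < D:  # a transfer cable is only provided below D
            neighbors[i].add(j)
            neighbors[j].add(i)
    return neighbors

# Invented example distances (km) between five plants.
dist = {("X1", "X5"): 900, ("X2", "X4"): 1200, ("X4", "X5"): 2000,
        ("X1", "X3"): 4100, ("X2", "X3"): 3800}
print(build_neighbors(dist, D=2500))
```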
<ns0:div><ns0:head>New Formalization for the Renewable Energy Sharing Problem</ns0:head><ns0:p>A renewable energy distribution problem can be formalised as a Constraint Optimisation Problem (COP). A COP is a tuple (X, D, C, F) defined as follows:</ns0:p><ns0:p>• X = {X_1, X_2, ..., X_n} with X_i = (X_i.io_1, X_i.io_2, ..., X_i.io_k), where X_i represents a power plant and X_i.io_j, j ∈ {1, ..., |Ng(X_i)|}, is the quantity of energy to provide to / get from X_j, j ≠ i. Note that X_j ∈ Ng(X_i) only if these two plants are linked together, i.e. Ng(X_i) is the set of all neighbors of X_i,</ns0:p><ns0:p>• D = {D_1, D_2, ..., D_n} where D_i = (val_1, val_2, ..., val_k) with val_j ∈ N, the possible quantities of renewable energy to provide to / get from each X_j ∈ Ng(X_i). If X_i is a supplier power plant then ∀ X_j ∈ Ng(X_i), X_i.io_j ≥ 0.</ns0:p><ns0:p>• C = {C_Prof, C_Share, C_Rec} where</ns0:p><ns0:p>– Profitability Constraint C_Prof: the cost of requesting some quantity of renewable energy from neighbors should be better than producing the same quantity from non-renewable energy (see eq. 1). These costs involve the cost of transfer, the cost of the cable installation, the cost of air pollution, etc.</ns0:p><ns0:formula xml:id='formula_0'>∑_{j∈Ng(X_i)} (β_{ij} · X_i.io_j) − γ_i · ∑_{j∈Ng(X_i)} (1 − α_{ij}) · X_i.io_j &lt; 0 (1)</ns0:formula><ns0:p>where α_{ij} is the proportion of lost energy, determined according to the link type between X_i and X_j, its length, the weather conditions, etc., γ_i is the cost of producing 1 KW of non-renewable energy by the plant X_i, and β_{ij} is the cost of exchanging 1 KW of renewable energy between the two plants X_i and X_j.</ns0:p><ns0:p>– Shared Quantity Constraint C_Share: the total quantity shared by a plant should not exceed the available energy to be given, as in eq. 2.</ns0:p><ns0:formula xml:id='formula_1'>∑_{X_j∈Ng(X_i)} X_i.io_j ≤ Avail(X_i) (2)</ns0:formula><ns0:p>where</ns0:p><ns0:formula xml:id='formula_2'>Avail(X_i) = Prod(X_i) + Stock(X_i) − Needs(X_i) − Reserve(X_i) (3)</ns0:formula><ns0:p>– Received Quantity Constraint C_Rec: the total quantity of renewable energy received by a plant should not exceed the needed energy, as in eq. 4.</ns0:p><ns0:formula>∑_{X_i∈Ng(X_j)} (1 − α_{ij}) · X_j.io_i + Avail(X_j) ≃ 0 (4)</ns0:formula></ns0:div>
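<ns0:p>As an illustration of how these constraints can be checked in practice, the following Python fragment (ours; all numeric values are invented for the example and are not taken from the paper) evaluates eq. 1, eq. 2 and eq. 3 for a single plant:</ns0:p>

```python
# Illustrative sketch (not the A-RESS implementation): checking C_Prof and
# C_Share for one plant X_i, given its exchanged quantities io, the loss
# rates alpha_ij, exchange costs beta_ij and production cost gamma_i.

def avail(prod, stock, needs, reserve):
    # eq. 3: energy a plant can share (negative = energy it lacks)
    return prod + stock - needs - reserve

def profitable(io, alpha, beta, gamma):
    # eq. 1 (C_Prof): exchanging with neighbors must cost less than
    # producing the same delivered quantity from non-renewable energy.
    exchange_cost = sum(beta[j] * io[j] for j in io)
    production_cost = gamma * sum((1 - alpha[j]) * io[j] for j in io)
    return exchange_cost - production_cost < 0

def respects_shared_quantity(io, available):
    # eq. 2 (C_Share): a supplier cannot give more than it has available.
    return sum(io.values()) <= available

# Example: plant with two consumer neighbors (invented values).
io = {"X1": 1000.0, "X4": 500.0}    # quantities offered per neighbor
alpha = {"X1": 0.05, "X4": 0.08}    # fraction of energy lost per link
beta = {"X1": 0.02, "X4": 0.03}     # cost of exchanging 1 KW per link
print(profitable(io, alpha, beta, gamma=0.10))
print(respects_shared_quantity(io, avail(11000, 500, 6000, 4000)))
```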
<ns0:div><ns0:p>Solving this COP consists in finding the optimal (according to the objective function F) instantiation of all the variables X, from values of their domains D, that satisfies all the constraints C. An optimal solution X* = {X*_1, X*_2, ..., X*_n} for this problem is a solution that maximizes the satisfaction of all neighbors' requests, i.e. that minimizes F(X). Each power plant tries to compensate the cost spent on its installation by serving the maximum of in-need regions with the possible quantities. In addition, this system tries to maximize the benefit of the distributed energy. The cost of purchasing/selling energy differs from one plant to another according to the price of the link installation, the distance between the regions, etc.</ns0:p><ns0:p>Each plant X_i will try to find values X*_i = (X*_i.io_1, X*_i.io_2, ..., X*_i.io_k), with k = |Ng(X_i)|, that optimize F(X), as shown in eq. 5.</ns0:p><ns0:formula xml:id='formula_3'>F(X*) = min_X F(X) (5)   F(X) = g(X) − h(X) (6)</ns0:formula><ns0:p>where g(X) is the function that measures the degree of non-satisfaction of all the plants X_i ∈ X (as given in eq. 7). An energy supplier plant X_i is satisfied only if the maximum of its extra produced energy is shared with its neighbors, while an energy consumer plant X_j is satisfied only if it gets the maximum of the energy it needs. As for h(X), it computes the global cost of all the energy exchanged between all the plants (as given in eq. 8). If h(X) &gt; 0 then most plants are suppliers; otherwise most plants are consumers.</ns0:p><ns0:formula xml:id='formula_4'>g(X) = ∑_{X_i∈X} ( ∑_{X_j∈Ng(X_i)} X_i.io_j − Avail(X_i) )² (7)   h(X) = ∑_{X_i∈X} ∑_{X_j∈Ng(X_i)} β_{ij} · X_i.io_j (8)</ns0:formula><ns0:p>Recall that β_{ij} is the cost of exchanging 1 KW of energy between two plants X_i and X_j. This cost is not the same for all plants; it depends on the type and length of the used cable, the installation cost of the plant, etc. Both quantities, g(X) and h(X), should be normalized.</ns0:p></ns0:div>
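<ns0:p>The two objective components can be sketched as follows (an illustrative Python fragment of ours, assuming the io quantities and β costs are stored in plain dictionaries; it is not the authors' implementation, and the network below is invented):</ns0:p>

```python
# Illustrative sketch of g(X) (eq. 7) and h(X) (eq. 8); io maps each
# neighbor to the exchanged quantity, positive when giving and negative
# when receiving.

def g(network):
    # degree of non-satisfaction: squared gap between what each plant
    # exchanges and what it has available (eq. 7)
    return sum((sum(p["io"].values()) - p["avail"]) ** 2
               for p in network.values())

def h(network, beta):
    # global cost of all exchanged energy (eq. 8)
    return sum(beta[i][j] * q
               for i, p in network.items()
               for j, q in p["io"].items())

network = {
    "X2": {"avail": 1000.0, "io": {"X4": 500.0}},
    "X4": {"avail": -1500.0, "io": {"X2": -500.0, "X5": -750.0}},
    "X5": {"avail": 1500.0, "io": {"X4": 750.0}},
}
beta = {"X2": {"X4": 0.03}, "X4": {"X2": 0.03, "X5": 0.02},
        "X5": {"X4": 0.02}}
print(g(network), h(network, beta))  # after normalization, F = g - h (eq. 6)
```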
<ns0:div><ns0:head>A-RESS GLOBAL DYNAMIC</ns0:head><ns0:p>Since our problem is naturally distributed, where each plant is responsible for taking its own decisions based on its knowledge, we propose to use a multi-agent system to solve this problem. Agents communicate to agree on the quantities to exchange that maximise the satisfaction of them all, i.e. that reach F(X*).</ns0:p></ns0:div>
<ns0:div><ns0:head>System Architecture</ns0:head><ns0:p>In the proposed multi-agent system, each agent is assigned to a power plant. Three types of agents are used: Supplier, Consumer and Neutral agents. Supplier agents are those who have an excess of renewable energy and can subsequently provide some of it to neighbors who are in need; these latter agents are the Consumers. A Neutral agent is an agent that is able to satisfy its needs and does not have any extra energy.</ns0:p><ns0:p>Each agent X_i maintains five pieces of static knowledge: i) the estimated quantity of renewable energy that its power plant can produce per day, ii) the available quantity of renewable energy in its power plant batteries per day, iii) the estimated quantity of energy needed by its power plant, iv) the quantity of energy to keep in reserve for any risk and v) the number of all its neighbors |Ng(X_i)|. Agents also have dynamic knowledge, which differs according to the type of agent. A Supplier might be mapped into a Consumer if it faces a lack of energy on any day, and similarly for a Consumer; agents are re-mapped according to their daily needs. All these types of agents need to cooperate together in order to maximize their own satisfaction.</ns0:p></ns0:div>
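<ns0:p>A possible representation of the static knowledge listed above is sketched below (ours; the class and field names are illustrative assumptions, not taken from the A-RESS code):</ns0:p>

```python
# Illustrative agent state (ours): the five pieces of static knowledge plus
# the daily role derived from the availability of eq. 3.

from dataclasses import dataclass, field
from typing import List

@dataclass
class PlantAgent:
    name: str
    prod: float       # estimated daily renewable production
    stock: float      # energy available in batteries
    needs: float      # estimated daily needs
    reserve: float    # quantity kept in reserve for risks
    neighbors: List[str] = field(default_factory=list)

    @property
    def avail(self) -> float:
        return self.prod + self.stock - self.needs - self.reserve

    @property
    def role(self) -> str:
        # agents are re-mapped every day from their availability
        if self.avail > 0:
            return "Supplier"
        return "Consumer" if self.avail < 0 else "Neutral"

x5 = PlantAgent("X5", 11000, 500, 6000, 4000, ["X1", "X4"])
print(x5.role, x5.avail)  # Supplier 1500
```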
<ns0:div><ns0:head>Global Dynamic System</ns0:head><ns0:p>The proposed multi-agent global dynamic is divided into two phases:</ns0:p><ns0:p>• an initialization phase,</ns0:p><ns0:p>• a negotiation phase.</ns0:p><ns0:p>During the initialization phase, each agent will use its trained ANN model to estimate the production and the needs of the day. As mentioned before, this process will be discussed in another research work. According to the obtained estimations, each agent decides the quantity of available renewable energy to share with its neighbors. If Avail(X_i) is a positive value then X_i will be a Supplier; otherwise it will be a Consumer. Agents can then start the second phase.</ns0:p><ns0:p>During the negotiation phase, each Consumer agent will first check whether it is profitable to get the needed quantity or to use non-renewable energy, according to eq. 1. If yes, then it will send a request to all its neighboring Suppliers with the needed quantity, i.e. the agent will divide the needed quantity |Avail(X_i)| by the number of Suppliers. Each Supplier X_i that receives demands from related Consumers first checks whether it is possible to grant them all, i.e. whether Avail(X_i) is greater than or equal to the summation of all the requested quantities. If so, each Consumer will receive what it needs and the extra quantity will be kept for further requests. Otherwise, the agent X_i will determine the proportion p_{ij} of each request according to the total needs (summation of all the quantities asked by all X_j ∈ Ng(X_i)), and will reply to all its requesters with the possible quantities (see eq. 9).</ns0:p><ns0:formula xml:id='formula_5'>X_i.io_j = p_{ij} · Avail(X_i) (9)   p_{ij} = X_i.io_j / ∑_{X_j∈Ng(X_i)} X_i.io_j (10)</ns0:formula><ns0:p>where, in eq. 10, X_i.io_j denotes the quantity requested by the Consumer X_j. Each Consumer agent X_i will then check the total amount of energy that it is willing to get from all the Suppliers. If the whole quantity is equal to its need (satisfying the constraint of eq. 4) and optimises its objective function F(X_i), it replies with an acceptance. Otherwise, if it is less than its needs (due to some energy loss or the non-availability of enough energy on the Supplier side), then it will resend to the other Suppliers that are willing to provide more, asking them for the missing quantity. The same negotiation proceeds until all the available quantities are distributed and all Consumers get the maximum of what they can get. Once the negotiation process is over, the power plants may start exchanging the agreed quantities.</ns0:p></ns0:div>
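<ns0:p>The core of the negotiation phase, the proportional split of eq. 9 and eq. 10, can be sketched as follows (our own illustrative rewriting, not the authors' code):</ns0:p>

```python
# Minimal sketch (ours) of one negotiation round between a Supplier and its
# Consumer neighbors, following eq. 9 and eq. 10: the available quantity is
# split proportionally to the requested quantities.

def supplier_offers(available, requests):
    """requests: dict mapping each consumer to its requested quantity."""
    total = sum(requests.values())
    if total <= available:
        # every request can be granted in full
        return dict(requests)
    # eq. 10: proportion of each request w.r.t. the total demand,
    # eq. 9: offered quantity = proportion * available energy
    return {c: (q / total) * available for c, q in requests.items()}

# X5 has 1500 KW available; X1 asks 2000 KW, X4 asks 750 KW.
print(supplier_offers(1500, {"X1": 2000, "X4": 750}))
# -> about {'X1': 1090.9, 'X4': 409.1}; with the proportions rounded to two
# decimals as in the paper's example (0.72 / 0.28) one gets 1080 / 420.
```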
<ns0:div><ns0:head>System Architecture Description</ns0:head><ns0:p>In this section, we describe our multi-agent system using the ODD (Overview, Design concepts, Details) protocol proposed by <ns0:ref type='bibr'>Grimm V. (2006)</ns0:ref>.</ns0:p><ns0:p>Overview 1. Purpose: The objective of our multi-agent model is to specify the quantities of energy to be released from agents with an energy excess to agents with a lack of energy, in order to satisfy their needs.</ns0:p><ns0:p>2. Entities, state variables and scales: Our model is made up of three types of agents: 1) Supplier Agent: represents power plants that produce a quantity of energy that exceeds their needs, 2) Consumer Agent: represents power plants that produce a quantity of energy that does not satisfy their needs and 3) Neutral Agent: represents power plants that produce a quantity of energy that only meets their own needs.</ns0:p><ns0:p>Each agent knows the quantity of energy that it has in its storage devices (Stock(X_i)), the quantity of energy to keep in reserve for all risks (Reserve(X_i)) and the number of its neighbours (|Ng(X_i)|). The agents are located in an environment made up of a set of regions, where each region has meteorological characteristics (temperature, humidity, amount of rain, cloudiness, season, etc.) and a machine learning model to predict its daily quantity of energy to be produced (Prod(X_i)) and to be consumed (Needs(X_i)).</ns0:p></ns0:div>
<ns0:div><ns0:p>3. Process and scheduling: Every 24 hours, the following processes are executed in the given order:</ns0:p><ns0:p>(a) Each Consumer agent sends the quantity it needs to each of its Supplier neighbours.</ns0:p><ns0:p>(b) Supplier agents receive all the requests from the neighbouring Consumer agents.</ns0:p><ns0:p>(c) Each Supplier agent compares the total quantity requested with the quantity available. If the total quantity can be provided, then it responds to each Consumer with the requested quantity. Otherwise, it determines the quantity to be given: the quantity X_i.io_j (eq. 9) to be offered by the Supplier agent X_i to a Consumer agent X_j is determined according to the availability Avail(X_i) (eq. 3) and the proportion p_{ij} given by eq. 10. This proportion is assigned to each Consumer depending on its environment, i.e. a strong Consumer with many Suppliers, or a weak Consumer with few Suppliers.</ns0:p><ns0:p>(d) Each Consumer agent receives the responses from all the neighbouring Supplier agents and checks the total of the received quantities against its need. If the total is less than or equal to its needs, it accepts the proposed quantities; otherwise it adjusts the quantities according to its needs and sends a response with only the needed quantities.</ns0:p></ns0:div>
<ns0:div><ns0:head>Design concepts</ns0:head><ns0:p>1. Emergence: Each agent has a local goal, which is to provide all its energy excess. This goal is represented by the function F(X*) given by eq. 5, which aims to minimize the two functions g(X) (eq. 7) and h(X) (eq. 8).</ns0:p><ns0:p>2. Sensing: For a Supplier agent, in all cases the total quantity of shared energy does not exceed its available quantity of renewable energy. The same holds for a Consumer agent, i.e. the total quantity of received energy does not exceed its need under any circumstance.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.'>Interaction:</ns0:head><ns0:p>The only interactions are between each Consumer and its neighboring Suppliers. A Neutral agent does not interact with the other agents because it is already satisfied, i.e. it has neither a lack nor an excess of energy. Supplier agents interact with Consumer agents to negotiate the quantities of energy to be shared. These interactions resume until there is no more available energy to be shared.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.'>Stochasticity:</ns0:head><ns0:p>In the initialization phase, there are two types of data: known data and data to be estimated. The quantity of energy stored in the battery (Stock(X_i)) and the energy reserve (Reserve(X_i)) of each power plant are both known quantities. However, the quantity of energy to be produced (Prod(X_i)) and the quantity of energy to be consumed (Needs(X_i)) are predicted using a trained regression model.</ns0:p></ns0:div>
<ns0:div><ns0:head>Details</ns0:head><ns0:p>1. Initialisation: The first step of the simulation initialization is to estimate the quantity of energy to be produced (Prod(X_i)) and the quantity of energy to be consumed (Needs(X_i)) for each agent using machine learning. Then, each agent calculates its quantity of available energy (Avail(X_i)). If the quantity is positive, then it is a Supplier agent; otherwise it is a Consumer agent. The agents exchange their states, then each Consumer agent calculates the number of its Supplier neighbors. Based on the number of neighbors and the total number of agents, Consumer agents are classified into two types: Strong Consumers, who have more neighbors, and Weak Consumers, who have fewer (one possible reading of this classification is sketched below).</ns0:p></ns0:div>
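<ns0:p>One possible reading (ours) of the Strong/Weak Consumer classification is given below; the threshold value is an assumption of ours, not taken from the paper:</ns0:p>

```python
# Hypothetical sketch (ours): a Consumer is "strong" when its ratio of
# Supplier neighbors to the total number of agents exceeds a threshold.
# The threshold 0.2 is an invented assumption, for illustration only.

def classify_consumers(consumers, supplier_neighbors, n_agents,
                       threshold=0.2):
    return {c: ("Strong"
                if len(supplier_neighbors[c]) / n_agents > threshold
                else "Weak")
            for c in consumers}

print(classify_consumers(["X1", "X4"],
                         {"X1": ["X5"], "X4": ["X2", "X5"]},
                         n_agents=5))
# -> X1 Weak (1 supplier out of 5 agents), X4 Strong (2 out of 5)
```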
<ns0:div><ns0:head n='2.'>Input Data:</ns0:head><ns0:p>The model takes as input a graph containing the list of power plants and the links between them. Two power plants are linked, meaning that they are neighbours and can interact.</ns0:p></ns0:div>
<ns0:div><ns0:head>System Complexity</ns0:head><ns0:p>In order to evaluate the computational efficiency of the proposed system and decide whether it can be used in practical applications or not, we computed the time complexity of the underlying multi-agent system. Since a multi-agent system is defined as a set of interacting agents in an environment, we need to determine the complexity at two levels: the agent level (as an atomic unit) and the system level (including the interactions). In the following, we assume that N is the total number of agents and that each agent is connected to at most N/2 other agents as neighbors.</ns0:p></ns0:div>
<ns0:div><ns0:p>• At the agent level: According to its behavior, each agent has to predict the quantities of renewable energy to be produced and to be consumed in order to estimate the quantity available. The complexity of this process depends on the trained regression model that is used; note that most of the existing models are polynomial. Then the agent has to process m(N/2) values to/from its neighbors, with m the total number of iterations needed to reach the consumers' satisfaction. In the worst case, only one Consumer is satisfied at each communication round, so the total number of iterations for a Supplier will be N/2. Hence, each agent will process O(N²) values.</ns0:p><ns0:p>• At the system level: According to the designed interactions, a first communication is required between agents to exchange their states (Consumer or Supplier). This first step requires O(N²) messages at most. Then, Consumer agents negotiate with their Suppliers to get what they can get. These communications, sending/receiving requests/answers between agents, resume until all the consumers are satisfied (no more available quantities to be shared). Assuming that at each communication round one neighbor is satisfied, the total number of rounds will be equal to the number of neighbors of each agent, i.e. O(N²) messages per agent. Overall, all the agents will process O(N³) messages.</ns0:p></ns0:div>
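<ns0:p>A quick back-of-the-envelope check of these bounds (ours, under the stated worst-case assumptions of N/2 neighbors per agent and one satisfied neighbor per round) is given below:</ns0:p>

```python
# Worst-case message-count estimate (ours) for the two interaction steps.

def message_bounds(n):
    status_exchange = n * (n // 2)    # each agent sends its state to N/2 neighbors
    negotiation = n * (n // 2) ** 2   # up to N/2 rounds of N/2 messages per agent
    return status_exchange, negotiation

for n in (10, 200):
    s, m = message_bounds(n)
    print(n, s, m)   # grows as O(N^2) and O(N^3) respectively
```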
<ns0:div><ns0:head>Illustration Example</ns0:head><ns0:p>To illustrate the global dynamic of our proposed system, let us consider the following example. We assume we have 5 regions {X_1, X_2, ..., X_5}, described in Table 1. To represent the distance graph, we assume that two regions X_i and X_j are linked if the distance d_{ij} is less than the maximum distance D. In our example, we assume that d_{15}, d_{13}, d_{23}, d_{24}, d_{45} ≥ D, as shown in the following graph:</ns0:p><ns0:p>[Graph of the five plants: an edge links X_i and X_j whenever d_{ij} &lt; D.]</ns0:p><ns0:p>Table 2 gives an example of the estimated energy per region.</ns0:p><ns0:p>During the initial phase, agents will first estimate their available energy quantity and decide about their current status, i.e. whether they are a Consumer, a Supplier or a Neutral agent. Then they will exchange their status, as given in the following graph:</ns0:p><ns0:p>[Status graph: X_1 Consumer (−2000), X_2 Supplier (+1000), X_3 Neutral, X_4 Consumer (−1500), X_5 Supplier (+1500).]</ns0:p><ns0:p>During the negotiation phase, the Consumer agents will send a message to all their neighboring Suppliers to ask for the needed quantities. X_1 will ask for 2000 KW from X_5 and X_4 will ask for 750 KW from each of X_2 and X_5. Then X_5 will compute p_{51} = 0.72 and p_{54} = 0.28, and inform X_1 and X_4 about the quantities they are willing to receive, i.e. 1080 KW for X_1 and 420 KW for X_4. As for X_2, it will inform X_4 that it is able to get only 500 KW. The Consumer agents will accept the offers given by the Suppliers, since no additional energy is affordable, as given in Table 3. </ns0:p></ns0:div>
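<ns0:p>The availability values behind Table 2 can be re-derived with eq. 3; the short check below (ours) takes Needs(X_2) = 5000, consistent with the stated Avail(X_2) = +1000:</ns0:p>

```python
# Quick verification (ours) of the statuses of the illustration example,
# using eq. 3: Avail = Prod + Stock - Needs - Reserve.

plants = {  # Prod, Stock, Needs, Reserve per plant, as in Table 2
    "X1": (6000, 1000, 7000, 2000),
    "X2": (8000, 1000, 5000, 3000),
    "X3": (8000, 2000, 7000, 3000),
    "X4": (4000, 3000, 8000, 500),
    "X5": (11000, 500, 6000, 4000),
}
for name, (prod, stock, needs, reserve) in plants.items():
    avail = prod + stock - needs - reserve
    status = "Supplier" if avail > 0 else "Consumer" if avail < 0 else "Neutral"
    print(name, avail, status)
# -> X1 -2000 Consumer, X2 +1000 Supplier, X3 0 Neutral,
#    X4 -1500 Consumer, X5 +1500 Supplier
```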
<ns0:div><ns0:head>EXPERIMENTAL EVALUATION</ns0:head><ns0:p>To evaluate the performance of our A-RESS system, we assume that we have 10 countries {X_1, X_2, ..., X_10}, described in Table 4.</ns0:p><ns0:p>For simplicity reasons, we assume that for each country just one power plant is installed. A link exists between two countries if the distance between them is less than some threshold, which for our experiments is 2500 km. We generated 30 samples, where the estimated quantities of the daily energy production, the available quantity in batteries, the estimated quantity of energy requirement and the necessary reserve are randomly generated. To determine the selling price of 1 KW of renewable energy, we need to consider the initial renewable energy power plant and link installation cost β⁰_{ij}, the amortization period t_i and the discount rate σ_i. The selling price of 1 KW of renewable energy is represented by a geometric sequence (as given in eq. 11).</ns0:p><ns0:formula xml:id='formula_8'>β_{ij} = σ_i^n · β⁰_{ij} + a (11)</ns0:formula><ns0:p>with n ∈ [0..t_i] and a a fixed fee.</ns0:p></ns0:div><ns0:div><ns0:p>From all the above obtained results, we believe that despite the variation of the random inputs (Prod(X_i), Needs(X_i), Stock(X_i) and Reserve(X_i)), the proposed multi-agent dynamic is accurate. In all cases, a Supplier agent cannot offer more than what is available to it and a Consumer agent cannot receive more than its needs.</ns0:p><ns0:p>To test the scalability of our approach, we varied the number of plants from 10 to 200 and we measured the required CPU time. Figure <ns0:ref type='figure' target='#fig_8'>5</ns0:ref> shows that the CPU time increases, with polynomial growth, with the number of plants. This is due to the number of messages that are exchanged in order to reach a compromise between all the agents; this number is a polynomial function of the number of plants. These experiments demonstrated the performance of the proposed agent-based system in terms of accuracy, satisfaction and energy loss.</ns0:p><ns0:p>In future work, we will integrate a regression model in our system for predicting all the daily unknown energy quantities to be produced/consumed. </ns0:p></ns0:div>
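<ns0:p>Referring back to eq. 11, the evolution of the selling price over the amortization period can be illustrated as follows (ours; all numeric values are invented for the example):</ns0:p>

```python
# Illustrative computation (ours) of the renewable energy selling price of
# eq. 11: beta_ij = sigma_i^n * beta0_ij + a, where beta0_ij is the initial
# installation cost share per KW, t_i the amortization period, sigma_i the
# discount rate and a a fixed fee.

def price_per_kw(beta0, sigma, t, a):
    # price in year n, for n = 0 .. t: the installation component decays
    # geometrically while the fixed fee a remains
    return [sigma ** n * beta0 + a for n in range(t + 1)]

print(price_per_kw(beta0=0.30, sigma=0.8, t=5, a=0.02))
# the price decreases towards the fixed fee as the investment is recovered
```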
<ns0:div><ns0:head>12/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:55023:1:1:NEW 21 Apr 2021)</ns0:p><ns0:p>Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed From all above obtained results, we believe that despite the variation of the random inputs (Prod(X i ),</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:note type='other'>Computer Science</ns0:note><ns0:note type='other'>Computer Science</ns0:note><ns0:p>Needs(X i ), Stock(X i ) and Reserve(X i )), the proposed multi-agent dynamic is accurate. In all cases, a Supplier agent cannot offered more than what it is available for it and a Consumer agent can not receive more than its needs.</ns0:p><ns0:p>To test the scalability of our approach, we varied the number of plants from 10 to 200 and we measured the required CPU time. The figure <ns0:ref type='figure' target='#fig_8'>5</ns0:ref> shows that the CPU time increases, with polynomial growth, with the number of plants. This is due to the number of messages that are exchanged in order to attend a compromise between all agents. This number is polynomial function of the number of plants. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science the performance of the proposed agent-based system in terms of accuracy, satisfactions and energy loss.</ns0:p><ns0:p>In future work we will, integrate a regression model in our system for predicting all daily unknown energy quantities to be produced/to be consumed. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Programming (MOLP) to obtain Turkey's energy mix in 2035. They used fuzzy techniques to obtain energy objectives. The authors X.Xu and al. (2020) presented a new two-stage game-theoretic framework of residential PV panels planning. First, they use a stockelberg game theory to model stochastic bi-level energy sharing problem. In this stage, they proposed a descend search algorithm-based solution methods to obtain the optimal installation capacity of residential PV panels. Then, they developed an Optimal Power Flow (OPF) model to optimally allocate residential PV panels with minimum expected active power loss. For the third researches group, M.Rafik and al.. (2019) proposed a new architecture Micro Smart Grid by Software-Defined Network (MSGSDN). This architecture adopts the Software-Defined Network (SDN) approach and uses the Internet of Things (IoT) sensors to allow an efficient and intelligent distribution of electrical energy obtained from all several sources on many buildings. The MSGSDN architecture deals with several constraints: the preference of energy source, the type of the devices of the building, the real time of equipment's consumption and its security. As for I.Aldaouab and nez. (2019), they tried to optimize the energy exchange between two prosumers. Their optimisation problem is defined by a Model Predictive Control (MPC) framework based on future behaviour prediction algorithms. The two used prosumers have the same structure: load, energy supply, battery storage and connections to other power sources. The power flows transfer is done only between two prosumers through the peer-to-peer transactions block. The authors A.Azizi and al.. (2019) proposed an autonomous and decentralized power sharing and energy management approach for PV and battery based DC microgrids without utilizing a supervisory and communication system. As for R.Carli. 
(2019), they presented a decentralized control strategy for the scheduling of electrical energy activities of a microgrids. The microgrid is composed of smart homes connected to a distributor and exchanging renewable energy produced by individually owned distributed energy sources. The authors assume that each smart home can both buy/sale energy from/to the grid taking into account time varying non linear pricing signals. Authors A.Prasad (2019) modeled the work environment as a multi-agent environment where each agent represents a building and they proposed a Deep Reinforcement Learning (DRL) solution to optimize energy sharing between different buildings. The intelligent agent learns the suitable behaviours to share energy in order to realize a nearly zero energy status. As regards K.Kusakana (2020), they proposed a peer-to-peer energy sharing model. The advantage of their model is to minimise the operating cost of the two prosumers by maximising the use of the power from the renewable energy sources and minimizing the use of the electrical supplying energy under the time of use rates. M.E.Haque and al (2020) proposed a distributed approach to solve the problem of optimizing energy management between different houses (house without solar photovoltaic or battery, with solar photovoltaic as well as batteries). The proposed approach allows different houses to make decisions without sharing any information with the central transactive energy management system. The proposed approach is efficient in terms of optimal use resources and efficient sharing of energy between different houses in a microgrids. As for<ns0:ref type='bibr' target='#b24'>Sh.Cui and al. (2020a)</ns0:ref>, they presented a peer-to-peer energy sharing framework for numerous community prosumers. Two strategy are proposed in this research, intercommunity energy-sharing and an intracommunity energy-sharing for day-ahead and relative energy-sharing schedule respectively. As regards<ns0:ref type='bibr' target='#b25'>Sh.Cui and al. (2020b)</ns0:ref>, authors suggested a new and fair peer-to-peer for a community of energy buildings. First, a Generalized Nash Equilibrium (GNE) of the game is displayed independently of the energy sharing payments. then a Cost Reduction Ratio Distribution (CRRD) model is used to fix energy sharing payments for the buildings. While A.Giordano and al. (</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Difference between Available and shared energy for Supplier Agents</ns0:figDesc><ns0:graphic coords='15,151.91,374.07,393.23,182.18' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Difference between Needed/Received quantity of energy for all Consumer Agents.</ns0:figDesc><ns0:graphic coords='16,155.06,63.78,386.93,181.65' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Number of Supplier Agents Vs. Number of Consumer Agents</ns0:figDesc><ns0:graphic coords='16,182.36,338.58,332.33,171.15' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Total quantities of shared energy vs Total quantities of received energy</ns0:figDesc><ns0:graphic coords='17,141.73,63.78,415.28,186.90' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>For 10 power plants we have 80 seconds and for 200 power plants, 620 seconds.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Number power plants/CPU time</ns0:figDesc><ns0:graphic coords='17,219.37,395.74,258.30,174.83' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,178.87,525.00,242.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,178.87,525.00,236.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,178.87,525.00,246.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,178.87,525.00,354.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Illustration Example</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Distance</ns0:cell><ns0:cell>X 1</ns0:cell><ns0:cell>X 2</ns0:cell><ns0:cell>X 3</ns0:cell><ns0:cell>X 4</ns0:cell><ns0:cell>X 5</ns0:cell></ns0:row><ns0:row><ns0:cell>X 1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>d 12</ns0:cell><ns0:cell>d 13</ns0:cell><ns0:cell>d 14</ns0:cell><ns0:cell>d 15</ns0:cell></ns0:row><ns0:row><ns0:cell>X 2</ns0:cell><ns0:cell>d 12</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>d 23</ns0:cell><ns0:cell>d 24</ns0:cell><ns0:cell>d 25</ns0:cell></ns0:row><ns0:row><ns0:cell>X 3</ns0:cell><ns0:cell>d 13</ns0:cell><ns0:cell>d 23</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>d 34</ns0:cell><ns0:cell>d 35</ns0:cell></ns0:row><ns0:row><ns0:cell>X 4</ns0:cell><ns0:cell>d 14</ns0:cell><ns0:cell>d 24</ns0:cell><ns0:cell>d 34</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>d 45</ns0:cell></ns0:row><ns0:row><ns0:cell>X 5</ns0:cell><ns0:cell>d 15</ns0:cell><ns0:cell>d 25</ns0:cell><ns0:cell>d 35</ns0:cell><ns0:cell>d 45</ns0:cell><ns0:cell>0</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Production Example</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>X 1</ns0:cell><ns0:cell>X 2</ns0:cell><ns0:cell>X 3</ns0:cell><ns0:cell>X 4</ns0:cell><ns0:cell>X 5</ns0:cell></ns0:row><ns0:row><ns0:cell>EstProd</ns0:cell><ns0:cell>6000</ns0:cell><ns0:cell>8000</ns0:cell><ns0:cell>8000</ns0:cell><ns0:cell>4000</ns0:cell><ns0:cell>11000</ns0:cell></ns0:row><ns0:row><ns0:cell>QteStock</ns0:cell><ns0:cell>1000</ns0:cell><ns0:cell>1000</ns0:cell><ns0:cell>2000</ns0:cell><ns0:cell>3000</ns0:cell><ns0:cell>500</ns0:cell></ns0:row><ns0:row><ns0:cell>EstNeed</ns0:cell><ns0:cell>7000</ns0:cell><ns0:cell>5000</ns0:cell><ns0:cell>7000</ns0:cell><ns0:cell>8000</ns0:cell><ns0:cell>6000</ns0:cell></ns0:row><ns0:row><ns0:cell>Reserve</ns0:cell><ns0:cell>2000</ns0:cell><ns0:cell>3000</ns0:cell><ns0:cell>3000</ns0:cell><ns0:cell>500</ns0:cell><ns0:cell>4000</ns0:cell></ns0:row><ns0:row><ns0:cell>Avail(X i )</ns0:cell><ns0:cell>Consumer (-2000)</ns0:cell><ns0:cell>Supplier (+1000)</ns0:cell><ns0:cell>Neutral (0)</ns0:cell><ns0:cell>Consumer (-1500)</ns0:cell><ns0:cell>Supplier (+1500)</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Negotiation phase results</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>X 1</ns0:cell><ns0:cell>X 2</ns0:cell><ns0:cell>X 3</ns0:cell><ns0:cell>X 4</ns0:cell><ns0:cell>X 5</ns0:cell></ns0:row><ns0:row><ns0:cell>Avail(X i )</ns0:cell><ns0:cell>-2000</ns0:cell><ns0:cell>+1000</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>-1500</ns0:cell><ns0:cell>+1500</ns0:cell></ns0:row><ns0:row><ns0:cell>Supplier/</ns0:cell><ns0:cell>X 5 /750</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>X 2 /500</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Quantity</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>X 5 /750</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>NEW</ns0:cell><ns0:cell>-1250</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>-250</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Avail(X i )</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "April 11, 2021
National School of Computer Sciences,
University of Manouba,
Tunis, Tunisia
Telephone:
Email: toumia.imen@gmail.com
Dear Editors,
Thank you for inviting us to submit a revised draft of our manuscript entitled ”A-RESS New Dynamic and
Smart System for Renewable Energy Sharing Problem”. We appreciate the time and effort you and each of
the reviewers have dedicated to providing insightful feedback on ways to strengthen our paper. In the remainder of this letter, we reply to all your comments point by point. Our answers are written in blue, and for each modification of the manuscript we note the corresponding line number. The changes to the revised manuscript are marked in red.
Reviewer 1
1 Authors should highlight the research gaps and the contributions of the proposed work by comparing
them with state-of-the-art methods and recent studies.
On page 2, line 50, we have cited the gaps in existing research and in line 55 we describe the main goals of
our contribution.
2 In the introduction section, the literature review must be strengthened. The reference list is incomplete
and somewhat outdated. A more comprehensive and timely literature survey is desired.
Starting from line 62 of page 2, we have modified the bibliography by adding recent research published
between 2019 and 2021.
3 How scalable is the proposed approach?
We have increased the number of power plants to test the scalability of our system. You will find the
obtained results starting from line 303, page 10.
4 The theoretical depth of this paper needs to be strengthened.
To strengthen the theoretical depth of our article, we have added, starting from line 246, page 6, the study
of the complexity of our system.
5 The proposed method might be sensitive to the values of its main controlling parameter. How did you
tune the parameters? Please elaborate on that.
In the part added to the line 303, page 10, you will find an analysis of the sensitivity of the results obtained
to the values of the inputs.
6 The computational cost of the proposed approach isn't discussed in this work. The approach should be
computationally efficient to be used in practical applications.
In the section ”System Complexity” added in line 246, page 6, we have proven the efficiency of our approach
in terms of computational cost.
7 The novelty and contribution of the presented work need further justification. Authors need to add more
results to thoroughly support the main findings.
We carried out more testing by increasing the number of power plants and analysing the reliability of the
obtained results with respect to the random inputs. You will find the interpretation of the obtained results
starting from line 304, page 10.
8 Authors have not presented the limitations of this work. How can this work be extended in the future?
Although the authors have provided various comparative results, more details on how the proposed approach
performs against a baseline are still missing in the paper.
At line 305, page 10, we have cited the limits of our work and we have presented our future contributions.
9 Please specify details of the computing platform and programming language used in this study.
We have specified the programming language and the platform used from line 282, page 8.
Point-by-point list of minor recommendations:
10 The nomenclature should be included to help the reader
to follow the paper conveniently.
At line 60 of page 2, we added a NOMENCLATURE section containing the definitions of all the acronyms
cited in our paper.
11 The italic type of symbols in equations and in the main text should use the same expression.
We have applied italic type to all such expressions in the main text. These formatting changes are marked in red throughout the text.
12 Please improve the quality of figures completely to improve the readability of this paper.
We have modified all the figures and we have added an interpretation for each figure.
Reviewer 2
Basic reporting
This article covers an important topic, however I have some considerable concerns:
1. The introduction of the article and research is poorly written, uses too many acronyms, does not define
concepts, uses hardly any references, and also does not describe the justification of the research, nor does it
clearly describe what the article is about. This needs to be updated.
At line 60, page 2, we added a NOMENCLATURE section containing the definitions of all the acronyms cited
in our paper. We have further described the objective of our proposal; you will find this description at line 46,
page 1. All references are cited in the bibliography section.
2. The methodology and description of the approach and its application is better, but there are still many
major questions. For example, how does the temporal dimension feature? All the demands and supplies
should be a function of time, yet this is not clarified anywhere. What is the time step? This is important,
especially for renewable energy given the limited predictability a long way into the future.
3. An Agent-Based Model should be described in some detail, such as by the ODD method, but this is not
described here. Therefore, the method is not replicable in my mind unless this is fully described.
4. What data is being used as inputs?
In response to the three comments 2, 3 and 4, we added a description following the ODD method at line 245,
page 6. In this description, we have specified the time dimension and the inputs.
Experimental design
The experimental design has not been clearly described, and I can’t find information about important aspects
such as sensitivity analysis
We analysed the sensitivity of the results according to the values of the input variables. We have added a
description of this analysis at line 303, page 10.
Validity of the findings
It is unclear exactly what the findings are, so I can't judge the validity.
In the part added at line 245, page 6, we have described our system; in particular, we have specified the outputs.
Comments for the Author
Given that on the surface your method looks very useful, I suggest more effort is given to describing it in
greater detail so that the validity and details can be evaluated adequately.
We have tried to improve the quality of our manuscript by responding to all of your comments.
Again, thank you for giving us the opportunity to strengthen our manuscript with your valuable comments
and queries. We hope that our submission is now suitable to be published in PeerJ.
Sincerely,
Imen Toumia
On behalf of all authors
" | Here is a paper. Please give your review comments after reading it. |
163 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Energy is at the basis of any social or economic development. Fossil energy is the most used energy source in the world due to the cheap building cost of the power plants. In 2017, fossil fuels generated 64.5% of the world's electricity. Since, on the one hand, these plants produce a large amount of carbon dioxide, which drives climate change, and, on the other hand, the stock of existing world fossil resources is continuously decreasing, safer and more widely available energy sources should be considered. Hence, for human well-being and for a green environment, these fossil plants should be switched to cleaner ones.</ns0:p><ns0:p>Renewable energy resources have begun to be used as an alternative. These resources have many advantages such as sustainability and environmental protection.</ns0:p><ns0:p>Nevertheless, they require higher investment costs. In addition, the reliability of many planted systems is poor: in most cases these systems are not sufficient to ensure a continuous supply of energy for all in needy regions, because most of their resources are climate dependent. The main contributions of this research are i) to propose a natural formalisation of the renewable energy distribution problem based on COP (Constraint Optimisation Problem) that takes into consideration all the constraints related to this problem; ii) to propose a novel multi-agent dynamic (A-RESS for Agent based Renewable Energy Sharing System) to solve this problem. The proposed system was implemented and the obtained results show its efficiency and performance in terms of produced, consumed and lost energy.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Fossil fuels are the most used resources, mainly to power homes and cars; in 2017, fossil fuels generated 64.5% of the world's electricity. For a country, it is convenient to use coal, oil or gas energy sources to meet its energy needs, but these fuel sources are often limited. In addition, the intensive use of these energies has dangerous consequences for the environment, mainly the phenomenon of climate change <ns0:ref type='bibr' target='#b11'>Dincer (1999)</ns0:ref> <ns0:ref type='bibr' target='#b13'>Goldemberg (2006)</ns0:ref>. The burning of fossil fuels sends greenhouse gases into the atmosphere, polluting air, soil and water while trapping the heat of the sun and contributing to global warming. Other traditional energies use biomass, which designates the organic waste that can become a source of energy after combustion (wood energy), anaerobic digestion (bio-gas) and other chemical transformations (bio-fuel). However, a biomass plant operates in a very similar way to a fossil power plant.</ns0:p><ns0:p>These pollutant fuel plants have serious consequences on the environment and on humans. Hence, even with an unlimited stock of fossil fuels, it is better to use renewable energy for the sake of humans and the environment.</ns0:p><ns0:p>Renewable energy is an inexhaustible source of energy because it is constantly renewed by natural processes. Renewable energy is derived from natural phenomena, mainly the sun (radiation), the moon (tide) and the earth (geothermal). There is also energy generated from water and wind. All these renewable sources are called new energies. Renewable energy technologies transform these natural sources into several forms of usable energy, most often electricity but also heat, chemical and mechanical energies. Renewable energy technologies are called 'green' or 'clean' because they pollute little or nothing of the entire environment. The use of renewable energy hybrid power plants allows any country</ns0:p></ns0:div>
<ns0:div><ns0:p>to develop its energy independence and security. Nevertheless, these resources are weather and location dependent, which makes their use for energy production intermittent and random. Hybrid power plant production fluctuates independently of demand, yielding an energy excess for some power plants while others cannot satisfy their minimal needs.</ns0:p><ns0:p>Since the production of renewable energy is expensive, due to the high cost of both the installation of renewable energy power plants and the storage devices, the optimal solution is to profit from the produced quantity of this energy by maximizing its use and consequently minimizing its loss.</ns0:p><ns0:p>Many attempts have been made in the past decade to enhance, generalise and mainly optimise the use of renewable energy using different technologies, including meta-heuristics <ns0:ref type='bibr' target='#b26'>(Piccolo and Siano (2009)</ns0:ref>, <ns0:ref type='bibr' target='#b33'>Senol et al. (2016)</ns0:ref>, <ns0:ref type='bibr' target='#b17'>Kumawat et al. (2017)</ns0:ref>, <ns0:ref type='bibr' target='#b12'>Etxeberria et al. (2010)</ns0:ref>, <ns0:ref type='bibr' target='#b24'>Niknam and Firouzi (2009)</ns0:ref>, <ns0:ref type='bibr' target='#b37'>Soroudi et al. (2011)</ns0:ref>) for an optimal DG energy production plan; chance-constrained programming <ns0:ref type='bibr' target='#b21'>(Li et al. (2018)</ns0:ref>) as a stochastic programming approach for improving DG performance; mixed linear programming <ns0:ref type='bibr' target='#b25'>(Omu et al. (2013)</ns0:ref>); Peer-to-Peer platforms and projects ( <ns0:ref type='bibr' target='#b19'>(Kusakana, 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b34'>Shichang et al. (2020a)</ns0:ref>, <ns0:ref type='bibr' target='#b35'>Shichang et al. (2020b)</ns0:ref>, <ns0:ref type='bibr' target='#b18'>Kusakana (2019)</ns0:ref>, <ns0:ref type='bibr' target='#b16'>Klein et al. (2020)</ns0:ref>); Stackelberg games <ns0:ref type='bibr' target='#b39'>(Xu et al. (2020)</ns0:ref>, <ns0:ref type='bibr' target='#b22'>Li et al. (2021)</ns0:ref>); Machine Learning <ns0:ref type='bibr' target='#b3'>(Amit and Ivana (2019)</ns0:ref>) for an optimal energy sharing system between a group of buildings; and fuzzy multi-objective linear programming (C.O. <ns0:ref type='bibr'>Incekara (2019)</ns0:ref>) for the optimization of the best energy mix.
Most of these efforts i) deal with sharing energy at local levels only, while our goal is to consider global levels, for different environments and under several constraints, ii) focus on how to find the best plan for locating and using distributed generators (DGs), and not on how to optimise the profit from the produced energy, iii) seek the best renewable energy source or combination of sources to use, and not the best way to coordinate between these sources, iv) are based on storing the energy excess for later use despite its high cost, v) do not consider the intrinsically dynamic aspect of these resources and vi) do not consider the advantages, mainly for the environment, of energy sharing among several plants/countries.</ns0:p><ns0:p>The goal of this work is to propose a novel dynamic, distributed and smart system (A-RESS, for Agent based Renewable Energy Sharing System) that i) maximises the use of the produced renewable energy (consequently minimizing its loss) by sharing the excess of energy between countries, and ii) minimises the production cost (production, storage and transportation) through investment recovery.</ns0:p><ns0:p>The main contributions of this research are i) to propose a natural formalisation of the renewable energy distribution problem based on COP (Constraint Optimisation Problem) that takes into consideration all the constraints related to this problem; and ii) to propose a novel multi-agent dynamic to solve this problem.</ns0:p><ns0:p>The remainder of this paper is organised as follows. Section 2 describes several research efforts that deal with enhancing the use of renewable energy. A detailed explanation of the energy sharing problem, followed by an illustration of our proposed COP formalisation for this problem, is given in Section 3. The multi-agent dynamic of the proposed solver is given in Section 4. Finally, a description of the different scenarios of the performed experimentation, followed by an explanation of the obtained results, is drawn in Section 5.</ns0:p></ns0:div>
<ns0:div><ns0:head>NOMENCLATURE</ns0:head></ns0:div>
<ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>Up to now, there have been several important pioneering research efforts toward enhancing the use of renewable energy as an alternative solution, mainly for electricity and heat demands. Existing research can be classified into three groups. The first group concerns efforts that consider the problem of locating DGs (Distributed Generators), whether these generators are dedicated to renewable or non-renewable energy. The second group presents works dealing with the planning of Renewable Energy Sources (RES). The third group concerns the optimization of renewable energy sharing at the local level, within a building or among buildings of the same region.</ns0:p><ns0:p>In the first group, <ns0:ref type='bibr' target='#b26'>Piccolo and Siano (2009)</ns0:ref> considered the issue of satisfying demand growth and distribution network security. They indicated the potential of DGs in offering utilities an alternative approach compared to centralised generation, as DGs provide many benefits for DNOs (Distribution Network Operators). The main goal of this work is to evaluate the impacts of network investment deferral, if granted to DNOs, on DG expansion. The proposed approach allows consideration of variable energy sources in addition to the deterministic ones. As for <ns0:ref type='bibr' target='#b33'>Senol et al. (2016)</ns0:ref>, they suggested the use of a control method based on Swarm Intelligence (SI). Their goal is to increase the robustness of the 3-phase DMC (Direct Matrix Converter) control system. A DMC is a type of AC-to-AC power converter whose relatively small circuit dimensions provide substantial advantages. The idea consists of generating optimal switching states by using a swarm optimization algorithm and applying them to the power switches of the DMC. As for <ns0:ref type='bibr' target='#b17'>Kumawat et al. (2017)</ns0:ref>, the authors also proposed the use of the swarm intelligence meta-heuristic to find the optimal energy production plan for DGs that minimises energy loss. The particle-swarm-optimization meta-heuristic has been used to determine the optimal size</ns0:p></ns0:div>
<ns0:div><ns0:p>and allocation of the DGs to fulfill consumer demands and reduce power losses. The authors in Li et al. (2018) proposed a two-stage method for determining the optimal locations and sizes of DGs in distribution networks with the integration of energy storage. This method uses Chance-Constrained Programming to determine the maximum outputs of energy storage devices. They showed that the integration of energy storage is an effective way for DGs to achieve their pre-designed rated capacity at the planning stage and consequently improve their power output performance.</ns0:p><ns0:p>As for the second group of research, which considers the problem of selecting the best energy source or combination of sources according to their economic and environmental impacts, <ns0:ref type='bibr' target='#b24'>Niknam and Firouzi (2009)</ns0:ref> proposed a PSO based algorithm for annual load forecasting in an electrical power system, with the aim of minimizing the error associated with the estimated model parameters. <ns0:ref type='bibr' target='#b37'>Soroudi et al. (2011)</ns0:ref> proposed an Immune Genetic based Algorithm (I-GA) to present a long-term dynamic multi-objective planning model for distribution network expansion along with distributed energy options. This algorithm optimizes costs and emissions by determining the optimal scheme of sizing, placement and dynamics of investments on DGs and network reinforcements over the planning period. As for <ns0:ref type='bibr' target='#b20'>Lasseter (2011)</ns0:ref>, they tried to improve reliability (through penetration of renewable sources), dynamic islanding, and generation efficiency through the use of waste heat. Their objective is to improve the management of the different levels of Distributed Energy Resources (DERs), with the underlying resources and control points, by breaking the whole system into micro-grids. Hence, the clustered sources and controlled loads can operate in parallel to the grid; this grid resource can be disconnected from the utility during events, but may also be intentionally islanded.</ns0:p><ns0:p>Concerning the third group, <ns0:ref type='bibr' target='#b19'>(Kusakana, 2020)</ns0:ref> studied energy sharing between houses equipped with different combinations of sources (solar photovoltaic or battery, or solar photovoltaic together with batteries). The proposed approach allows different houses to make decisions without sharing any information with the central transactive energy management system, and it is efficient in terms of the optimal use of resources and the efficient sharing of energy between different houses in microgrids. As for <ns0:ref type='bibr' target='#b34'>Shichang et al. (2020a)</ns0:ref>, they presented a peer-to-peer energy sharing framework for numerous community prosumers. Two strategies are proposed in this research, intercommunity energy-sharing and intracommunity energy-sharing, for the day-ahead and relative energy-sharing schedules respectively. As regards <ns0:ref type='bibr' target='#b35'>Shichang et al. (2020b)</ns0:ref>, the authors suggested a new and fair peer-to-peer scheme for a community of energy buildings. First, a Generalized Nash Equilibrium (GNE) of the game is derived independently of energy sharing payments; then a Cost Reduction Ratio Distribution (CRRD) model is used to fix the energy sharing payments for the buildings. The authors of <ns0:ref type='bibr' target='#b16'>Klein et al. (2020)</ns0:ref> proposed an end-user engagement framework tailored to fit Peer-to-Peer (P2P) energy sharing. The objective of the proposed P2P energy sharing model is to optimize the energy consumption of each network participant according to the available distributed energy; the network in this study is made up of a static number of pilots (set to three) and the communication established between them is static. <ns0:ref type='bibr' target='#b4'>Andrea et al. (2021)</ns0:ref> presented a two-stage approach that allows sharing renewable energy within a district by using a set of prosumers and aggregators that supervise the energy exchanges between the prosumers and with the grid; the two-stage approach optimizes the revenues deriving from the prevision and the sale of energy. As for <ns0:ref type='bibr' target='#b18'>Kusakana (2019)</ns0:ref>, they developed an optimal energy management model between commercial prosumers and residential consumers. This model minimizes the cost of the energy consumed; it has been evaluated in a microgrid using peer-to-peer energy sharing schemes operating under a Time of Use tariff.</ns0:p><ns0:p>All these efforts seek the optimization of the planning and use of energy (including renewable energy) using different techniques. Nevertheless, the main goal of most of them is to find the best location (using meta-heuristics) of their DGs according to demands and load clusters, that is, to decide whether or not to establish a renewable energy system in a given place and which renewable energy source or combination of sources is the best choice. Planning of the energy system means selecting the best alternative among the different renewable energy systems <ns0:ref type='bibr' target='#b6'>Baños et al. (2011)</ns0:ref>.
Other research efforts were devoted to finding solutions for storing the extra clean energy obtained, for later use, despite the cost and the underlying energy loss. Research that treats energy sharing problems has studied the case of energy sharing in a limited and static space, between buildings belonging to the same region or within the same building. Note that, compared to our A-RESS system, none of these efforts dealt with establishing a dynamic and naturally distributed energy sharing system that i) considers all constraints related to this problem, mainly physical distances, ii) aims to maximise the local and global satisfaction of several plants/countries subject to production fluctuation and weather conditions that may lead a supplier to become a consumer, and iii) minimises local and global energy losses. Recall that our main objective is to optimise the use of clean energy between several countries, mainly those lacking renewable resources, for a green environment.</ns0:p></ns0:div>
<ns0:div><ns0:head>NEW COP FORMALIZATION FOR RENEWABLE ENERGY DISTRIBUTION PROBLEM</ns0:head></ns0:div><ns0:div><ns0:head>Description of the problem</ns0:head><ns0:p>Depending on the geographical location of the regions, renewable energy resources may change, e.g. some countries are sunny all over the year, while for others the sun comes out for only a few days. Hence, a country with a continuous and important opportunity for renewable energy production is known as a producing country, while a country which cannot meet its own needs is known as a consuming country.</ns0:p><ns0:p>Since the production and storage of renewable energy are expensive, and in order to optimise these costs while ensuring a clean environment, a system with local and global regulation of energy sharing can optimise the life cost of the renewable energy plants and reduce their losses for a green environment.</ns0:p><ns0:p>Our aim is to build an undirected and dynamic graph connecting the N hybrid power plants (of the different involved countries). The graph is dynamic, since a power plant can be added/deleted at any time, and it is undirected because any power plant can switch from being an energy supplier to an energy consumer and vice versa. The graph is incomplete because it is useless to connect power plants that are at large distances. Each hybrid power plant X i , i ∈ {1, ..., N}, in a region R i maintains important information such as:</ns0:p><ns0:p>• The estimated quantity of renewable energy that can be produced Prod(X i ) per day,</ns0:p><ns0:p>• The remaining quantity of renewable energy in batteries Stock(X i ) per day,</ns0:p><ns0:p>• The estimated quantity of needed energy Needs(X i ) for the region under several conditions, i.e. current season, temperature, etc.</ns0:p><ns0:p>• The quantity of energy to keep in reserve Reserve(X i ) for any risk.</ns0:p><ns0:p>For predicting these data, i.e. Prod(X i ) and Needs(X i ), and for a high performance and accuracy of the proposed A-RESS system, several existing models can be adopted for estimating energy production and consumption. Most of them are based on Artificial Intelligence and Machine Learning techniques, amongst them Artificial Neural Network (ANN) based models that have provided good results for real-time prediction of energy production (Bermejo et al. (2019), M.Shapi et al. (2021)); Multiple Linear Regression (Solyali (2020)); Support Vector Machine (Kaytez (2020), Walker et al. (2020)); Random Forest (Walker et al. (2020)); K-Nearest Neighbour (M.Shapi et al. (2021)); Multilayer Perceptron (Chammas et al. (2019)); etc. The prediction of the required quantities of energy is based mainly on daily meteorological data and production/consumption histories over a given period. The latter is divided into two types of features: historical features, i.e. previous energy consumption, and weather features, i.e. temperature, humidity, solar radiation, wind speed, wind direction, pressure, rainfall amount, degree of cloudiness, type of day (weekday/weekend/holiday), type of hour (daytime/night-time), season, etc. Most of the studies cited above showed the effectiveness of Artificial Neural Network (ANN) and Support Vector Machine (SVM) based models. The integration of an ANN or an SVM model in our system for the required daily data will be considered in future work.</ns0:p><ns0:p>Once the daily quantity of energy to be provided by each supplier X i to its consumer neighbors X j is determined (using eq.3), the transfer of renewable energy can take place. It can be done at a local level, i.e. between plants of the same country; in this case the energy is offered. The energy can also be sold when the transfer involves plants of different countries. In this work, we focus on the international scale.</ns0:p><ns0:p>To simplify the proposed system, we assume that each country contains one single power plant X i . A transfer cable is provided between two power plants X i and X j only if the distance between these two plants is not large; X i and X j are then considered as neighbors, i.e. X j ∈ Ng(X i ). Our goal is to optimise the shared quantity of renewable energy while minimizing the cost of transportation and maintaining a high level of transportation reliability. The cost of the transmission line depends on the quantity of energy to be transported and the distance between the two power plants to interconnect. Note that each transfer process will obviously lead to some loss of energy.</ns0:p></ns0:div>
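<ns0:div><ns0:p>To make the daily energy balance concrete, the following minimal Java sketch (Java being the language of our prototype) computes Avail(X i ) as in eq.3 and derives the role of the plant. The class and its names are illustrative only and are not the actual implementation.</ns0:p>
// Minimal sketch of a plant's daily energy balance (eq. 3); names are illustrative.
public class PowerPlant {
    final String name;
    final double prod, stock, needs, reserve; // Prod, Stock, Needs, Reserve, in KW

    PowerPlant(String name, double prod, double stock, double needs, double reserve) {
        this.name = name; this.prod = prod; this.stock = stock;
        this.needs = needs; this.reserve = reserve;
    }

    // Avail(Xi) = Prod(Xi) + Stock(Xi) - Needs(Xi) - Reserve(Xi)
    double avail() { return prod + stock - needs - reserve; }

    // Positive balance -> Supplier, negative -> Consumer, zero -> Neutral
    String role() {
        double a = avail();
        return a > 0 ? "Supplier" : (a < 0 ? "Consumer" : "Neutral");
    }

    public static void main(String[] args) {
        // Example values: 11000 + 500 - 6000 - 4000 = +1500 KW, i.e. a Supplier
        PowerPlant x = new PowerPlant("X5", 11000, 500, 6000, 4000);
        System.out.println(x.name + ": " + x.avail() + " KW, " + x.role());
    }
}
</ns0:div>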
<ns0:div><ns0:p>The decision of any power plant whether to acquire some quantity of renewable energy or to produce the same needed quantity of non-renewable energy is based on the basic price of the plant's installation, the amortization period, the transfer cable installation, etc. In addition, any X i can deliver or store energy if and only if both its needs and its risk reserve are satisfied. To reduce the loss of energy, the quantity received by X i must not exceed its needs and risk reserve.</ns0:p></ns0:div>
<ns0:div><ns0:head>New Formalization for Renewable Energy Sharing Problem</ns0:head><ns0:p>A renewable energy distribution problem can be formalised as a Constraint Optimisation Problem (COP). A COP is a tuple (X, D, C, F) defined as follows:</ns0:p><ns0:p>• X = {X 1 , X 2 , ..., X n } with X i = (X i .io 1 , X i .io 2 , ..., X i .io k ), where X i represents a power plant and X i .io j , j ∈ {1, ..., |Ng(X i )|}, is the quantity of energy to provide/get to/from X j , j ≠ i. Note that X j ∈ Ng(X i ) only if these two plants are linked together, i.e. Ng(X i ) is the set of all neighbors of X i ,</ns0:p><ns0:p>• D = {D 1 , D 2 , ..., D n } where D i = (val 1 , val 2 , ..., val k ) with val j ∈ N are the possible quantities of renewable energy to provide/get to/from each X j ∈ Ng(X i ). If X i is a power plant provider then ∀ X j ∈ Ng(X i ), X i .io j ≥ 0.</ns0:p><ns0:p>• C = {C Prof , C Share , C Rec } where</ns0:p><ns0:p>-Profitability Constraint C Prof : the cost of requesting some quantities of renewable energy from neighbors should be lower than that of producing the same quantity of non-renewable energy (see eq.1). These costs involve the cost of the transfer, the cost of the cable installation, the cost of air pollution, etc.</ns0:p><ns0:formula xml:id='formula_0'>∑ j∈Ng(X i ) (β i j * X i .io j ) − γ i * ∑ j∈Ng(X i ) (1 − α i j ) * X i .io j < 0<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where α i j is the proportion of lost energy, determined according to the link type between X i and X j , its length, the weather conditions, etc., γ i is the cost of producing 1KW of non-renewable energy by the plant X i , and β i j is the cost of exchanging 1KW of renewable energy between the two plants X i and X j .</ns0:p><ns0:p>-Shared Quantity Constraint C Share : the total shared quantity per plant should not exceed the available quantity of energy to be given, as in eq.2.</ns0:p><ns0:formula xml:id='formula_1'>∑ X j ∈Ng(X i ) X i .io j ≤ Avail(X i )<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where</ns0:p><ns0:formula xml:id='formula_2'>Avail(X i ) = Prod(X i ) + Stock(X i ) − Needs(X i ) − Reserve(X i )<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>-Received Quantity Constraint C Rec : the total received quantity of renewable energy per plant should not exceed the needed energy, as in eq.4.</ns0:p><ns0:formula>∑ X i ∈Ng(X j ) (1 − α i j ) X j .io i + Avail(X j ) ≃ 0<ns0:label>(4)</ns0:label></ns0:formula></ns0:div>
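<ns0:div><ns0:p>The three constraints can be checked directly from the exchanged quantities. The following Java sketch shows one possible encoding of eq.1, eq.2 and eq.4; the array layout and all names are assumptions made only for illustration.</ns0:p>
// Hedged sketch of the three COP constraints; layout and names are illustrative.
// io[i][j]   : quantity Xi.io_j exchanged with neighbor j (KW)
// alpha[i][j]: proportion of energy lost on the link i-j
// beta[i][j] : cost of exchanging 1 KW between Xi and Xj
// gamma[i]   : cost of producing 1 KW of non-renewable energy at Xi
public class Constraints {
    static boolean cProf(int i, double[][] io, double[][] alpha, double[][] beta, double[] gamma) {
        double buying = 0, producing = 0;
        for (int j = 0; j < io[i].length; j++) {
            buying += beta[i][j] * io[i][j];
            producing += gamma[i] * (1 - alpha[i][j]) * io[i][j];
        }
        return buying - producing < 0;             // eq. 1: buying must be cheaper
    }
    static boolean cShare(int i, double[][] io, double availI) {
        double shared = 0;
        for (double q : io[i]) shared += q;
        return shared <= availI;                   // eq. 2: never share more than Avail(Xi)
    }
    static boolean cRec(int j, double[][] io, double[][] alpha, double availJ, double eps) {
        double received = 0;
        for (int i = 0; i < io.length; i++) received += (1 - alpha[i][j]) * io[i][j];
        return Math.abs(received + availJ) <= eps; // eq. 4: received roughly offsets -Avail(Xj)
    }
}
</ns0:div>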
<ns0:div><ns0:p>Solving this COP consists in finding the optimal (according to the objective function F) instantiation of all variables X with values from their domains D that satisfies all the constraints C. An optimal solution X * = {X * 1 , X * 2 , ..., X * n } for this problem is a solution that maximizes the satisfaction of all neighbors' requests, i.e. that optimises F(X * ). Each power plant tries to compensate the cost spent on its installation by serving the maximum number of needy regions with the quantities it can afford. In addition, this system tries to maximize the benefit from distributing energy. The cost of purchasing/selling energy differs from one plant to another according to the price of the link installation, the distance between the regions, etc.</ns0:p><ns0:p>Each plant X i will try to find values X * i = (X * i .io 1 , X * i .io 2 , ..., X * i .io k ), with k = |Ng(X i )|, that optimize F(X), as shown in eq.5.</ns0:p><ns0:formula xml:id='formula_3'>F(X * ) = min X F(X)<ns0:label>(5)</ns0:label></ns0:formula><ns0:formula>F(X) = g(X) − h(X)<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>where g(X) is the function that measures the degree of non-satisfaction of all the plants X i ∈ X (as given in eq.7). An energy plant supplier X i is satisfied only if the maximum of its extra produced energy is shared with neighbors, while an energy plant consumer X j is satisfied only if it gets the maximum of the energy it needs. As for h(X), it computes the global cost of all the energy exchanged between all the plants (as given in eq.8). If h(X) > 0 then most plants are suppliers; otherwise most plants are consumers.</ns0:p><ns0:formula xml:id='formula_4'>g(X) = ∑ X i ∈X ( ∑ X j ∈Ng(X i ) X i .io j − Avail(X i ) )^2<ns0:label>(7)</ns0:label></ns0:formula><ns0:formula>h(X) = ∑ X i ∈X ∑ X j ∈Ng(X i ) β i j * X i .io j<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>Recall that β i j is the cost of exchanging 1KW of energy between the two plants X i and X j . This cost is not the same for all plants; it is subject to the type and length of the used cable, the installation cost of the plant, etc. Both quantities, h(X) and g(X), should be normalized.</ns0:p></ns0:div>
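<ns0:div><ns0:p>A direct encoding of the objective follows. The Java sketch below evaluates g(X), h(X) and F(X) = g(X) − h(X) over a matrix of exchanged quantities; the data layout and names are illustrative assumptions, and normalization of g and h is omitted for brevity.</ns0:p>
// Hedged sketch of the objective of eqs. 5-8; io[i][j] holds Xi.io_j, avail[i] holds
// Avail(Xi) and beta[i][j] the unit exchange cost. Names are illustrative.
public class Objective {
    static double g(double[][] io, double[] avail) {      // eq. 7: squared non-satisfaction
        double s = 0;
        for (int i = 0; i < io.length; i++) {
            double shared = 0;
            for (double q : io[i]) shared += q;
            s += (shared - avail[i]) * (shared - avail[i]);
        }
        return s;
    }
    static double h(double[][] io, double[][] beta) {     // eq. 8: global exchange cost
        double s = 0;
        for (int i = 0; i < io.length; i++)
            for (int j = 0; j < io[i].length; j++) s += beta[i][j] * io[i][j];
        return s;
    }
    static double f(double[][] io, double[] avail, double[][] beta) { // eq. 6, minimised per eq. 5
        return g(io, avail) - h(io, beta);
    }
}
</ns0:div>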
<ns0:div><ns0:head>A-RESS GLOBAL DYNAMIC</ns0:head><ns0:p>Since our problem is naturally distributed, where each plant is responsible for taking its own decisions based on its knowledge, we propose to use a multi-agent system to solve it. Agents communicate to agree on the quantities to exchange that maximise the satisfaction of them all, i.e. that optimise F(X * ).</ns0:p></ns0:div>
<ns0:div><ns0:head>System Architecture</ns0:head><ns0:p>In the proposed multi-agent system, each agent is assigned to a power plant. Three types of agents are used: Supplier, Consumer and Neutral agents. Supplier agents are those who have an excess of renewable energy and can subsequently provide some of it to neighbors in need; these latter agents are the Consumers. A Neutral agent is an agent that is able to satisfy its needs and does not have any extra energy.</ns0:p><ns0:p>Each agent X i maintains five pieces of static knowledge: i) the estimated quantity of renewable energy that its power plant can produce per day, ii) the available quantity of renewable energy in its power plant batteries per day, iii) the estimated quantity of needed energy for its power plant, iv) the quantity of energy to keep in reserve for any risk and v) the number of its neighbors |Ng(X i )|. Agents also have dynamic knowledge, which differs according to the type of agent: a Supplier might be remapped into a Consumer if it faces a lack of energy on any given day, and vice versa. Agents are remapped according to their daily needs.</ns0:p><ns0:p>All these types of agents need to cooperate in order to maximize their own satisfaction.</ns0:p></ns0:div>
<ns0:div><ns0:head>Global Dynamic System</ns0:head><ns0:p>The proposed multi-agent global dynamic is divided into two phases:</ns0:p><ns0:p>• An initialization phase</ns0:p><ns0:p>• A negotiation phase</ns0:p><ns0:p>During the initialization phase, each agent uses its trained ANN model to estimate the production and the needs of the day. As mentioned before, this process will be discussed in another research work. According to the obtained estimations, each agent decides the quantity of available renewable energy to share with neighbors: if Avail(X i ) is a positive value then X i will be a Supplier; otherwise it will be a Consumer. Agents can then start the second phase.</ns0:p><ns0:p>During the negotiation phase, each Consumer agent first checks whether it is profitable to get the needed quantity from neighbors or to use non-renewable energy, according to eq.1. If it is, it sends a request to all its neighboring Suppliers with the needed quantity, i.e. the agent divides the needed quantity |Avail(X i )| by the number of Suppliers. Each Supplier X j that receives demands from related Consumers first checks whether it is possible to grant them all, i.e. whether Avail(X j ) is equal to or greater than the sum of all requested quantities. If so, each Consumer will receive what it needs and the extra quantity will be kept for further requests. Otherwise, the agent X j determines the proportion p i j of each request according to the total needs (the sum of all quantities asked by its requesters) and replies to all of them with the possible quantities (see eq.9).</ns0:p><ns0:formula xml:id='formula_5'>X i .io j = p i j * Avail(X i )<ns0:label>(9)</ns0:label></ns0:formula><ns0:formula xml:id='formula_6'>p i j = X i .io j / ∑ X j ∈Ng(X i ) X i .io j<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>Each Consumer agent X i then checks the total amount of energy that it is due to get from all Suppliers. If the whole quantity is equal to its needs (satisfying the constraint of eq.4) and optimises its objective function F(X i ), it replies with an acceptance. Otherwise, if it is less than its needs (due to some energy loss or to the non-availability of enough energy on the Supplier side), it asks the Suppliers that are willing to provide more for the missing quantity. The negotiation proceeds until all available quantities are distributed and all Consumers get the maximum of what they can get. Once the negotiation process is over, the power plants may start exchanging the agreed quantities.</ns0:p></ns0:div>
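<ns0:div><ns0:p>The heart of the negotiation is the Supplier's reply rule of eq.9 and eq.10. The following hedged Java sketch shows this proportional split in isolation; class and method names are illustrative, and the message-passing layer (handled by JADE in our prototype) is omitted.</ns0:p>
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a Supplier's reply (eqs. 9-10): grant all requests when they fit within
// Avail(Xi); otherwise split Avail(Xi) proportionally to the requested quantities.
public class SupplierReply {
    static Map<String, Double> reply(double avail, Map<String, Double> requests) {
        double total = 0;
        for (double q : requests.values()) total += q;
        Map<String, Double> offers = new LinkedHashMap<>();
        for (Map.Entry<String, Double> r : requests.entrySet()) {
            double pij = r.getValue() / total;                   // eq. 10
            offers.put(r.getKey(),
                       total <= avail ? r.getValue()             // everything fits
                                      : pij * avail);            // eq. 9: proportional share
        }
        return offers;
    }

    public static void main(String[] args) {
        Map<String, Double> requests = new LinkedHashMap<>();
        requests.put("C1", 800.0);
        requests.put("C2", 600.0);
        System.out.println(reply(1000.0, requests)); // {C1=571.4..., C2=428.5...}
    }
}
</ns0:div>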
<ns0:div><ns0:head>System Architecture Description</ns0:head><ns0:p>In this section, we describe our multi-agent system using the ODD (Overview, Design, and Details) protocol proposed by <ns0:ref type='bibr' target='#b14'>Grimm et al. (2006)</ns0:ref>.</ns0:p><ns0:p>Overview 1. Purpose: The objective of our multi-agent model is to specify the quantities of energy to be released from agents with an energy excess to agents with a lack of energy, in order to satisfy their needs.</ns0:p><ns0:p>2. Entities, state variables and scales: Our model is made up of three types of agents: 1) Supplier Agent: represents power plants that produce a quantity of energy that exceeds their needs, 2) Consumer Agent: represents power plants that produce a quantity of energy that does not satisfy their needs and 3) Neutral Agent: represents power plants that produce a quantity of energy that only meets their own needs.</ns0:p><ns0:p>Each agent knows the quantity of energy that it has in its storage devices (Stock(X i )), the quantity of energy to keep in reserve for all risks (Reserve(X i )) and the number of its neighbours (|Ng(X i )|).</ns0:p><ns0:p>The agents are located in an environment made up of a set of regions, where each region has meteorological characteristics (temperature, humidity, amount of rain, cloudiness, season, etc.) and a machine learning model to predict its daily quantity of energy to be produced (Prod(X i )) and to be consumed (Needs(X i )).</ns0:p></ns0:div>
<ns0:div><ns0:p>3. Process and scheduling: Every 24 hours, the following processes are executed in this given order:</ns0:p><ns0:p>(a) Each Consumer agent sends the quantity it needs to each of its Supplier neighbours.</ns0:p><ns0:p>(b) Supplier agents receive all requests from neighbouring Consumer agents.</ns0:p><ns0:p>(c) Each Supplier agent compares the total quantity requested with the quantity available. If the total quantity can be provided, it responds to each Consumer with the requested quantity. Otherwise, it determines the quantity to be given: the quantity X i .io j (eq.9) offered by the Supplier agent X i to a Consumer agent X j is determined according to the availability Avail(X i ) (eq.3) and the proportion p i j given by eq.10. This proportion is assigned to each consumer depending on its environment, i.e. a strong consumer with many suppliers or a weak consumer with few suppliers.</ns0:p><ns0:p>(d) Each Consumer agent receives the responses from all the neighbouring Supplier agents and checks the total of the quantities received against its need. If the total is less than or equal to its needs, it accepts the proposed quantities; otherwise it adjusts the quantities according to its needs and sends a response with only the needed quantities.</ns0:p></ns0:div>
<ns0:div><ns0:head>Design concepts</ns0:head><ns0:p>1. Emergence: Each agent has a local goal, which is to provide all its energy excess. This goal is represented by the function F(X * ) given by eq.5, which aims to minimize the two functions g(X) and h(X) (see eq.7 and eq.8).</ns0:p><ns0:p>2. Sensing: For a Supplier agent, in all cases the total quantity of shared energy does not exceed its available quantity of renewable energy. The same holds for a Consumer agent, i.e. the total quantity of received energy does not exceed its needs under any circumstance.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.'>Interaction:</ns0:head><ns0:p>The only interactions are between each Consumer and its neighboring Suppliers. A Neutral agent does not interact with other agents because it is already satisfied, i.e. it has neither a lack nor an excess of energy. Supplier agents interact with Consumer agents to negotiate the quantities of energy to be shared; these interactions continue until no more energy is available to be shared.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.'>Stochasticity:</ns0:head><ns0:p>In the initialization phase, there are two types of data: known data and data to be estimated. The quantity of energy stored in the battery (Stock(X i )) and the energy reserve (Reserve(X i )) of each power plant are both known quantities. However, the quantity of energy to be produced (Prod(X i )) and the quantity of energy to be consumed (Needs(X i )) are predicted using a trained regression model.</ns0:p></ns0:div>
<ns0:div><ns0:head>Details</ns0:head><ns0:p>1. Initialisation: The first action to be performed is to estimate the quantity of energy to be produced (Prod(X i )) and the quantity of energy to be consumed (Needs(X i )) for each agent. Then, each agent calculates its quantity of available energy (Avail(X i )): if the quantity is positive, it is a Supplier agent; otherwise it is a Consumer agent. The agents exchange their states, then each Consumer agent calculates the number of its Supplier neighbors. Based on the number of neighbors and the total number of agents, Consumer agents are classified into two types: Strong Consumers, who have more neighbors, and Weak Consumers, who have fewer (see the sketch below).</ns0:p></ns0:div>
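<ns0:div><ns0:p>A minimal Java sketch of this classification follows. Since the exact threshold is not fixed above, the sketch assumes half of the total number of agents (the upper bound on the neighbourhood size used later in the complexity analysis); this threshold is an assumption, not part of the paper.</ns0:p>
// Illustrative Strong/Weak Consumer classification; the N/2 threshold is assumed.
public class ConsumerType {
    static String classify(int supplierNeighbours, int totalAgents) {
        return supplierNeighbours >= totalAgents / 2.0 ? "Strong Consumer" : "Weak Consumer";
    }
    public static void main(String[] args) {
        System.out.println(classify(4, 10)); // 4 supplier neighbours out of 10 agents -> Weak Consumer
    }
}
</ns0:div>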
<ns0:div><ns0:head n='2.'>Input Data:</ns0:head><ns0:p>The model takes as input a graph containing the list of power plants and the links between them. Two linked power plants are neighbours and can interact.</ns0:p></ns0:div>
<ns0:div><ns0:head>System Complexity</ns0:head><ns0:p>In order to evaluate the computational efficiency of the proposed system and decide whether it can be used in practical applications, we computed the time complexity of the underlying multi-agent system. Since a multi-agent system is defined as a set of interacting agents in an environment, we need to determine the complexity at two levels: the agent level (as an atomic unit) and the system level (including the interactions). In the following, we assume that N is the total number of agents and that each agent is connected to at most N/2 other agents as neighbors.</ns0:p></ns0:div>
<ns0:div><ns0:p>• At the agent level: According to its behavior, each agent has to predict the quantities of renewable energy to be produced and to be consumed in order to estimate the quantity available. The complexity of this process depends on the trained regression model that will be used; note that most of the existing models are polynomial. Then the agent has to process m(N/2) values to/from neighbors, with m the total number of iterations needed to reach the consumers' satisfaction. In the worst case, only one Consumer is satisfied at each communication round, so the total number of iterations for a Supplier will be N/2. Hence, each agent will process O(N^2) values.</ns0:p><ns0:p>• At the system level: According to the designed interactions, a first communication is required between agents to exchange their states (Consumer or Supplier). This first step requires at most O(N^2) messages. Then, Consumer agents negotiate with their Suppliers to get what they can. These communications, sending/receiving requests/answers between agents, continue until all consumers are satisfied (no more quantities available to be shared). Assuming that at each communication round one neighbor is satisfied, the total number of communications will be equal to the number of neighbors for each agent, i.e. O(N^2) per agent; overall, all agents will process O(N^3) messages.</ns0:p></ns0:div>
<ns0:div><ns0:head>Illustration Example</ns0:head><ns0:p>To illustrate the global dynamic of our proposed system, let us consider the following example. Assume we have 5 regions {X 1 , X 2 , ..., X 5 } described in Table 1. To represent the distance graph, we assume that two regions X i and X j are linked if the distance d i j is less than the maximum distance D. In our example, we assume that d 13 , d 15 , d 23 , d 24 and d 45 are less than D, so that exactly these pairs are linked, as shown in the following graph:</ns0:p><ns0:formula xml:id='formula_7'>X 1 X 2 X 3 X 4 X 5 (neighborhood graph with edges X 1 -X 3 , X 1 -X 5 , X 2 -X 3 , X 2 -X 4 and X 4 -X 5 )</ns0:formula><ns0:p>Table 2 gives an example of the estimated energy per region.</ns0:p><ns0:p>During the initialization phase, agents first estimate their available energy quantity and decide on their current status, i.e. whether they are a Consumer, a Supplier or a Neutral agent. Then they exchange their status, as given in the following graph:</ns0:p><ns0:formula xml:id='formula_8'>X 1 Consumer (-2000), X 2 Supplier (+1000), X 3 Neutral, X 4 Consumer (-1500), X 5 Supplier (+1500)</ns0:formula><ns0:p>During the negotiation phase, the Consumer agents send a message to all neighboring Suppliers to ask for the needed quantities: X 1 asks for 2000KW from X 5 and X 4 asks for 750KW from each of X 2 and X 5 . Then X 5 computes p 51 = 0.72 and p 54 = 0.28, and informs X 1 and X 4 about the quantities they can receive, i.e. 1080KW for X 1 and 420KW for X 4 . As for X 2 , it informs X 4 that it can get only 500KW. The Consumer agents accept the offers given by the Suppliers, since no additional energy is affordable, as given in Table 3.</ns0:p></ns0:div>
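<ns0:div><ns0:p>The split computed by X 5 can be re-derived with a few lines of Java. Note that the exact proportions from eq.10 are 2000/2750 ≈ 0.727 and 750/2750 ≈ 0.273, which the text above rounds to 0.72 and 0.28 before applying eq.9.</ns0:p>
// Re-deriving X5's proportional split: X1 asks 2000 KW and X4 asks 750 KW,
// while Avail(X5) = 1500 KW, so the requests are served proportionally (eqs. 9-10).
public class SplitCheck {
    public static void main(String[] args) {
        double avail = 1500, reqX1 = 2000, reqX4 = 750;   // KW, from the example
        double total = reqX1 + reqX4;                     // 2750 KW requested in total
        double p51 = reqX1 / total, p54 = reqX4 / total;  // ~0.727 and ~0.273
        // Rounding to 0.72 and 0.28, as in the text, gives 1080 KW and 420 KW.
        System.out.printf("X1: %.1f KW, X4: %.1f KW%n", p51 * avail, p54 * avail);
    }
}
</ns0:div>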
<ns0:div><ns0:head>EXPERIMENTAL EVALUATION</ns0:head><ns0:p>For an effective performance evaluation of our A-RESS system, we need to show that: i) all produced quantities of renewable energy are shared among the involved power plants, and ii) no power plant that lacks energy gets more than its demand. This system can be adopted by one large country, e.g. Saudi Arabia, or by several neighboring countries. The aim is to guarantee both the maximum use of the produced renewable energy, for less environmental damage, and the minimum loss of energy. We implemented a prototype of our system in the JADE environment, using the Java programming language, on an Intel(R) Pentium(R) Dual CPU T3200 processor with a Microsoft Windows 7 operating system.</ns0:p><ns0:p>Several experiments were performed under the following assumptions:</ns0:p><ns0:p>• The number of countries is 10, {X 1 , X 2 , ..., X 10 }, described in Table 4.</ns0:p><ns0:p>• For each country only one power plant is installed; we thus have one agent per country.</ns0:p><ns0:p>• A link exists between two countries if the distance between them is less than some threshold, which is 2500Km in our experiments.</ns0:p></ns0:div>
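<ns0:div><ns0:p>A small Java sketch of how such a neighbourhood graph can be built from the distance matrix of Table 4 is given below; the method and variable names are illustrative, not taken from the prototype.</ns0:p>
// Neighbourhood graph used in the experiments: two countries are linked when their
// distance (Table 4) is below the 2500 km threshold. Names are illustrative.
public class LinkGraph {
    static boolean[][] buildLinks(double[][] dist, double threshold) {
        int n = dist.length;
        boolean[][] linked = new boolean[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                linked[i][j] = (i != j) && dist[i][j] < threshold;
        return linked;
    }
    public static void main(String[] args) {
        // Tunisia, Egypt, Saudi Arabia (distances in km, from Table 4)
        double[][] d = { {0, 2181, 3614}, {2181, 0, 1470}, {3614, 1470, 0} };
        boolean[][] l = buildLinks(d, 2500);
        System.out.println(l[0][1] + " " + l[0][2] + " " + l[1][2]); // true false true
    }
}
</ns0:div>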
<ns0:div><ns0:p>• A country X i can be Neutral, Supplier or Consumer according to its maintained data. These data, i.e. Prod(X i ), Needs(X i ), Stock(X i ) and Reserve(X i ), are generated randomly.</ns0:p><ns0:p>The performance of our A-RESS system is evaluated in terms of three metrics: i) the CPU time, for scalability testing, ii) the energy availability per region before/after sharing and iii) the difference between afforded/received quantities, for sensitivity testing. The obtained results are represented in the following figures. To determine the selling price of 1KW of renewable energy, we needed to consider the initial power plant and link installation cost β 0 i j , the amortization period t i and the discount rate σ i . The selling price of 1KW of renewable energy is represented by a geometric sequence (as given in eq.11).</ns0:p><ns0:formula xml:id='formula_9'>β i j = σ i ^n * β 0 i j + a<ns0:label>(11)</ns0:label></ns0:formula><ns0:p>with n ∈ [0..t i ], where a is a fixed fee.</ns0:p><ns0:p>In our experiments, 30 samples were generated. These samples differ in terms of the estimated quantities of energy consumption, production, the remaining quantity in batteries and the quantity to keep in reserve. So, the numbers of Neutral, Supplier and Consumer agents vary from one sample to another.</ns0:p><ns0:p>In Figure <ns0:ref type='figure' target='#fig_5'>1</ns0:ref>, the blue curve shows the total quantities of available energy, while the red curve shows the total quantities of shared energy for all Supplier agents per sample. Note that for most samples both curves are superimposed; only in a few cases is the available quantity slightly greater than the shared one. This means that most of the energy produced by suppliers is used by consumers, and consequently the loss of energy is very low.</ns0:p></ns0:div>
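<ns0:div><ns0:p>The price sequence of eq.11 can be sketched in a few lines of Java; the numeric values in the example below are assumptions chosen only to illustrate the geometric decay plus fixed fee, not values used in the experiments.</ns0:p>
// Hedged sketch of eq. 11: beta_ij(n) = sigma_i^n * beta0_ij + a, for n in [0..t_i].
public class PriceSequence {
    static double beta(double beta0, double sigma, double fixedFee, int n) {
        return Math.pow(sigma, n) * beta0 + fixedFee;
    }
    public static void main(String[] args) {
        double beta0 = 1.0, sigma = 0.9, fee = 0.05; // assumed illustration values
        for (int n = 0; n <= 5; n++)                 // assumed amortization horizon t_i = 5
            System.out.printf("year %d: %.3f per KW%n", n, beta(beta0, sigma, fee, n));
    }
}
</ns0:div>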
<ns0:div><ns0:p>From all the above results, we believe that despite the variation of the random inputs (Prod(X i ), Needs(X i ), Stock(X i ) and Reserve(X i )), the proposed multi-agent dynamic is accurate. In all cases, a Supplier agent cannot offer more than what is available to it, a Consumer agent cannot receive more than it needs, and most of the produced quantity of renewable energy is well distributed.</ns0:p><ns0:p>To test the scalability of our approach, we varied the number of plants from 10 to 200 and measured the required CPU time. Figure <ns0:ref type='figure' target='#fig_9'>5</ns0:ref> shows that the CPU time increases, with polynomial growth, according to the number of plants. This is due to the number of messages exchanged in order to reach a compromise between all agents, which is a polynomial function of the number of plants. For 10 power plants we needed 80 seconds, while for 200 power plants we needed 620 seconds.</ns0:p></ns0:div><ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>We have proposed a novel smart and agent-based system that is able to share most of the available quantities of renewable energy with all neighboring regions. The obtained results have shown the performance of the proposed agent-based system in terms of accuracy, satisfaction and energy loss.</ns0:p><ns0:p>In future work, we will integrate a regression model in our system for predicting all the daily unknown energy quantities to be produced/consumed.</ns0:p></ns0:div>
<ns0:div><ns0:head>16/19</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:55023:2:0:NEW 27 May 2021)</ns0:p><ns0:p>Manuscript to be reviewed Therefore, we have proposed a novel smart and agent-based system that is able to share most of the available quantities of renewable energy with all neighboring regions. The obtained results have shown the performance of the proposed agent-based system in terms of accuracy, satisfactions and energy loss.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>In future work we will, integrate a regression model in our system for predicting all daily unknown energy quantities to be produced/to be consumed.</ns0:p><ns0:note type='other'>Figure 1</ns0:note><ns0:p>Difference between Available and shared energy for Supplier Agents</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Abbreviations A-RESS Agent based Renewable Energy Sharing System ACO Ant Colony Optimization ANN Artificial Neural Network COP Constraint Optimisation Problem CRRD Cost Reduction Ratio Distribution DER Distributed Energy Resources DG Distributed Generator DMC Direct Matrix Converters DNO Distributed Network Operators</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Difference between Available and shared energy for Supplier Agents</ns0:figDesc><ns0:graphic coords='16,151.91,63.78,393.23,182.18' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Difference between Needed/Received quantity of energy for all Consumer Agents.</ns0:figDesc><ns0:graphic coords='16,155.06,528.24,386.93,181.65' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Number of Supplier Agents Vs. Number of Consumer Agents</ns0:figDesc><ns0:graphic coords='17,182.36,63.78,332.33,171.15' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Total quantities of shared energy vs Total quantities of received energy</ns0:figDesc><ns0:graphic coords='17,141.73,387.12,415.28,186.90' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Number power plants/CPU time</ns0:figDesc><ns0:graphic coords='18,219.37,63.78,258.30,174.83' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Nomenclature (continued)</ns0:head><ns0:label /><ns0:figDesc>Abbreviations: DRL Deep Reinforcement Learning; DSE Distributed State Estimation; GA Genetic Algorithm; GNE Generalized Nash Equilibrium; HBMO Honey Bee Mating Optimization; HESS Hybrid Energy Storage System; I-GA Immune Genetic based Algorithm; IDR Integrated Demand Response; IEO Integrated Energy Operator; IES Integrated Energy System; IoT Internet of Things; KW Kilo Watt; MIP Mixed Integer Programming; MLP Mixed Linear Programming; MOLP Multi-Objective Linear Programming; MPC Model Predictive Control; MSGSDN Micro Smart Grid by Software-Defined Network; NN Neural Network; ODD Overview, Design, and Details; OPF Optimal Power Flow; P2P Peer-to-Peer; PSO Particle Swarm Optimization; PSO-NM Particle Swarm Optimization-Nelder Mead; PV Photovoltaic; RES Renewable Energy Sources; SA Stand Alone; SDN Software-Defined Network; SI Swarm Intelligence; SVM Support Vector Machine; WLS Weighted Least Square. Parameters: α i j proportion of lost energy; β i j cost of exchanging 1KW of renewable energy between two power plants X i and X j ; γ i cost of 1KW of non-renewable energy; σ i discount rate; a fixed fee; t i amortization time; N number of hybrid power plants; X i hybrid power plant; R i region where the power plant X i is installed; Ng(X i ) set of neighbours of X i ; X i .io j quantity of energy to provide/get to/from X j ; Prod(X i ) estimated quantity of renewable energy that can be produced by X i per day; Stock(X i ) remaining quantity of renewable energy in batteries per day; Needs(X i ) estimated quantity of needed energy for the region R i ; Reserve(X i ) quantity of energy to keep in reserve for any risk; Avail(X i ) quantity of energy available; p i j portion of each request according to the total needs; X set of variables; D set of domains; C set of constraints; F(X) objective function; g(X) function that measures the degree of non-satisfaction of all the plants X i ; h(X) function that computes the global cost of all the energy exchanged.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Illustration Example</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Distance</ns0:cell><ns0:cell>X 1</ns0:cell><ns0:cell>X 2</ns0:cell><ns0:cell>X 3</ns0:cell><ns0:cell>X 4</ns0:cell><ns0:cell>X 5</ns0:cell></ns0:row><ns0:row><ns0:cell>X 1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>d 12</ns0:cell><ns0:cell>d 13</ns0:cell><ns0:cell>d 14</ns0:cell><ns0:cell>d 15</ns0:cell></ns0:row><ns0:row><ns0:cell>X 2</ns0:cell><ns0:cell>d 12</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>d 23</ns0:cell><ns0:cell>d 24</ns0:cell><ns0:cell>d 25</ns0:cell></ns0:row><ns0:row><ns0:cell>X 3</ns0:cell><ns0:cell>d 13</ns0:cell><ns0:cell>d 23</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>d 34</ns0:cell><ns0:cell>d 35</ns0:cell></ns0:row><ns0:row><ns0:cell>X 4</ns0:cell><ns0:cell>d 14</ns0:cell><ns0:cell>d 24</ns0:cell><ns0:cell>d 34</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>d 45</ns0:cell></ns0:row><ns0:row><ns0:cell>X 5</ns0:cell><ns0:cell>d 15</ns0:cell><ns0:cell>d 25</ns0:cell><ns0:cell>d 35</ns0:cell><ns0:cell>d 45</ns0:cell><ns0:cell>0</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Production Example</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>X 1</ns0:cell><ns0:cell>X 2</ns0:cell><ns0:cell>X 3</ns0:cell><ns0:cell>X 4</ns0:cell><ns0:cell>X 5</ns0:cell></ns0:row><ns0:row><ns0:cell>EstProd</ns0:cell><ns0:cell>6000</ns0:cell><ns0:cell>8000</ns0:cell><ns0:cell>8000</ns0:cell><ns0:cell>4000</ns0:cell><ns0:cell>11000</ns0:cell></ns0:row><ns0:row><ns0:cell>QteStock</ns0:cell><ns0:cell>1000</ns0:cell><ns0:cell>1000</ns0:cell><ns0:cell>2000</ns0:cell><ns0:cell>3000</ns0:cell><ns0:cell>500</ns0:cell></ns0:row><ns0:row><ns0:cell>EstNeed</ns0:cell><ns0:cell>7000</ns0:cell><ns0:cell>500</ns0:cell><ns0:cell>7000</ns0:cell><ns0:cell>8000</ns0:cell><ns0:cell>6000</ns0:cell></ns0:row><ns0:row><ns0:cell>Reserve</ns0:cell><ns0:cell>2000</ns0:cell><ns0:cell>3000</ns0:cell><ns0:cell>3000</ns0:cell><ns0:cell>500</ns0:cell><ns0:cell>4000</ns0:cell></ns0:row><ns0:row><ns0:cell>Avail(X i )</ns0:cell><ns0:cell>Consumer (-2000)</ns0:cell><ns0:cell>Supplier (+1000)</ns0:cell><ns0:cell>Neutral</ns0:cell><ns0:cell>Consumer (-1500)</ns0:cell><ns0:cell>Supplier (+1500)</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Negotiation phase results</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>X 1</ns0:cell><ns0:cell>X 2</ns0:cell><ns0:cell>X 3</ns0:cell><ns0:cell>X 4</ns0:cell><ns0:cell>X 5</ns0:cell></ns0:row><ns0:row><ns0:cell>Avail(X i )</ns0:cell><ns0:cell>-2000</ns0:cell><ns0:cell>+1000</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>-1500</ns0:cell><ns0:cell>+1500</ns0:cell></ns0:row><ns0:row><ns0:cell>Supplier/</ns0:cell><ns0:cell>X 5 /750</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>X 2 /500</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Quantity</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>X 5 /750</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>NEW</ns0:cell><ns0:cell>-1250</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>-250</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>Avail(X i )</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Distance between countries</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Distance</ns0:cell><ns0:cell>Tunisia</ns0:cell><ns0:cell>Saudi</ns0:cell><ns0:cell>Bahrain</ns0:cell><ns0:cell>Qatar</ns0:cell><ns0:cell>Kuwait</ns0:cell><ns0:cell>E.A.U</ns0:cell><ns0:cell>Yamen</ns0:cell><ns0:cell>Oman</ns0:cell><ns0:cell>Iran</ns0:cell><ns0:cell>Egypt</ns0:cell></ns0:row><ns0:row><ns0:cell>(Km)</ns0:cell><ns0:cell>(X 1 )</ns0:cell><ns0:cell>Arabia</ns0:cell><ns0:cell>(X 3 )</ns0:cell><ns0:cell>(X 4 )</ns0:cell><ns0:cell>(X 5 )</ns0:cell><ns0:cell>(X 6 )</ns0:cell><ns0:cell>(X 7 )</ns0:cell><ns0:cell>(X 8 )</ns0:cell><ns0:cell>(X 9 )</ns0:cell><ns0:cell>(X 10 )</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>(X 2 )</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Tunisia</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>3614</ns0:cell><ns0:cell>4018</ns0:cell><ns0:cell>4017</ns0:cell><ns0:cell>3609</ns0:cell><ns0:cell>4441</ns0:cell><ns0:cell>4396</ns0:cell><ns0:cell>4726</ns0:cell><ns0:cell>4081</ns0:cell><ns0:cell>2181</ns0:cell></ns0:row><ns0:row><ns0:cell>(X 1 )</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Saudi</ns0:cell><ns0:cell>3614</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>3603</ns0:cell><ns0:cell>638</ns0:cell><ns0:cell>649</ns0:cell><ns0:cell>894</ns0:cell><ns0:cell>1458</ns0:cell><ns0:cell>1143</ns0:cell><ns0:cell>1269</ns0:cell><ns0:cell>1470</ns0:cell></ns0:row><ns0:row><ns0:cell>Arabia</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>(X 2 )</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Bahrain</ns0:cell><ns0:cell>4018</ns0:cell><ns0:cell>3603</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>101</ns0:cell><ns0:cell>471</ns0:cell><ns0:cell>443</ns0:cell><ns0:cell>1188</ns0:cell><ns0:cell>744</ns0:cell><ns0:cell>1586</ns0:cell><ns0:cell>1967</ns0:cell></ns0:row><ns0:row><ns0:cell>(X 3 )</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Qatar</ns0:cell><ns0:cell>4017</ns0:cell><ns0:cell>638</ns0:cell><ns0:cell>101</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>572</ns0:cell><ns0:cell>345</ns0:cell><ns0:cell>1124</ns0:cell><ns0:cell>645</ns0:cell><ns0:cell>1903</ns0:cell><ns0:cell>2040</ns0:cell></ns0:row><ns0:row><ns0:cell>(X 4 )</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Kuwait</ns0:cell><ns0:cell>3609</ns0:cell><ns0:cell>649</ns0:cell><ns0:cell>471</ns0:cell><ns0:cell>572</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>911</ns0:cell><ns0:cell>1534</ns0:cell><ns0:cell>1212</ns0:cell><ns0:cell>686</ns0:cell><ns0:cell>1685</ns0:cell></ns0:row><ns0:row><ns0:cell>(X 5 )</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell 
/></ns0:row><ns0:row><ns0:cell>E.A.U</ns0:cell><ns0:cell>4441</ns0:cell><ns0:cell>894</ns0:cell><ns0:cell>443</ns0:cell><ns0:cell>345</ns0:cell><ns0:cell>911</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1038</ns0:cell><ns0:cell>301</ns0:cell><ns0:cell>1001</ns0:cell><ns0:cell>2434</ns0:cell></ns0:row><ns0:row><ns0:cell>(X 6 )</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Yamen</ns0:cell><ns0:cell>4396</ns0:cell><ns0:cell>1450</ns0:cell><ns0:cell>1188</ns0:cell><ns0:cell>1124</ns0:cell><ns0:cell>1534</ns0:cell><ns0:cell>1038</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1024</ns0:cell><ns0:cell>1947</ns0:cell><ns0:cell>2219</ns0:cell></ns0:row><ns0:row><ns0:cell>(X 7 )</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Oman</ns0:cell><ns0:cell>4726</ns0:cell><ns0:cell>1143</ns0:cell><ns0:cell>744</ns0:cell><ns0:cell>645</ns0:cell><ns0:cell>1212</ns0:cell><ns0:cell>301</ns0:cell><ns0:cell>1024</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1234</ns0:cell><ns0:cell>2817</ns0:cell></ns0:row><ns0:row><ns0:cell>(X 8 )</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Iran</ns0:cell><ns0:cell>4081</ns0:cell><ns0:cell>1269</ns0:cell><ns0:cell>1586</ns0:cell><ns0:cell>1903</ns0:cell><ns0:cell>686</ns0:cell><ns0:cell>1001</ns0:cell><ns0:cell>1947</ns0:cell><ns0:cell>1234</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1981</ns0:cell></ns0:row><ns0:row><ns0:cell>(X 9 )</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Egypt</ns0:cell><ns0:cell>2181</ns0:cell><ns0:cell>1470</ns0:cell><ns0:cell>1967</ns0:cell><ns0:cell>2040</ns0:cell><ns0:cell>1685</ns0:cell><ns0:cell>2484</ns0:cell><ns0:cell>2219</ns0:cell><ns0:cell>2817</ns0:cell><ns0:cell>1981</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>(X 10 )</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "May 27, 2021
National School of Computer Sciences,
University of Manouba,
Tunis, Tunisia
Telephone:
Email: toumia.imen@gmail.com
Dear Editors,
Thank you for inviting us to submit a revised draft of our manuscript entitled ”A-RESS New Dynamic and
Smart System for Renewable Energy Sharing Problem”. We appreciate the time and effort you and each
of the reviewers have dedicated to providing insightful feedback on ways to strengthen our paper. In the
remainder of this letter, we reply to all your comments point by point. Our answers appear in
blue, and for each modification of the manuscript we note the corresponding line number.
Reviewer 1
1. Some references lack information such as volume number and page number. Please check and modified
the references.
We have added the missing information. You will find these modifications in the REFERENCES section
from line 556.
2. The current literature and technique review is still inadequate, and some important works are missing.
Please refer to the following studies: ”A pragmatic approach towards end-user engagement in the context
of peer-to-peer energy sharing”, ”Optimal distributed generation planning in active distribution networks
considering integration of energy storage”, ”Optimal scheduling of integrated demand response-enabled integrated energy systems with uncertain renewable generations: a Stackelberg game approach”.
We have reviewed the studies cited above and have added our interpretation in the RELATED WORK section
starting at line 150.
3. Although the manuscript is well written in terms of English, there are some (very few, indeed) grammatical errors. It is suggested to proofread the paper.
We have checked the English of our paper and you will find the corrections made throughout the paper.
Reviewer 2
Basic reporting
1. The paper is however severely lacking in references, especially in some sections. For example the Introduction is almost entirely lacking in references. This is critically important to address because the premise
of the model needs to be established based on a real need. The methodology is now much better described.
I am however missing further discussion about data. The data requirements are unclear and not fully specified, and unless this data can be found, the model will be useless.
In the INTRODUCTION section, we have added references (line 30). From line 51 we have added references to existing works and described their gaps, which motivate our research. From
line 259 we have clearly explained the data used.
2. The Related work section is a major improvement, but it needs to more clearly specify how this paper and model contributes something new. In clear and succinct language.
In the RELATED WORK section on line 240, we have added a clear description of the advantage of our research over existing work.
Experimental design
1. There is no clear research question or hypothesis stated.
Starting at line 248, we describe the context and the motivation of our research in more detail. From line 478 of the
EXPERIMENTAL EVALUATION section, we have clearly cited the hypotheses of our experiment.
2. Data is not adequately described.
From line 259 we have clearly explained the data used and how to obtain them.
Validity of the findings
I can’t quite see how the experimental evaluation is clearly showing the usefulness of the approach. Who
will use the model? Is this example representative of real planning examples? I do not consider that the
experiment validates the hypothesis. What have you actually proven? It’s not clear to me.
In the EXPERIMENTAL EVALUATION section, starting from line 478, you will find a description of the performance measurements of our system as well as an interpretation of the results obtained.
Again, thank you for giving us the opportunity to strengthen our manuscript with your valuable comments
and queries. We hope that our submission is now suitable to be published in PeerJ.
Sincerely,
Imen Toumia
On behalf of all authors
" | Here is a paper. Please give your review comments after reading it. |
164 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>A global path planning algorithm for unmanned surface vehicles (USVs) with short time requirements in large-scale and complex multi-island marine environments is proposed.</ns0:p><ns0:p>The fast marching method-based path planning for USVs is performed on grid maps, and its computational efficiency decreases as the maps grow larger. This can be mitigated by improving the algorithm process. In the proposed algorithm, path planning is performed twice in maps with different spatial resolution (SR) grids. The first path planning is performed in a low SR grid map to determine effective regions, and the second is executed in a high SR grid map to rapidly acquire the final high-precision global path. In each path planning process, a modified inshore-distance-constraint fast marching square (IDC-FM 2 ) method is applied. Based on this method, the path portions around an obstacle can be constrained within a region determined by two inshore-distance parameters. The path planning results show that the proposed algorithm can generate smooth and safe global paths wherein the portions that bypass obstacles can be flexibly modified. Compared with the path planning based on the IDC-FM 2 method applied to a single grid map, this algorithm can significantly improve the calculation efficiency while maintaining the precision of the planned path.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Research on unmanned surface vehicles (USVs) has received increased attention in various military and civilian applications over recent years <ns0:ref type='bibr' target='#b38'>(Yan et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b4'>Campbell, Naeem & Irwin, 2012;</ns0:ref><ns0:ref type='bibr'>Liu et al., 2016)</ns0:ref>. Robust and reliable guidance, navigation, and control (GNC) systems are required for USVs to perform a variety of complex marine missions. Path planning is an essential In some special applications, the shortest time may be an important requirement for USVs. The fast marching method (FMM) can be a solution for time-optimal global path planning. This method was first proposed by <ns0:ref type='bibr' target='#b33'>Tsitsiklis (1995)</ns0:ref>, <ns0:ref type='bibr' target='#b0'>and Adalsteinsson & Sethian (1995)</ns0:ref> independently and was extended by <ns0:ref type='bibr' target='#b25'>Sethian (1999)</ns0:ref>. The path planned by the FMM is usually extremely close to the obstacles. One solution is to adjust the speed map, as exemplified by the method with an adjusted cost function <ns0:ref type='bibr' target='#b23'>(Messias et al., 2014)</ns0:ref> and the FM 2 method <ns0:ref type='bibr' target='#b9'>(Garrido et al., 2007)</ns0:ref>. FMM-based methods have been widely used in path planning applications <ns0:ref type='bibr' target='#b7'>(Gómez et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b2'>Amorim & Ventura, 2014;</ns0:ref><ns0:ref type='bibr' target='#b1'>Alvarez et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b10'>González et al., 2016)</ns0:ref>. Marine applications based on FMM were introduced by Garrido, <ns0:ref type='bibr' target='#b8'>Alvarez & Moreno (2020)</ns0:ref>. Interesting modifications for the FM 2 method have been performed, and the FMM has been subjected to a vector field considering the effects of several vector variables such as wind flow or water currents <ns0:ref type='bibr' target='#b8'>(Garrido, Alvarez & Moreno, 2020)</ns0:ref>. In addition, studies on path following and obstacle avoidance and formations have also been conducted using FMM <ns0:ref type='bibr' target='#b8'>(Garrido, Alvarez & Moreno, 2020)</ns0:ref>. USV formation path planning has also been performed by <ns0:ref type='bibr' target='#b19'>Liu & Bucknall (2015)</ns0:ref> and <ns0:ref type='bibr' target='#b32'>Tan et al. (2020)</ns0:ref>, and an angle-guidance FM 2 method has been used for the Springer USV to make the generated path compliant with the dynamics and orientation restrictions of USVs <ns0:ref type='bibr' target='#b20'>(Liu & Bucknall, 2016;</ns0:ref><ns0:ref type='bibr' target='#b21'>Liu, Bucknall & Zhang, 2017</ns0:ref>). An improved anisotropic fast marching method using a multi-layered fast marching was proposed by <ns0:ref type='bibr' target='#b31'>Song, Liu & Bucknall (2017)</ns0:ref>, which combines different environmental factors and provides interesting results. In addition to the global path planning, the FMM-based methods also show potential in collision avoidance of USVs <ns0:ref type='bibr' target='#b34'>(Wang, Jin & Er, 2019;</ns0:ref><ns0:ref type='bibr' target='#b8'>Garrido, Alvarez & Moreno, 2020)</ns0:ref>. These successful studies have demonstrated the potential of FMM-based methods in global path planning of USVs.</ns0:p><ns0:p>The basic FMM can plan time-optimal paths for USVs. 
However, one of the main shortcomings is that the paths planned by the FMM are too close to obstacles, and there may be abrupt turns when the paths bypass obstacles with sharp corners. Thus, the FM 2 method was proposed to address these problems, and two dimensionless parameters were introduced to adjust the paths more flexibly <ns0:ref type='bibr' target='#b9'>(Garrido et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b8'>Garrido, Alvarez & Moreno, 2020)</ns0:ref>. However, the two introduced parameters lack a clear physical meaning, and a suitable adjustment is difficult to identify with specific values. Moreover, the adjustment effects of these two parameters are not transferable because the degree of adjustment produced by the same parameter values varies across grid maps. Another common problem with FM-based methods is that the computational efficiency of path planning decreases sharply when the scale of the grid map is very large. Therefore, we make two improvements to address the mentioned shortcomings of the basic FM 2 method. First, we introduce a novel inshore-distance-constraint fast marching square (IDC-FM 2 ) method to improve the inshore path adjustment performance. Compared with the basic FM 2 method, the IDC-FM 2 method applies two inshore-distance parameters rather than the two dimensionless parameters to adjust the paths around the obstacles. The adjustment effects of the IDC-FM 2 method are stable across different grid maps because the method constrains the path portions around the obstacles within the region determined by the two inshore-distance parameters. Second, to improve the computational efficiency, we design an algorithm that applies the IDC-FM 2 method on two-level spatial-resolution grid maps.</ns0:p></ns0:div>
<ns0:div><ns0:head>Related Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Environment map model</ns0:head><ns0:p>In 2D global path planning applications based on the FMM or its improved methods, discrete numerical calculations are based on Cartesian grid maps. Therefore, an environment map should first be converted into a binary grid map with a suitable spatial resolution (SR). Free and open-source satellite images (such as Google satellite images) can be used as the data source for the maps in most USV applications, and the corresponding grid maps can be generated using image processing <ns0:ref type='bibr' target='#b26'>(Shi et al., 2018)</ns0:ref> combined with manual assistance. A binary grid is set as an obstacle grid when an obstacle exists at the geographic location (value 0); otherwise, it is set as a free grid (value 1).</ns0:p></ns0:div>
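The conversion from an image to a binary grid can be as simple as a per-pixel threshold. A minimal C sketch follows, assuming a grayscale buffer in which bright pixels correspond to navigable water; the function name, buffer layout, and threshold are illustrative assumptions, not part of the original algorithm.

#include <stdlib.h>

/* Sketch: threshold a grayscale map image into a binary grid.
 * 1 = free (navigable water), 0 = obstacle (land/reef). */
unsigned char *make_binary_grid(const unsigned char *gray,
                                int width, int height,
                                unsigned char water_threshold)
{
    unsigned char *grid = malloc((size_t)width * (size_t)height);
    if (grid == NULL)
        return NULL;
    for (long i = 0; i < (long)width * height; i++)
        grid[i] = (gray[i] >= water_threshold) ? 1 : 0;
    return grid;
}

In practice, the manual assistance mentioned above would correct cells misclassified by such an automatic pass (e.g., shallow reefs or image noise).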
<ns0:div><ns0:head>Fast marching square method</ns0:head><ns0:p>When using FM-based methods to plan a path, the basis is the FMM. The core work of the FMM is calculating solutions of the Eikonal equation <ns0:ref type='bibr' target='#b33'>(Tsitsiklis, 1995;</ns0:ref><ns0:ref type='bibr' target='#b0'>Adalsteinsson & Sethian, 1995)</ns0:ref>. The Eikonal equation describes a wave front propagation scenario from sources, with the speed of a wave front given as 𝐹 𝑥 at cell 𝑥. It can be expressed as ‖∇𝑇 𝑥 ‖𝐹 𝑥 = 1, where 𝑇 𝑥 is the arrival time of the wave from the source to the cell 𝑥 and ∇ is a vector differential operator. From the perspective of time-cost, the Eikonal equation can be expressed in another form (that of <ns0:ref type='bibr' target='#b18'>Lin, 2003)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_0'>‖∇𝑇 𝑥 ‖ = 𝜏 𝑥<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where 𝜏 𝑥 is the time-cost at cell 𝑥 and is equivalent to 1/𝐹 𝑥 . The solution we want to calculate is 𝑇 𝑥 , and all solutions compose the arrival time map, 𝓣(𝑥). All time-costs 𝜏 𝑥 compose the time-cost function map, 𝝉(𝑥).</ns0:p></ns0:div>
<ns0:div><ns0:p>The FMM was first proposed independently by <ns0:ref type='bibr' target='#b33'>Tsitsiklis (1995)</ns0:ref> and <ns0:ref type='bibr' target='#b0'>Adalsteinsson & Sethian (1995)</ns0:ref>. The solution 𝑇 𝑥 in cell 𝑥 can be interpreted as the wave arrival time from the nearest source to cell 𝑥. During the calculation, cells are divided into the accepted set 𝓢 A , the trial set 𝓢 T , and the far set 𝓢 F of all other cells with 𝑇 𝑥 = ∞, which has never been computed <ns0:ref type='bibr' target='#b18'>(Lin, 2003)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:p>The calculation process includes initialization and loop procedures. During initialization, source cells 𝑥 0 with 𝑇 𝑥 0 = 0 are set to 𝓢 T , and all other cells are set to 𝓢 F with 𝑇 𝑥 = ∞. After the initialization, the cell 𝑥 𝑚 with the smallest value 𝑇 in 𝓢 T is selected and moved from 𝓢 T to 𝓢 A . Thereafter, the non-accepted neighbor cells of 𝑥 𝑚 are updated, including the solutions and cell sets. Considering Tsitsiklis's deduction <ns0:ref type='bibr' target='#b33'>(Tsitsiklis, 1995;</ns0:ref><ns0:ref type='bibr' target='#b18'>Lin, 2003)</ns0:ref>, each of the new neighbor solutions is determined by:</ns0:p><ns0:formula xml:id='formula_4'>𝑇 𝑥 = min ( 𝑇 𝑥 , (𝑇 𝑥 𝑚 𝑥 𝑖 , i = 1,2))<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where 𝑇 𝑥 denotes the original solution. 𝑇 𝑥 𝑚 𝑥 𝑖 , i = 1,2 are the candidate solutions for the paths that pass through the line segment 𝑥 𝑚 𝑥 𝑖 and propagate to cell 𝑥 (see Fig. <ns0:ref type='figure'>1</ns0:ref>), which are calculated by:</ns0:p><ns0:formula xml:id='formula_5'>𝑇 𝑥 𝑚 𝑥 𝑖 = { (1/2)( 𝑇 𝑥 𝑚 + 𝑇 𝑥 𝑖 + √(2𝜏 𝑥 2 − (𝑇 𝑥 𝑚 − 𝑇 𝑥 𝑖 ) 2 ) ), if 𝑇 𝑥 𝑚 𝑥 𝑖 > 𝑇 𝑥 𝑚 and 𝑇 𝑥 𝑚 𝑥 𝑖 > 𝑇 𝑥 𝑖 ; min(𝑇 𝑥 𝑚 , 𝑇 𝑥 𝑖 ) + 𝜏 𝑥 , others<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>where 𝑇 𝑥 𝑚 and 𝑇 𝑥 𝑖 are the accepted solutions at cells 𝑥 𝑚 and 𝑥 𝑖 , respectively. If cell 𝑥 𝑖 is in 𝓢 T , 𝑇 𝑥 𝑖 is ∞. For the cell set, if a non-accepted neighbor cell 𝑥 is in 𝓢 F , it moves from 𝓢 F to 𝓢 T . The loop procedure is performed continually until all cells are in 𝓢 A . The resulting map is the arrival time map, 𝓣.</ns0:p><ns0:p>The main disadvantage of the basic FMM is that the computed path is too close to obstacles and forces the vehicle to perform abrupt turns <ns0:ref type='bibr' target='#b8'>(Garrido, Alvarez & Moreno, 2020)</ns0:ref>. As described in <ns0:ref type='bibr' target='#b9'>Garrido et al. (2007)</ns0:ref> and <ns0:ref type='bibr' target='#b8'>Garrido, Alvarez & Moreno (2020)</ns0:ref>, a smooth path with sufficient safety distances from obstacles can be computed using the FM 2 method. This method applies the basic FMM twice. The procedure for computing paths is described as follows <ns0:ref type='bibr' target='#b8'>(Garrido, Alvarez & Moreno, 2020)</ns0:ref>: 1. The environment is modeled as a binary grid map (Fig. <ns0:ref type='figure' target='#fig_2'>2A</ns0:ref>). 2. The FM-1 st step. All obstacle cells are used as wave sources (𝑇 = 0), expanding several waves at the same time at a constant speed. The value of each cell in the resulting map indicates the time required for a wave to reach the closest obstacle (see Fig. <ns0:ref type='figure' target='#fig_2'>2B</ns0:ref>). This is proportional to the distance from the obstacles. By reversing the meaning of these values, they can be understood as the maximum admissible speed at each cell. Finally, the speed values are rescaled to fix a maximum cell value of 1.</ns0:p><ns0:p>Modifications of the speed map with two parameters, 𝛼 and 𝛽, can adjust distances between the computed paths and obstacles. The value 𝐹 𝑖,𝑗 of each cell in the speed map is adjusted exponentially by 𝛼:</ns0:p><ns0:formula xml:id='formula_6'>𝑛𝑒𝑤𝐹 𝑖,𝑗 = 𝐹 𝛼 𝑖,𝑗<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>The parameter 𝛽 is used to saturate the values in the speed map. It is defined within the range of 0 and 1. Every 𝐹 𝑖,𝑗 with a value greater than 𝛽 is set to one (see Fig. <ns0:ref type='figure' target='#fig_2'>2C</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:p>Compared with a path without modifications, the path modified by 𝛼 will be closer to obstacles when 𝛼 < 1. On the contrary, if 𝛼 > 1, the modified path will stay further away from obstacles. The parameter 𝛽 allows the path to move closer to obstacles. When 𝛽 is smaller, the modified path will be closer to obstacles. 3. The FM-2 nd step. The goal point is used as a unique wave source. The wave is expanded over the map until the starting point is reached. The speed at each cell (equivalent to 1/𝜏 𝑥 ) is obtained from the modified speed map computed in the FM-1 st step. The resulting arrival time map is shown in Fig. <ns0:ref type='figure' target='#fig_2'>2D</ns0:ref>. 4. Finally, a gradient descent is applied over the resulting arrival time map from the starting point to the goal point. An optimal path in terms of the arrival time, smoothness, and safety is obtained.</ns0:p></ns0:div>
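To make the FMM neighbor update of Eqs. (2)-(3) concrete, a minimal C sketch is given below. It is an illustration only: the function name, argument names, and the assumption that the two neighbor solutions are passed in directly are ours, not the authors'.

#include <math.h>

/* One neighbor update of Eqs. (2)-(3). Tm and Ti are the accepted
 * solutions at cells x_m and x_i (Ti = INFINITY if x_i has no
 * accepted solution yet), tau is the local time-cost, and Told is
 * the neighbor's current solution. */
double fmm_update(double Told, double Tm, double Ti, double tau)
{
    double diff = Tm - Ti;
    double disc = 2.0 * tau * tau - diff * diff;
    double Tcand;
    if (disc > 0.0) {
        Tcand = 0.5 * (Tm + Ti + sqrt(disc));
        /* Eq. (3): the quadratic root is valid only when it exceeds
         * both Tm and Ti (the upwind condition). */
        if (Tcand <= Tm || Tcand <= Ti)
            Tcand = fmin(Tm, Ti) + tau;
    } else {
        Tcand = fmin(Tm, Ti) + tau;
    }
    return fmin(Told, Tcand); /* Eq. (2): keep the smaller solution */
}

The speed-map modification of the FM-1 st step (Eq. (4) together with the β saturation) can be sketched in the same hedged spirit; the flat array layout is an assumption:

#include <math.h>

/* Eq. (4) plus beta saturation. F holds speed values in [0, 1]. */
void adjust_speed_map(double *F, long ncells, double alpha, double beta)
{
    for (long i = 0; i < ncells; i++) {
        F[i] = pow(F[i], alpha);   /* newF = F^alpha */
        if (F[i] > beta)
            F[i] = 1.0;            /* saturate values above beta */
    }
}

With α = 1 and β = 1 the map is left unchanged, which recovers the unmodified FM 2 speed map.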
<ns0:div><ns0:head>Gradient descent method</ns0:head><ns0:p>The global path can be extracted by applying the gradient descent method. The path propagates along the gradient descent direction from the starting position 𝑃 s to the goal position 𝑃 g with a step length 𝑑. The value of 𝑑 can be set to a value equal to the SR of the grid map. The gradient of the cell 𝑥 𝑖,𝑗 is calculated by:</ns0:p></ns0:div>
<ns0:div><ns0:formula xml:id='formula_7'>∇𝑇 𝑖,𝑗 = [ (𝑇 𝑖 + 1,𝑗 − 𝑇 𝑖 − 1,𝑗 )/2 (𝑇 𝑖,𝑗 + 1 − 𝑇 𝑖,𝑗 − 1 )/2 ] T<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>where 𝑇 𝑖,𝑗 represents the arrival time value at grid 𝑥 𝑖,𝑗 and the non-italic T in the upper-right corner represents the transpose of vector [𝑥 𝑦].</ns0:p></ns0:div>
<ns0:div><ns0:p>The path waypoints are not limited to cell centers. Generally, the gradient of a path waypoint 𝑃 𝑛 = (𝑢,𝑣) located in the cell 𝑥 𝑖,𝑗 is approximated by the cell gradient ∇𝑇 𝑖,𝑗 . However, a modified gradient ∇𝑇 𝑢,𝑣 calculated by the bilinear interpolation can be used to improve the path precision (see Fig. <ns0:ref type='figure'>3</ns0:ref>).</ns0:p></ns0:div>
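A minimal C sketch of one descent step, combining the central differences of Eq. (5) with the bilinear interpolation of the cell gradients at the waypoint (u, v), might look as follows; the row-major array layout and all names are assumptions, and boundary handling is omitted.

#include <math.h>

typedef struct { double x, y; } Vec2;

/* Eq. (5): central-difference gradient of the arrival time map T
 * (row-major, M columns per row) at interior cell (i, j). */
static Vec2 cell_gradient(const double *T, int M, int i, int j)
{
    Vec2 g = { 0.5 * (T[j * M + i + 1] - T[j * M + i - 1]),
               0.5 * (T[(j + 1) * M + i] - T[(j - 1) * M + i]) };
    return g;
}

/* One gradient-descent step of length d from waypoint (u, v). */
Vec2 descend_step(const double *T, int M, double u, double v, double d)
{
    int i = (int)floor(u), j = (int)floor(v);
    double fu = u - i, fv = v - j;
    Vec2 g00 = cell_gradient(T, M, i, j);
    Vec2 g10 = cell_gradient(T, M, i + 1, j);
    Vec2 g01 = cell_gradient(T, M, i, j + 1);
    Vec2 g11 = cell_gradient(T, M, i + 1, j + 1);
    /* Bilinear interpolation of the four surrounding cell gradients. */
    double gx = (1 - fu) * (1 - fv) * g00.x + fu * (1 - fv) * g10.x
              + (1 - fu) * fv * g01.x + fu * fv * g11.x;
    double gy = (1 - fu) * (1 - fv) * g00.y + fu * (1 - fv) * g10.y
              + (1 - fu) * fv * g01.y + fu * fv * g11.y;
    double n = sqrt(gx * gx + gy * gy);
    Vec2 next = { u, v };
    if (n > 1e-12) {          /* move against the gradient (toward the goal) */
        next.x = u - d * gx / n;
        next.y = v - d * gy / n;
    }
    return next;
}

Iterating this step from P s until the goal cell is reached extracts the global path, since the arrival time map decreases monotonically toward its source at the goal.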
<ns0:div><ns0:head>Proposed algorithm</ns0:head><ns0:p>To improve the computational efficiency of global path planning in a large-scale and multi-island marine environment, the proposed algorithm performs path planning twice for different purposes. First, the path planning is performed in a low SR (LSR) grid map to determine an effective region. The final global path is then obtained in the second path planning within the effective region of a high SR (HSR) grid map. The relevant methods and procedures applied in the algorithm are as follows. The complete algorithm flow is summarized in the last part of this section.</ns0:p></ns0:div>
<ns0:div><ns0:head>Mapping of two-level SR grid maps</ns0:head><ns0:p>The two-level SR grid maps are contained in the HSR and LSR grid maps. The HSR grid map is directly obtained from the Google satellite images data. A mapping relationship between the LSR cells and the corresponding 𝐿 × 𝐿 HSR cell sub-blocks is established, which can be expressed as:</ns0:p><ns0:formula xml:id='formula_8'>(𝑖 L ,𝑗 L ) ~ [ (𝑖 ' H ,𝑗 ' H + 𝐿 − 1) ⋯ (𝑖 ' H + 𝐿 − 1,𝑗 ' H + 𝐿 − 1) ; ⋮ ⋱ ⋮ ; (𝑖 ' H ,𝑗 ' H ) ⋯ (𝑖 ' H + 𝐿 − 1,𝑗 ' H ) ]<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>where (𝑖 L ,𝑗 L ) are the LSR cell coordinates, and (𝑖 ' H ,𝑗 ' H ) are the original cell coordinates of the mapped HSR cell sub-block, as shown in Fig. <ns0:ref type='figure'>4</ns0:ref>.</ns0:p><ns0:p>To map the LSR grid map to a sub-block of the HSR grid map, the following is used:</ns0:p><ns0:formula xml:id='formula_10'>𝑖 ' H = 𝑖 Ho + 𝐿𝑖 L , 𝑗 ' H = 𝑗 Ho + 𝐿𝑗 L<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>To map from a sub-block of the HSR grid map to a cell of the LSR grid map, we use:</ns0:p><ns0:formula xml:id='formula_11'>𝑖 L = 𝑓 floor ( (𝑖 H − 𝑖 Ho )/𝐿 ), 𝑗 L = 𝑓 floor ( (𝑗 H − 𝑗 Ho )/𝐿 )<ns0:label>(8)</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:p>where (𝑖 H ,𝑗 H ), with 𝑖 ' H ≤ 𝑖 H ≤ 𝑖 ' H + 𝐿 − 1 and 𝑗 ' H ≤ 𝑗 H ≤ 𝑗 ' H + 𝐿 − 1, are the HSR cell coordinates, and (𝑖 Ho ,𝑗 Ho ) are the original cell coordinates of the original HSR cell sub-block (see Fig. <ns0:ref type='figure'>4</ns0:ref>), which are determined by:</ns0:p><ns0:formula xml:id='formula_13'>𝑖 Ho = ( 𝑖 Hg − 𝑓 floor (𝐿/2) ) %𝐿, 𝑗 Ho = ( 𝑗 Hg − 𝑓 floor (𝐿/2) ) %𝐿<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>where (𝑖 Hg ,𝑗 Hg ) are the HSR cell coordinates of the goal cell 𝑥 g , the function 𝑓 floor (𝑚) indicates a rounding down of 𝑚, and 𝑎%𝑏 indicates the modulo operation.</ns0:p></ns0:div>
<ns0:div><ns0:p>The SR of the LSR grid map is:</ns0:p><ns0:formula>𝐷 LRes = 𝐿𝐷 HRes<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>where 𝐷 HRes is the SR of the HSR grid map. In general, 𝐿 is defined as:</ns0:p><ns0:formula xml:id='formula_15'>𝐿 ≤ 𝑓 round ( 𝐷 Th /(2𝐷 HRes ) )<ns0:label>(11)</ns0:label></ns0:formula><ns0:p>where 𝐷 Th is the distance threshold of the virtual obstacle influence, and the function 𝑓 round (𝑚) indicates a rounding of 𝑚. The cell numbers of the LSR grid map in the X-axis and Y-axis directions are:</ns0:p><ns0:formula xml:id='formula_17'>𝑀 L = 𝑓 floor ( (𝑀 H − 𝑖 Ho )/𝐿 )<ns0:label>(12)</ns0:label></ns0:formula><ns0:formula xml:id='formula_19'>𝑁 L = 𝑓 floor ( (𝑁 H − 𝑗 Ho )/𝐿 )<ns0:label>(13)</ns0:label></ns0:formula><ns0:p>where 𝑀 H and 𝑁 H are the cell numbers of the HSR grid map in the X-axis and Y-axis directions, respectively.</ns0:p><ns0:p>Based on the mapping relationship, an LSR cell type is determined by comparing the setting threshold 𝛤 and the obstacle cell proportion 𝛾 in the HSR cell sub-block. If 𝛾 > 𝛤, the LSR cell is set as an obstacle cell; otherwise, it is set as a free cell. A small value of 𝛤 is preferred to retain as much obstacle information as possible.</ns0:p></ns0:div>
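As a concrete illustration of the mapping of Eq. (7) and the LSR cell classification described above, a C sketch follows; all names, the row-major layout, and the 1/0 grid encoding are assumptions consistent with the description, not the authors' code.

/* Eq. (7): origin of the L-by-L HSR sub-block mapped to LSR cell (iL, jL). */
void lsr_cell_origin(int iL, int jL, int iHo, int jHo, int L,
                     int *iH0, int *jH0)
{
    *iH0 = iHo + L * iL;
    *jH0 = jHo + L * jL;
}

/* Classify one LSR cell by the obstacle proportion gamma in its HSR
 * sub-block: obstacle (0) if gamma > Gamma, free (1) otherwise.
 * MH is the HSR cell number in the X-axis direction; hsr uses
 * 1 = free, 0 = obstacle. */
int classify_lsr_cell(const unsigned char *hsr, int MH,
                      int iH0, int jH0, int L, double Gamma)
{
    int obstacles = 0;
    for (int dj = 0; dj < L; dj++)
        for (int di = 0; di < L; di++)
            if (hsr[(long)(jH0 + dj) * MH + (iH0 + di)] == 0)
                obstacles++;
    double gamma = (double)obstacles / (double)(L * L);
    return (gamma > Gamma) ? 0 : 1;
}

Running classify_lsr_cell over all (iL, jL) produces the LSR grid used for the first, coarse planning pass.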
<ns0:div><ns0:head>IDC-FM 2 method</ns0:head><ns0:p>In the basic FM 2 method, the path is improved using a modified speed map. Two parameters, 𝛼 and 𝛽, are used to modify the speed map. These modifications ensure the flexibility of the computed path. However, these two parameters have no clear physical meaning. The normalization of the speed values in the FM-1 st step also results in several shortcomings. For example, all the speed values in the speed map must be calculated. When the map range changes slightly, the entire speed map rescales and differs in the same locations of the original map, and the parameters must be adjusted.</ns0:p><ns0:p>An IDC-FM 2 method is proposed to address these shortcomings. In this method, a time-cost weighting function, 𝑤 𝑥 , is introduced to adjust the speed map.</ns0:p></ns0:div>
<ns0:div><ns0:head>Time-cost weighting function</ns0:head><ns0:p>The time-cost weighting function 𝑤 𝑥 is introduced as a concept of the virtual obstacle influence. When a USV is far away from obstacles, there is no obstacle influence. As the USV approaches, the virtual obstacle influence increases slowly at first and the amplitude increases gradually. The obstacle influence increases dramatically when the USV comes close within a certain degree.</ns0:p><ns0:p>Similar to the FM-1 st step in the basic FM 2 method, all obstacle cells are set as source cells to first calculate the arrival time map. The difference here is that a threshold 𝑇 Th = 𝜏𝐷 Th /𝐷 Res could be set to speed up the calculation, where 𝐷 Res is the SR of the map, and 𝜏 is the unified time-cost, which can take 1 as the value.</ns0:p></ns0:div>
<ns0:div><ns0:p>where 𝑤 𝑥 (𝐷 𝑥 ) is the value in cell 𝑥 of the approximate time-cost weighting function map, 𝑾.</ns0:p></ns0:div>
<ns0:div><ns0:p>The time-cost value of each grid in the time-cost map, which is the inversion of the speed map, is then adjusted to:</ns0:p><ns0:formula xml:id='formula_20'>𝑛𝑒𝑤𝜏 𝑥 = 𝑤 𝑥 (𝐷 𝑥 )𝜏 𝑥<ns0:label>(17)</ns0:label></ns0:formula><ns0:p>Eq. (16) implies that 𝑤 𝑥 increases when grid 𝑥 is closer to obstacles, and it also increases when 𝐷 𝑥 is small. The planned path tends to select points with a lower arrival time-cost. Therefore, when the path is close to obstacles, it will be farther away from obstacles compared to the path planned by the basic FMM.</ns0:p></ns0:div>
<ns0:div><ns0:head>Parameters for the time-cost weighting function</ns0:head><ns0:p>Five parameters, , , , , and , are introduced to determine the coefficients and 𝐷 Th 𝐷 sc 𝐷 wc 𝑤 sc 𝑤 wc 𝑎</ns0:p><ns0:p>, which are determined by: 𝑏</ns0:p><ns0:formula xml:id='formula_21'>(18) 𝑏 = ln (𝑤 sc -1) -ln (𝑤 wc -1) ln (1 -𝑒 sc ) -ln (1 -𝑒 wc ) + ln 𝑒 wc -ln 𝑒 sc (19) 𝑎 = (𝑤 sc -1) ( 𝑒 sc 1 -𝑒 sc ) 𝑏 where , ,<ns0:label>, and .</ns0:label></ns0:formula><ns0:formula xml:id='formula_22'>𝑒 sc = 𝐷 sc 𝐷 Th 𝑒 wc = 𝐷 wc 𝐷 Th 𝐷 sc < 𝐷 wc < 𝐷 Th 𝑤 sc > 𝑤 wc > 1</ns0:formula><ns0:p>The parameter determines the largest virtual influence scope of the obstacles. The 𝐷 Th suggestion for the selection of is that there should be a sufficiently safe buffer area for 𝐷 Th obstacles. The parameter is very important for the safety of the USV. First, it is suggested 𝐷 sc that this parameter satisfies:</ns0:p><ns0:formula xml:id='formula_23'>(20) 𝐷 sc ≥ 𝑣 U,max 𝑡 r - 𝑣 2 U,max 2𝑎 d</ns0:formula><ns0:p>where is the maximum speed of the USV, is the reaction time of the thruster, and is 𝑣 U,max 𝑡 r 𝑎 d the negative maximum acceleration of the USV under braking. In an unknown environment, it is better to select a slightly larger value to ensure safety, which is similar in environments with shallow and reef waters. This value may be relatively small in deep and reef-free waters to improve path efficiency. is a parameter that acts as the adjusted constraint distance. In a 𝐷 wc non-channel ocean area, the path portion around the obstacles will be limited to the region between and . The value can be set independently or determined jointly by and 𝐷 wc 𝐷 Th 𝐷 wc 𝐷 Th</ns0:p><ns0:p>. In this study, it is set as: 𝐷 sc 5A, the path will be located far from the obstacle until it is near the boundary of when is 𝑆 wc 𝑤 wc large. In contrast, the path is closer to an obstacle when is large (see Fig. <ns0:ref type='figure'>5B</ns0:ref>). The minimum 𝑤 sc degree of the path close to is mainly determined by , which can be inferred by further 𝑆 sc 𝑤 wc comparing the paths in Fig. <ns0:ref type='figure'>5B</ns0:ref> with the closest path to the obstacle ( ) in Fig. <ns0:ref type='figure'>5A</ns0:ref>. 𝑤 wc = 1.1 However, regardless of the change in and ( ), the path portions bypassing 𝑤 wc 𝑤 sc 𝑤 sc > 𝑤 wc > 1 obstacles will always be within or around the outside boundary of in non-channel ocean 𝑆 wc 𝑆 wc areas. In channel areas where there is no , the path will follow the quasi-centerline of the 𝑆 wc channel if it passes through the channel. In this study, the values of and 𝑤 sc = 40.0 𝑤 wc = 2.0</ns0:p><ns0:p>were selected and fixed. The paths can be flexibly modified by the inshore-distance parameters and . 𝐷 Th 𝐷 sc</ns0:p></ns0:div>
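A compact C sketch of the weighting function of Eq. (16), with the coefficients a and b computed from Eqs. (18)-(19), is given below; the signature and names are illustrative assumptions rather than the authors' implementation.

#include <math.h>

/* Time-cost weight of Eq. (16) for a cell at distance D from the
 * nearest obstacle; a and b follow Eqs. (18)-(19). */
double idc_weight(double D, double DTh, double Dsc, double Dwc,
                  double wsc, double wwc)
{
    double esc = Dsc / DTh, ewc = Dwc / DTh;
    double b = (log(wsc - 1.0) - log(wwc - 1.0)) /
               (log(1.0 - esc) - log(1.0 - ewc) + log(ewc) - log(esc));
    double a = (wsc - 1.0) * pow(esc / (1.0 - esc), b);
    if (D <= 0.0)
        return INFINITY;      /* obstacle cell */
    if (D > DTh)
        return 1.0;           /* outside the virtual influence scope */
    return 1.0 + a * pow(DTh / D - 1.0, b);
}

By construction the weight equals w sc at D = D sc and w wc at D = D wc , so multiplying each cell's time-cost by this weight, as in Eq. (17), penalizes cells inside the constraint regions; with the fixed values w sc = 40.0 and w wc = 2.0 adopted in this study, only the two inshore-distance parameters remain to be chosen.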
<ns0:div><ns0:head>Determination of effective regions within the HSR grid map</ns0:head><ns0:p>The effective region within the HSR grid map is important for improving the computational efficiency of the proposed algorithm. It is acquired based on the mapping between the LSR and HSR grid maps when the corresponding region within the LSR grid map is determined.</ns0:p><ns0:p>Two effective regions that are respectively used in the FM-1 st and FM-2 nd steps of the IDC-FM 2 method must be determined. The low-precision initial path, , is obtained first based on 𝓁 ini the LSR arrival time map . Two effective regions within the LSR grid map, and</ns0:p><ns0:formula xml:id='formula_24'>𝓣 L 𝑆 LER_1st</ns0:formula><ns0:p>, are determined by expanding the grids around the 'path-passed' grids of . As shown 𝑆 LER_2nd</ns0:p><ns0:p>𝓁 ini in Fig. <ns0:ref type='figure' target='#fig_3'>6</ns0:ref>, the nearest neighbor grid among the four neighbor cells of the waypoint is Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_25'>𝑃 𝑖 = (</ns0:formula></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>parameters used in the proposed algorithm are listed in Table <ns0:ref type='table'>1</ns0:ref>. Using as the reference, the 𝓁 ref distances between the corresponding waypoints of (</ns0:p><ns0:p>) and are shown in Fig. <ns0:ref type='figure'>7</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_26'>It 𝓁 𝜅 𝜅 = 3,…,6</ns0:formula><ns0:p>𝓁 ref is shown that the largest distance decreases when increases, and the distance becomes 0 when 𝜅 . Other cases were tested and similar results were obtained. These results indicate that there 𝜅 ≥ 7 is good consistency when is sufficiently large (such as</ns0:p><ns0:p>). is a variable parameter that 𝜅 𝜅 = 10 ∆𝜅 was determined based on three different cases.</ns0:p></ns0:div>
<ns0:div><ns0:head>Algorithm flow</ns0:head><ns0:p>The proposed algorithm is improved based on the basic FM 2 -based algorithm. The basic FM 2based algorithm is executed directly on a single grid map. The algorithm flow of this basic algorithm is shown in Fig. <ns0:ref type='figure'>8</ns0:ref> in the form of main data stream and used methods. This algorithm flow has two main steps. In Step S1, an arrival time map with obstacle sources and a saturation threshold, , is obtained by first applying the FMM. Then, the approximate time-cost 𝓣 sat weighting function map, , is calculated and used to adjust the time-cost map. Based on the 𝑾 adjusted time-cost map, an arrival time map with the goal point source, , is obtained by 𝓣 applying the FMM again. The entire process is used to obtain by applying the IDC-FM 2 𝓣 method. Finally, the global path is acquired based on by applying the gradient descent method 𝓣 in Step S2.</ns0:p><ns0:p>The main data stream in the proposed algorithm and the methods used to obtain them are shown in Fig. <ns0:ref type='figure'>9</ns0:ref>. The main flow is as follows: 1. Step T1. </ns0:p></ns0:div>
<ns0:div><ns0:head>Results and discussion</ns0:head></ns0:div>
<ns0:div><ns0:head>Simulation environments</ns0:head><ns0:p>Two surrounding spatial areas of Zhucha Island and Changhai County were selected as the simulation environments. The original maps adopted Google satellite images, as shown in Fig. <ns0:ref type='figure'>10</ns0:ref>. The spatial resolutions in the longitude and latitude directions are approximately 4.8 m and 3.9 m, respectively. Temporary binary grid maps are obtained based on the image processing method <ns0:ref type='bibr' target='#b26'>(Shi et al., 2018)</ns0:ref> combined with manual assistance, and then the HSR grid maps with 𝐷 HRes = 10 m in both the longitude and latitude directions are determined by resampling processing. The corresponding binary grid maps are presented in Fig. <ns0:ref type='figure' target='#fig_6'>11</ns0:ref>. Their ranges are 7 km × 7 km and 64 km × 48 km, respectively.</ns0:p></ns0:div>
<ns0:div><ns0:head>Inshore-distance-constraint performances</ns0:head><ns0:p>A path planning from to is used as a typical case to</ns0:p><ns0:formula xml:id='formula_27'>𝑃 s = [ 4.3 km, 2.8 km ] 𝑃 g = [ 3.6 km, 2.0 km ]</ns0:formula><ns0:p>analyze the inshore-distance-constraint performance of the IDC-FM 2 method using the inshoredistance parameters and . As shown in Fig. <ns0:ref type='figure' target='#fig_2'>12</ns0:ref>, the path planned based on the basic 𝐷 Th 𝐷 sc FMM, , is very close to the islands when this path bypasses them. For comparison, all paths 𝓁 FM planned by the proposed algorithm, to , are located away from islands by a certain distance.</ns0:p><ns0:p>𝓁 1 𝓁 4</ns0:p><ns0:p>They are therefore significantly better choices from a safety perspective.</ns0:p><ns0:p>Paths to are acquired using different inshore-constraint distance parameters ( and</ns0:p><ns0:formula xml:id='formula_28'>𝓁 1 𝓁 4 𝐷 Th 𝐷 sc</ns0:formula><ns0:p>, see Table <ns0:ref type='table'>2</ns0:ref> wherein is calculated based on and ) in the IDC-FM 2 method in which 𝐷 wc 𝐷 Th 𝐷 sc and . These paths are clearly adjusted by these distance parameters. When 𝑤 sc = 40.0 𝑤 wc = 2.0 the distance constraints ( and ) are small, the path (such as , see Fig. <ns0:ref type='figure' target='#fig_2'>12</ns0:ref>) will be a 𝐷 Th 𝐷 sc 𝓁 1 somewhat close to the islands. The estimated quasi-closest distance from to an island is 𝓁 1 approximately 38 m, which is shorter than half of the shortest channel width ( ). In 𝑑 hscw ≈ 101 m this situation, is always outside of its (<28.2 m) value, and the path portions around the 𝓁 1 𝑆 sc islands are located in the region of its (28.2 m to 60.0 m). When the distance constraints 𝑆 wc increase, the paths (such as and , see Fig. <ns0:ref type='figure' target='#fig_2'>12</ns0:ref>) in the non-channel areas will be outside of .</ns0:p><ns0:p>𝓁 2 𝓁 3 𝑆 sc</ns0:p><ns0:p>The estimated quasi-closest distances (approximately 115 m and 128 m, respectively) were larger than . However, and will be along the quasi-midline of the channel in the 𝑑 hscw 𝓁 2 𝓁 3 channel area, regardless of whether is greater or less than for these two paths because 𝑑 hscw 𝐷 wc the weighting time-cost will be smaller than the path around the islands (as in , see Fig. <ns0:ref type='figure' target='#fig_2'>12</ns0:ref>). If 𝓁 4 the distance constraints are further increased, will be selected for safety, although more time 𝓁 4 will be required. These results indicate that the path based on the proposed algorithm can be adjusted by setting different inshore-constraint distance parameters to meet safety requirements. Additionally, when the inshore-constraint distance parameters are determined, the path will cross a channel when its width is adequately large; otherwise, the path will bypass around the islands.</ns0:p></ns0:div>
<ns0:div><ns0:head>Global path planning in a large-scale and complex multi-island environment</ns0:head><ns0:p>To verify the path planning ability and the computational efficiency improvement of the proposed algorithm in a large-scale and relatively complex multi-island environment, the simulation environment shown in Fig. <ns0:ref type='figure' target='#fig_6'>11B</ns0:ref> is selected, and the long-length path cases of USVs around the islands or across channels are investigated.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:58122:1:0:NEW 14 Apr 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Path planning cases</ns0:head><ns0:p>Five typical path cases were selected to demonstrate the performance of the proposed algorithm.</ns0:p><ns0:p>The and groups are listed in Table <ns0:ref type='table'>3</ns0:ref>. When determining the inshore-distance parameter 𝑃 s 𝑃 g 𝐷 sc , a USV with , , is considered as a case. Based on Eq. ( <ns0:ref type='formula'>20</ns0:ref>), 𝑣 U,max = 6 m/s 𝑡 r = 2 s 𝑎 d =-1 m s 2 is suggested to select a value larger than 30 m. Therefore, a larger value is used 𝐷 sc 𝐷 sc = 50 m for the path planning for the selected five path cases. Another inshore-distance parameter 𝐷 Th is selected empirically. The main parameters used in these path planning cases are = 200 m shown in Table <ns0:ref type='table'>1</ns0:ref>.</ns0:p><ns0:p>As shown in Fig. <ns0:ref type='figure'>13</ns0:ref>, all paths successfully bypass the islands. The enlarged views (see Fig. <ns0:ref type='figure'>14</ns0:ref>, every path is displayed with a solid line) provide further details. Every path maintains a relatively safe distance when close to an island, and smooth when turning around the island. For a narrow channel of a certain degree (usually wider than</ns0:p><ns0:p>), the path is planned along the 2𝐷 LRes quasi-midline of the channel to ensure that it is as safety as possible. The path lengths range from about 26.52 km to 47.86 km (see Table <ns0:ref type='table'>3</ns0:ref>). These lengths can cover the range of most applications of current small-and medium-sized USVs. These results show the effective path planning ability of the proposed algorithm in a large-scale and complex multi-island environment.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computational efficiency improvement</ns0:head><ns0:p>To verify the computational efficiency improvement of the proposed method, the time spent on the five typical paths (see Table <ns0:ref type='table'>3</ns0:ref>) based on the proposed algorithm was calculated. For comparison, the time spent on the basic algorithm which is executed directly on a single HSR grid map by applying the IDC-FM 2 method was also calculated as a reference. The C algorithm code was tested on a computer with a Core i5-6300U CPU and 8G memory, which runs a 64-bit Win7 operating system.</ns0:p><ns0:p>Path planning for every typical path was repeated 10 times. Every planning time starts from the HSR grid map reading (in Steps S1 and T1) and ends when the global path has been calculated (in Steps S2 and T5). Because the path planning is performed on a Windows operating system, which involves multitasking, there may be many factors influencing the planning time, such as the variable CPU usage, random CPU hit rate to the cache, and possible thread scheduling. Therefore, the average planning time is used to evaluate the computational efficiency. The average planning time results of the five planned paths are presented in Table <ns0:ref type='table'>4</ns0:ref>. These results indicate that the computational efficiency of the proposed algorithm based on twolevel grid maps is significantly higher than the algorithm based on a single HSR grid map. The time for all cases is approximately 2 s, indicating that this method can be used in less demanding real-time planning applications. This can effectively improve the practicality of the proposed algorithm.</ns0:p><ns0:p>When planning a path by applying the basic FM 2 -based algorithm, if the grid map scale is very large, two aspects of calculations will severely increase compared with small-scale maps. First, as the number of cells increase significantly, the calculations for free cells increase. Second, 𝓢 T PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:58122:1:0:NEW 14 Apr 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science changes in every iteration of calculations. The overall scale of clearly increases, resulting in 𝓢 T the increase in sorting time of the adopted priority heap struct. This compared algorithm, whose algorithm flow is shown in Fig. <ns0:ref type='figure'>8</ns0:ref>, is used directly on a single HSR grid map. The valid scale of the HSR grid map is the number of free cells. For comparison, the proposed algorithm determines the effective regions within the HSR grid map first and then plans a path within the determined effective regions. As shown in Fig. <ns0:ref type='figure'>9</ns0:ref>, Steps T1-T3 complete the determination of effective regions, and Steps T4-T5 realize the path planning. The same function of the path planning is also realized by Steps S1-S2, as shown in Fig. <ns0:ref type='figure'>8</ns0:ref>. Comparing the path planning steps of the compared and proposed algorithms, the numbers of free cells in effective regions are much less than the ones in the HSR grid map. This indicates that the calculations for free cells and the overall scale of will greatly decrease. Therefore, the planning time of the proposed algorithm 𝓢 T reduces. Certainly, there are more steps in the proposed algorithm. Planning time spent on these steps (Steps T1-T3) mainly depends on the LSR mapping parameter . The planning time spent on Steps T1 and T3 is often much less than . 
Therefore, the increased planning time spent on Steps T1-T3 is much less than the saved planning time spent on Steps T4-T5.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>A rapid global path planning algorithm applying an IDC-FM 2 method in two-level SR grid maps is proposed for USV applications with short time requirements in large-scale and complex multiisland environments. This algorithm can acquire a continuous, smooth, quasi-time-optimal path while maintaining a safe distance around obstacles when bypassing obstacles. When the path is near obstacles, it is limited to a safe area determined by two inshore-distance parameters. By adjusting these two inshore-distance parameters, the path can be modified flexibly. The two-path planning process based on two-level SR grid maps improves the computational efficiency compared with the basic FM 2 -based method. The planning time on the order of seconds is acceptable in many global path planning applications of USVs. Meanwhile, the planning time of this order of magnitude is typically short enough for USVs in most situations with the requirement to replan the path. This indicates the potential of replanning by using the proposed algorithm from the perspective of planning time. However, there are still some challenges. For example, how to accurately obtain the location information of newly detected obstacles and how to add this information into the grid map when replanning are the common challenges. The IDC-FM 2 method also needs to be modified to plan the path meeting the dynamic characteristics of a USV. Otherwise, the USV may not follow the replanned path successfully at the beginning of the path.</ns0:p><ns0:p>One shortcoming of the proposed algorithm is that paths through channels that are very narrow but still passable in reality may be missed because of the first path planning process in the LSR Effective region within the LSR grid map.</ns0:p><ns0:p>The case shown uses k = 4, and P ff,i = (f floor (x i ), f floor (y i )), P cf,i = (f ceil (x i ), f floor (y i )), P cc,i = (f ceil (x i ), f ceil (y i )), and P fc,i = (f floor (x i ), f ceil (y i )) are the four neighbor cells of P i . P ff,i is the 'path-passed' cell with respect to P i in this case. Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 13</ns0:note><ns0:note type='other'>Computer Science Figure 14</ns0:note><ns0:p>Enlarged views of path portions.</ns0:p><ns0:p>(A)-(E) show the portions that are indicated by rectangles in Fig. <ns0:ref type='figure'>13</ns0:ref>. Detailed information can be acquired from (A) to (E). For example, all paths maintain a relatively safe distance from each island, the paths are smooth even when they turn around islands, and paths across narrow channels can be planned along the quasi-midline of the channels.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:58122:1:0:NEW 14 Apr 2021)</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Th -𝐷 sc ) The area around the obstacles is divided into four parts by the three inshore-distance parameters: , , and . The three parts extending from an obstacle to the outside region 𝐷 sc 𝐷 wc 𝐷 Th are the danger region, ; the strong constraint region, ; and the weak constraint region, 𝑆 d 𝑆 sc 𝑆 wc PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:58122:1:0:NEW 14 Apr 2021) Manuscript to be reviewed Computer Science (see Fig. 5). The influences of and on the path are shown in Fig. 5. As shown in Fig. 𝑤 wc 𝑤 sc</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:58122:1:0:NEW 14 Apr 2021)Manuscript to be reviewedComputer Science grid map. The introduction of environmental effects into the proposed algorithm is an important task to be performed in future works. Marine experiments should also be conducted to verify the validity of the algorithm.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Figure 7</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>l</ns0:head><ns0:label /><ns0:figDesc>ref is the path obtained by applying the IDC-FM 2 method based on the single HSR grid map directly, while l k is the path obtained by applying the proposed algorithm based on two-level SR grid maps.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 11 Binary</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>Figure 12</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Five</ns0:head><ns0:label /><ns0:figDesc>typical paths in the surrounding area of Changhai County. Several portions of these paths marked by rectangles are amplified to show more details in Fig. 14. PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:58122:1:0:NEW 14 Apr 2021)</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='34,42.52,265.72,525.00,315.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='42,42.52,255.37,525.00,180.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='44,42.52,255.52,525.00,434.25' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>Three inshore-distance parameters 𝑤 𝑥 (the distance threshold , as well as the virtual strong and weak constraint distance parameters, 𝐷</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Th</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>𝐷 sc</ns0:cell><ns0:cell>and</ns0:cell><ns0:cell cols='2'>𝐷 wc</ns0:cell><ns0:cell cols='6'>, respectively) and two weighting function values with respect to</ns0:cell><ns0:cell>𝐷 sc</ns0:cell><ns0:cell>and</ns0:cell><ns0:cell>( 𝐷 wc 𝑤 sc</ns0:cell></ns0:row><ns0:row><ns0:cell>and</ns0:cell><ns0:cell cols='2'>𝑤 wc</ns0:cell><ns0:cell cols='7'>, respectively) are used to determine the function , and 𝑤 𝑥</ns0:cell><ns0:cell>𝐷 Th</ns0:cell><ns0:cell>is used to achieve the</ns0:cell></ns0:row><ns0:row><ns0:cell cols='10'>same function as the speed saturation. Although five parameters are set, only two inshore-</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>distance parameters,</ns0:cell><ns0:cell>𝐷 Th</ns0:cell><ns0:cell>and</ns0:cell><ns0:cell>𝐷 sc</ns0:cell><ns0:cell cols='2'>, must be considered, while suitable values of</ns0:cell><ns0:cell>𝑤 sc</ns0:cell><ns0:cell>and</ns0:cell><ns0:cell>𝑤 wc</ns0:cell><ns0:cell>can</ns0:cell></ns0:row><ns0:row><ns0:cell cols='10'>be selected as constant values. Based on this method, the path portion around an obstacle is</ns0:cell></ns0:row><ns0:row><ns0:cell cols='9'>constrained in a region determined by</ns0:cell><ns0:cell>𝐷 Th</ns0:cell><ns0:cell>and</ns0:cell><ns0:cell>𝐷 sc</ns0:cell><ns0:cell>.</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>In the loop procedure, if the smallest value of a cell in 𝑇</ns0:figDesc><ns0:table><ns0:row><ns0:cell>𝓢 T</ns0:cell><ns0:cell cols='6'>is larger than or equal to</ns0:cell><ns0:cell>𝑇 Th</ns0:cell><ns0:cell>, all values of non-accepted cells can be set as 𝑇</ns0:cell><ns0:cell>𝑇 Th</ns0:cell><ns0:cell>directly.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>Based on the resulting arrival time map,</ns0:cell><ns0:cell>𝓣 sat</ns0:cell><ns0:cell>, with a saturation threshold, the time-cost</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>weighting function map, , is designed as: 𝑾 '</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>𝑤 ' 𝑥 = {</ns0:cell><ns0:cell cols='4'>∞ 1 + 𝑎 ( 𝑇 Th 𝑇 𝑥</ns0:cell><ns0:cell>,𝑇 𝑥 = 0 -1 ) 𝑏 ,𝑇 𝑥 > 0</ns0:cell><ns0:cell>(14)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>where</ns0:cell><ns0:cell cols='2'>𝑤 ' 𝑥</ns0:cell><ns0:cell cols='3'>and are the values of the cell in 𝑇 𝑥 𝑥 𝑾 '</ns0:cell><ns0:cell>and</ns0:cell><ns0:cell>𝓣 sat</ns0:cell><ns0:cell>, respectively, and and are two 𝑎 𝑏</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>positive coefficients to be determined.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>When a free grid, , is within the virtual influenced scope of obstacles (i.e., 𝑥</ns0:cell><ns0:cell>0 < 𝐷 𝑥 ≤ 𝐷 Th</ns0:cell><ns0:cell>,</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>where</ns0:cell><ns0:cell cols='2'>𝐷 𝑥</ns0:cell><ns0:cell cols='3'>is the closest distance to obstacles), then:</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>𝐷 𝑥 ≈ 𝐷 Res</ns0:cell><ns0:cell cols='2'>𝑇 𝑥 𝜏</ns0:cell><ns0:cell>(15)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>Therefore,</ns0:cell><ns0:cell>𝑤 𝑥</ns0:cell><ns0:cell>with respect to</ns0:cell><ns0:cell>𝐷 𝑥</ns0:cell><ns0:cell>is used to approximate : 𝑤 ' 𝑥</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>𝑤 𝑥 (𝐷 𝑥 ) = {</ns0:cell><ns0:cell>∞ 1 + 𝑎 (</ns0:cell><ns0:cell>,𝐷 𝑥 = 0</ns0:cell><ns0:cell>(16)</ns0:cell></ns0:row></ns0:table><ns0:note>𝐷 Th 𝐷 𝑥 -1 ) 𝑏 ,0 < 𝐷 𝑥 ≤ 𝐷 Th 1 ,𝐷 𝑥 > 𝐷 Th PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:58122:1:0:NEW 14 Apr 2021)</ns0:note></ns0:figure>
</ns0:body>
" | "School of Electronics and Information Engineering
Harbin Institute of Technology
Nangang District, Harbin, Heilongjiang, China
No. 92, Xidazhi Street
Rapid global path planning algorithm for unmanned surface vehicles in large-scale and multi-island marine environments
Dear Editors:
We thank the reviewers for their generous comments on the manuscript. We have revised the manuscript to address their concerns.
This document provides our point-by-point response to reviewers’ comments.
Three key documents are resubmitted:
• Rebuttal letter:
This document contains our point-by-point response to the comments by the reviewers and the editor.
• Revised manuscript with tracked changes:
This document is identical to the revised manuscript except that all changes are highlighted; the tracked changes are computer-generated using the Compare function.
• Revised manuscript:
The clean new version of the manuscript with tracked changes accepted.
Responses to Reviewer #1
Thank you very much for your comments. A clean revised manuscript and a revised manuscript with tracked changes are provided. The detailed responses are below the comments.
Reviewer comments are set in bold italics, and the responses are in normal font. Every comment and its response are labeled, such as “Reviewer #1’s comment 1” and “Response to reviewer #1’s comment 1,” with each response highlighted in another color.
In addition to the explanations and discussion, some original and revised texts are also shown in the responses. The original texts are in red font, and the revised texts are in blue font.
To distinguish line numbers in the original and revised manuscripts, the labels Original-Line and Revised-Line are used. For example, “Original-Line 1” refers to line 1 of the original manuscript, while “Revised-Line 1” refers to line 1 of the revised manuscript.
Basic reporting
Reviewer #1’s comment 1:
The paper is well organized and is quite good from the English aspect. The introduction and background literature references are sufficient. Moreover, the shared Raw data supports the presented results. Finally, figures have good quality and tables are appropriately described.
Response to reviewer #1’s comment 1:
We are very grateful for your affirmation.
Experimental design
Reviewer #1’s comment 2:
The paper is into the scope of the journal. The contribution of the paper is clear, and it provides sufficient detail to replicate the proposed method.
Response to reviewer #1’s comment 2:
We are very grateful for your affirmation.
Validity of the findings
The results of the paper show the effectiveness of the proposed approach. Moreover, a comparison against a basic approach is presented. The conclusions are well stated and connected to the original question investigated. However, a sentence of the conclusions is not clear. Comment to the author is:
Reviewer #1’s comment 3:
In the “Conclusions” section, the authors mention “Additionally, the execution time is sufficiently low in many global path planning applications of USVs”. Do you mean that the proposed algorithm execution time is low enough for global path planning applications? Please, discuss the common execution time for global path planning algorithms.
Response to reviewer #1’s comment 3:
Thank you very much for this comment.
First, as it is also pointed out by Reviewer #2’s comment 7, we want to clarify that the expression of “planning time” is more accurate than “execution time.”
Here, we wanted to discuss the planning time of global path planning algorithms. In the literature on global path planning, researchers are mainly concerned with the performance of the generated paths rather than the planning time, and global path planning is therefore often performed offline before the USV starts a mission. For this reason, we do not intend to discuss the common execution time of global path planning algorithms; the main point of comparison with other common algorithms is the ability of FM-based methods to plan global paths with excellent performance.
However, this article discusses the planning time because a shortcoming of FM-based methods is that the planning time becomes long when the cell scale of a grid map is very large. Although studies of global path planning algorithms rarely consider planning time, engineering staff do care about it, and a long planning time is often undesirable, while a planning time on the order of seconds can typically be accepted. As shown in Table 4, the planning times of the compared algorithm (based directly on a single HSR grid map) are relatively long. To address this problem, our algorithm is proposed, and the improved planning times decrease to less than 1/10 of those of the compared algorithm. All results are around 2 s, which is completely acceptable. This is the main significance of our work: the proposed algorithm can generate global paths with the excellent performance (continuous, smooth, safe, and quasi-time-optimal) of FM-based methods while keeping an acceptable planning time in actual USV applications, even when the grid map is very large.
In the revised article, the original part “Additionally, the execution time is sufficiently low in many global path planning applications of USVs” has been modified to make it more objective. The revised text (Revised-Line 468-469) is
“The planning time on the order of seconds is acceptable in many global path planning applications of USVs.”
Comments for the author
There are some minor problems that need attention. Comment and suggestion are:
Reviewer #1’s comment 4:
In Eq. (3), verify if there is an extra comma in .
Response to reviewer #1’s comment 4:
Thank you for this suggestion.
In Eq. (3), the comma is necessary: the condition requires the candidate solution to be larger than both neighbor arrival times. We attempted to make the expression more concise; therefore, we used the concise max(·,·) form rather than the detailed form with two separate inequalities. However, based on this comment, we realize that the concise expression might cause misunderstanding. In the revised manuscript, we have used the detailed expression rather than the concise one.
The original equation is

T_{x_m x_i} = (1/2)(T_{x_m} + T_{x_i} + √(2τ_x² − (T_{x_m} − T_{x_i})²)),  if T_{x_m x_i} > max(T_{x_m}, T_{x_i})    (3)

The revised equation is

T_{x_m x_i} = (1/2)(T_{x_m} + T_{x_i} + √(2τ_x² − (T_{x_m} − T_{x_i})²)),  if T_{x_m x_i} > T_{x_m} and T_{x_m x_i} > T_{x_i}    (3)
Reviewer #1’s comment 5:
In Eq. (5), non-italic T is not defined.
Response to reviewer #1’s comment 5:
Thank you very much for your reminder.
In Eq. (5), the non-italic T is simply the vector transpose symbol. Writing the gradient vector in its normal, untransposed style seemed unclear; therefore, we used the transposed form with the vector transpose symbol T, which is
(5)
The definition was lost in the original article. In the revised manuscript, we have added an explanation of this non-italic T after Eq. (5). The revised part (Revised-Line 182-183) is
“where represents the arrival time value at grid and the non-italic in the upper-right corner represents the transpose of vector .”
Reviewer #1’s comment 6:
In line 179, verify the text “To The two-level SR grid maps are contained in the HSR and LSR grid maps.” I think “To” can be removed.
Response to reviewer #1’s comment 6:
Thank you for pointing out this error. We totally agree with your comment.
In the revised manuscript (Revised-Line 197), the word “To” is removed.
Reviewer #1’s comment 7:
In line 201, It seems to be missing the word “axis” for “X-“.
Response to reviewer #1’s comment 7:
Thank you for pointing this out.
We have carefully checked the full original manuscript and found this problem in Original-Line 201 and Original-Line 205. Thus, we have revised both of them (Revised-Line 219 and Revised-Line 223) by using “X-axis.”
Reviewer #1’s comment 8:
In line 212, verify the text “This In the basic FM2 method,” I think “This” can be removed.
Response to reviewer #1’s comment 8:
Thank you for pointing out this error.
Similar to the comment 6, the word “This” is removed in the revised manuscript (Revised-Line 230).
Reviewer #1’s comment 9:
Eq. (18) is hard to read because it is too small.
Response to reviewer #1’s comment 9:
It is a good suggestion that we had not considered.
To address this problem, we transformed the equation using the ln() function and expressed it in the transformed form. However, the size of the equation seems to be adjusted automatically, and it is still a bit small. The original equation is

b = ln((w_sc − 1)/(w_wc − 1)) / ln(((1 − e_sc)·e_wc) / ((1 − e_wc)·e_sc))    (18)

while the revised equation is

b = (ln(w_sc − 1) − ln(w_wc − 1)) / (ln(1 − e_sc) − ln(1 − e_wc) + ln(e_wc) − ln(e_sc)).    (18)
Some comments based on the “Instructions for Authors” of PeerJ are:
Reviewer #1’s comment 10:
In Fig. 10, the description of the Figures (A-B) should be included in the figure caption. Similarly, in Fig. 13, the description of the Figures (A-E) should be included in the figure caption.
Response to reviewer #1’s comment 10:
Thank you for your advice.
We have added the description of parts (A–B) to the caption of the original Fig. 10. In addition, a new figure (Fig. 8) was added in response to Reviewer #2’s comment 6; therefore, the original Figs. 8–13 become Figs. 9–14 in the revised manuscript. The revised figure caption is
“Figure 11. Binary grid maps of the simulation environments. The black cells indicate obstacle areas, while the white cells indicate the navigable areas for USVs. (A) Binary grid map of the environment shown in Fig. 10A. (B) Binary grid map of the environment shown in Fig. 10B.”
Similarly, the description of Figures (A–E) is added in the figure caption of the original Fig. 13 (the revised Fig. 14). The revised figure caption is
“Figure 14. Enlarged views of path portions. (A)–(E) show the portions that are indicated by rectangles in Figure 13. Detailed information can be acquired from (A) to (E). For example, all paths maintain a relatively safe distance from each island, the paths are smooth even when they turn around islands, and paths across narrow channels can be planned along the quasi-midline of the channels.”
Reviewer #1’s comment 11:
In Fig. 9, label each part with an uppercase letter. Then, the description of the Figures should be included in the caption.
Response to reviewer #1’s comment 11:
Thank you for your advice.
The two parts of the original Fig. 9 (the revised Fig. 10) have been labeled with white uppercase letters “A” and “B” in the upper-left corners.
The revised figure caption becomes
“Figure 10. Google satellite images of the simulation environments. (A) Surrounding areas of Zhucha Island in Qingdao, Shandong Province, China. Map data @ 2020 Google. (B) Surrounding areas of Changhai County in Liaoning Province, China. Map data @ 2020 Google.”
Responses to Reviewer #2
Thank you very much for your comments. A clean revised manuscript and a revised manuscript with tracked changes are provided. The detailed responses are below the comments.
Reviewer comments are shown in bold italics, and the responses are in normal font. Each reviewer comment is labeled and highlighted, such as “Reviewer #2’s comment 1,” and the corresponding response is highlighted in another color, such as “Response to reviewer #2’s comment 1”.
In addition to the explanations and discussion, some original and revised texts are shown in the responses. The original texts are in red font, and the revised texts are in blue font for distinction.
To distinguish the line number in the original and revised manuscripts, Original-Line and Revised-Line were used to represent the lines in the original and revised manuscripts, respectively. For example, “Original-Line 1” represents line 1 in the original manuscript, while “Revised-Line 1” represents line 1 in the revised manuscript.
Basic reporting
The idea of the paper is interesting and relevant to the field. The introduction adequately describes the context and the related literature, in particular identifying some shortcomings of the FMM-based methods. However, in the last paragraph of the introduction, it must be clearly remarked the issues of the existing methods. It must be clarify if this is the first time that the IDC is introduced in an FMM method. At the beginning of the section “Related methods” it must be provided main references where the FMM method is detailed. The structure of the paper is good and the figures are relevant and of good quality. A remarkable issue is that the English writing can be improved and must be carefully revised.
Experimental design
The research reported in this work seems original, the research question is well defined and relevant. The proposed method is adequately described, although some aspects that will be described in the comments to the authors can be improved.
Validity of the findings
The results show the benefits of the proposed method in comparison to a basic FMM-based method in terms of flexibility to ensure a smooth and safe path. Some final questions about the computational efficiency must be clarified as will be specified in the comments to the authors.
Comments for the author
The paper presents a global path planning algorithm for unmanned surface vehicles with time optimality requirements. The proposed method is based on the fast marching method, which has been improved to find better solutions in terms of safety and computational efficiency.
Reviewer #2’s comment 1:
The idea of the paper is interesting and relevant to the field. The introduction adequately describes the context and the related literature, in particular identifying some shortcomings of the FMM-based methods. However, in the last paragraph of the introduction, it must be clearly remarked the issues of the existing methods. It must be clarify if this is the first time that the IDC is introduced in an FMM method. At the beginning of the section “Related methods” it must be provided main references where the FMM method is detailed. The structure of the paper is good and the figures are relevant and of good quality.
Response to reviewer #2’s comment 1:
Thank you very much for this comment, which helped us improve our manuscript.
We have reorganized the content of the last paragraph to clearly remark the issues of the existing methods, and a clarification about the IDC has been added in the revised manuscript. The revised text (Revised-Line 85-103) is
“The basic FMM can plan time-optimal paths for USVs. However, one of the main shortcomings is that the paths planned by the FMM are too close to obstacles, and there may be abrupt turns when the paths bypass obstacles with sharp corners. Thus, the FM2 method is proposed to address these problems, and two dimensionless parameters are introduced to adjust the paths more flexibly (Garrido et al., 2007; Garrido, Alvarez & Moreno, 2020). However, the two introduced parameters lack clear physical meaning, and the suitable adjustment is difficult to identify with specific values. Moreover, the adjustment effects of these two parameters are not general because the adjustment degree with the same parameters varies with different grid maps. Another common problem of FM-based methods is that the computational efficiency of path planning decreases sharply when the scale of the grid map is very large. Therefore, we make two improvements to address the mentioned shortcomings of the basic FM2 method. First, we introduce, for the first time, an inshore-distance-constraint fast marching square (IDC-FM2) method to improve the inshore path adjustment performance. Compared with the basic FM2 method, the IDC-FM2 method applies two inshore distance parameters, rather than the two dimensionless parameters, to adjust the paths around the obstacles. The adjustment effects for path planning of the IDC-FM2 method are stable across different grid maps because the IDC-FM2 method can constrain the path portions around the obstacles within the region determined by the two inshore distance parameters. Further, to improve the computational efficiency, an algorithm that applies the IDC-FM2 method based on two-level spatial resolution grid maps is designed.”
At the beginning of the section “Related methods,” the main references are introduced, and they are mentioned again when introducing the calculation process of the FMM. The revised texts are
“When using FM-based methods to plan a path, the basis is the FMM. The core work of the FMM is calculating solutions of the Eikonal equation (Tsitsiklis, 1995; Adalsteinsson & Sethian, 1995).” (Revised-Line 115-116)
“The FMM was first proposed independently by Tsitsiklis (1995) and Adalsteinsson & Sethian (1995).” (Revised-Line 126-127)
The research reported in this work seems original, the research question is well defined and relevant. The proposed method is adequately described, although some aspects must be better described or justified:
Reviewer #2’s comment 2:
What is the importance to plan time-optimal paths in comparison to optimal paths in distance? At the end, the method modifies the paths according to the parameters of the IDC and time optimality is then missed.
Response to reviewer #2’s comment 2:
Thank you for this comment. We are very glad to discuss this issue.
To the best of our knowledge and experience, the importance of time-optimal or distance-optimal planning mainly depends on the requirements of the USV application. For example, in rescue applications, the USV is required to reach accident positions quickly, and the paths must then be time-optimal.
Planning a time-optimal path is the characteristic of the basic FMM. Without considering additional information (such as ocean currents and winds), the time-optimal paths planned by the FMM are approximately equivalent to distance-optimal paths. However, the paths planned by the FMM are not safe enough around obstacles because they pass too close to them. Therefore, the FMM has to be improved for safety considerations. In that case, the safety and time optimality of a path cannot be guaranteed simultaneously, and safety, which has the higher priority in most applications, should be ensured first. In our method, the IDC parameters (the inshore distances to obstacles) are introduced to improve safety. Although the resulting paths sacrifice strict time optimality, this trade-off remains necessary.
Reviewer #2’s comment 3:
It is not clear the effect of the vehicle dynamics (velocity, acceleration) over the final planned path. Equation (20) specifies a condition that the parameter Dsc must satisfy, but the results does not show how the dynamic parameters are taken into account in the method, since one expects that they are very important to find time-optimal paths.
Response to reviewer #2’s comment 3:
First, we want to explain and discuss the design of the parameter Dsc. The FM-based methods focus on the time-optimal feature. However, before considering this feature, the safety of a path is more important for a USV in most applications. Therefore, the parameter Dsc, which mainly addresses the safety of the path, is designed. It simply considers the dynamic parameters of the USV, including the maximum speed vU,max, the reaction time of the thruster tr, and the negative maximum acceleration of the USV under braking ad. Equation (20) suggests the safety distance for the case in which the USV sails toward obstacles and performs a slowdown operation. It includes two distance parts: the distance traveled within the response time of the thruster and the braking distance when the USV sails at the maximum speed vU,max. As shown in Equation (20), the suggested smallest value of Dsc is determined by vU,max·tr − v²U,max/(2ad). There are two main reasons why this suggested smallest value is safe. (1) The USV will rarely sail at vU,max, which implies that the braking distance needed to stop the USV will be smaller than the suggested smallest value. (2) The USV will not always sail straight toward obstacles, which implies that the distance it closes toward the obstacles while braking will be shorter than the braking distance itself. Therefore, when the environment map has no clear errors and the USV is able to brake in time, the suggested smallest value of Dsc is often safe for the USV even when it approaches obstacles. A larger Dsc makes the USV safer. Meanwhile, to tolerate deviations (such as deviations in the environment map and the influence of ocean currents and winds), a slightly larger Dsc value is often selected.
As pointed out in this comment, the results did not show how the dynamic parameters are considered in the method. Therefore, we have added supplementary explanations in the section “Path planning cases.” We also explain here why we do not discuss the selection of Dsc in the section “Inshore-distance-constraint performances.” In that section, although the results are based on the Dsc parameter, they are mainly used to discuss the general performance of the proposed algorithm when adjusting the inshore-distance parameters (Dsc and Dwc), which is not related to a specific USV. Therefore, Dsc in that section is not limited to a definite value.
The supplementary part in the section “Path planning cases” (Revised-Line 405-410) is
“When determining the inshore-distance parameter Dsc, a USV with given vU,max, tr, and ad is considered as a case. Based on Eq. (20), Dsc is suggested to take a value larger than 30 m. Therefore, a larger value is used for the path planning of the five selected path cases. The other inshore-distance parameter Dwc is selected empirically. The main parameters used in these path planning cases are shown in Table 1.”
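As a purely illustrative numeric check of Eq. (20), with hypothetical values that are ours and not the manuscript’s:

```latex
% Assumed values (not from the manuscript):
% v_{U,max} = 6 m/s, t_r = 1 s, a_d = -1.5 m/s^2
D_{sc} \ge v_{U,\max} t_r - \frac{v_{U,\max}^2}{2 a_d}
       = 6 \times 1 - \frac{6^2}{2 \times (-1.5)} = 6 + 12 = 18\ \mathrm{m}
```

Note that the minus sign and the negative braking acceleration together make the second term positive, so the bound adds the reaction-time distance and the braking distance.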
Reviewer #2’s comment 4:
Provide an interpretation of equation (1) besides of the definition of the variables.
Response to reviewer #2’s comment 4:
Equation (1) is one expression form of the Eikonal equation, a well-known equation in the field of geometrical optics. In the original article, we described it only briefly. The original expression (Original-Line 110-112) is
“The core of the FMM is to calculate the solutions of the Eikonal equation, which describes a wave front propagation scenario with the speed of a wave front given as F_x at grid x.”
To make it clearer, we have reorganized this part and added more description about this equation. The revised part (Revised-Line 115-125) is
“When using FM-based methods to plan a path, the basis is the FMM. The core work of the FMM is calculating solutions of the Eikonal equation (Tsitsiklis, 1995; Adalsteinsson & Sethian, 1995). The Eikonal equation describes a wave front propagation scenario from sources with the speed of a wave front given as F_x at cell x. It can be expressed as ‖∇T_x‖F_x = 1, where T_x is the arrival time of the wave from the source to the cell x and ∇ is a vector differential operator. From the perspective of time-cost, the Eikonal equation can be expressed in another form (that of Lin, 2003):

‖∇T_x‖ = τ_x    (1)

where τ_x is the time-cost at cell x and is equivalent to 1/F_x. The solution we want to calculate is T_x, and all of the solutions compose the arrival time map, 𝒯(x). All time-costs τ_x compose the time-cost function map, τ(x).”
Reviewer #2’s comment 5:
Justify the need of the proposed method in comparison to the classical approach to inflate the obstacles size to guarantee to find collision free paths.
Response to reviewer #2’s comment 5:
Thank you very much for this suggestion.
Inflating the obstacle size to guarantee finding collision-free paths is indeed a classical approach. The classical approach and the proposed method are two different processing approaches. In open ocean areas, such a classical approach can plan safe paths. However, inflating the obstacle size has some shortcomings. (1) When the FMM is used to plan a path on a grid map with inflated obstacles, it is equivalent to performing the basic FMM. Although the planned paths are safe enough owing to the inflated obstacle size, there may be abrupt turns when the obstacles have sharp corners (Garrido, Alvarez & Moreno, 2020). Such paths are typically unfriendly for USVs. Compared with such paths, the modified paths planned by the IDC-FM2 method are smoother. (2) When the algorithm based on two-level grid maps is used and the obstacle size is inflated, narrow channels are more easily mapped as obstacles when the HSR grid map is mapped to the LSR grid map to improve computational efficiency. This may result in the loss of some possible paths through the channels.
Although we believe this comment is a good suggestion that reflects an advantage of the proposed method, it is somewhat tangential to the improvement of the IDC-FM2 method with the IDC parameters and to the improvement of computational efficiency. Therefore, we do not intend to compare the classical approach with the proposed method in this manuscript. However, if there is a chance to carry out more detailed related studies, this suggestion will be a good supplement.
The results show the benefits of the proposed method in comparison to a basic FMM-based method in terms of flexibility to find a smooth and safe path.
Regarding the reported results about computational efficiency, some issues must be addressed:
Reviewer #2’s comment 6:
Clarify in the description of the proposed method, where exactly the method saves computational time with respect to the compared method.
Response to reviewer #2’s comment 6:
Thank you for this valuable suggestion.
Regarding the computational efficiency, all FM-based methods scale with the grid map size. When the grid map scale is very large, two aspects of the calculations increase severely compared with small-scale grid maps. (1) The number of cells increases significantly; therefore, the calculations for the cells also increase. (2) The Trail set changes in every iteration of the calculations. Its overall scale obviously increases, resulting in an increase in the sorting time of the adopted priority heap structure.
The compared method is performed directly on a single grid map, and the valid scale of a grid map equals the number of free cells. In comparison, the proposed algorithm first determines effective regions within the HSR grid map, and the path planning is then performed in these effective regions. The number of free cells in the effective regions decreases significantly. Therefore, the calculation time decreases significantly, and much planning time is saved.
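For intuition only, a minimal sketch of an FMM-style main loop (our illustration, not the manuscript’s code) shows where these costs arise: each accepted free cell triggers heap operations whose cost grows with the size of the Trail set, so restricting the planning to effective regions shrinks both factors.

```python
import heapq
import numpy as np

def fmm_arrival_times(tau, sources):
    """Minimal FMM-style sweep: tau is a 2D array of time-costs,
    sources is a list of (i, j) cells with arrival time 0.
    A one-point update is used for brevity instead of Eq. (3)."""
    T = np.full(tau.shape, np.inf)
    heap = []  # the Trail set: each push/pop costs O(log(heap size))
    for s in sources:
        T[s] = 0.0
        heapq.heappush(heap, (0.0, s))
    accepted = np.zeros(tau.shape, dtype=bool)
    while heap:  # every free cell is accepted once -> O(N log N) overall
        t, (i, j) = heapq.heappop(heap)
        if accepted[i, j]:
            continue
        accepted[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < tau.shape[0] and 0 <= nj < tau.shape[1]
                    and not accepted[ni, nj]):
                cand = t + tau[ni, nj]  # one-point update
                if cand < T[ni, nj]:
                    T[ni, nj] = cand
                    heapq.heappush(heap, (cand, (ni, nj)))
    return T
```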
To describe the comparison well, we have added the algorithm flow of the compared method based on a single grid map in section “Algorithm flow,” including a new figure (Fig. 8). The details of where exactly the method saves computational time with respect to the compared method are described in section “Computational efficiency improvement.”
The supplementary part in section “Algorithm flow” (Revised-Line 339-348) is
“The proposed algorithm is improved based on the basic FM2-based algorithm. The basic FM2-based algorithm is executed directly on a single grid map. The algorithm flow of this basic algorithm is shown in Fig. 8 in the form of the main data stream and the used methods. This algorithm flow has two main steps. In Step S1, an arrival time map with obstacle sources and a saturation threshold, 𝒯sat, is obtained by first applying the FMM. Then, the approximate time-cost weighting function map, 𝑾, is calculated and used to adjust the time-cost map. Based on the adjusted time-cost map, an arrival time map with the goal point source, 𝒯, is obtained by applying the FMM again. The entire process is used to obtain 𝒯 by applying the IDC-FM2 method. Finally, the global path is acquired based on 𝒯 by applying the gradient descent method in Step S2.”
The new Fig. 8 is shown as follows:
Figure 8. Algorithm flow of the basic algorithm applying the IDC-FM2 method on a single grid map. This flow is expressed in the form of main data stream and used methods.
As a comparison, Fig. 9 (original Fig. 8) is shown as follows:
Figure 9. Algorithm flow of the proposed algorithm performed on two-level grid maps. This flow is expressed in the form of main data stream and used methods.
The supplementary part in section of “Computational efficiency improvement” (Revised-Line 439-459) is
“When planning a path by applying the basic FM2-based algorithm, if the grid map scale is very large, two aspects of the calculations increase severely compared with small-scale maps. First, as the number of cells increases significantly, the calculations for free cells increase. Second, the Trail set 𝓢T changes in every iteration of the calculations. The overall scale of 𝓢T clearly increases, resulting in an increase in the sorting time of the adopted priority heap structure. This compared algorithm, whose algorithm flow is shown in Fig. 8, is used directly on a single HSR grid map. The valid scale of the HSR grid map is the number of free cells. For comparison, the proposed algorithm first determines the effective regions within the HSR grid map and then plans a path within the determined effective regions. As shown in Fig. 9, Steps T1–T3 complete the determination of the effective regions, and Steps T4–T5 realize the path planning. The same function of the path planning is also realized by Steps S1–S2, as shown in Fig. 8. Comparing the path planning steps of the compared and proposed algorithms, the numbers of free cells in the effective regions are much smaller than those in the HSR grid map. This indicates that the calculations for free cells and the overall scale of 𝓢T greatly decrease. Therefore, the planning time of the proposed algorithm is reduced. Certainly, there are more steps in the proposed algorithm. The planning time spent on these steps (Steps T1–T3) mainly depends on the LSR mapping parameter L. When the planning time spent on the compared algorithm is large enough, the planning time spent on Step T2 reduces to a small fraction of it (see the cases shown in Table 4), and the planning time spent on Steps T1 and T3 is often much less still. Therefore, the increased planning time spent on Steps T1–T3 is much less than the saved planning time spent on Steps T4–T5.”
Reviewer #2’s comment 7:
Specify which stages of the proposed method are measured in the planning times reported in Table 4 (they must be called planning times instead of execution times).
Response to reviewer #2’s comment 7:
Thank you for this suggestion.
First, we fully agree with the comment about the “planning time.” The unclear description “execution time” may cause readers to have different understandings. Thus, we have modified all instances of “execution times” (Original-Line 389, Original-Line 390, and Original-Line 406) to “planning times.”
The specification requested in this comment has been added in the section “Computational efficiency improvement.”
The supplementary part (Revised-Line 427-429) is
“Every planning time starts from the HSR grid map reading (in Steps S1 and T1) and ends when the global path has been calculated (in Steps S2 and T5).”
Reviewer #2’s comment 8:
It is not clear why these times varies from one run to another, clarify if there is some random process in the method that generates the difference in time.
Response to reviewer #2’s comment 8:
Thank you for your comment about this question. It is an important issue to discuss.
First, we can confirm that there is no random process in this method, and the global path generated by each path planning is exactly the same.
Here, we understand that the varying times discussed in this comment refer to the repeated planning times of the same path. It is easy to understand that different paths have different starting and goal positions and lengths, which lead to different amounts of calculation. Several factors cause the repeated times for the same path to vary. One main factor is the environment in which the algorithm code runs. As mentioned in Revised-Line 426, the code ran on a Windows 7 operating system. Although there is no random process in the method, the dynamic performance of the computer changes throughout program execution, mainly because Windows is a multitasking system. In a Windows environment, the CPU usage differs at each moment, and the CPU cache hit rate is also effectively random. This means there is a time deviation in each calculation, and the accumulation of thousands of small calculation-time differences in one path planning run causes measurable planning time differences. In addition, many background threads execute in the operating system besides the running code itself. Therefore, the CPU is often not occupied by the thread of the running code all the time; instead, the CPU performs thread scheduling and allocates CPU time to other threads. Each scheduling event produces some system overhead, and the other threads also consume time. This also influences the planning time.
However, we currently cannot pinpoint all possible influencing factors owing to our limited knowledge of the internals of the Windows operating system.
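As a minimal sketch of how such jitter is typically averaged out when benchmarking (our illustration, not the manuscript’s code; plan_path is a placeholder for one full planning run):

```python
import time

def average_planning_time(plan_path, n_runs=10):
    """Run the planner repeatedly and average wall-clock times to
    smooth out OS-level jitter (scheduling, cache effects)."""
    times = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        plan_path()  # placeholder: one full planning run
        times.append(time.perf_counter() - t0)
    return sum(times) / len(times)
```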
Based on this problem, we first removed the maximum and minimum planning times from Table 4, and the discussion of the related results was also removed from the text to avoid the possible misunderstanding that the planning times were unstable or that there might be a random process. Meanwhile, a simple explanation of why the planning times of the same path vary has been added. As a comparison, the original text (Original-Line 389 to Original-Line 394) is
“The execution time results are presented in Table 4. The minimum and maximum time results show that execution times for the same path using the same algorithm do not change significantly at different times. This indicates that the proposed algorithm was stable. Conversely, the computational efficiency of the proposed algorithm is significantly higher than that of the FM2-based method when considering a single HSR grid map.”
The original Table 4 is
Table 4:
Execution times of five typical path cases for the algorithms based on different maps.

Path   Single map time (s)          Two-level maps time (s)
       Min      Max      Avg        Min      Max      Avg
1      28.94    29.41    29.21      1.93     1.95     1.94
2      24.45    24.84    24.65      1.94     1.96     1.95
3      29.85    31.34    30.25      1.95     1.99     1.97
4      16.10    17.15    16.44      1.68     1.88     1.76
5      40.04    40.94    40.40      2.13     2.37     2.24
The revised text (Revised-Line 429-435) is
“Because the path planning is performed on a Windows operating system, which involves multitasking, there may be many factors influencing the planning time, such as the variable CPU usage, random CPU hit rate to the cache, and possible thread scheduling. Therefore, the average planning time is used to evaluate the computational efficiency. The average planning time results of the five planned paths are presented in Table 4. These results indicate that the computational efficiency of the proposed algorithm based on two-level grid maps is significantly higher than the algorithm based on a single HSR grid map.”
The revised Table 4 is
Table 4:
Planning times of five typical path cases for the compared algorithm based on a single HSR grid map and the proposed algorithm based on two-level SR grid maps.

       Average planning times (s)
Path   Compared algorithm    Proposed algorithm
1      29.21                 1.94
2      24.65                 1.95
3      30.25                 1.97
4      16.44                 1.76
5      40.40                 2.24
Reviewer #2’s comment 9:
Comment in the conclusions if the method can be extended to make replanning in execution time and what is the challenge to do it using this kind of methods.
Response to reviewer #2’s comment 9:
Thank you for your suggestion. Path replanning is a valuable issue, and we have recently been considering using this method for it.
From the results shown in Table 4, in a map with 64000 × 48000 cells, the planning time for a path within 40 km is around 2 s. Although such a planning time is often longer than the control cycle of a USV (usually 1.0 s), it is still short enough for a USV in most situations that require replanning. With reference to manned ships, it is quite normal to spend a few seconds, or even tens of seconds, between proposing a course change (i.e., the need to replan a new path) and executing the new route. Therefore, from the planning time perspective, this method has the potential to be applied to replanning.
Certainly, there are some challenges in using such methods to replan a path. For example, when a USV detects obstacles that are not included in the map (such as reefs, or even other ships), accurately obtaining the location information of the new obstacles and adding that information to the grid map are common challenges. In addition, a USV is typically sailing while a new path is replanned. Therefore, the method should be modified to plan a path that meets the dynamic characteristics of the USV; otherwise, the USV may not be able to follow the replanned path at its beginning.
In the revised manuscript, we have added a comment in the conclusions about the potential and challenges to replan a new path using this type of methods. The supplementary part (Revised-Line 469-477) is
“Meanwhile, the planning time of this order of magnitude is typically short enough for USVs in most situations with the requirement to replan the path. This indicates the potential of replanning by using the proposed algorithm from the perspective of planning time. However, there are still some challenges. For example, how to accurately obtain the location information of newly detected obstacles and how to add this information into the grid map when replanning are the common challenges. The IDC-FM2 method also needs to be modified to plan the path meeting the dynamic characteristics of a USV. Otherwise, the USV may not follow the replanned path successfully at the beginning of the path.”
Reviewer #2’s comment 10:
Finally, a remarkable issue of the paper is that the English writing can be improved and must be carefully revised.
Response to reviewer #2’s comment 10:
Thank you for your advice about this problem.
We first revised the issues raised in the following comments, and some other places have also been improved. We then had the revised manuscript edited by a language editing service to improve the language.
Reviewer #2’s comment 11:
For instance,
the second phrase of the abstract is not correct.
Response to reviewer #2’s comment 11:
Thank you for pointing out this error.
We have used the phrase “resulting in” to replace the phrase “leading to” and corrected the sentence structure.
The original text (Original-Line 16-18) is
“The fast marching method-based path planning for unmanned surface vehicles performed on grid maps, leading to a decrease in computer efficiency for larger maps.”
The revised text (Revised-Line 16-18) is
“The fast marching method-based path planning for USVs is performed on grid maps, resulting in a decrease in computer efficiency for larger maps.”
Reviewer #2’s comment 12:
the word “grid” is incorrectly used most of the time, “cell” must be used in many cases.
Response to reviewer #2’s comment 12:
Thank you for your comment, we really agree with this.
We have used “cell” to replace “grid” in the cases where the word should indicate a cell of a grid map. The individual replacements are not listed here; however, these modifications can be seen in the revised manuscript with tracked changes.
Reviewer #2’s comment 13:
In the paragraph starting in line 134, are the authors talking about the proposed method or the existing one?
Response to reviewer #2’s comment 13:
The method referred to in the mentioned paragraph is an existing method.
We have added references in the revised article to point out that the method is an existing method.
The original text (Original-Line 135-136) is
“Using the FM2 method, a smooth path with sufficient safety distances from obstacles can be computed.”
The revised text (Revised-Line 147-149) is
“As described in Garrido et, al.(2007) and Garrido, Alvarez & Moreno, (2020), a smooth path with sufficient safety distances from obstacles can be computed using the FM2 method.”
Reviewer #2’s comment 14:
Line 147, different behaviors… These kind of behaviors must be introduced to give a clearer notion.
Response to reviewer #2’s comment 14:
Thank you for this suggestion. We agree with this comment.
We have used detailed descriptions to replace the unclear expression “different behaviors.” Two supplementary parts have been added in the revised article. The first part briefly describes the function of the parameters α and β. The second part describes how α and β modify the paths.
The original text (Original-Line 146-147) is
“Modifications of the speed map with two parameters, α and β, allow the achievement of different behaviors of the computed paths.”
The revised texts are
“Modifications of the speed map with two parameters, α and β, can adjust distances between the computed paths and obstacles.” (Revised-Line 159-160)
“Comparing with a path without modifications, the path modified by α will be closer to obstacles when α < 1. On the contrary, if α > 1, the modified path will stay further away from obstacles. The parameter β allows the path to move closer to obstacles. When β is smaller, the modified path will be closer to obstacles.” (Revised-Line 165-168)
Reviewer #2’s comment 15:
Lines 179 and 212, among others.
Response to reviewer #2’s comment 15:
Thank you for pointing out these two errors. The redundant words (“To” in Original-Line 179, and “This” in Original-Line 212) have been removed in the revised article.
(1) The original text with the redundant word “To” (Original-Line 179) is
“To The two-level SR grid maps …”
The revised text (Revised-Line 197) is
“The two-level SR grid maps …”.
(2) The original text with the redundant word “This” (Original-Line 212) is
“This In the basic FM2 method, the path …”
The revised text (Revised-Line 230) is
“In the basic FM2 method, the path …”.
" | Here is a paper. Please give your review comments after reading it. |
Abstract
A global path planning algorithm for unmanned surface vehicles (USVs) with short time requirements in large-scale and complex multi-island marine environments is proposed. The fast marching method-based path planning for USVs is performed on grid maps, resulting in a decrease in computational efficiency for larger maps. This can be mitigated by improving the algorithm process. In the proposed algorithm, path planning is performed twice in maps with different spatial resolution (SR) grids. The first path planning is performed in a low SR grid map to determine effective regions, and the second is executed in a high SR grid map to rapidly acquire the final high-precision global path. In each path planning process, a modified inshore-distance-constraint fast marching square (IDC-FM2) method is applied. Based on this method, the path portions around an obstacle can be constrained within a region determined by two inshore-distance parameters. The path planning results show that the proposed algorithm can generate smooth and safe global paths wherein the portions that bypass obstacles can be flexibly modified. Compared with the path planning based on the IDC-FM2 method applied to a single grid map, this algorithm can significantly improve the calculation efficiency while maintaining the precision of the planned path.
Introduction

Research on unmanned surface vehicles (USVs) has received increased attention in various military and civilian applications over recent years (Yan et al., 2010; Campbell, Naeem & Irwin, 2012; Liu et al., 2016). Robust and reliable navigation, guidance, and control (NGC) systems are required for USVs to perform a variety of complex marine missions, and path planning is an essential function of such systems. In some special applications, the shortest time may be an important requirement for USVs. The fast marching method (FMM) can be a solution for time-optimal global path planning. This method was first proposed by Tsitsiklis (1995) and Adalsteinsson & Sethian (1995) independently and was extended by Sethian (1999). The path planned by the FMM is usually extremely close to the obstacles. One solution is to adjust the speed map, as exemplified by the method with an adjusted cost function (Messias et al., 2014) and the FM2 method (Garrido et al., 2007). FMM-based methods have been widely used in path planning applications (Gómez et al., 2013; Amorim & Ventura, 2014; Alvarez et al., 2015; González et al., 2016). Marine applications based on the FMM were introduced by Garrido, Alvarez & Moreno (2020). Interesting modifications of the FM2 method have been performed, and the FMM has been extended with a vector field considering the effects of several vector variables such as wind flow or water currents (Garrido, Alvarez & Moreno, 2020). In addition, studies on path following, obstacle avoidance, and formations have also been conducted using the FMM (Garrido, Alvarez & Moreno, 2020). USV formation path planning has also been performed by Liu & Bucknall (2015) and Tan et al. (2020), and an angle-guidance FM2 method has been used for the Springer USV to make the generated path compliant with the dynamics and orientation restrictions of USVs (Liu & Bucknall, 2016; Liu, Bucknall & Zhang, 2017). An improved anisotropic fast marching method using a multi-layered fast marching was proposed by Song, Liu & Bucknall (2017), which combines different environmental factors and provides interesting results. In addition to global path planning, FMM-based methods also show potential in the collision avoidance of USVs (Wang, Jin & Er, 2019; Garrido, Alvarez & Moreno, 2020). These successful studies have demonstrated the potential of FMM-based methods in the global path planning of USVs.

The basic FMM can plan time-optimal paths for USVs.
However, one of the main shortcomings is that the paths planned by the FMM are too close to obstacles, and there may be abrupt turns when the paths bypass obstacles with sharp corners. Thus, the FM2 method is proposed to address these problems, and two dimensionless parameters are introduced to adjust the paths more flexibly (Garrido et al., 2007; Garrido, Alvarez & Moreno, 2020). However, the two introduced parameters lack clear physical meaning, and the suitable adjustment is difficult to identify with specific values. Moreover, the adjustment effects of these two parameters are not general because the adjustment degree with the same parameters varies with different grid maps. Another common problem of FM-based methods is that the computational efficiency of path planning decreases sharply when the scale of the grid map is very large. Therefore, we make two improvements to address the mentioned shortcomings of the basic FM2 method. First, we introduce, for the first time, an inshore-distance-constraint fast marching square (IDC-FM2) method to improve the inshore path adjustment performance. Compared with the basic FM2 method, the IDC-FM2 method applies two inshore distance parameters, rather than the two dimensionless parameters, to adjust the paths around the obstacles. The adjustment effects for path planning of the IDC-FM2 method are stable across different grid maps because the IDC-FM2 method can constrain the path portions around the obstacles within the region determined by the two inshore distance parameters. Further, to improve the computational efficiency, an algorithm that applies the IDC-FM2 method based on two-level spatial resolution grid maps is designed.
Related Methods
Environment map model

In 2D global path planning applications based on the FMM or its improved methods, discrete numerical calculations are based on Cartesian grid maps. Therefore, an environment map should first be converted into a binary grid map with a suitable spatial resolution (SR). Free and open-source satellite images (such as Google satellite images) can be used as the data source for the maps in most USV applications, and the corresponding grid maps can be generated using image processing (Shi et al., 2018) combined with manual assistance. A binary cell is set as an obstacle cell (value 0) when an obstacle exists at the geographic location; otherwise, it is set as a free cell (value 1).
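A minimal sketch of this conversion step (our illustration; the grayscale thresholding rule and parameter names are assumptions standing in for the image processing plus manual assistance described above):

```python
import numpy as np
from PIL import Image

def image_to_binary_grid(image_path, water_threshold=128):
    """Convert a satellite image into a binary grid map:
    1 = free (navigable water), 0 = obstacle (land)."""
    gray = np.asarray(Image.open(image_path).convert("L"), dtype=np.uint8)
    return (gray < water_threshold).astype(np.uint8)  # dark pixels ~ water
```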
Fast marching square method

When using FM-based methods to plan a path, the basis is the FMM. The core work of the FMM is calculating solutions of the Eikonal equation (Tsitsiklis, 1995; Adalsteinsson & Sethian, 1995). The Eikonal equation describes a wave front propagation scenario from sources with the speed of a wave front given as F_x at cell x. It can be expressed as ‖∇T_x‖F_x = 1, where T_x is the arrival time of the wave from the source to the cell x and ∇ is a vector differential operator. From the perspective of time-cost, the Eikonal equation can be expressed in another form (that of Lin, 2003):

‖∇T_x‖ = τ_x    (1)

where τ_x is the time-cost at cell x and is equivalent to 1/F_x. The solution we want to calculate is T_x, and all of the solutions compose the arrival time map, 𝒯(x). All time-costs τ_x compose the time-cost function map, τ(x).
The FMM was first proposed independently by Tsitsiklis (1995) and Adalsteinsson & Sethian (1995). The solution T_x in cell x can be interpreted as the wave arrival time from the nearest source to cell x. During the calculation, the cells are divided into three sets: the accepted set 𝓢_A, whose solutions are fixed; the trial set 𝓢_T, whose solutions have been computed but may still be updated; and the far set 𝓢_F, which contains all other cells with T_x = ∞, which has never been computed (Lin, 2003).
The calculation process includes initialization and loop procedures. During initialization, source cells x_0 with T_{x_0} = 0 are set to 𝓢_T, and all other cells are set to 𝓢_F with T_x = ∞. After the initialization, the cell x_m with the smallest value T in 𝓢_T is selected and moved from 𝓢_T to 𝓢_A. Thereafter, the non-accepted neighbor cells of x_m are updated, including the solutions and cell sets. Considering Tsitsiklis's deduction (Tsitsiklis, 1995; Lin, 2003), each of the new neighbor solutions is determined by:

T_x = min(T_x, T_{x_m x_i}, i = 1, 2)    (2)

where T_x denotes the original solution. T_{x_m x_i}, i = 1, 2 are the candidate solutions for the paths that pass through the line segment x_m x_i and propagate to cell x (see Fig. 1), which are calculated by:

T_{x_m x_i} = (1/2)(T_{x_m} + T_{x_i} + √(2τ_x² − (T_{x_m} − T_{x_i})²)) if this value is larger than both T_{x_m} and T_{x_i}; otherwise, T_{x_m x_i} = min(T_{x_m}, T_{x_i}) + τ_x    (3)

The loop procedure is performed continually until all cells are in 𝓢_A. The resulting map is the arrival time map, 𝒯.

The main disadvantage of the basic FMM is that the computed path is too close to obstacles and forces the vehicle to perform abrupt turns (Garrido, Alvarez & Moreno, 2020). As described in Garrido et al. (2007) and Garrido, Alvarez & Moreno (2020), a smooth path with sufficient safety distances from obstacles can be computed using the FM2 method. This method applies the basic FMM twice. The procedure for computing paths is described as follows (Garrido, Alvarez & Moreno, 2020):

1. The environment is modeled as a binary grid map (Fig. 2A).

2. The FM-1st step. All obstacle cells are used as wave sources (T = 0), expanding several waves at the same time at a constant speed. The value of each cell in the resulting map indicates the time required for a wave to reach the closest obstacle (see Fig. 2B). This is proportional to the distance from the obstacles. By reversing the meaning of these values, they can be understood as the maximum admissible speed at each cell. Finally, the speed values are rescaled to fix a maximum cell value of 1.

Modifications of the speed map with two parameters, α and β, can adjust distances between the computed paths and obstacles. The value F_{i,j} of each cell in the speed map is adjusted exponentially by α:

newF_{i,j} = F_{i,j}^α    (4)

The parameter β is used to saturate the values in the speed map. It is defined within the range of 0 and 1. Every F_{i,j} with a value greater than β is set to one (see Fig. 2C).
Comparing with a path without modifications, the path modified by α will be closer to obstacles when α < 1. On the contrary, if α > 1, the modified path will stay further away from obstacles. The parameter β allows the path to move closer to obstacles. When β is smaller, the modified path will be closer to obstacles.

3. The FM-2nd step. The goal point is used as a unique wave source. The wave is expanded over the map until the starting point is reached. The speed at each cell (equivalent to 1/τ_x) is obtained from the modified speed map computed in the FM-1st step. The resulting arrival time map is shown in Fig. 2D.

4. Finally, a gradient descent is applied over the resulting arrival time map from the starting point to the goal point. An optimal path in terms of the arrival time, smoothness, and safety is obtained.
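A sketch of the speed-map modification used in the FM-1st step (Eq. (4) plus the β saturation; our illustration, not the authors’ code):

```python
import numpy as np

def modify_speed_map(F, alpha=1.0, beta=1.0):
    """Adjust a rescaled speed map F (values in [0, 1]):
    exponential shaping by alpha (Eq. 4), then saturation by beta."""
    newF = np.power(F, alpha)   # alpha < 1 pulls paths toward obstacles
    newF[newF > beta] = 1.0     # values above beta are set to one
    return newF
```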
Gradient descent method

The global path can be extracted by applying the gradient descent method. The path propagates along the gradient descent direction from the starting position P_s to the goal position P_g with a step length d. The value of d can be set equal to the SR of the grid map. The gradient ∇T_{i,j} of the cell x_{i,j} is calculated by Eq. (5). The path waypoints are not limited to cell centers. Generally, the gradient of a path waypoint P_n = (u, v) located in the cell x_{i,j} is approximated by the cell gradient ∇T_{i,j}. However, a modified gradient ∇T_{u,v} calculated by bilinear interpolation can be used to improve the path precision (see Fig. 3).
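A minimal sketch of the path extraction step (our illustration; obstacle cells with infinite arrival time are assumed to have been masked, and the bilinear-interpolation refinement is omitted for brevity):

```python
import numpy as np

def extract_path(T, start, goal, d=1.0, tol=1.0, max_steps=100000):
    """Walk downhill on the arrival time map T from start toward the
    goal (the unique wave source, where T = 0) with step length d."""
    g0, g1 = np.gradient(T)                # cellwise gradients along both axes
    path = [np.asarray(start, dtype=float)]
    for _ in range(max_steps):
        p = path[-1]
        i = min(max(int(p[0]), 0), T.shape[0] - 1)   # nearest-cell gradient
        j = min(max(int(p[1]), 0), T.shape[1] - 1)
        grad = np.array([g0[i, j], g1[i, j]])
        norm = np.linalg.norm(grad)
        if norm == 0.0:
            break                           # flat region: stop
        path.append(p - d * grad / norm)    # move against the gradient
        if np.linalg.norm(path[-1] - np.asarray(goal, dtype=float)) < tol:
            break
    return np.array(path)
```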
Proposed algorithm

To improve the computational efficiency of global path planning in a large-scale and multi-island marine environment, the proposed algorithm performs path planning twice for different purposes. First, the path planning is performed in a low SR (LSR) grid map to determine an effective region. The final global path is then obtained in the second path planning within the effective region of a high SR (HSR) grid map. The relevant methods and procedures applied in the algorithm are as follows. The complete algorithm flow is summarized in the last part of this section.
Mapping of two-level SR grid maps

The two-level SR grid maps comprise the HSR and LSR grid maps. The HSR grid map is directly obtained from the Google satellite image data. A mapping relationship between the LSR cells and the corresponding L × L HSR cell sub-blocks is established, which can be expressed as:

(i_L, j_L) ~ [ (i'_H, j'_H + L − 1)   ⋯   (i'_H + L − 1, j'_H + L − 1)
               ⋮                      ⋱   ⋮
               (i'_H, j'_H)           ⋯   (i'_H + L − 1, j'_H) ]    (6)

where (i_L, j_L) are the LSR cell coordinates, and (i'_H, j'_H) are the original cell coordinates of the mapped HSR cell sub-block, as shown in Fig. 4.

To map the LSR grid map to a sub-block of the HSR grid map, the following is used:

i'_H = i_Ho + L·i_L,  j'_H = j_Ho + L·j_L    (7)

To map from a sub-block of the HSR grid map to a cell of the LSR grid map, we use:

i_L = f_floor((i_H − i_Ho)/L),  j_L = f_floor((j_H − j_Ho)/L)    (8)
where (i_H, j_H) are the HSR cell coordinates, with i'_H ≤ i_H ≤ i'_H + L − 1 and j'_H ≤ j_H ≤ j'_H + L − 1. (i_Ho, j_Ho) are the original cell coordinates of the original HSR cell sub-block (see Fig. 4), which are determined by:

i_Ho = (i_Hg − f_floor(L/2)) % L,  j_Ho = (j_Hg − f_floor(L/2)) % L    (9)

where (i_Hg, j_Hg) are the HSR cell coordinates of the goal cell x_g, the function f_floor(m) indicates the rounding down of m, and a%b indicates the modulo operation.
The SR of the LSR grid map is:

D_LRes = L·D_HRes    (10)

where D_HRes is the SR of the HSR grid map. In general, L is defined as:

L ≤ f_round(D_Th / (2·D_HRes))    (11)

where D_Th is the distance threshold of the virtual obstacle influence, and the function f_round(m) indicates the rounding of m. The cell numbers of the LSR grid map in the X-axis and Y-axis directions are:

M_L = f_floor((M_H − i_Ho)/L)    (12)

N_L = f_floor((N_H − j_Ho)/L)    (13)

where M_H and N_H are the cell numbers of the HSR grid map in the X-axis and Y-axis directions, respectively.

Based on the mapping relationship, an LSR cell type is determined by comparing the set threshold Γ and the obstacle cell proportion γ in the HSR cell sub-block. If γ > Γ, the LSR cell is set as an obstacle cell; otherwise, it is set as a free cell. A small value of Γ is preferred so as to retain as much obstacle information as possible.
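A compact sketch of the index mapping of Eqs. (7)–(9) (our illustration; non-negative cell indices are assumed so that Python's integer division matches f_floor):

```python
def lsr_origin(goal_h, L):
    """Eq. (9): origin (i_Ho, j_Ho) of the original HSR sub-block,
    derived from the goal cell (i_Hg, j_Hg)."""
    i_hg, j_hg = goal_h
    return ((i_hg - L // 2) % L, (j_hg - L // 2) % L)

def lsr_to_hsr_origin(cell_l, origin_h, L):
    """Eq. (7): origin (i'_H, j'_H) of the L-by-L HSR sub-block
    mapped from the LSR cell (i_L, j_L)."""
    return (origin_h[0] + L * cell_l[0], origin_h[1] + L * cell_l[1])

def hsr_to_lsr(cell_h, origin_h, L):
    """Eq. (8): LSR cell (i_L, j_L) containing the HSR cell (i_H, j_H)."""
    return ((cell_h[0] - origin_h[0]) // L, (cell_h[1] - origin_h[1]) // L)
```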
IDC-FM2 method

In the basic FM2 method, the path is improved using a modified speed map. Two parameters, α and β, are used to modify the speed map. These modifications ensure the flexibility of the computed path. However, these two parameters have no clear physical meaning. The normalization of the speed values in the FM-1st step also results in several shortcomings. For example, all the speed values in the speed map must be calculated. When the map range changes slightly, the entire speed map rescales and differs at the same locations of the original map, and the parameters must be adjusted.

An IDC-FM2 method is proposed to address these shortcomings. In this method, a time-cost weighting function, w_x, is introduced to adjust the speed map.
Time-cost weighting function

The time-cost weighting function w_x is introduced as a concept of the virtual obstacle influence. When a USV is far away from obstacles, there is no obstacle influence. As the USV approaches, the virtual obstacle influence increases slowly at first, and the amplitude increases gradually. The obstacle influence increases dramatically when the USV comes close within a certain degree.

Similar to the FM-1st step in the basic FM2 method, all obstacle cells are set as source cells to first calculate the arrival time map. The difference here is that a threshold T_Th = τ·D_Th/D_Res could be set to speed up the calculation, where D_Res is the SR of the map, and τ is the unified time-cost, which can take 1 as the value. The time-cost weighting function is then approximated as a function of the distance D_x between cell x and the nearest obstacle:

w_x(D_x) = { ∞, D_x = 0;  1 + a(D_Th/D_x − 1)^b, 0 < D_x ≤ D_Th;  1, D_x > D_Th }    (16)

where w_x(D_x) is the value in cell x of the approximate time-cost weighting function map, 𝑾.

The time-cost value of each cell in the time-cost map, which is the inversion of the speed map, is then adjusted to:

newτ_x = w_x(D_x)·τ_x    (17)

Eq. (16) implies that w_x increases when cell x is closer to obstacles, and it also increases when D_x is small. The planned path tends to select points with a lower arrival time-cost. Therefore, when the path is close to obstacles, it will be farther away from them compared to the path planned by the basic FMM.
<ns0:div><ns0:head>Parameters for the time-cost weighting function</ns0:head><ns0:p>Five parameters, $D_{\mathrm{Th}}$, $D_{\mathrm{sc}}$, $D_{\mathrm{wc}}$, $w_{\mathrm{sc}}$, and $w_{\mathrm{wc}}$, are introduced to determine the coefficients $a$ and $b$, which are given by:</ns0:p><ns0:formula>(18) $b = \dfrac{\ln(w_{\mathrm{sc}} - 1) - \ln(w_{\mathrm{wc}} - 1)}{\ln(1 - e_{\mathrm{sc}}) - \ln(1 - e_{\mathrm{wc}}) + \ln e_{\mathrm{wc}} - \ln e_{\mathrm{sc}}}$</ns0:formula><ns0:formula>(19) $a = (w_{\mathrm{sc}} - 1) \left( \dfrac{e_{\mathrm{sc}}}{1 - e_{\mathrm{sc}}} \right)^b$</ns0:formula><ns0:p>where $e_{\mathrm{sc}} = D_{\mathrm{sc}} / D_{\mathrm{Th}}$, $e_{\mathrm{wc}} = D_{\mathrm{wc}} / D_{\mathrm{Th}}$, $D_{\mathrm{sc}} < D_{\mathrm{wc}} < D_{\mathrm{Th}}$, and $w_{\mathrm{sc}} > w_{\mathrm{wc}} > 1$.</ns0:p><ns0:p>The parameter $D_{\mathrm{Th}}$ determines the largest virtual influence scope of the obstacles. The suggestion for the selection of $D_{\mathrm{Th}}$ is that there should be a sufficiently safe buffer area around obstacles. The parameter $D_{\mathrm{sc}}$ is very important for the safety of the USV. First, it is suggested that this parameter satisfies:</ns0:p><ns0:formula>(20) $D_{\mathrm{sc}} \ge v_{\mathrm{U,max}} t_r - \dfrac{v_{\mathrm{U,max}}^2}{2 a_d}$</ns0:formula><ns0:p>where $v_{\mathrm{U,max}}$ is the maximum speed of the USV, $t_r$ is the reaction time of the thruster, and $a_d$ is the negative maximum acceleration of the USV under braking. In an unknown environment, it is better to select a slightly larger value to ensure safety; the same holds in environments with shallow and reef waters. This value may be relatively small in deep and reef-free waters to improve path efficiency. $D_{\mathrm{wc}}$ is a parameter that acts as the adjusted constraint distance. In a non-channel ocean area, the path portion around the obstacles will be limited to the region between $D_{\mathrm{wc}}$ and $D_{\mathrm{Th}}$. The value of $D_{\mathrm{wc}}$ can be set independently or determined jointly by $D_{\mathrm{Th}}$ and $D_{\mathrm{sc}}$; in this study, it is set jointly from $D_{\mathrm{Th}}$ and $D_{\mathrm{sc}}$ (Eq. (21)).</ns0:p></ns0:div>
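The following sketch computes the coefficients $a$ and $b$ of Eqs. (18)-(19) and the lower bound on $D_{\mathrm{sc}}$ of Eq. (20); the function names are illustrative. With the case values used later ($v_{\mathrm{U,max}} = 6$ m/s, $t_r = 2$ s, $a_d = -1$ m/s$^2$) the bound evaluates to 30 m.

```python
import math

def idc_coefficients(D_th, D_sc, D_wc, w_sc, w_wc):
    """Coefficients a, b of the weighting function from Eqs. (18)-(19).

    Requires D_sc < D_wc < D_th and w_sc > w_wc > 1.
    """
    e_sc, e_wc = D_sc / D_th, D_wc / D_th
    b = ((math.log(w_sc - 1) - math.log(w_wc - 1)) /
         (math.log(1 - e_sc) - math.log(1 - e_wc)
          + math.log(e_wc) - math.log(e_sc)))
    a = (w_sc - 1) * (e_sc / (1 - e_sc)) ** b
    return a, b

def min_safe_distance(v_max, t_r, a_d):
    """Lower bound on D_sc from Eq. (20); a_d is negative under braking."""
    return v_max * t_r - v_max ** 2 / (2 * a_d)

# Example: min_safe_distance(6.0, 2.0, -1.0) -> 12 + 18 = 30 m
```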
<ns0:div><ns0:head>Determination of effective regions within the HSR grid map</ns0:head><ns0:p>The effective region within the HSR grid map is important for improving the computational efficiency of the proposed algorithm. It is acquired based on the mapping between the LSR and HSR grid maps once the corresponding region within the LSR grid map is determined.</ns0:p><ns0:p>Two effective regions, used respectively in the FM-1st and FM-2nd steps of the IDC-FM$^2$ method, must be determined. The low-precision initial path, $\ell_{\mathrm{ini}}$, is obtained first based on the LSR arrival time map $\mathcal{T}_L$. Two effective regions within the LSR grid map, $S_{\mathrm{LER\_1st}}$ and $S_{\mathrm{LER\_2nd}}$, are determined by expanding the grids around the 'path-passed' grids of $\ell_{\mathrm{ini}}$. As shown in Fig. 6, the nearest neighbor grid among the four neighbor cells of the waypoint $P_i = (x_i, y_i)$ is taken as the 'path-passed' cell with respect to $P_i$.</ns0:p></ns0:div>
<ns0:div><ns0:p>The parameters used in the proposed algorithm are listed in Table 1. Using $\ell_{\mathrm{ref}}$ as the reference, the distances between the corresponding waypoints of $\ell_\kappa$ ($\kappa = 3,\dots,6$) and $\ell_{\mathrm{ref}}$ are shown in Fig. 7. It is shown that the largest distance decreases as $\kappa$ increases, and the distance becomes 0 when $\kappa \ge 7$. Other cases were tested and similar results were obtained. These results indicate that there is good consistency when $\kappa$ is sufficiently large (such as $\kappa = 10$). $\Delta\kappa$ is a variable parameter that was determined based on three different cases.</ns0:p></ns0:div>
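A minimal sketch of the effective-region expansion is shown below; it assumes a square expansion window of half-width $\kappa$ around each 'path-passed' LSR cell, which is one plausible reading of the expansion described above, and the function name is illustrative.

```python
import numpy as np

def expand_effective_region(shape, path_cells, kappa):
    """Mark an effective region by expanding kappa cells around each
    'path-passed' LSR grid of the initial path (cf. Fig. 6).
    """
    region = np.zeros(shape, dtype=bool)
    M, N = shape
    for (i, j) in path_cells:
        i0, i1 = max(0, i - kappa), min(M, i + kappa + 1)
        j0, j1 = max(0, j - kappa), min(N, j + kappa + 1)
        region[i0:i1, j0:j1] = True   # square window around the cell
    return region
```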
<ns0:div><ns0:head>Algorithm flow</ns0:head><ns0:p>The proposed algorithm is improved based on the basic FM$^2$-based algorithm. The basic FM$^2$-based algorithm is executed directly on a single grid map. The flow of this basic algorithm is shown in Fig. 8 in the form of its main data stream and the methods used. This algorithm flow has two main steps. In Step S1, an arrival time map with obstacle sources and a saturation threshold, $\mathcal{T}_{\mathrm{sat}}$, is obtained by first applying the FMM. Then, the approximate time-cost weighting function map, $\boldsymbol{W}$, is calculated and used to adjust the time-cost map. Based on the adjusted time-cost map, an arrival time map with the goal point as source, $\mathcal{T}$, is obtained by applying the FMM again. The entire process obtains $\mathcal{T}$ by applying the IDC-FM$^2$ method. Finally, the global path is acquired from $\mathcal{T}$ by applying the gradient descent method in Step S2.</ns0:p><ns0:p>The main data stream in the proposed algorithm and the methods used to obtain it are shown in Fig. 9.</ns0:p></ns0:div>
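The following outline mirrors the two-level flow of Steps T1-T5 (Fig. 9) as a sketch; `idc_fm2`, `upsample_region`, and `gradient_descent_path` are hypothetical helper names standing in for the IDC-FM$^2$ planner, the LSR-to-HSR back-mapping, and the gradient descent extraction, while `downsample_hsr_to_lsr` and `expand_effective_region` refer to the sketches above.

```python
def plan_path_two_level(hsr_map, start, goal, params):
    """High-level sketch of the two-level planning flow (Steps T1-T5);
    the helper names are illustrative, not the paper's API.
    """
    # T1: map the HSR grid to an LSR grid (Eqs. 10-13)
    lsr_map = downsample_hsr_to_lsr(hsr_map, params.L, params.gamma_th)
    # T2: plan a low-precision initial path on the LSR map (IDC-FM^2)
    path_ini = idc_fm2(lsr_map, start, goal, params)
    # T3: expand effective regions around the initial path
    region = expand_effective_region(lsr_map.shape, path_ini, params.kappa)
    hsr_region = upsample_region(region, params.L)   # back-map to HSR cells
    # T4: apply IDC-FM^2 only inside the HSR effective regions
    T = idc_fm2(hsr_map, start, goal, params, mask=hsr_region)
    # T5: extract the final path by gradient descent on T
    return gradient_descent_path(T, start, goal)
```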
<ns0:div><ns0:head>Results and discussion</ns0:head></ns0:div>
<ns0:div><ns0:head>Simulation environments</ns0:head><ns0:p>Two surrounding spatial areas of Zhucha Island and Changhai County were selected as the simulation environments. The original maps adopted Google satellite images, as shown in Fig. 10. The spatial resolutions in the longitude and latitude directions are approximately 4.8 m and 3.9 m, respectively. Temporary binary grid maps are obtained based on the image processing method (Shi et al., 2018) combined with manual assistance, and then the HSR grid maps with $D_{\mathrm{HRes}} = 10\ \mathrm{m}$ in both the longitude and latitude directions are determined by resampling. The corresponding binary grid maps are presented in Fig. 11. Their ranges are $7\ \mathrm{km} \times 7\ \mathrm{km}$ and $64\ \mathrm{km} \times 48\ \mathrm{km}$, respectively.</ns0:p></ns0:div>
<ns0:div><ns0:head>Inshore-distance-constraint performances</ns0:head><ns0:p>A path planning from $P_s = [4.3\ \mathrm{km}, 2.8\ \mathrm{km}]$ to $P_g = [3.6\ \mathrm{km}, 2.0\ \mathrm{km}]$ is used as a typical case to analyze the inshore-distance-constraint performance of the IDC-FM$^2$ method using the inshore-distance parameters $D_{\mathrm{Th}}$ and $D_{\mathrm{sc}}$. As shown in Fig. 12, the path planned based on the basic FMM, $\ell_{\mathrm{FMM}}$, is very close to the islands when this path bypasses them. For comparison, all paths planned by the proposed algorithm, $\ell_1$ to $\ell_4$, are located away from the islands by a certain distance. They are therefore significantly better choices from a safety perspective.</ns0:p><ns0:p>Paths $\ell_1$ to $\ell_4$ are acquired using different inshore-constraint distance parameters ($D_{\mathrm{Th}}$ and $D_{\mathrm{sc}}$, wherein $D_{\mathrm{wc}}$ is calculated based on $D_{\mathrm{Th}}$ and $D_{\mathrm{sc}}$) in the IDC-FM$^2$ method, in which $w_{\mathrm{sc}} = 40.0$ and $w_{\mathrm{wc}} = 2.0$. These paths are clearly adjusted by the distance parameters. When the distance constraints ($D_{\mathrm{Th}}$ and $D_{\mathrm{sc}}$) are small, the path (such as $\ell_1$, see Fig. 12) will be somewhat close to the islands. The estimated quasi-closest distance from $\ell_1$ to an island is approximately 38 m, which is shorter than half of the shortest channel width ($d_{\mathrm{hscw}} \approx 101\ \mathrm{m}$). In this situation, $\ell_1$ is always outside its $S_{\mathrm{sc}}$ (<28.2 m), and the path portions around the islands are located in the region of its $S_{\mathrm{wc}}$ (28.2 m to 60.0 m). When the distance constraints increase, the paths (such as $\ell_2$ and $\ell_3$, see Fig. 12) in the non-channel areas will be outside of $S_{\mathrm{sc}}$. The estimated quasi-closest distances (approximately 115 m and 128 m, respectively) were larger than $d_{\mathrm{hscw}}$. However, $\ell_2$ and $\ell_3$ will follow the quasi-midline of the channel in the channel area, regardless of whether $d_{\mathrm{hscw}}$ is greater or less than $D_{\mathrm{wc}}$ for these two paths, because the weighted time-cost is smaller than that of the path around the islands (as in $\ell_4$, see Fig. 12). If the distance constraints are further increased, $\ell_4$ will be selected for safety, although more time will be required. These results indicate that the path based on the proposed algorithm can be adjusted by setting different inshore-constraint distance parameters to meet safety requirements. Additionally, when the inshore-constraint distance parameters are determined, the path will cross a channel when its width is adequately large; otherwise, the path will bypass the islands.</ns0:p><ns0:p>One classical approach to plan collision-free paths is applying the FMM within a map with inflated obstacles. For comparison, paths planned by the classical approach and the IDC-FM$^2$ method are shown in Fig. 13. The starting and goal positions are $P_s = [4.15\ \mathrm{km}, 2.74\ \mathrm{km}]$ and $P_g = [4.33\ \mathrm{km}, 2.40\ \mathrm{km}]$, and the island obstacles have been inflated by 94.0 m. As shown in Fig. 13, the path planned by the classical approach, $\ell_{\mathrm{FMM+inflated}}$, is obviously influenced by the shape of the inflated obstacle. Some path turns, marked by ellipses in Fig. 13, are affected both by the shape of the inflated obstacle and by the time-value gradient calculation of cells adjacent to the inflated obstacle. A smoothed path, $\ell_{\mathrm{FMM+inflated+smooth}}$, is also shown in Fig. 13. Although it removes abrupt turn waypoints, the path is still not very smooth. As comparisons, the paths planned based on the IDC-FM$^2$ method, $\ell_1$ and $\ell_2$, are obviously smoother than $\ell_{\mathrm{FMM+inflated+smooth}}$.</ns0:p></ns0:div>
In addition, when using the proposed rapid path planning algorithm based on two-level SR grid maps to improve the computational efficiency, the channel will be mapped as obstacle cells when mapping the HSR grid map to the LSR grid map in this case (taking the mapping parameter $L = 8$ as shown in Table 1), because of the influence of the inflated obstacle. This unexpected mapping results in the loss of the path through the channel.
<ns0:div><ns0:head>Global path planning in a large-scale and complex multi-island environment</ns0:head><ns0:p>To verify the path planning ability and the computational efficiency improvement of the proposed algorithm in a large-scale and relatively complex multi-island environment, the simulation environment shown in Fig. 11B is selected, and long-distance path cases of USVs around the islands or across channels are investigated.</ns0:p></ns0:div>
<ns0:div><ns0:head>Path planning cases</ns0:head><ns0:p>Five typical path cases were selected to demonstrate the performance of the proposed algorithm. The $P_s$ and $P_g$ groups are listed in Table 3. When determining the inshore-distance parameter $D_{\mathrm{sc}}$, a USV with $v_{\mathrm{U,max}} = 6\ \mathrm{m/s}$, $t_r = 2\ \mathrm{s}$, and $a_d = -1\ \mathrm{m/s^2}$ is considered as a case. Based on Eq. (20), it is suggested to select a $D_{\mathrm{sc}}$ value larger than 30 m. Therefore, a larger value, $D_{\mathrm{sc}} = 50\ \mathrm{m}$, is used for the path planning of the five selected path cases. The other inshore-distance parameter, $D_{\mathrm{Th}} = 200\ \mathrm{m}$, is selected empirically. The main parameters used in these path planning cases are shown in Table 1.</ns0:p><ns0:p>As shown in Fig. 14, all paths successfully bypass the islands. The enlarged views (see Fig. 15; every path is displayed with a solid line) provide further details. Every path maintains a relatively safe distance when close to an island, and is smooth when turning around the island. For a sufficiently wide narrow channel (usually wider than $2D_{\mathrm{LRes}}$), the path is planned along the quasi-midline of the channel to keep it as safe as possible. The path lengths range from about 26.52 km to 47.86 km (see Table 3). These lengths can cover the range of most applications of current small- and medium-sized USVs. These results show the effective path planning ability of the proposed algorithm in a large-scale and complex multi-island environment.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computational efficiency improvement</ns0:head><ns0:p>To verify the computational efficiency improvement of the proposed method, the time spent on the five typical paths (see Table 3) by the proposed algorithm was measured. For comparison, the time spent by the basic algorithm, which applies the IDC-FM$^2$ method directly on a single HSR grid map, was also measured as a reference. The C algorithm code was tested on a computer with a Core i5-6300U CPU and 8 GB memory, running a 64-bit Windows 7 operating system.</ns0:p><ns0:p>Path planning for every typical path was repeated 10 times. Every planning time starts from the HSR grid map reading (in Steps S1 and T1) and ends when the global path has been calculated (in Steps S2 and T5). Because the path planning is performed on a Windows operating system, which involves multitasking, many factors may influence the planning time, such as variable CPU usage, random CPU cache hit rates, and possible thread scheduling. Therefore, the average planning time is used to evaluate the computational efficiency. The average planning time results of the five planned paths are presented in Table 4. These results indicate that the computational efficiency of the proposed algorithm based on two-level grid maps is significantly higher than that of the algorithm based on a single HSR grid map. The time for all cases is approximately 2 s, indicating that this method can be used in less demanding real-time planning applications. This can effectively improve the practicality of the proposed algorithm.</ns0:p><ns0:p>When planning a path by applying the basic FM$^2$-based algorithm, if the grid map scale is very large, two aspects of the calculation increase severely compared with small-scale maps. First, as the number of cells increases significantly, the calculations for free cells increase. Second, $\mathcal{S}_T$ changes in every iteration of the calculation; the overall scale of $\mathcal{S}_T$ clearly increases, resulting in an increase in the sorting time of the adopted priority heap structure. This compared algorithm, whose flow is shown in Fig. 8, is used directly on a single HSR grid map. The valid scale of the HSR grid map is the number of free cells. In contrast, the proposed algorithm determines the effective regions within the HSR grid map first and then plans a path within the determined effective regions. As shown in Fig. 9, Steps T1-T3 complete the determination of the effective regions, and Steps T4-T5 realize the path planning. The same path planning function is also realized by Steps S1-S2, as shown in Fig. 8. Comparing the path planning steps of the compared and proposed algorithms, the numbers of free cells in the effective regions are much smaller than those in the full HSR grid map. This indicates that the calculations for free cells and the overall scale of $\mathcal{S}_T$ greatly decrease. Therefore, the planning time of the proposed algorithm is reduced. Certainly, there are more steps in the proposed algorithm; the planning time spent on these steps (Steps T1-T3) mainly depends on the LSR mapping parameter $L$.</ns0:p></ns0:div>
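To illustrate why the scale of $\mathcal{S}_T$ dominates the cost, the following is a minimal FMM-style propagation loop that keeps the narrow band in a priority heap. It uses a simplified update (minimum neighbor time plus the cell time-cost) instead of the full Eikonal quadratic solve, so it is only a sketch of the data-structure behavior, not of the paper's solver; the function name and the optional `mask` (for effective regions) are assumptions.

```python
import heapq
import numpy as np

def fmm_arrival_time(tau, sources, mask=None):
    """Propagate arrival times from source cells over a time-cost map tau.

    The narrow band S_T is kept in a min-heap; its size drives the
    sorting cost discussed above. Simplified update: T = min-neighbor
    T + tau, not the full Eikonal solve of the paper.
    """
    T = np.full(tau.shape, np.inf)
    accepted = np.zeros(tau.shape, dtype=bool)
    band = []                                  # S_T as a min-heap
    for s in sources:
        T[s] = 0.0
        heapq.heappush(band, (0.0, s))
    while band:
        t, (i, j) = heapq.heappop(band)
        if accepted[i, j]:
            continue
        accepted[i, j] = True                  # move cell into S_A
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < T.shape[0] and 0 <= nj < T.shape[1]
                    and not accepted[ni, nj]
                    and (mask is None or mask[ni, nj])):
                cand = t + tau[ni, nj]
                if cand < T[ni, nj]:           # S_F cell enters S_T
                    T[ni, nj] = cand
                    heapq.heappush(band, (cand, (ni, nj)))
    return T
```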
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In some special USV applications, such as sea rescue, the USV is required to reach a goal position as soon as possible. The FMM is a suitable global path planning method that can plan a time-optimal global path. However, the planned paths are not safe enough when they bypass obstacles, because they are too close to the obstacles. Therefore, the FMM should be improved for safety considerations. One classical approach is to apply the FMM within a map with inflated obstacles. Although the planned paths are made safe by inflating the obstacle size, there may be abrupt turns when the obstacles have sharp corners, and the planned paths are influenced by the shapes of the inflated obstacles and may not be very smooth. Such paths are usually unfriendly for USVs. A rapid global path planning algorithm applying an IDC-FM$^2$ method on two-level SR grid maps is proposed for USV applications with short time requirements in large-scale and complex multi-island environments. This algorithm can acquire a continuous, smooth, quasi-time-optimal path while maintaining a safe distance from obstacles when bypassing them. When the path is near obstacles, it is limited to a safe area determined by two inshore-distance parameters. By adjusting these two inshore-distance parameters, the path can be modified flexibly. Although strict time optimality is lost when using the IDC-FM$^2$ method, safety, which has a higher priority in most applications, is ensured while the loss of time optimality remains limited. The two-stage planning process based on two-level SR grid maps improves the computational efficiency compared with the basic FM$^2$-based method. A planning time on the order of seconds is acceptable in many global path planning applications of USVs. Meanwhile, a planning time of this order of magnitude is typically short enough for USVs in most situations that require replanning the path. This indicates the potential of replanning with the proposed algorithm from the perspective of planning time. As a comparison, when using the classical approach that applies the FMM within a map with inflated obstacles, narrow channels are more easily mapped as obstacles when mapping the HSR grid map to the LSR grid map in the rapid planning process based on two-level SR grid maps. This shortcoming may result in the loss of some possible paths through channels.</ns0:p><ns0:p>On the other hand, there are still some challenges when using the proposed algorithm. For example, how to accurately obtain the location information of newly detected obstacles and how to add this information to the grid map when replanning are common challenges. The IDC-FM$^2$ method also needs to be modified so that the planned path meets the dynamic characteristics of a USV. Otherwise, the USV may not follow the replanned path successfully at the beginning of the path.</ns0:p><ns0:p>One shortcoming of the proposed algorithm is that paths through channels that are very narrow but still passable in reality may be missed because of the first path planning process in the LSR grid map. The introduction of environmental effects into the proposed algorithm is an important task for future work. Marine experiments should also be conducted to verify the validity of the algorithm.</ns0:p></ns0:div>
Effective region within the LSR grid map.</ns0:p><ns0:p>The case shown uses $\kappa = 4$; $P_{ff,i} = (f_{\mathrm{floor}}(x_i), f_{\mathrm{floor}}(y_i))$, $P_{cf,i} = (f_{\mathrm{ceil}}(x_i), f_{\mathrm{floor}}(y_i))$, $P_{cc,i} = (f_{\mathrm{ceil}}(x_i), f_{\mathrm{ceil}}(y_i))$, and $P_{fc,i} = (f_{\mathrm{floor}}(x_i), f_{\mathrm{ceil}}(y_i))$ are the four neighbor cells of $P_i$. $P_{ff,i}$ is the 'path-passed' cell with respect to $P_i$ in this case.</ns0:p><ns0:note type='other'>Figure 14</ns0:note><ns0:note type='other'>Figure 15</ns0:note><ns0:p>Enlarged views of path portions.</ns0:p><ns0:p>(A)-(E) show the portions indicated by rectangles in Fig. 14. Detailed information can be acquired from (A) to (E). For example, all paths maintain a relatively safe distance from each island, the paths are smooth even when they turn around islands, and paths across narrow channels can be planned along the quasi-midline of the channels.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>$\nabla T_{i,j} = \left[ \frac{T_{i+1,j} - T_{i-1,j}}{2}, \frac{T_{i,j+1} - T_{i,j-1}}{2} \right]^{\mathrm{T}}$, where $T_{i,j}$ is the arrival time value at grid $x_{i,j}$ and the non-italic T in the upper-right corner represents the transpose of the vector $[x\ y]$.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>The area around the obstacles is divided into four parts by the three inshore-distance parameters $D_{\mathrm{sc}}$, $D_{\mathrm{wc}}$, and $D_{\mathrm{Th}}$. The three parts extending from an obstacle to the outside region are the danger region, $S_d$; the strong constraint region, $S_{\mathrm{sc}}$; and the weak constraint region, $S_{\mathrm{wc}}$ (see Fig. 5). The influences of $w_{\mathrm{wc}}$ and $w_{\mathrm{sc}}$ on the path are shown in Fig. 5. As shown in Fig. 5A, the path will be located far from the obstacle until it is near the boundary of $S_{\mathrm{wc}}$ when $w_{\mathrm{wc}}$ is large. In contrast, the path is closer to an obstacle when $w_{\mathrm{sc}}$ is large (see Fig. 5B). The minimum degree to which the path approaches $S_{\mathrm{sc}}$ is mainly determined by $w_{\mathrm{wc}}$, which can be inferred by further comparing the paths in Fig. 5B with the closest path to the obstacle ($w_{\mathrm{wc}} = 1.1$) in Fig. 5A. However, regardless of the change in $w_{\mathrm{wc}}$ and $w_{\mathrm{sc}}$ ($w_{\mathrm{sc}} > w_{\mathrm{wc}} > 1$), the path portions bypassing obstacles will always be within $S_{\mathrm{wc}}$ or around the outside boundary of $S_{\mathrm{wc}}$ in non-channel ocean areas. In channel areas where there is no $S_{\mathrm{wc}}$, the path will follow the quasi-centerline of the channel if it passes through the channel. In this study, the values $w_{\mathrm{sc}} = 40.0$ and $w_{\mathrm{wc}} = 2.0$ were selected and fixed. The paths can be flexibly modified by the inshore-distance parameters $D_{\mathrm{Th}}$ and $D_{\mathrm{sc}}$.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Figure 7</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>$\ell_{\mathrm{ref}}$ is the path obtained by applying the IDC-FM$^2$ method on the single HSR grid map directly, while $\ell_\kappa$ is the path obtained by applying the proposed algorithm based on two-level SR grid maps.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 11</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Binary grid maps.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>Figure 12</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 13</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Paths planned by a classical approach applying the FMM within the map with inflated obstacles and by the IDC-FM$^2$ method. When using the classical approach, the obstacle is inflated by 94.0 m. The black cells indicate the raw obstacle, the white cells indicate the free areas, while the cells with different gray scales indicate the inflated obstacle.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Five</ns0:head><ns0:label /><ns0:figDesc>Figure 15.</ns0:figDesc><ns0:graphic coords='47,42.52,229.87,525.00,391.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='35,42.52,265.72,525.00,315.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='43,42.52,255.37,525.00,180.00' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>In the remaining cases, $T_x = \min(T_{x_m}, T_{x_i}) + \tau_x$, where $T_{x_m}$ and $T_{x_i}$ are the accepted solutions at cells $x_m$ and $x_i$, respectively. If cell $x_i$ is in $\mathcal{S}_T$, $T_{x_i}$ is $\infty$. For the cell sets, if a non-accepted neighbor cell $x$ is in $\mathcal{S}_F$, it moves from $\mathcal{S}_F$ to $\mathcal{S}_T$.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>Three inshore-distance parameters (the distance threshold $D_{\mathrm{Th}}$, as well as the virtual strong and weak constraint distance parameters, $D_{\mathrm{sc}}$ and $D_{\mathrm{wc}}$, respectively) and two weighting function values with respect to $D_{\mathrm{sc}}$ and $D_{\mathrm{wc}}$ ($w_{\mathrm{sc}}$ and $w_{\mathrm{wc}}$, respectively) are used to determine the function $w_x$, and $D_{\mathrm{Th}}$ is used to achieve the same function as the speed saturation. Although five parameters are set, only the two inshore-distance parameters, $D_{\mathrm{Th}}$ and $D_{\mathrm{sc}}$, must be considered, while suitable values of $w_{\mathrm{sc}}$ and $w_{\mathrm{wc}}$ can be selected as constants. Based on this method, the path portion around an obstacle is constrained to a region determined by $D_{\mathrm{Th}}$ and $D_{\mathrm{sc}}$.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>In the loop procedure, if the smallest $T$ value of a cell in $\mathcal{S}_T$ is larger than or equal to $T_{\mathrm{Th}}$, the $T$ values of all non-accepted cells can be set to $T_{\mathrm{Th}}$ directly. Based on the resulting arrival time map $\mathcal{T}_{\mathrm{sat}}$ with a saturation threshold, the time-cost weighting function map $\boldsymbol{W}'$ is designed as: (14) $w'_x = \infty$ for $T_x = 0$, and $w'_x = 1 + a \left( T_{\mathrm{Th}} / T_x - 1 \right)^b$ for $T_x > 0$, where $w'_x$ and $T_x$ are the values of cell $x$ in $\boldsymbol{W}'$ and $\mathcal{T}_{\mathrm{sat}}$, respectively, and $a$ and $b$ are two positive coefficients to be determined. When a free grid $x$ is within the virtually influenced scope of obstacles (i.e., $0 < D_x \le D_{\mathrm{Th}}$, where $D_x$ is the closest distance to obstacles), then: (15) $D_x \approx D_{\mathrm{Res}} T_x / \tau$. Therefore, $w_x$ with respect to $D_x$ is used to approximate $w'_x$, giving Eq. (16).</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /></ns0:figure>
</ns0:body>
" | "School of Electronics and Information Engineering
Harbin Institute of Technology
Nangang District, Harbin, Heilongjiang, China
No. 92, Xidazhi Street
Rapid global path planning algorithm for unmanned surface vehicles in large-scale and multi-island marine environments
Dear Editors:
We thank the reviewers for their second round of comments on the manuscript. We have revised the manuscript to address their concerns. Because Reviewer #1 mentioned in “Comments for the author” that he/she has no further questions, a detailed point-by-point response to that review is not provided again.
This document provides our point-by-point response to Reviewer #2’ comments.
Three key documents are resubmitted:
• Rebuttal letter:
This document contains our point-by-point response to the comments by the reviewers and the editor.
• Revised manuscript with tracked changes:
This document is identical to the revised manuscript except that all changes are highlighted; the tracked changes are computer-generated, produced with the Compare function.
• Revised manuscript:
The clean version of the manuscript, with all tracked changes accepted.
Responses to Reviewer #1 (Jesus Hernandez-Barragan)
Thank you very much for your work about this manuscript.
Basic reporting
The paper is well organized and is quite good from the English aspect. The introduction and background literature references are sufficient. Moreover, the shared Raw data supports the presented results. Finally, figures have good quality and tables are appropriately described.
Experimental design
The paper is into the scope of the journal. The contribution of the paper is clear, and it provides sufficient detail to replicate the proposed method.
Validity of the findings
The results of the paper show the effectiveness of the proposed approach. Moreover, a comparison against a basic approach is presented. The conclusions are well stated and connected to the original question investigated.
Comments for the author
The authors have followed the suggestions and comments to improve the manuscript quality. I have no further questions.
Responses to Reviewer #2
Thank you very much for your comments. A clean revised manuscript and a revised manuscript with tracked changes are provided. The detailed responses are below the comments.
Reviewer comments have been italicized and are in bold font, and the responses are normal. Every reviewer comment with a response is highlighted, such as “Reviewer #2’s comment 1,” and the response is highlighted by another color such as “Response to reviewer #2’s comment 1”.
In the responses, in addition to the explanation and discussion, some original texts and revised texts are shown. The original texts are in red font, and the revised texts are in blue font for distinction.
To distinguish the line number in the original and revised manuscripts, Original-Line and Revised-Line were used to represent the lines in the original and revised manuscripts, respectively. For example, “Original-Line 1” represents line 1 in the original manuscript, while “Revised-Line 1” represents line 1 in the revised manuscript.
Basic reporting
The paper have been adequately improved and I only suggest to include disscussions about two points that were replied to this reviewer but not included in the manuscript. This is specified in the comments to the authors.
Experimental design
This reviewer is satisfied with the experimental design presented in the paper.
Validity of the findings
This reviewer is convinced of the validity of the findings presented in the paper.
Comments for the author
The paper have been adequately improved and I only suggest to include disscussions about two points that were replied to this reviewer but not included in the manuscript:
Reviewer #2’s comment 1:
-What is the importance to plan time-optimal paths in comparison to optimal paths in distance? At the end, the method modifies the paths according to the parameters of the IDC and time optimality is then missed. Some discussion about this point must be included in the manuscript as was answered to the reviewer.
Response to reviewer #2’s comment 1:
Thank you for this comment.
We add some discussion about this point in the “Conclusions” part. The supplementary parts are
“In some special USV applications such as sea rescue, it requires the USV to reach a goal position as soon as possible. FMM is a suitable global path planning method which can plan a time-optimal global path. However, the planned paths are not safe enough when they bypass obstacles, because of that they are too close to the obstacles. Therefore, FMM should be improved for safety considerations.” (Revised-Line 476-479)
“Although the time optimality is missed by using the IDC-FM2 method, the safety with a higher priority in most applications has been ensured initially while the optimal loss of time remains at a certain level.” (Revised-Line 489-491)
Reviewer #2’s comment 2:
-Justify the need of the proposed method in comparison to the classical approach to inflate the obstacles size to guarantee to find collision free paths. Some discussion about this point must be included in the manuscript as was answered to the reviewer.
Response to reviewer #2’s comment 2:
Thank you for this comment.
About this comment, we have added supplementary parts in both the “Results and discussion” section and the “Conclusions” section. In the “Results and discussion” section, the path planed by the classical approach and our proposed IDC-FM2 method are compared and discussed. A new figure (Fig. 13) is also added. Some discussions are also added in the “Conclusions” section.
The detailed supplementary part in the “Results and discussion” section is
“One classical approach to plan collision-free paths is applying the FMM within the map with inflated obstacles. For comparison, paths planned by the classical approach and the IDC-FM2 method are shown in Fig. 13. The starting and goal positions are $P_s = [4.15\ \mathrm{km}, 2.74\ \mathrm{km}]$ and $P_g = [4.33\ \mathrm{km}, 2.40\ \mathrm{km}]$, and the island obstacles have been inflated by 94.0 m. As shown in Fig. 13, the path planned by the classical approach, $\ell_{\mathrm{FMM+inflated}}$, is obviously influenced by the shape of the inflated obstacle. Some path turns, which are marked by ellipses in Fig. 13, are affected both by the shape of the inflated obstacle and by the time-value gradient calculation of cells adjacent to the inflated obstacle. A smooth path, $\ell_{\mathrm{FMM+inflated+smooth}}$, is also shown in Fig. 13. Although it removes abrupt turn waypoints, the path is still not very smooth. As comparisons, paths planned based on the IDC-FM2 method, $\ell_1$ and $\ell_2$, are obviously smoother than $\ell_{\mathrm{FMM+inflated+smooth}}$. In addition, when using the proposed rapid path planning algorithm based on two-level SR grid maps to improve the computational efficiency, the channel will be mapped as obstacle cells when mapping the HSR grid map to the LSR grid map in this case (taking the mapping parameter $L = 8$ as shown in Table 1), because of the influence of the inflated obstacle. This unexpected mapping will result in the loss of the path through the channel.” (Revised-Line 398-412)
The new added figure is
Figure 13. Paths planned by a classical approach applying the FMM within the map with inflated obstacles and the IDC-FM2 method. When using the classical approach, the obstacle is inflated by 94.0 m. The black cells indicate the raw obstacle, the white cells indicate the free areas, while the cells with different gray scales indicate the inflated obstacle.
The detailed supplementary parts in the “Conclusions” section are
“One classical approach is applying the FMM within the map with inflated obstacles. Although the planned paths are safe by inflating the obstacle size, there may be abrupt turns when the obstacles have sharp corners, and the planned paths are influenced by shapes of inflated obstacles and may be not very smooth. Such paths are usually unfriendly for USVs.” (Revised-Line 479-483)
“As a comparison, when using the classical approach which applies the FMM within the map with inflated obstacles, narrow channels will be easier to map as obstacles when mapping the HSR grid map to the LSR grid map in the rapid planning process based on two-level SR grid maps. This shortcoming may result in the loss of some possible paths through channels.” (Revised-Line 497-501)
" | Here is a paper. Please give your review comments after reading it. |
166 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Small sample learning aims to learn information about object categories from a single or a few training samples. This learning style is a crucial challenge for deep learning methods that rely on large amounts of data. Deep learning can address small sample learning through the meta-learning idea of 'learning how to learn by using previous experience.' Therefore, this paper takes image classification as the research object to study how meta-learning can learn quickly from a small number of sample images. The main contents are as follows:</ns0:p><ns0:p>After considering the effect of distribution differences between data sets on the generalization performance of metric learning, and the advantages of optimizing the initial characterization, this paper incorporates the model-agnostic meta-learning algorithm and designs a multi-scale meta-relational network. First, the idea of META-SGD is adopted: the inner learning rate is treated as a learnable vector and learned together with the model parameters. Secondly, in the meta-training process, the model-agnostic meta-learning algorithm is used to find the optimal parameters of the model, and the inner gradient iteration is canceled in the processes of meta-validation and meta-testing. The experimental results show that the multi-scale meta-relational network gives the learned metric stronger generalization ability, which further improves the classification accuracy on the benchmark sets and avoids the need for fine-tuning in the model-agnostic meta-learning algorithm.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Deep learning has made significant progress in computer vision, but only on the premise of a large amount of annotated data <ns0:ref type='bibr' target='#b21'>(Ni et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b29'>Vinyals, Blundell, Lillicrap, & Wierstra, 2016;</ns0:ref><ns0:ref type='bibr'>Zheng, Liu, & Yin, 2021)</ns0:ref>. However, it is often impractical to acquire large amounts of data in real life. As far as deep learning is concerned, fitting a more complex model requires more data to achieve good generalization ability. When data are lacking, deep learning techniques can fit the training samples well, but their generalization performance on new samples is poor. Inspired by the human ability to learn quickly from a small sample, many researchers have become increasingly aware of the need to study machine learning from small samples. In recent years, small sample learning has become a very important frontier research direction in deep learning. Human beings can learn new concepts from a single or a few samples and obtain extremely rich representations from sparse data. This ability is attributed to people's capacity to understand and control their own learning process, called meta-learning <ns0:ref type='bibr' target='#b0'>(Biggs, 1985;</ns0:ref><ns0:ref type='bibr' target='#b2'>Ding et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b20'>Ma, Zheng, Chen, & Yin, 2021;</ns0:ref><ns0:ref type='bibr' target='#b31'>Yin et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Up to now, research on learning from few samples can be divided into two directions: 1. generative models based on probabilistic reasoning; 2. discriminative models based on meta-learning.</ns0:p><ns0:p>The pioneering work on few-sample learning can be traced back to the work of Li Fei-Fei et al. in the early 2000s, which defined the concept of few-sample learning: learning a new category by using one or a few sample images of the new category <ns0:ref type='bibr' target='#b3'>(Fe-Fei, 2003)</ns0:ref>. Li Fei-Fei proposed a Bayesian learning framework in 2004. Andrew L. Maas et al. used Bayesian networks to capture the relationships between attributes in 2009, which can handle near-deterministic relationships and include soft probabilistic relationships <ns0:ref type='bibr' target='#b10'>(Kemp & Maas, 2009;</ns0:ref><ns0:ref type='bibr'>Li, Zheng, Wang, Yin, & Wang, 2015)</ns0:ref>. In 2016, Danilo J. Rezende et al. implemented the hierarchical Bayesian program learning framework in a deep generative model based on feedback and attention principles. Compared with the shallow hierarchical Bayesian program learning framework (B. M. <ns0:ref type='bibr' target='#b14'>Lake, Salakhutdinov, & Tenenbaum, 2015;</ns0:ref><ns0:ref type='bibr' target='#b28'>Tang, Liu, Li, et al., 2020)</ns0:ref>, this method has a wide range of applications. However, more data are needed to avoid over-fitting <ns0:ref type='bibr' target='#b23'>(Rezende, Danihelka, Gregor, & Wierstra, 2016)</ns0:ref>. In 2017, Vicarious researchers proposed a probabilistic model of vision based on how the visual cortex works, reporting upward from level to level, and named it the Recursive Cortical Network. At the same accuracy, the Recursive Cortical Network uses only one-millionth of the training samples required by deep learning methods.</ns0:p></ns0:div>
After a single training pass, it can be used to crack various variants of text CAPTCHAs <ns0:ref type='bibr'>(George et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b16'>Li et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Meta-learning, also known as learning to learn, refers to using previous experience to learn a new task quickly, rather than considering the new task in isolation. In 2016, Brenden M. Lake et al. emphasized its importance as a cornerstone of artificial intelligence (B. M. <ns0:ref type='bibr' target='#b15'>Lake, Ullman, Tenenbaum, & Gershman, 2017)</ns0:ref>. Up to now, the research directions for using meta-learning to deal with small sample learning include: 1. memory enhancement; 2. metric learning; 3. learning the optimizer; 4. optimizing the initial characterization.</ns0:p></ns0:div>
<ns0:div><ns0:head>1) Memory enhancement</ns0:head><ns0:p>Memory enhancement refers primarily to the use of recurrent neural networks with memory, or temporal convolutions, to iterate over examples of a given problem, accumulating the information used to solve the problem in hidden activations or external memory. Considering that neural Turing machines can perform short-term memory through external storage and long-term memory through slow weight updating <ns0:ref type='bibr'>(Graves et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b27'>Tang et al., 2021)</ns0:ref>, Adam Santoro et al. proposed a memory-augmented network based on the long short-term memory (LSTM) network in 2016 <ns0:ref type='bibr' target='#b24'>(Santoro, Bartunov, Botvinick, Wierstra, & Lillicrap, 2016;</ns0:ref><ns0:ref type='bibr'>Tang, Liu, Deng, et al., 2020)</ns0:ref>. Next, Munkhdalai et al. proposed a meta-network <ns0:ref type='bibr' target='#b21'>(Munkhdalai & Yu, 2017)</ns0:ref>, which is composed of a meta-learner and a learner, with a memory unit added externally. Using the gradient as meta-information, fast weights and slow weights are generated on two time scales and then combined by a layer augmentation method.</ns0:p><ns0:p>2) Metric learning. Metric learning refers to learning a similarity measure from data and then using this measure to compare and match samples of new, unknown categories. In 2015, Gregory Koch et al. proposed the twin (Siamese) network <ns0:ref type='bibr' target='#b1'>(Chen et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b12'>Koch, Zemel, & Salakhutdinov, 2015)</ns0:ref>. The recognition accuracy of the model on the Omniglot data set is close to that of humans. In 2016, Oriol Vinyals et al. proposed an end-to-end, directly optimized matching network based on memory and attention <ns0:ref type='bibr' target='#b18'>(Li et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b29'>Vinyals et al., 2016)</ns0:ref>. This network can learn quickly from few samples; for new classes never seen during training, test samples can be classified using only a few labeled samples of each class, without changing the trained model. Flood Sung et al. proposed the relation network in 2018 <ns0:ref type='bibr' target='#b25'>(Sung et al., 2018)</ns0:ref>. A learned similarity metric is more flexible and can capture the similarity between features better than manually selected metrics, and good results were obtained on several benchmark data sets for few-sample learning.</ns0:p></ns0:div>
<ns0:div><ns0:head>3) Learn the optimizer</ns0:head><ns0:p>Learning the optimizer refers to learning how to update the learner parameters, that is, learning the update function or update rules for new model parameters. In 2017, Ravi and Larochelle used an LSTM as the meta-learner and took the learner's initialization parameters, learning rate, and loss gradient as the LSTM state to learn the learner's initialization parameters and parameter update rules <ns0:ref type='bibr' target='#b22'>(Ravi & Larochelle, 2016)</ns0:ref>. Yang et al. introduced metric learning into the method proposed by Ravi and Larochelle and proposed a meta-metric learner <ns0:ref type='bibr' target='#b30'>(Yang, Liu, Dong, & Wu, 2020)</ns0:ref>. The authors integrate the matching network with the method of using an LSTM to learn parameter update rules, obtaining a better method. However, such a structure is more complex than one based on metric learning alone, and each parameter of the learner is updated independently at each step, which largely limits its potential <ns0:ref type='bibr' target='#b25'>(Sung et al., 2018)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>4) Optimize initial characterization</ns0:head><ns0:p>Optimizing the initial characterization means optimizing the initial representation directly.</ns0:p></ns0:div>
<ns0:div><ns0:p>In 2017, Chelsea Finn et al. proposed the model-agnostic meta-learning algorithm (MAML) <ns0:ref type='bibr' target='#b4'>(Finn, Abbeel, & Levine, 2017;</ns0:ref><ns0:ref type='bibr'>Zheng et al., 2017)</ns0:ref>. Compared with previous meta-learning methods, this method introduces no additional parameters and places no restrictions on the model structure. Instead of learning an update function or learning rule, it only uses the gradient to update the learner weights. Liu et al. used the entire training set to pre-train the feature extractor, since MAML cannot handle high-dimensional data well, and then used a parameter generation model to capture the parameters useful across the task distribution <ns0:ref type='bibr' target='#b19'>(Liu et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b34'>Zheng, Li, Yin, & Wang, 2016a)</ns0:ref>. Taking a completely different direction from the MAML-based improvements, Ruixiang Zhang et al. applied adversarial neural networks to meta-learning. This method uses the idea of generative adversarial networks to augment data and improve the learning ability of the network. However, the overall effect of this method is not stable, the relative cost is high, and the technology is not mature enough <ns0:ref type='bibr' target='#b32'>(Zhang, Che, Ghahramani, Bengio, & Song, 2018;</ns0:ref><ns0:ref type='bibr' target='#b35'>Zheng, Li, Yin, & Wang, 2016b)</ns0:ref>.</ns0:p><ns0:p>The ability to learn and adapt quickly from small amounts of data is critical to AI. However, the success of deep learning depends to a large extent on a large amount of labeled data; in deep neural network learning, each task is learned in isolation, and learning always starts from scratch when facing a new task. Limited data and the need for rapid generalization in dynamic environments challenge current deep learning methods. Therefore, this paper takes few-sample image classification as the research object, combines metric learning with the optimization of initial representations, and designs a multi-scale meta-relational network. Finally, the classification accuracy and training speed on the two small-sample learning baseline data sets are improved.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Materials: 1. Omniglot data set</ns0:head><ns0:p>In 2011, the Omniglot data set was collected by Brenden Lake and his collaborators at MIT through Amazon's Mechanical Turk (B. <ns0:ref type='bibr' target='#b13'>Lake, Salakhutdinov, Gross, & Tenenbaum, 2011;</ns0:ref><ns0:ref type='bibr' target='#b33'>Zheng, Li, Xie, Yin, & Wang, 2015)</ns0:ref>. It consists of 50 alphabets, including mature scripts such as Latin and Korean, little-known local scripts, and fictional character sets such as Aurek-Besh and Klingon. The number of letters in each alphabet varies widely, from about 15 to 40 letters, with 20 samples for each letter. Therefore, the Omniglot data set consists of 1623 categories and 32,460 images. Figure <ns0:ref type='figure'>1</ns0:ref> illustrates five alphabets from the Omniglot data set.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref> Omniglot data set example</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>MiniImageNet data set</ns0:head><ns0:p>The MiniImageNet data set, proposed by Vinyals et al., consists of 60,000 color images of size 84×84×3 in 100 categories, each with 600 samples <ns0:ref type='bibr' target='#b29'>(Vinyals et al., 2016)</ns0:ref>. The data distribution is highly diverse, and the image categories include animals, household goods, remote sensing images, food, etc. Vinyals did not publish this data set. Ravi and Larochelle randomly selected 100 classes from the ImageNet data set to create a new MiniImageNet data set, which was divided into a training set, a validation set, and a test set at a ratio of 64:16:20 <ns0:ref type='bibr' target='#b22'>(Ravi & Larochelle, 2016)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Method 1. Meta-learning based on metric learning</ns0:head><ns0:p>In meta-learning, metric learning refers to learning a similarity measure over a wide task space, so that the experience extracted from previous learning tasks can be used to guide the learning of new tasks, achieving the purpose of learning how to learn. The meta-learner learns the target set on each task of the training set by measuring distances to the support set, and finally learns a metric. For a new task in the test set, it can then quickly and correctly classify the target set with the help of a small number of support set samples.</ns0:p><ns0:p>At present, the methods for small sample image classification based on metric learning include the twin network, the matching network, the prototype network, and the relation network. The twin network is composed of two identical convolutional neural networks; with paired samples as input, the similarity between two images is computed through a comparison loss function. The other three methods do not use paired sample inputs; instead, they calculate the similarity between two images by setting a support set and a target set. In this paper, the improvement is made on the basis of the multi-scale relational network, so the main structure of this method is introduced below.</ns0:p><ns0:p>The structure of the relation network <ns0:ref type='bibr' target='#b25'>(Sung et al., 2018)</ns0:ref> first obtains feature maps of the support set and target set samples through embedding modules, then concatenates the feature maps of the support set and target set samples in the depth direction, and finally learns the concatenated features through the relation module to obtain a relation score, which determines whether the support set and target set samples belong to the same category. The relation score is calculated as follows:</ns0:p><ns0:formula>(1) $r_{i,j} = g_\phi\big(C(f_\varphi(x_i), f_\varphi(x_j))\big), \quad i = 1, 2, \dots, C$</ns0:formula><ns0:p>where $x_i$ represents the support set sample, $x_j$ represents the target set sample, $f_\varphi$ is the embedding module, $C(\cdot,\cdot)$ denotes concatenation of the feature maps in the depth direction, and $g_\phi$ stands for the relation module.</ns0:p></ns0:div>
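A minimal sketch of Eq. (1) in PyTorch is given below; the embedding network `embed` ($f_\varphi$) is passed in, and the relation module ($g_\phi$) here is a placeholder architecture rather than the paper's exact network.

```python
import torch
import torch.nn as nn

class RelationScore(nn.Module):
    """Sketch of Eq. (1): r_ij = g_phi(C(f_phi(x_i), f_phi(x_j)))."""

    def __init__(self, embed, feat_channels):
        super().__init__()
        self.embed = embed                      # f_phi: image -> feature map
        self.relation = nn.Sequential(          # g_phi: relation module
            nn.Conv2d(2 * feat_channels, 64, 3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, 1),
            nn.Sigmoid(),                       # relation score in [0, 1]
        )

    def forward(self, x_support, x_query):
        f_i = self.embed(x_support)
        f_j = self.embed(x_query)
        pair = torch.cat([f_i, f_j], dim=1)     # C(.,.): depth-wise concat
        return self.relation(pair)
```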
<ns0:div><ns0:head n='2.'>Model-agnostic meta-learning algorithm</ns0:head><ns0:p>According to the idea of transfer learning, when adapting to a new task, MAML only needs to fine-tune the learner so that the learner parameter is adapted from $\theta$ to $\theta_i$.</ns0:p></ns0:div>
<ns0:div><ns0:p>However, here we only consider the MAML variant in which the learner converges to the optimal parameter $\theta_i$ after one gradient descent step, as follows:</ns0:p><ns0:formula>(2) $\theta_i = \theta - \alpha \nabla_\theta L_{T_i}(f_\theta)$</ns0:formula><ns0:p>where $f_\theta$ stands for the learner, $L_{T_i}$ stands for the loss on the specific task $T_i$, $\nabla_\theta L_{T_i}(f_\theta)$ stands for the loss gradient, and $\alpha$ stands for the gradient update step size of the learner, namely the learning rate of the learner.</ns0:p></ns0:div>
<ns0:div><ns0:head></ns0:head><ns0:p>On a small amount of data in new task , the learner can be converged to the optimal i T parameter after one step of gradient descent iteration, so MAML must find a set of initialization i θ representations that can be effectively fine-tuned according to a small number of samples. In order to achieve this goal, based on the idea of meta-learning 'using previous experience to quickly learn new tasks', learners need to learn the learner parameter on different tasks. MAML defines this θ process as meta-learning process.</ns0:p><ns0:p>In the Meta-learning process, for different training tasks, the optimal parameter suitable i θ for the specific task was obtained through a step of gradient iteration, and then sampled again on each task for testing, requiring to reach the minimum value. Therefore, MAML adopted</ns0:p><ns0:formula xml:id='formula_4'>  i T i L f  </ns0:formula><ns0:p>the sum of test errors on different tasks as the optimization objective of the meta-learning process, as shown below:</ns0:p><ns0:p>(3)</ns0:p><ns0:formula xml:id='formula_5'>          ~ĩ i i i i T i T T T p T T p T min L f L f L f            </ns0:formula><ns0:p>Where, represents the distribution of the task set, represents the loss on the specific task,</ns0:p><ns0:formula xml:id='formula_6'>  p T i T L</ns0:formula><ns0:p>represents the loss gradient, and represents the learning rate of the learner.</ns0:p><ns0:formula xml:id='formula_7'>  i T L f    </ns0:formula><ns0:p>It can be seen from Equation (3-3) that the optimization goal of the meta-learning process is to adopt the updated model parameter adapted to the specific task, while the meta-optimization i θ process is ultimately executed on the learner parameters . The stochastic gradient descent method θ is adopted, and the updated iteration formula of model parameter is shown as follows:</ns0:p><ns0:formula xml:id='formula_8'>θ (4)     ~i i T i T p T L f          </ns0:formula><ns0:p>Where, represents the distribution of the task set, represents the loss on the specific task, Manuscript to be reviewed</ns0:p><ns0:p>Computer Science also known as the learning rate of the meta-learner.</ns0:p><ns0:p>Substituting Equation (3-1) into Equation (3-4), we can get:</ns0:p><ns0:p>(5)</ns0:p><ns0:formula xml:id='formula_9'>    ~( ) i i i T T T p T L f L f              </ns0:formula></ns0:div>
<ns0:div><ns0:head n='3.'>Algorithm design of multi-scale meta-relational network</ns0:head><ns0:p>In the multi-scale meta-relational network, we hope to find a set of characterizations $\phi$ that can be fine-tuned efficiently from a small number of samples, where $\phi$ is composed of the feature-extractor parameter and the metric-learner parameter. During training, each task $T_i$ is composed of a training set $D^{train}$ and a test set $D^{test}$; $D^{test}$ serves as the target set of the outer optimization iteration, and its support set adopts the support set in $D^{train}$.</ns0:p><ns0:p>In the inner optimization iteration, on the small amount of data $D^{train}$ of the new task $T_i$, the learner converges to the task-specific optimal parameter $\phi_i$ after one iterative step of gradient descent, as shown below:</ns0:p><ns0:formula xml:id='formula_10'>(6) \quad \phi_i = \phi - \alpha \circ \nabla_{\phi} L_{T_i, D^{train}}(f_{\phi})</ns0:formula><ns0:p>where $\alpha$ is a vector of the same size as $\phi$, representing the learning rate of the learner. The direction of the vector represents the update direction, and the magnitude of the vector represents the learning step.</ns0:p><ns0:p>For the different training tasks, the task-specific optimal parameter $\phi_i$ is obtained through one step of gradient iteration and then tested on $D^{test}$ of the different tasks, requiring $L_{T_i, D^{test}}(f_{\phi_i})$ to reach the minimum value. Therefore, the meta-optimization objective of the process is:</ns0:p><ns0:formula xml:id='formula_12'>(7) \quad \min_{\phi, \alpha} \sum_{T_i \sim p(T)} L_{T_i, D^{test}}(f_{\phi_i})</ns0:formula><ns0:p>Substituting Equation (6) into Equation (7), we find:</ns0:p><ns0:formula>(8) \quad \min_{\phi, \alpha} \sum_{T_i \sim p(T)} L_{T_i, D^{test}}\big(f_{\phi - \alpha \circ \nabla_{\phi} L_{T_i, D^{train}}(f_{\phi})}\big)</ns0:formula></ns0:div>
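A minimal sketch of the Meta-SGD-style inner update in Eq. (6): the learning rate alpha is a learned vector of the same shape as each parameter tensor of phi, and the update is an elementwise (Hadamard) product. Initial values and helper names are illustrative assumptions.

```python
import torch

def make_alpha(params, init=0.01):
    """One learnable learning-rate tensor per parameter tensor of phi."""
    return [torch.full_like(p, init, requires_grad=True) for p in params]

def meta_sgd_inner_adapt(params, alphas, model_forward, loss_fn, x, y):
    """phi_i = phi - alpha o grad_phi L_{Ti, D_train}(f_phi), as in Eq. (6)."""
    loss = loss_fn(model_forward(params, x), y)
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return [p - a * g for p, a, g in zip(params, alphas, grads)]
```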
<ns0:div><ns0:head>  </ns0:head><ns0:p>For parameter , the gradient calculation formula of formula (3-8) is as follows: Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_13'>(9)     , , , i S test i train T D D T D g L f L f           </ns0:formula><ns0:p>Substituting Equation (3-6) into Equation (3-9) can be simplified as follows:</ns0:p><ns0:formula xml:id='formula_14'>(10)     , , , , , ( ) ( ) = ( ) ( ) ( ) = ( ) ( ) i S test i train i S test T D D i i i i i T D T D D i i i i L f f g f L f L f f I f                                 </ns0:formula><ns0:p>Where is the unit vector. I</ns0:p><ns0:p>It can be seen from Equation (3-10) that the update process of the multi-scale meta-relational network involves the gradient calculation process, that is, when the gradient operator in the metatarget is used to propagate the meta-gradient, additional reverse transfer is required to calculate the Hessian vector product <ns0:ref type='bibr' target='#b10'>(Kemp & Maas, 2009)</ns0:ref>.</ns0:p><ns0:p>In the multi-scale relational network, except for the sigMIod nonlinear function at the last full connection layer of the metric learner, all other nonlinear functions are ReLU. The ReLU neural network is almost linear in part, which indicates that the second derivative is close to zero in most cases. Therefore, the multi-scale element relational network, like MAML, ignores the second derivative in the calculation of the back propagation of the element gradient <ns0:ref type='bibr' target='#b10'>(Kemp & Maas, 2009)</ns0:ref>, as follows:</ns0:p><ns0:formula xml:id='formula_15'>(11) , ,<ns0:label>, , , , ( ) ( )</ns0:label></ns0:formula><ns0:formula xml:id='formula_16'>= ( ) ( ) = = ( ) i S test i S test i i S test T D D i i i i T D D i i T D D i L f f g f L f L f                 </ns0:formula><ns0:p>Therefore, in the process of calculating the outer gradient, the multi-scale element relation network stops the back propagation after calculating the gradient at and calculates the second i  derivative at .</ns0:p></ns0:div>
<ns0:div><ns0:head></ns0:head><ns0:p>For parameter , the gradient calculation formula of formula (3-8) is as follows:</ns0:p><ns0:formula xml:id='formula_17'> (12)       , ,<ns0:label>, , , , , , ( ) =</ns0:label></ns0:formula><ns0:formula xml:id='formula_18'>= ( ) i S test i train i S test i train i i S test T D D T D T D D i i i T D T D D i g L f L f L f L f L f                            </ns0:formula><ns0:p>In the outer optimization iteration process, stochastic gradient descent method is adopted, and the updated iteration formula of model parameter is shown as follows: </ns0:p><ns0:formula xml:id='formula_19'>i i S test i T D D i T p T L f         </ns0:formula><ns0:p>Similarly, the update iteration formula of model parameter is shown as follows:</ns0:p><ns0:formula xml:id='formula_20'> (14)         , , , + ( ) i i S test i train i T D D i T D T p T L f L f           </ns0:formula><ns0:p>In each inner iteration, all tasks can be used to update the gradient, but when the number of tasks is very large, all tasks need to be calculated in each iteration step, so the training process will be slow and the memory capacity of the computer will be high. If each inner iteration uses one task to update the parameters, the training speed will be accelerated, but the experimental performance will be reduced, and it is not easy to implement in parallel. So, during each inner iteration, we take num_inner_task for the number of tasks. By selecting the number of tasks in the inner iteration process as a super parameter, memory can be reasonably utilized to improve the training speed and accuracy.</ns0:p><ns0:p>During the meta-training of multi-scale meta-relational network, after the end of an epoch, the accuracy on the meta-validation set was calculated, and the highest accuracy on the metavalidation set was recorded so far. If consecutively N epoches (or more, adjusted according to the specific experimental conditions) did not reach the optimal value, it could be considered that the accuracy was no longer improved, and the iteration could be stopped when the accuracy was no longer improved or declined gradually. Output the model with the highest accuracy, and then test it with this model.</ns0:p></ns0:div>
<ns0:div><ns0:head>Experiment and results</ns0:head></ns0:div>
<ns0:div><ns0:head n='1.'>Multi-scale meta-relational network design</ns0:head><ns0:p>In the algorithm design of the multi-scale meta-relational network: 1. Adopting the Meta-SGD idea, the inner-layer learning rate, that is, the learning rate of the learner, is treated as a learnable vector and learned together with the model parameters, so as to further improve the performance of the learner. 2. Considering that small-sample image classification methods based on metric learning can adapt to a new task without fine-tuning, the multi-scale meta-relational network adopts the MAML algorithm to learn and find the optimal parameters of the model during meta-training, and eliminates the inner gradient iteration during meta-validation and meta-testing.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>Small sample learning benchmark dataset experiment</ns0:head><ns0:p>The model structure of the multi-scale meta-relational network adopts the multi-scale relational network, which is mainly composed of a feature extractor and a metric learner <ns0:ref type='bibr' target='#b0'>(Biggs, 1985)</ns0:ref>. The second derivative is ignored in the back-propagation of the multi-scale meta-relational network; in practical implementation, the back-propagation can be truncated once the outer gradient of the multi-scale meta-relational network has been computed.</ns0:p><ns0:p>For experimental comparison, MAML and Meta-SGD experiments are carried out simultaneously in this paper. Following Chelsea Finn et al., in the MAML and Meta-SGD experiments a convolutional neural network composed of four convolutional layers and one fully connected layer is adopted as the learner. Each convolution module of the learner is composed of a convolutional layer consisting of 64 filters of size 1, a batch normalization layer, a ReLU layer and a max-pooling layer of size 2. All the convolutional layers use zero padding. The fourth convolution module is followed by the fully connected layer, which has 64 neurons and uses the log-softmax nonlinearity. Different from the multi-scale relational network, the cross-entropy loss function is used as the optimization objective.</ns0:p></ns0:div>
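A minimal sketch of the four-module learner described above. A 3×3 kernel is assumed here because it is the common configuration for this backbone in the few-shot literature; the extracted text's "filters of size 1" may be a typesetting artifact. Input channels, padding and the 5-way output size are likewise illustrative assumptions. With the log-softmax output, nn.NLLLoss corresponds to the cross-entropy objective mentioned above.

```python
import torch.nn as nn

def conv_module(in_ch, out_ch):
    # conv (zero padding) -> batch norm -> ReLU -> 2x2 max pool
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),  # kernel size assumed
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

class Learner(nn.Module):
    def __init__(self, in_channels=1, n_way=5):
        super().__init__()
        self.features = nn.Sequential(
            conv_module(in_channels, 64), conv_module(64, 64),
            conv_module(64, 64), conv_module(64, 64),
        )
        self.classifier = nn.Linear(64, n_way)  # fully connected layer over 64 features
        self.log_softmax = nn.LogSoftmax(dim=1)

    def forward(self, x):                 # x: (batch, in_channels, 28, 28)
        h = self.features(x).flatten(1)   # (batch, 64) for 28x28 inputs
        return self.log_softmax(self.classifier(h))
```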
<ns0:div><ns0:head>a) Omniglot data set</ns0:head><ns0:p>In the Omniglot dataset experiment, all samples were resized to 28×28 and random rotations were used to augment the data. From the 1623 classes, 1200, 211 and 212 classes were selected as the meta-training set, meta-validation set and meta-test set respectively. In this section, single-sample image classification experiments of 5-way 1-shot and 20-way 1-shot and small-sample image classification experiments of 5-way 5-shot and 20-way 5-shot are conducted.</ns0:p><ns0:p>In the experiment, 100 episodes of the multi-scale meta-relational network were taken as one epoch. The numbers of iterations for the meta-training set, meta-validation set and meta-test set are 70,000, 500 and 500 respectively. The other hyper-parameter choices on the Omniglot dataset are shown in Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref> Omniglot dataset experiment hyper-parameter values</ns0:p><ns0:p>1) Single-sample image classification</ns0:p><ns0:p>Figure <ns0:ref type='figure'>2</ns0:ref> Accuracy and loss iteration curves of 5-way 1-shot in the multi-scale meta-relational network</ns0:p><ns0:p>It can be seen from Figure <ns0:ref type='figure'>2</ns0:ref> that, when the number of iterations is 42,400, the multi-scale meta-relational network achieves its highest accuracy of 99.8667% in the 5-way 1-shot experiment on the meta-validation set. The multi-scale meta-relational network needs fewer iterations than the multi-scale relational network (method 1 in this paper); compared with the multi-scale relational network, which converges when the iteration reaches 114,000, the learning speed of the multi-scale meta-relational network is faster.</ns0:p><ns0:p>The trained model was tested on the meta-test set, and the test results are shown in Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref>. According to Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref>: 1. The accuracy of the 5-way 1-shot experiment of the multi-scale meta-relational network on the meta-test set is higher than that of MAML and Meta-SGD, and about 0.22% higher than that of the multi-scale relational network. 2. MAML and Meta-SGD, which are based on an optimized initial characterization, need fine-tuning on new tasks, while the multi-scale relational network based on metric learning can achieve good generalization performance on new tasks without fine-tuning. Comparing the two kinds of methods, the accuracy of the 5-way 1-shot experiment of the multi-scale relational network on the meta-test set is higher than MAML but slightly lower than Meta-SGD.</ns0:p></ns0:div>
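A minimal sketch of N-way K-shot episode sampling as used in the experiments above. Here `dataset` is assumed to map each class label to a list of image tensors; the names and the query-set size are illustrative assumptions.

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, k_query=15):
    """Builds one episode: a support set and a target (query) set."""
    classes = random.sample(list(dataset.keys()), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        images = random.sample(dataset[cls], k_shot + k_query)
        support += [(img, label) for img in images[:k_shot]]  # support set
        query += [(img, label) for img in images[k_shot:]]    # target set
    return support, query
```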
<ns0:div><ns0:head></ns0:head><ns0:p>As can be seen from Figure <ns0:ref type='figure' target='#fig_8'>3</ns0:ref>, when the number of iterations is 57,700, the multi-scale meta-relational network (method 2 in this paper) achieves its highest accuracy of 98.86% in the 20-way 1-shot experiment on the meta-validation set. Compared with the multi-scale relational network (method 1 in this paper), which converges when it iterates to 89,000, the learning speed of the multi-scale meta-relational network is faster.</ns0:p><ns0:p>The trained model was tested on the meta-test set, and the results are shown in Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref>. According to Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref>: 1. The accuracy of the 20-way 1-shot experiment of the multi-scale meta-relational network on the meta-test set is about 0.47% higher than that of the multi-scale relational network, and is higher than MAML and Meta-SGD. 2. Comparing MAML and Meta-SGD, based on an optimal initialization representation, with the multi-scale relational network based on metric learning, the accuracy of the multi-scale relational network in the 20-way 1-shot experiment on the meta-test set is higher than that of MAML and Meta-SGD.</ns0:p><ns0:p>2) Small-sample image classification</ns0:p><ns0:p>Figure <ns0:ref type='figure'>4</ns0:ref> Accuracy and loss iteration curves of 5-way 5-shot in the multi-scale meta-relational network</ns0:p><ns0:p>As can be seen from Figure <ns0:ref type='figure'>4</ns0:ref>, when the number of iterations is 29,100, the multi-scale meta-relational network (method 2 in this paper) achieves its highest accuracy of 99.89% in the 5-way 5-shot experiment on the meta-validation set. Compared with the multi-scale relational network (method 1 in this paper), which converges when it iterates to 293,500, the learning speed of the multi-scale meta-relational network is faster.</ns0:p><ns0:p>The trained model was tested on the meta-test set, and the results are shown in Table <ns0:ref type='table'>3</ns0:ref>. According to Table <ns0:ref type='table'>3</ns0:ref>: 1. The accuracy of the 5-way 5-shot experiment of the multi-scale meta-relational network on the meta-test set is about 0.14% higher than that of the multi-scale relational network, but lower than MAML and Meta-SGD. 2. Comparing MAML and Meta-SGD, based on an optimal initial representation, with the multi-scale relational network based on metric learning, the accuracy of the 5-way 5-shot experiment of the multi-scale relational network on the meta-test set is lower than that of MAML and Meta-SGD.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>5</ns0:ref> Accuracy and loss iteration curves of 20-way 5-shot in the multi-scale meta-relational network</ns0:p><ns0:p>As can be seen from Figure <ns0:ref type='figure'>5</ns0:ref>, when the number of iterations is 65,800, the multi-scale meta-relational network (method 2 in this paper) achieves its highest accuracy of 99.82% in the 20-way 5-shot experiment on the meta-validation set. Compared with the multi-scale relational network (method 1 in this paper), which converges when it iterates to 119,500, the learning speed of the multi-scale meta-relational network is faster.</ns0:p><ns0:p>The trained model was tested on the meta-test set, and the results are shown in Table <ns0:ref type='table'>3</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head></ns0:head><ns0:p>According to Table <ns0:ref type='table'>3</ns0:ref>: 1. The accuracy of the 20-way 5-shot experiment of the multi-scale meta-relational network on the meta-test set is about 0.19% higher than that of the multi-scale relational network, and is also higher than MAML and Meta-SGD. 2. Comparing MAML and Meta-SGD, based on an optimized initial characterization, with the multi-scale relational network based on metric learning, the accuracy of the multi-scale relational network in the 20-way 5-shot experiment on the meta-test set is higher than that of MAML and Meta-SGD.</ns0:p><ns0:p>Table <ns0:ref type='table'>3</ns0:ref> Small-sample classification experimental results on the Omniglot dataset</ns0:p><ns0:p>In summary, the experimental results on the Omniglot dataset are as follows:</ns0:p><ns0:p>1. Except that the classification accuracy of the 5-way 5-shot experiment on the Omniglot dataset was slightly lower than that of Meta-SGD and MAML, the multi-scale meta-relational network was higher than all three compared methods (the multi-scale relational network, MAML and Meta-SGD) in the other experiments. In addition, the training speed of the multi-scale meta-relational network is faster than that of the multi-scale relational network.</ns0:p><ns0:p>2. Comparing MAML and Meta-SGD, based on an optimal initial characterization, with the multi-scale relational network based on metric learning: except that the classification accuracy of the multi-scale relational network in the 5-way 1-shot experiment on the Omniglot dataset was slightly lower than that of Meta-SGD, and its accuracy in the 5-way 5-shot experiment was lower than that of Meta-SGD and MAML, the other two groups of experiments were both higher than Meta-SGD and MAML.</ns0:p></ns0:div>
<ns0:div><ns0:head>b) MiniImageNet data set</ns0:head><ns0:p>The MiniImageNet classes are divided into the meta-training set, meta-validation set and meta-test set according to the ratio 64:16:20. In this section, the single-sample image classification experiment of 5-way 1-shot and the small-sample image classification experiment of 5-way 5-shot are carried out.</ns0:p><ns0:p>In the experiment, the multi-scale meta-relational network takes 500 episodes as one epoch. The numbers of iterations of the meta-training set, meta-validation set and meta-test set are 120,000, 600 and 600 respectively. The other hyper-parameter choices on the MiniImageNet dataset are shown in Table <ns0:ref type='table' target='#tab_1'>4</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_1'>4</ns0:ref> MiniImageNet dataset experiment hyper-parameter values</ns0:p><ns0:p>1) Single-sample image classification</ns0:p><ns0:p>Figure <ns0:ref type='figure'>6</ns0:ref> Accuracy and loss iteration curves of 5-way 1-shot in the multi-scale meta-relational network</ns0:p><ns0:p>As can be seen from Figure <ns0:ref type='figure'>6</ns0:ref>, as the iterations increase, the loss decreases to convergence and the accuracy gradually increases to convergence. When the number of iterations is 87,000, the multi-scale meta-relational network (method 2 in this paper) achieves its highest accuracy of 51.234% in the 5-way 1-shot experiment on the meta-validation set. The multi-scale meta-relational network needs fewer iterations than the multi-scale relational network (method 1 in this paper); compared with the multi-scale relational network, which converges when the iteration reaches 155,000, the learning speed of the multi-scale meta-relational network is faster.</ns0:p><ns0:p>The trained model was tested on the meta-test set, and the results are shown in Table <ns0:ref type='table'>5</ns0:ref>. According to Table <ns0:ref type='table'>5</ns0:ref>: 1. The accuracy of the 5-way 1-shot experiment of the multi-scale meta-relational network on the meta-test set is about 0.35% higher than that of the multi-scale relational network, and is higher than MAML and Meta-SGD. 2. Comparing MAML and Meta-SGD, based on an optimized initial characterization, with the multi-scale relational network based on metric learning, the accuracy of the 5-way 1-shot experiment of the multi-scale relational network on the meta-test set is higher than MAML but lower than Meta-SGD.</ns0:p><ns0:p>Table <ns0:ref type='table'>5</ns0:ref> The experimental results of 5-way 1-shot on the MiniImageNet dataset</ns0:p><ns0:p>2) Small-sample image classification</ns0:p><ns0:p>Figure <ns0:ref type='figure'>7</ns0:ref> Accuracy and loss iteration curves of 5-way 5-shot in the multi-scale meta-relational network</ns0:p><ns0:p>As can be seen from Figure <ns0:ref type='figure'>7</ns0:ref>, as the iterations increase, the loss decreases to convergence and the accuracy gradually increases to convergence. When the number of iterations is 62,000, the multi-scale meta-relational network (method 2 in this paper) achieves its highest accuracy of 66.81% in the 5-way 5-shot experiment on the meta-validation set. Compared with the multi-scale relational network (method 1 in this paper), which converges when iterating to 140,000, the learning speed of the multi-scale meta-relational network is faster.</ns0:p><ns0:p>The trained model was tested on the meta-test set, and the results are shown in Table <ns0:ref type='table'>6</ns0:ref>. According to Table <ns0:ref type='table'>6</ns0:ref>:
<ns0:p>1. The accuracy of the 5-way 5-shot experiment of the multi-scale meta-relational network on the meta-test set is about 0.34% higher than that of the multi-scale relational network, and is higher than MAML and Meta-SGD.</ns0:p><ns0:p>2. Comparing MAML and Meta-SGD, based on an optimized initial characterization, with the multi-scale relational network based on metric learning, the accuracy of the multi-scale relational network in the 5-way 5-shot experiment on the meta-test set is higher than that of MAML and Meta-SGD.</ns0:p><ns0:p>Table <ns0:ref type='table'>6</ns0:ref> The experimental results of 5-way 5-shot on the MiniImageNet dataset</ns0:p><ns0:p>In summary, the experimental results on the MiniImageNet dataset are as follows:</ns0:p><ns0:p>1. The classification accuracy of the multi-scale meta-relational network on the MiniImageNet dataset is higher than that of the multi-scale relational network, MAML and Meta-SGD, and its training speed is faster than that of the multi-scale relational network.</ns0:p><ns0:p>2. Comparing MAML and Meta-SGD, based on an optimal initial representation, with the multi-scale relational network based on metric learning, the classification accuracy of the multi-scale relational network in the 5-way 1-shot experiment on the MiniImageNet dataset was slightly lower than that of Meta-SGD, and its classification accuracy in the 5-way 5-shot experiment was higher than that of Meta-SGD and MAML.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The multi-scale meta-relational network proposed in this paper learns specific classification tasks by training the multi-scale relational network over a wide range of task spaces. The MAML algorithm is then used to find a set of highly adaptive parameters, which can make good use of the experiential knowledge learned in previous tasks and realize the ability of learning to learn.</ns0:p><ns0:p>1. Comparing MAML and Meta-SGD, based on an optimized initial characterization, with the multi-scale relational network based on metric learning, the overall experimental performance of the two kinds of methods is almost the same in terms of accuracy, and it cannot be absolutely determined that one kind of method is better than the other.</ns0:p><ns0:p>2. Comparing MAML and Meta-SGD, based on an optimal initial characterization, with the multi-scale meta-relational network separately: although the classification accuracy of Meta-SGD and MAML in the 5-way 5-shot experiment on the Omniglot dataset was slightly higher than that of the multi-scale meta-relational network, Meta-SGD and MAML need to use a small amount of data from the new task to compute one or more gradient steps to update the parameters before achieving their best generalization performance on the new task, whereas the multi-scale meta-relational network generalizes well to image categories not seen during training with the help of only a small number of samples for each new category. Therefore, the method combining metric learning with a learned, optimized initial representation performs better than the methods based on an optimized initial representation alone.</ns0:p><ns0:p>3. Comparing the multi-scale relational network, based on metric learning, with the multi-scale meta-relational network separately: the multi-scale meta-relational network introduces MAML's learned initialization representation into metric learning. Because this learned initialization is better suited to the task distribution of the meta-training set, the influence of differences in the distribution of the task sets on the multi-scale meta-relational network is reduced; as a result, its small-sample results on the benchmark datasets are higher than those of the multi-scale relational network, and its learning speed is faster. Therefore, the method combining metric learning with a learned, optimized initial representation performs better than the method based on metric learning alone.</ns0:p><ns0:p>In conclusion, the method combining metric learning and an optimized initial representation has higher performance than methods based on either metric learning or an optimized initial representation alone. The multi-scale meta-relational network gives the learned measurement method stronger generalization ability, which not only improves the classification accuracy and training speed on the benchmark sets, but also avoids the fine-tuning that MAML requires.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusion</ns0:head><ns0:p>Considering the differences in task-set distribution, and in order to give the learned metric stronger generalization ability, this paper designs a multi-scale meta-relational network, a small-sample image classification method that combines metric learning with an optimized initialization representation. First, the multi-scale meta-relational network adopts the Meta-SGD idea and learns the inner learning rate as a learnable vector together with the model parameters. Secondly, during meta-training the multi-scale meta-relational network adopts the MAML algorithm to learn and find the optimal parameters of the model, while during meta-validation and meta-testing the inner gradient iteration is eliminated and testing is carried out directly. The experimental results show that the method combining metric learning and an optimized initial representation has higher performance than methods based on either metric learning or an optimized initial representation alone. The multi-scale meta-relational network gives the learned measurement method stronger generalization ability, which not only improves the classification accuracy and training speed on the benchmark sets, but also avoids the fine-tuning that MAML requires.</ns0:p><ns0:p>Although the method in this paper improves the classification accuracy on the small-sample learning benchmark sets and alleviates overfitting, it still needs to be improved in the following aspects:</ns0:p><ns0:p>1) Compared with the MiniImageNet dataset, the Omniglot dataset is simpler: on small-sample learning problems, the classification baselines on Omniglot are above 97%, while the classification results on MiniImageNet are less satisfactory. Finding a better meta-learning approach that reduces the influence of task-set differences and improves the classification results on MiniImageNet is worth exploring.</ns0:p><ns0:p>2) In the multi-scale meta-relational network, part of the gradient information is lost when the second derivative is omitted. Whether a better way to simplify the second derivative can be found is also a direction to be addressed in future work.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3 Accuracy and loss iteration curves of 20-way 1-shot in multi-scale meta-relational network</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='19,42.52,178.87,525.00,281.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='20,42.52,199.12,525.00,329.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,199.12,525.00,309.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,199.12,525.00,345.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,199.12,525.00,333.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,199.12,525.00,318.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,199.12,525.00,303.00' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Omniglot data set single sample classification experimental results</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>miniImageNet dataset experiment hyper-parameter values</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Omniglot dataset experiment hyper-parameter values</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Meta-learning rate 𝛽</ns0:cell><ns0:cell>1-shot 𝑏 1</ns0:cell><ns0:cell>𝑏 2</ns0:cell><ns0:cell>5-shot 𝑏 1</ns0:cell><ns0:cell>𝑏 2</ns0:cell><ns0:cell>num_inner_task 5-way</ns0:cell><ns0:cell>20-way</ns0:cell></ns0:row><ns0:row><ns0:cell>0.003</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>16</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Omniglot dataset single sample classification experimental results</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell>Fine-tuning</ns0:cell><ns0:cell>Accuracy 5-way 1-shot</ns0:cell><ns0:cell>20-way 1-shot</ns0:cell></ns0:row><ns0:row><ns0:cell>MAML</ns0:cell><ns0:cell>Y</ns0:cell><ns0:cell>97.80±0.32%</ns0:cell><ns0:cell>95.60±0.21%</ns0:cell></ns0:row><ns0:row><ns0:cell>Meta-SGD</ns0:cell><ns0:cell>Y</ns0:cell><ns0:cell>99.50±0.23%</ns0:cell><ns0:cell>95.83±0.36%</ns0:cell></ns0:row><ns0:row><ns0:cell>Multi-scale relational network</ns0:cell><ns0:cell>N</ns0:cell><ns0:cell>99.35±0.25%</ns0:cell><ns0:cell>97.41±0.28%</ns0:cell></ns0:row><ns0:row><ns0:cell>Multi-scale meta-relational network</ns0:cell><ns0:cell>N</ns0:cell><ns0:cell>99.57±0.16%</ns0:cell><ns0:cell>97.88±0.20%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>miniImageNet dataset experiment hyper-parameter values</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Meta-learning rate 𝛽</ns0:cell><ns0:cell>1-shot 𝑏 1</ns0:cell><ns0:cell>𝑏 2</ns0:cell><ns0:cell>5-shot 𝑏 1</ns0:cell><ns0:cell>𝑏 2</ns0:cell><ns0:cell>num_inner_task</ns0:cell></ns0:row><ns0:row><ns0:cell>0.003</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>8</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Dear Editors
We thank the reviewers for their generous comments on the manuscript and have edited the manuscript to address their concerns.
We believe that the manuscript is now suitable for publication in PeerJ.
Ms. Lirong Yin
Ph.D. student at the Department of Geography and Anthropology of Louisiana State University
On behalf of all authors.
Reviewer 1:
Advice 1:
Meta-learning of multi-scale meta-relational network proposed in this paper has advantage to have the highest accuracy as the meta-validation set is recorded. Paper suggests that MAML and Meta-SGD need fine-tuning on new tasks while multi-scale relational network based on metric learning can achieve good generalization performance on new tasks without fine-tuning.
Experiments demonstrate the performance according to the theoretical aspects discussed in the paper. Detailed descriptions are provided, and the paper is well written in both the theoretical and experimental parts.
Reply:
Thank you for your comments on our paper.
Reviewer 2:
Advice 1:
The title does not reflect the contribution of this paper.
Reply:
The method proposed in this paper is based on the multi-scale relation network, which we refined with the improvements described. Therefore, the title includes the relation network.
Advice 2:
1. Chronology is important. However, it is redundant to keep mentioning the years as citations include publication year.-- Example. In 2016, .... (citation, 2016)
Reply:
The redundant year markings have been removed.
Advice 3:
2. Quite often though capital letters and small letters are used inconsistently. ARTIFICIAL intelligence deep Learning This method USES the idea of…..
Reply:
The inconsistent capitalization has been corrected.
Advice 4:
3. Consistencies in citations convention is needed in the presentation
Reply:
The citations have been made consistent.
Advice 5:
4. The last reference on the reference list is incomplete ….
Reply:
The incomplete reference has been completed.
Advice 6:
1. Sentence 219: "In the multi-scale meta-relational network, we hope to find a set of characterization that can make fine adjustments efficiently according to a small number of samples. Where, is composed of feature extractor parameter and metric learner parameter." These two parameters are mentioned but not explained. Elaboration on these is needed.
Reply:
The two parameters come from the referenced methods mentioned earlier. They are constants determined by the dataset properties, following the two referenced methods.
Advice 7:
2. Sentence 517 at the end --- a better way of yuan learning
- Is this referring to the ----MAML learning Yu Zaiyuan task distribution of …
- Elaboration on yuan learning is needed
Reply:
"Yuan learning" is another name for meta-learning ("yuan" is the Chinese word for "meta"); it has been changed to "meta-learning" throughout.
Reviewer 3:
Advice 1:
1. there are some mistakes in serial number of formula. For example: line 237: Equation (3-6) and (3-7) should be (6) and (7)
Reply:
The equation numbers have been checked and corrected.
Advice 2:
2. the processing of iterative is clear, but the convergence and convergence speed are not involved.
Reply:
The convergence and convergence speed refer to the iteration process and the time consumed by the iterative process.
Advice 3:
3. the results is better than existing methods, are the differences significant enough? will the gap depend on the sample?
Reply:
No statistical significance test was performed for this method. Since the compared methods and the proposed method are evaluated on the same datasets and the selection of the testing sets is randomized, the results are considered significant enough and the comparison is valid.
" | Here is a paper. Please give your review comments after reading it. |
167 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-Coronavirus-2 or SARS-CoV-2), which came into existence in 2019, is a viral pandemic that caused coronavirus disease 2019 (COVID-19) illness and death. Research showed that relentless efforts had been made to improve key performance indicators for detection, isolation, and early treatment. This paper used Deep Transfer Learning Model (DTL) for the classification of real-life COVID-19 dataset of chest X-ray images in both binary (COVID-19 or Normal) and threeclass (COVID-19, Viral-Pneumonia or Normal) classification scenarios. Four experiments were performed where fine-tuned VGG-16 and VGG-19 Convolutional Neural Networks (CNNs) with DTL were trained on both binary and three-class datasets that contain images of X-ray. The system was trained with X-ray images dataset for the detection of COVID-19. The fine-tuned VGG-16 and VGG-19 DTL were modelled by employing a batch size of 10 in 40 epochs, and Adam optimizer for weight updates, categorical cross-entropy loss function. The result showed that the fine-tuned VGG-16 and VGG-19 models produced an accuracy of 99.23% and 98.00%, respectively, in the binary task. In contrast, in the multiclass (three-class) task, the fine-tuned VGG-16 and VGG-19 DTL models produced an accuracy of 93.85% and 92.92%, respectively. Moreover, the fine-tuned VGG-16 and VGG-19 models have MCC of 0.98 and 0.96 respectively in the binary classification, and 0.91 and 0.89 for multiclass classification. These results showed strong positive correlations between the models' predictions and the true labels. In the two classification tasks (binary and three-class), it was observed that the fine-tuned VGG-16 DTL model had stronger positive correlations in the MCC metric than the fine-tuned VGG-19 DTL model. The VGG-16 DTL model has a Kappa value of 0.98 as against 0.96 for the VGG-19 DTL model in the binary classification task, while in the three-class classification problem, the VGG-16 DTL model has a Kappa value of 0.91 as against 0.89 for the VGG-19 DTL model. This result is in agreement with the trend observed in the MCC metric. Hence, it was</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Viral pandemics are usually a serious threat to the world, and Coronavirus disease 2019 is not an exception. According to a COVID-19 report of the World Health Organization (WHO), coronaviruses are from a large family of viruses that cause illness in animals or humans <ns0:ref type='bibr'>(WHO, 2020)</ns0:ref>. Numerous coronaviruses have been reported as the cause of respiratory disease in humans, ranging from the common cold to more serious illnesses like Middle East Respiratory Syndrome (MERS) and Extreme Acute Respiratory Syndrome (SARS) (SARS). The newly discovered coronavirus led to the cause of the 2019-novel coronavirus disease. COVID-19 was initially observed in the Wuhan province of China and has spread to all parts of the world <ns0:ref type='bibr' target='#b42'>(Nadeem, 2020)</ns0:ref>. COVID-19 was recognized as a contributory virus by Chinese authorities on January 7 2020. The Director-General of the WHO, on the 30 th of January 2020 reported that the epidemic constitutes a Public Health Emergency of International Concern (PHEIC), based on the recommendations made by the Emergency Committee. WHO activated the R&D Blueprint in reaction to the occurrence to speed up diagnostics, vaccines, and therapeutics for this new coronavirus <ns0:ref type='bibr'>(WHO, 2020)</ns0:ref>. The International Committee on Taxonomy of Viruses named the novel coronavirus as 'severe acute respiratory syndrome coronavirus 2 (SARS-Coronavirus-2 or SARS-CoV-2)'. Globally, as of <ns0:ref type='bibr'>March 10, 2021</ns0:ref>, there have been 117,332,262 confirmed cases of COVID-19, including 2,605,356 deaths, reported to the World Health Organization. Also, as of <ns0:ref type='bibr'>March 9, 2021, a total of 268,205,245</ns0:ref> vaccine doses have been administered <ns0:ref type='bibr'>(WHO, 2021)</ns0:ref>. Specifically, in Nigeria, the Nigeria Centre for Disease Control (NCDC) report on March 11, 2021, showed 394 new confirmed cases recorded. To date, 159,646 cases have been confirmed, 139,983 patients have recovered and discharged, and 1,993 deaths have been recorded in 36 states and the Federal Capital Territory <ns0:ref type='bibr'>(NCDC, 2021)</ns0:ref>.</ns0:p><ns0:p>Numerous researchers globally are putting their efforts together on collecting data and developing solutions. The persistent focus has been on advancing key performance indicators, for example, continually enhancing the speed of case detection, segregation, and early cure. The execution of these containment procedures has been sustained and enabled by the pioneering and aggressive use of cutting-edge technologies. Measures such as immediate case detection and isolation, rigorous close contact tracing and monitoring/quarantine, and direct population/community engagement have been considered in lessening COVID-19 illness and death. This work, therefore, is aimed at using Artificial Intelligence (AI), specifically machine learning, to identify those who are at risk of contracting COVID-19 to aid early diagnosis. According to <ns0:ref type='bibr' target='#b5'>(BBC, 2020)</ns0:ref>, a superhuman attempt is required to ease the deaths due to the global epidemic. AI may have been overestimated -but in the case of medicine, it already has established evidence. 
According to <ns0:ref type='bibr' target='#b2'>Arora, Bist, Chaurasia, and Prakash (2020)</ns0:ref>, the role of AI is going to be crucial for predicting the outcome based on symptoms, CT-Scan, X-ray reports, etc.</ns0:p><ns0:p>Laboratory checking of suspected cases is characterized by extended periods of testing and an exponential rise in test requests <ns0:ref type='bibr' target='#b31'>(Kobia & Gitaka, 2020)</ns0:ref>. Quick diagnostic tests with shorter turnaround times of between 10 and 30 minutes have been developed to ease the problem. However, many are presently going through clinical validation that is not in regular use <ns0:ref type='bibr' target='#b12'>(ECDC, 2020)</ns0:ref>. In the process of result expectation, there is a need to continue to self-isolate. Once results are received, there is a need to remain on self-isolation until the symptoms resolve after being in seclusion for at least 14 days. If the symptoms worsen during the seclusion time or continued after 14 days, the patient has to contact the accredited healthcare providers. The Rapid Test Kits even deliver results after hours.</ns0:p><ns0:p>Pneumonia has been described as the most severe and frequent manifestation of COVID-19 infection <ns0:ref type='bibr' target='#b55'>(Huang et al. 2020)</ns0:ref>; therefore, chest imaging which includes readily available and affordable chest radiograph (X-rays), remains an essential factor in the diagnosis and evaluation of COVID-19 patients <ns0:ref type='bibr' target='#b49'>(Rubin et al., 2020)</ns0:ref>. However, the availability of radiologists to report the chest images is another obstacle. Therefore, there is the need to develop computer algorithms and methods to optimize screening and early detection, which is the primary purpose of this research in which deep learning, most especially Convolutional Neural Network (CNN), is deployed. Deep learning provides the chance to increase the accuracy of the early discovery by automating the primary diagnosis of medical scans <ns0:ref type='bibr' target='#b38'>(Madan, Panchal, & Chavan, 2019)</ns0:ref>. The CNN belongs to a category of a Deep Neural Networks (DNN), which comprise several layers that are hidden like convolutional layers. The convolutional layers come with the non-linear activation function, a rectified linear unit (ReLU layer), a Pooling layer, and a fully connected normalized layer. CNN divides weights in the convolutional layer, thereby decreasing the memory footprint and increasing the performance of the network <ns0:ref type='bibr' target='#b51'>(Sasikala, Bharathi & Sowmiya, (2018)</ns0:ref>. The objective of this paper is to classify a real-life COVID-19 dataset consisting of X-ray images using a novel Deep Learning Convolutional Neural Network Model. Data from chest X-rays were used because most hospitals have X-ray machines, and the COVID-19 X-ray dataset is now available on the web. The remaining parts of this paper are organized as follows; literature review, materials and methods, experimentation and results, evaluation of results , and conclusion and future works.</ns0:p></ns0:div>
<ns0:div><ns0:head>Literature Review</ns0:head><ns0:p>COVID-19 is predominantly a respiratory illness, and pulmonary appearances constitute the main presentation of the disease. SARS-CoV-2 infects the respiratory system but may also affect other organs, as reported in some studies. Renal dysfunction <ns0:ref type='bibr' target='#b10'>(Chu, Tsang & Tang, 2005;</ns0:ref><ns0:ref type='bibr' target='#b68'>Xu, Shi & Wang, 2020)</ns0:ref>, gastrointestinal complications <ns0:ref type='bibr' target='#b46'>(Pan, Mu & Ren, 2020)</ns0:ref>, liver dysfunction <ns0:ref type='bibr'>(Huang, Wang & Li, 2020)</ns0:ref>, cardiac manifestations <ns0:ref type='bibr' target='#b74'>(Zhou, She, Wang & Ma, 2020)</ns0:ref>, mediastinal findings <ns0:ref type='bibr' target='#b59'>(Valette, du Cheyron & Goursaud, 2020)</ns0:ref>, neurological abnormalities, and haematological manifestations <ns0:ref type='bibr' target='#b56'>(Song & Shin, 2020)</ns0:ref> are among the reported extrapulmonary features. Some of the clinical symptoms of COVID-19 are cough, expectoration, asthenia, dyspnoea, muscle soreness, dry throat, pharyngeal dryness and pharyngalgia, fever, poor appetite, shortness of breath, nausea, vomiting, nasal obstruction, and rhinorrhoea. A study on COVID-19 credited to WHO, stated that the disease does not exhibit distinct symptoms, and patients' symptoms can vary from fully asymptomatic to extreme pneumonia and death <ns0:ref type='bibr'>(WHO, 2020)</ns0:ref>. Nevertheless, certain symptoms, such as dry cough, fever, dyspnea and fatigue, were confirmed to be more prevalent in COVID-19 patients. Sore throat, nasal inflammation, fever, arthralgia, chills, diarrhoea, hemoptysis, nausea, and conjunctival congestion are some of the other clinical symptoms. <ns0:ref type='bibr' target='#b53'>(Shima, Leila, Amir & Ali, 2020)</ns0:ref>. Other non-specific symptoms include loss of smell and taste, dermatologic eruptions, delirium, and a general decline in health <ns0:ref type='bibr' target='#b47'>(Recalcati, 2020)</ns0:ref>. Because of the wide range of clinical indications and the growing global burden of COVID-19, it is critical to promptly scale up diagnostic ability to diagnose the virus and its risks.</ns0:p><ns0:p>Reverse transcription-polymerase chain reaction (RT-PCR) is the primary clinical instrument currently in use detect COVID-19. It uses respiratory specimens for testing <ns0:ref type='bibr' target='#b64'>(Wang et al., 2020a)</ns0:ref>. RT-PCR is used as a reference method for detecting COVID-19 patients; however, the technique is expensive, manual, complicated, time-consuming, and requires specialized medical personnel. Alternatively, X-ray imaging is an easily accessible tool that can be excellent in the COVID-19 diagnosis. Chest imaging is an essential part of evaluating respiratory complications, which remain one of the most familiar presentations ranging from acute respiratory distress syndrome to respiratory failure <ns0:ref type='bibr' target='#b55'>(Huang et al. 2020)</ns0:ref>. Chest imaging has been defined as an efficient screening method for detecting pneumonia, with a sensitivity of 97.5 percent for COVID-19. <ns0:ref type='bibr' target='#b41'>(Nabila et al., 2020;</ns0:ref><ns0:ref type='bibr'>NHCPRC, 2020)</ns0:ref>. 
Provided COVID-19's preference for the respiratory system, chest radiography (X-ray), CT of the thorax, and/or Ultrasound have been verified not only as case management and screening methods for COVID-19, but also as a way of reducing infection transmission through early detection in initially False-negative RT-PCR tests. <ns0:ref type='bibr' target='#b49'>(Rubin et al., 2020)</ns0:ref>. Imaging tests are useful for generating clinically actionable outcomes, which can be used to determine a diagnosis or to guide management, triage, or treatment. Costs such as the risk of exposure to radiation to the patient, the risk of COVID-19 transmission to uninfected health care staff and other patients, the use of personal protective equipment (PPE), and the need for sanitation and interruption of radiology rooms in resource-constrained settings reduce the benefit. <ns0:ref type='bibr' target='#b32'>(Kooraki, Hosseiny, Myers & Gholamrezanezhad, 2020)</ns0:ref>. The role of imaging includes the detection of early parenchymal lung disease, disease progression, complications, and alternative diagnoses, including acute heart failure from COVID-19 myocardial injury and pulmonary thromboembolism <ns0:ref type='bibr' target='#b11'>(Driggin et al., 2020)</ns0:ref>. Although CT is more sensitive than chest radiography, chest radiography remains the first-line imaging modality in COVID-19 patients because of its availability, affordability, reduced radiation risk, and ease of decontamination <ns0:ref type='bibr' target='#b16'>(Fatima et al., 2020)</ns0:ref>. Hence, this research work utilized chest radiographs (X-ray) for identifying COVID-19 in patients. Bilateral, peripheral, lower zone prevalent ground-glass opacities are common chest radiograph observations in COVID-19 patients. <ns0:ref type='bibr' target='#b60'>(Vancheri et al., 2020)</ns0:ref>. Other possible findings include normal, unilateral, or bilateral reticular alterations, consolidations, ground-glass opacities, and pleural effusion <ns0:ref type='bibr' target='#b24'>(Hamid, Mir & Rohela, 2020)</ns0:ref>. COVID-19 is transmitted through droplets from coughing or sneezing and on close contacts with infected persons. Propagation of droplet based on infected surfaces are considered as the major means of transmitting SARS-CoV-2'. However, patients going through screening are protected and scanned through the use of treated tools <ns0:ref type='bibr' target='#b32'>(Kooraki et al., 2020)</ns0:ref>. The incubation period of COVID-19 is usually about 14 days, during which it attacks the lung. Different countries recommend personal protection equipment (PPE). According to the Centers for Disease Control and Prevention, radiology personnel should use a face mask, glasses or face shield, sleeves, and an isolation garment. A surgical cap and foot covers are required in countries with more rigorous PPE guidelines, whereas a surgical mask, goggles, or face shield are recommended in countries with less strict PPE guidelines. <ns0:ref type='bibr'>(CDCP, 2020)</ns0:ref>. COVID-19 is highly contagious, and the symptoms begin to appear 5-6 days after contracting it either from the droplet or close contact with an infected person. According to <ns0:ref type='bibr' target='#b63'>Wang, Tang, and Wei (2020)</ns0:ref>, the period between the manifestation of symptom and demise ranges from 6 -41 days, depending on the age and immune system of the patient. 
The period is shorter among older people <ns0:ref type='bibr' target='#b4'>(Bai et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b24'>Hamid, Mir & Rohela, 2020)</ns0:ref>. COVID-19 shows some unique clinical symptoms like targeting the lower airway that manifests in sore throat, sneezing, and rhinorrhoea. In addition, the chest radiographs result in some cases possess 'infiltrate in the upper lobe of the lung' related to growing dyspnea with hypoxemia <ns0:ref type='bibr' target='#b24'>(Hamid et al., 2020)</ns0:ref>. COVID-19 has become endemic and rapid diagnosis is imperative to identify patients and carriers for possible isolation and treatment to curb the spread of the disease. Attempts have been made to diagnose the disease, but many are slow and not accurate in that they often give false-negative and false-positive results. This section examines some of the work done in this area.</ns0:p><ns0:p>Over comprehensive detection of each aggressive spread chain, <ns0:ref type='bibr' target='#b35'>Lokuge et al. (2020)</ns0:ref> proposed an accurate, fast, and flexible tracking approach for spotting all residual COVID-19 group transmission. Using surveillance evaluation methods, they considered efficiency and sensitivity in the classification of population transmission chains by testing primary care fever and cough patients, hospital cases, or asymptomatic community members. They also varied the number of duplications, monitoring capacities, and the prevalence of COVID-19 and non-COVID-19 fever and cough. The findings of the study revealed that testing both syndromic fever and cough primary care presentations, as well as precise and diligent case and touch monitoring and assessment, allows for proper primary direct detection and elimination of COVID-19 population transmission. Even with optimized test sensitivity, measures such as combining these approaches could allow for increased case discovery if testing capacity is minimal. The impact analysis of movement restriction as a result of emergence of COVID-19 was carried out by <ns0:ref type='bibr'>Hyafil and Moria (2020)</ns0:ref>. The study looked at the impact of the steps put in place in Spain to combat the epidemic. The instances and the influence of the imposed restriction to movement on the multiplicative quantity of hospitalization reports were estimated. The projected figure of instances displayed a rapid rise towards total movement restriction as imposed. The primary replication rate reduced meaningfully from 5.89 (95% CI: 5.46-7.09) before the lockdown to 0.48 (95% CI: 0.15-1.17) after the lockdown. The study found that managing a pandemic in the magnitude of COVID-19 was very intricate and required timely decisions. The significant modifications found in the infestation rate displayed that employing inclusive participation in the first phase was vital in reducing the effect of a possible transferrable threat. This paper likewise stressed the significance of dependable up-to-date epidemiological facts to precisely measure the influence of Public Health guidelines on the virus-related outburst. <ns0:ref type='bibr' target='#b69'>Yu et al. (2020)</ns0:ref> observed the mounting evidence that suggested that there remained a hidden collection of COVID-19 asymptomatic but transferable cases and that approximating the count of disease instances without symptoms was essential in knowing the virus and curtailing its transmission; though, it was reported that it was difficult to precisely calculate the spread of the infection. 
A machine learning-based fine-grained simulator (MLSim) was proposed to combine many practical reasons, such as disease development during the maturation phase, cross-region population movement, unobserved patients without symptoms, preventative measures and confinement resilience, to estimate the number of asymptomatic infection, which is critical in understanding the virus and accurately containing its spread. Digital transmission mechanisms with many unspecified variables were used to simulate the relationships between the variables, which were calculated from epidemic data using machine learning approach. When MLSim learned to closely compare and contrast real-world data, it was able to simulate the instances of patients without symptoms as well. The accessible Chinese global epidemic data helped the MLSim to train better. The analysis indicated that fine-grained machine learning simulators could improve the modelling of dynamic real-world infection transmission mechanism, which can aid in the development of balanced mitigation steps. The simulator equally showed the possibility of a great amount of undiscovered disease risk, posing a major threat to containing the virus. COVID-19 was modelled using a composite stochastic and deterministic concept that allowed for time-varying transmission capacity and discovery chances (Romero-Severson, Hengartner, Meadors & Ke, 2020). Iterative particle sorting was used to adapt the model to a historical data study of occurrence and casualty figures from fifty-one countries. The report confirmed the fact that the spread rate is decreasing in forty-two of the fifty-one countries surveyed. Out of the forty-two countries, thirty-four showed a big significant proof for subcritical transmission rates, though the turndown in novel cases was moderately slow in comparison to early development rates. The study concluded that attempts to reduce the occurrence of COVID-19 by social distancing were successful. They could, however, be improved and retained in various regions to prevent the disease from resurfacing. The study also proposed other approaches to manage the virus before the relaxation of social distancing efforts.</ns0:p><ns0:p>The challenges associated with the storage and security of COVID-19 patients' data were highlighted by ElDahshan, AlHabshy, and Abutaleb <ns0:ref type='bibr'>(2020)</ns0:ref>. The authors pointed out that the variety, volume, and variability of COVID-19 patients data required storage in NoSQL database management systems (NoSQL DBMS). It was noted that available NoSQL DBMSs were fraught with security challenges that rendered them unsuitable for storing confidential patient data. Academic institutions, research centres, and enthusiasts find it difficult to select the most suitable NoSQL DBMS because there are myriads of them without standard ways of determining the best. Thus, the study presented an inventive approach to selecting and securing NoSQL DBMS for medical information. The authors outlined the five most common NoSQL database groups, as well as the most common NoSQL DBMS forms affiliated with every one of them. In addition, their research included a comparison of the various types of NoSQL DBMS. The paper provided an efficient solution to the myriads of security challenges, ranging from authorization, authentication, encryption, and auditing in storing and securing medical information utilizing a collection of web service-based functions. 
Guerrero, Brito, and Cornejo (2020) used a mathematical model to depict a sneezing individual in an urban setting with a meteorological wind of medium strength. The airborne spread of droplets was demonstrated using a Lagrangian method and a wall-modeled Large Eddy Simulation. The results showed that the two kinds of droplets behave differently according to their size: larger droplets (400-900 µm) scatter between 2-5 m in 2.3 s, whereas smaller droplets (100-200 µm) are transported in a larger, more dispersed cloud between 8-11 m by the windy conditions in 14.1 s on average. Understanding the range of possible infection in this way aids in deciding whether tougher self-care and distancing policies should be adopted.</ns0:p><ns0:p>To distinguish infectious acute abdomen patients suspected of COVID-19, Zhao et al. (2020) proposed a forecasting model comprising a nomogram and a scale. The analytical framework was built on the basis of a retrospective case study. In a training cohort, the model was formulated using LASSO regression and a multivariable logistic regression method. In the training and testing cohorts, calibration curves, receiver operating characteristic (ROC) curves, decision curve analysis (DCA), and clinical effect curves evaluated the efficiency of the nomogram.</ns0:p></ns0:div>
<ns0:div><ns0:p>According to the nomogram, a simpler testing scale and management algorithm were developed. In the testing cohort, the CIAAD nomogram demonstrated strong discrimination and calibration, which was validated. The CIAAD nomogram was clinically useful, according to the decision curve analysis, and was condensed even further into the CIAAD scale. The approximate Bayesian computation method was utilized by Vasilarou, Alachiotis, Garefalaki, and Beloukas (2020) to determine the parameters of a demographic scenario involving exponential growth of the COVID-19 population size, and revealed that rapid exponential growth in population size could explain the observed polymorphism patterns in COVID-19 genomes. Amrane et al. (2020) adopted a genetic approach using rapid virological diagnosis on sputum and nasopharyngeal samples from suspect patients. Two real-time RT-PCR systems employing a 'hydrolysis probe and the LightCycler Multiplex RNA Virus Master Kit' were used. The primary technique probed the envelope protein (E)-encoding gene and used a synthetic RNA positive control. The subsequent system targeted the spike protein-encoding gene (forward primer, reverse primer, and probe) and also used synthetic RNA positive controls. Bai et al. (2020) proposed the use of medical technology through the Internet of Things (IoT) to develop an intelligent analysis and treatment assistance programme (nCapp). The conceptual cloud-based IoT platform includes the basic IoT functions as well as a graphics processing unit (GPU). To aid deep mining and intelligent analysis, cloud computing systems were linked to existing electronic health records, image cataloguing, and interaction. Li et al. (2020) examined chest images for the analysis of COVID-19. High-resolution computed tomography (HRCT) was employed for analysis of the virus infection. CT scans were taken with the following parameters: 120 kV; 100-250 mAs; collimation of 5 mm; pitch of 1-1.5; and a 512 x 512 matrix. The images were reconstructed by high-resolution and conventional algorithms. The experiments were repeated several times, running into days for each patient. The high-resolution CT objectively evaluated lung lesions, giving a better understanding of the pathogenesis of the disease.</ns0:p><ns0:p>Long et al. (2020) evaluated the suitability of computed tomography (CT) and real-time reverse-transcriptase polymerase chain reaction (rRT-PCR). A clinical experiment with real-life data was executed, and the results presented showed that CT examination outperformed rRT-PCR, at 97.2% and 84.6%, respectively. In Vaishya, Javaid, Khan, and Haleem (2020), seven critical AI applications for the novel COVID-19 were identified as performing vital roles in screening, analyzing, tracking, and predicting patients. The application areas identified comprised early detection and diagnosis of the infection, treatment monitoring, contact tracing of individuals, projection of cases and mortality, drug and vaccine development, reduction of healthcare workers' workload, and prevention of the disease by providing updated supportive information.
Zhang, Wang, Jahanshahi, Jia, and Schmitt (2020) conducted a survey that presented evidence of mental distress and its associated predictors amongst adults during the COVID-19 pandemic in Brazil. The data were collected from 638 adults from March 25 to 28, 2020, about one month after the index case was confirmed in São Paulo. Adults who were female, younger, more educated, and who exercised less reported higher levels of distress, with 52 percent experiencing mild-to-moderate distress and 18.8 percent suffering extreme distress. The study's findings also revealed that a person's distance from São Paulo, the epicenter, had a direct connection with the psychological distress they were experiencing. For the older population who worked the least, the 'typhoon eye effect' was more potent. Adults who lived far from the worst-hit geographic area and did not go to work in the week preceding the survey were the most vulnerable. The paper concluded that recognizing the predictors of suffering would allow mental health services to improve targeting and assist the more mentally defenseless adults in the crisis. Ghafari et al. (2020) assessed the challenges and indications of concern regarding COVID-19 in Iran. The heterogeneous COVID-19 casualty levels around the country were investigated in fourteen university hospitals in Tehran, and it was revealed that the cases recorded by 13/03/2020 represented just under 10% of symptomatic patients in the population. The finding indicated that there was substantial underreporting of cases in Iran. The study suggested that strict measures be implemented throughout a period of widespread underreporting in order to prevent the healthcare system from being exhausted within a month. Further studies on efficient diagnosis, detection, and vaccines for the virus are continuing for two main reasons: partly because the disease is new, and secondly because available research efforts have not been able to address the concerns effectively. Therefore, in this paper, a deep learning modeling framework for efficient identification, classification, and provision of new insights for the diagnosis of COVID-19 is presented, and the prediction of probable COVID-19 patients from radiology scans of suspected patients is demonstrated.</ns0:p><ns0:p>Zivkovic et al. (2021) proposed a hybrid comprising the machine learning Adaptive Neuro-Fuzzy Inference System (ANFIS) and an enhanced bio-inspired Beetle Antennae Search (BAS) algorithm, referred to as CESBAS-ANFIS. The hybrid study was carried out with a view to improving existing time-series prediction algorithms for forecasting new COVID-19 cases. An improved BAS algorithm was adopted to update the parameters of ANFIS. A prediction model for the virus outbreak was formulated using ANFIS trained by the improved BAS algorithm to enhance the prediction accuracy of new COVID-19 cases. CESBAS was introduced to update the ANFIS parameters, thereby solving the parameter-optimization problem of machine learning techniques for prediction.
A Cauchy mutation operator was incorporated into the original BAS, yielding CESBAS (Cauchy Exploration Strategy BAS), to improve the observed deficiencies in exploration ability and solution diversity. The proposed method consists of five layers: one input layer, two hidden layers, one layer for the consequent parameters, and the output layer that presents the forecasted value. The proposed model and other hybrid models were tested under the same conditions on two datasets, one from the WHO and the second from the 'Our World in Data' website, and their results were compared. CESBAS-ANFIS showed superior performance compared to other hybrid techniques such as ABC-ANFIS, BAS-ANFIS, and FPA-ANFIS. Elzeki, Shams, Sarhan, Abd Elfattah, and Hassanien (2021) presented a new deep learning computer-aided scheme for rapid and seamless classification of COVID-19. Using three separate COVID-19 X-ray datasets, the study presented the COVID Network (CXRVN) model for assessing grayscale chest X-ray images. The scheme was implemented on the three datasets using MATLAB 2019b. A comparison was made with the pre-trained AlexNet, GoogleNet, and ResNet models, using mini-batch gradient descent and the Adam optimizer to aid the learning process. Performance evaluation of the model using F1 score, recall, sensitivity, accuracy, and precision, with generative adversarial network (GAN) data augmentation, revealed that the accuracy for the two-class classification was 96.7%, while that of the three-class classification model reached 93.07%. However, the authors pointed out that increased availability of datasets could improve the performance of future methodologies. In addition, it was stated that the model could be enhanced by employing computed tomography (CT) images and studying different updated cases of COVID-19 X-ray images.</ns0:p><ns0:p>From the reviews presented in this section, it was observed that some of the existing works reported low accuracies, used imbalanced datasets, and showed no evidence of the use of some standard evaluation metrics, such as Matthews Correlation Coefficient and Cohen's Kappa statistics, in their approaches. The aforementioned issues were addressed in this paper. In addition, the performance of the VGG-16 and VGG-19 networks, the two forms of the VGGNet architecture, in predicting COVID-19 X-ray datasets was compared in this study.</ns0:p></ns0:div>
<ns0:div><ns0:head>The VGGNet Architecture</ns0:head><ns0:p>The VGGNet, proposed by Simonyan and Zisserman (2015), is a convolutional neural network that performed very well in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2014. The VGG-16 and VGG-19 networks are forms of the VGGNet architecture. The networks accept color images of size 224 x 224 with 3 channels (Red, Green, and Blue) as their input data. The images pass through convolutional layers that are stacked on top of each other, in which the convolutional filters have a small receptive field of 3 x 3 and a stride of 1. The convolutional kernel employs row and column padding such that the resolution before the convolution is retained after the processing of the images. Max-pooling is then done over a max-pool window of size 2 x 2 with a stride of 2 (Simonyan & Zisserman, 2015).</ns0:p><ns0:p>The network of the VGG-16 CNN has 13 convolutional layers (that is, 3×3 convolutional layers in blocks that are stacked on top of one another with growing depth). Two blocks house two 3×3 convolutional layers of the same setup in a sequential arrangement, while three blocks have three 3×3 convolutional layers of the same configuration in a sequential arrangement. As such, the VGG-16 has two contiguous blocks of two convolutional layers and three contiguous blocks of three convolutional layers, with each block followed by a max-pooling layer. In all, there are five max-pooling layers in the architecture. The max-pooling layers handle the reduction of the volume size after each block of two convolutional layers and after each block of three convolutional layers, and the informative features are obtained by these max-pooling layers applied at the specified stages in the network. The VGG-16 further has two fully-connected layers, each with 4,096 nodes, and one fully-connected layer with 1000 nodes, one node for each of the 1000 categories of images in the ImageNet database on which the network was pre-trained, followed by the SoftMax classifier (Simonyan & Zisserman, 2015), as presented in the VGG architecture, which can be found in Frossard et al. (2016).</ns0:p><ns0:p>The network of the VGG-19 CNN has 16 convolutional layers (that is, 3×3 convolutional layers in blocks that are stacked on top of one another with growing depth). Two blocks house two 3×3 convolutional layers of the same setup in a sequential arrangement, while three blocks have four 3×3 convolutional layers of the same configuration in a sequential arrangement. In other words, the VGG-19 has two contiguous blocks of two convolutional layers and three contiguous blocks of four convolutional layers, with each block followed by a max-pooling layer. In all, there are five max-pooling layers in the architecture. The max-pooling layers handle the reduction of the volume size after each block of two convolutional layers and after each block of four convolutional layers, and the informative features are obtained by these max-pooling layers applied at the specified stages in the network.
The VGG-19 further has two fully-connected layers, each with 4,096 nodes, and one fully-connected layer with 1000 nodes, one node for each of the 1000 categories of images in the ImageNet database on which the network was pre-trained, followed by the SoftMax classifier (Simonyan & Zisserman, 2015).</ns0:p></ns0:div>
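As a concrete illustration of the two backbones described above, the following minimal sketch loads both networks; it assumes the TensorFlow/Keras framework (the paper states only that Python was used, so the framework choice here is an assumption).

    from tensorflow.keras.applications import VGG16, VGG19

    # Both networks accept 224 x 224 RGB inputs, as described in the text.
    vgg16_base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    vgg19_base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

    # Printing the summaries shows the 13 vs. 16 convolutional layers and the
    # five max-pooling stages described above.
    vgg16_base.summary()
    vgg19_base.summary()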
<ns0:div><ns0:head>Materials & Methods</ns0:head><ns0:p>This study focused on diagnosing the COVID-19 chest X-ray dataset using a deep learning convolutional neural network (CNN). The CNN comprises one or more convolution layers followed by one or more fully connected layers, as obtained in a standard multilayer neural network. The COVID-19 Radiology Dataset (chest X-ray) for Annotation and Collaboration was collected from the Kaggle website (a database collated by researchers from Qatar University and the University of Dhaka, with collaborators from Pakistan and Malaysia, and some medical doctors) and from the Mendeley dataset repository. The data were preprocessed, and a median filter was employed to restore each image under evaluation by mitigating the severity of acquisition degradations. Manikandarajan and Sasikala (2013) mentioned some of the preprocessing and segmentation strategies that were used. In the median filter, each data point is replaced by the median value of its neighbors and itself; as a result, data points that differ significantly from their neighbors are removed. Following the preprocessing of the image dataset, the images were segmented using a simulated annealing algorithm. Feature extraction and classification were done using the CNN. The neural network-based convolutional segmentation was implemented in a Jupyter Notebook using the Python programming language, and the model was built using sample datasets for the system to recognize and classify COVID-19. The generated model can be used to develop a simple web-based application that medical personnel handling COVID-19 tests could use to input new cases and quickly predict the presence of COVID-19 with a very high level of accuracy.</ns0:p></ns0:div>
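A minimal sketch of the median-filter preprocessing step is shown below, assuming OpenCV; the 3 x 3 kernel size and the file name are assumptions, as the paper does not state them.

    import cv2

    image = cv2.imread("chest_xray.png")   # hypothetical input file name
    # Each pixel is replaced by the median of its 3 x 3 neighbourhood,
    # suppressing isolated degradations in the scanned image.
    denoised = cv2.medianBlur(image, 3)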
<ns0:div><ns0:head>Dataset Description and Preprocessing</ns0:head><ns0:p>Chest X-ray images were selected from a repository of COVID-19 positive cases' chest X-ray images, as well as normal and viral pneumonia images, which were collated by researchers from Qatar University and the University of Dhaka, along with collaborators from Pakistan and Malaysia, and some medical doctors. There are 219 COVID-19 positive images in their current release, 1341 normal images, and 1345 viral pneumonia images (Chowdhury et al., 2020). For multiple representations, the dataset of chest X-ray images for both COVID-19 and normal cases was also selected from the Mendeley dataset repository (El-Shafai, 2020), which contains 5500 Non-COVID X-ray images and 4044 COVID-19 X-ray images. This study, therefore, adopted these multisource datasets. Due to limited computing resources, 1,300 images were selected from each category for model building and validation in this study: 1,300 images of COVID-19 positive cases, 1,300 Normal images, and 1,300 images of viral pneumonia cases, totalling 3,900 images in all. Also, a different set of 470 images (containing 70 COVID-19 images, 200 Viral Pneumonia images, and 200 Normal images) was selected and used for testing to obtain an impartial evaluation of the final model. The dataset used in this study can be found at Fayemiwo et al. (2021a). It should be noted here that further descriptions of the datasets were not provided by the authors of the datasets' sources.</ns0:p><ns0:p>OpenCV (Bradski and Kaehler, 2008) was used for loading and preprocessing images in the dataset. Each image was loaded and preprocessed by performing a conversion to RGB channels and changing the size of the images to 224×224 pixels to be ready for the Convolutional Neural Network. Pixel intensities were then scaled to the range [0, 1], and the data and labels were converted to NumPy array format. Labels were then encoded using a one-hot encoder while training/testing splits were created. To ensure that the model generalizes well, data augmentation was performed by setting the random image rotation to 15 degrees, random range zooming to 0.15, random shift of width and height to 0.2, random shear range to 0.15, and randomly flipping half of the images horizontally by setting horizontal_flip = True and fill_mode to nearest. One thousand, nine hundred and fifty (1,950) images were allocated initially to train the model in the binary classification task, resulting in 78,000 training images after augmentation. Two thousand, nine hundred and twenty-five (2,925) images were allocated initially for training the model in the multiclass classification task, which resulted in 117,000 training images after augmentation.</ns0:p></ns0:div>
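The augmentation settings listed above translate directly into the following sketch; the horizontal_flip and fill_mode wording in the text suggests the Keras ImageDataGenerator API, but that is an inference rather than something the paper states.

    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    train_aug = ImageDataGenerator(
        rotation_range=15,       # random rotation up to 15 degrees
        zoom_range=0.15,         # random zooming of 0.15
        width_shift_range=0.2,   # random horizontal shift
        height_shift_range=0.2,  # random vertical shift
        shear_range=0.15,        # random shearing
        horizontal_flip=True,    # flip half of the images horizontally
        fill_mode="nearest",
    )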
<ns0:div><ns0:head>The Deep-Transfer Learning Model</ns0:head><ns0:p>This study employed the Deep Transfer Learning (DTL) approach for COVID-19 detection. The DTL approach focuses on storing the weights learned while solving one image classification task and then reusing them on a related task. Several DTL networks have been proposed, including VGGNet (Simonyan & Zisserman, 2015), GoogleNet (Szegedy et al., 2015), ResNet (He, Zhang, Ren & Sun, 2016), DenseNet (Huang, Liu, Van Der Maaten & Wein, 2017) and Xception (Chollet, 2017). In this paper, the VGG-16 CNN and VGG-19 CNN, forms of the VGGNet, trained on the popular ImageNet image dataset, were used. The VGG-16 and VGG-19 CNNs are deep neural networks pre-trained on ImageNet for computer vision (image recognition) problems, having 16 weight layers and 19 weight layers, respectively. They were used as pre-trained models to help learn the distinguishing features in COVID-19 X-ray images with the aid of a transfer learning approach, thus yielding trained DTL models for the identification of COVID-19 from X-ray images.</ns0:p><ns0:p>As shown in the workflows in Fig. 1 and Fig. 2, respectively, to train the VGG-16 based DTL model and the VGG-19 based DTL model for the detection of COVID-19, the VGG-16 CNN and VGG-19 CNN were used as pre-trained models and were fine-tuned for COVID-19 detection based on the principles of transfer learning. The weights of the lower layers of the network, which learn very general characteristics in the pre-trained model, were used as feature extractors for the implementation of transfer learning with fine-tuning. Therefore, the pre-trained model's lower-layer weights were frozen and not updated through the training process, thus not participating in the transfer-learning process. The higher layers of the pre-trained model were used for learning task-specific features from the COVID-19 image dataset. In this case, the higher layers of the pre-trained model were unfrozen, made trainable, or fine-tuned, so that the weights of these layers were updated. Consequently, these layers were allowed to participate in the transfer-learning process. Each of these models ends with the SoftMax layer, which produces the outputs. The weights for the VGG-16 and VGG-19 networks were pre-trained on ImageNet, and the Fully Connected (FC) layer head was removed. From there, new fully-connected layer heads were built, comprising POOL => FC => SOFTMAX layers, and attached on top of VGG-16 and VGG-19. The convolutional weights of VGG-16 and VGG-19 were then frozen in such a way that only the FC layer head was trained.</ns0:p></ns0:div>
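A minimal sketch of this fine-tuning setup is given below, assuming Keras. The width of the new dense layer (64 units) and the dropout rate are assumptions, not values stated in the paper, although a 64-unit head is consistent with the 32,962 and 33,027 trainable parameters reported in the experiments below.

    from tensorflow.keras.applications import VGG16
    from tensorflow.keras.layers import AveragePooling2D, Dense, Dropout, Flatten, Input
    from tensorflow.keras.models import Model

    base = VGG16(weights="imagenet", include_top=False,
                 input_tensor=Input(shape=(224, 224, 3)))
    for layer in base.layers:        # freeze the convolutional weights
        layer.trainable = False

    # New head: POOL => FC => SOFTMAX, attached on top of the frozen base.
    head = AveragePooling2D(pool_size=(4, 4))(base.output)
    head = Flatten()(head)
    head = Dense(64, activation="relu")(head)   # assumed width
    head = Dropout(0.5)(head)                   # assumed rate
    outputs = Dense(3, activation="softmax")(head)  # use 2 units for the binary task

    model = Model(inputs=base.input, outputs=outputs)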
<ns0:div><ns0:p>Both a binary classification scenario and a three-class classification scenario are considered in the workflow, in which the DTL model determines the class of the chest X-ray images as either the 'COVID-19' category or the 'Normal' category in the binary scenario, or as the 'COVID-19' category, 'Viral Pneumonia' category, or 'Normal' category in the three-class scenario.</ns0:p><ns0:p>The flowchart of the experimental algorithm for the deep transfer learning models based on the VGG-16 and VGG-19 networks proposed in this paper is presented in Fig. 3. For these experiments, out of a total of 3,900 images used, 2,925 images (75%) were used for training the models, while 975 images (25%) were used for validation and to perform hyper-parameter tuning. A different set of 470 images (containing 70 COVID-19, 200 Viral Pneumonia, and 200 Normal images) was used for testing to obtain an impartial evaluation of the final model. The code for this experiment can be found at Fayemiwo et al. (2021b).</ns0:p></ns0:div>
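The 75/25 train/validation split described above can be reproduced with a sketch like the following, assuming scikit-learn; the placeholder arrays and the stratified, seeded split are illustrative assumptions.

    import numpy as np
    from sklearn.model_selection import train_test_split

    # Placeholders standing in for the 3,900 preprocessed images and their
    # one-hot labels produced in the preprocessing step.
    data = np.zeros((3900, 224, 224, 3), dtype="float32")
    labels = np.eye(3)[np.repeat([0, 1, 2], 1300)]

    train_x, val_x, train_y, val_y = train_test_split(
        data, labels, test_size=0.25,
        stratify=labels.argmax(axis=1), random_state=42)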
<ns0:div><ns0:head>Experimentation and Results</ns0:head><ns0:p>The performances of the proposed models were obtained from the models' generated confusion matrices, using standard metrics such as accuracy, specificity, precision, recall (sensitivity), F1-Score, Matthews Correlation Coefficient and Cohen's Kappa statistics.</ns0:p></ns0:div>
<ns0:div><ns0:p>Equations (1), (2) and (3) show the formulas for computing the three key metrics used in this article, namely Accuracy, Matthews Correlation Coefficient and Cohen's Kappa statistics, respectively:</ns0:p><ns0:formula xml:id='formula_0'>\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \quad (1)

MCC = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}} \quad (2)

\mathrm{Kappa} = \frac{p_o - p_e}{1 - p_e}, \quad p_o = \frac{TP + TN}{N}, \quad p_e = \frac{(TP + FP)(TP + FN) + (TN + FP)(TN + FN)}{N^2}, \quad N = TP + TN + FP + FN \quad (3)</ns0:formula><ns0:p>where TP, TN, FP and FN denote True Positive, True Negative, False Positive and False Negative, respectively.</ns0:p></ns0:div>
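The three metrics can be computed directly from the confusion-matrix counts; the following sketch implements Equations (1)-(3) for the binary case.

    import math

    def accuracy(tp, tn, fp, fn):
        return (tp + tn) / (tp + tn + fp + fn)

    def mcc(tp, tn, fp, fn):
        num = tp * tn - fp * fn
        den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
        return num / den if den else 0.0

    def cohen_kappa(tp, tn, fp, fn):
        n = tp + tn + fp + fn
        p_o = (tp + tn) / n                          # observed agreement
        p_e = ((tp + fp) * (tp + fn) +
               (tn + fp) * (tn + fn)) / (n * n)      # chance agreement
        return (p_o - p_e) / (1 - p_e)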
<ns0:div><ns0:head>Experiment A</ns0:head><ns0:p>The first experiment was performed on the binary-class dataset. A DTL model based on a pre-trained VGG-16 model was trained to classify the X-ray images into the two classes of COVID-19 or Normal, that is, to detect whether an X-ray image is simply of the class COVID-19 or Normal. The VGG-16 based DTL model summary, detailing the layers and parameters in each layer of the model, is shown in Table 1. The fine-tuned VGG-16 based DTL model consists of 14,747,650 total parameters, with 32,962 of them made trainable while 14,714,688 were non-trainable. The VGG-16 DTL model was trained with a batch size of 10 over 40 epochs, using the Adam optimizer for weight updates and a categorical cross-entropy loss function with a learning rate of 1e-2. The performance of the proposed fine-tuned VGG-16 based DTL model was evaluated on 25% of the X-ray images.</ns0:p></ns0:div>
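Under the stated configuration (Adam optimizer, learning rate 1e-2, batch size 10, 40 epochs), the training step can be sketched as follows, assuming the Keras model, augmenter and data split from the earlier sketches.

    from tensorflow.keras.optimizers import Adam

    model.compile(optimizer=Adam(learning_rate=1e-2),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])

    history = model.fit(
        train_aug.flow(train_x, train_y, batch_size=10),
        steps_per_epoch=len(train_x) // 10,
        validation_data=(val_x, val_y),
        epochs=40)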
<ns0:div><ns0:head>Table 1:</ns0:head><ns0:p>The layers and layer parameters of the proposed fine-tuned VGG-16 based DTL model (Binary Classification)</ns0:p><ns0:p>The output of the confusion matrix for the binary classification as obtained from the VGG-16 based DTL model is shown in Table 2. Figure 4 illustrates the training loss and accuracy and the validation loss and accuracy graphs of the proposed fine-tuned VGG-16 based DTL model. The validation accuracy, recall, specificity, precision, F1-Score, Matthews Correlation Coefficient and Cohen's Kappa statistics of the proposed fine-tuned VGG-16 based DTL model were also obtained.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 2:</ns0:head><ns0:p>The Confusion Matrix of the Binary classification task obtained from the fine-tuned VGG-16 based DTL model. The validation accuracy obtained for the fine-tuned VGG-16 based DTL model was 99.23%, its recall was 100%, while its specificity stands at 98.48%. The obtained values for the precision, recall, F1-Score, Matthews Correlation Coefficient and Cohen's Kappa statistics metrics for the binary classification task using the VGG-16 based DTL model are given in Table 3.</ns0:p></ns0:div>
<ns0:div><ns0:head>Experiment B</ns0:head><ns0:p>The second experiment was also performed on the binary-class dataset. A DTL model based on a pre-trained VGG-19 model was trained to classify the X-ray images into the two classes of COVID-19 or Normal, that is, to detect whether an X-ray image is simply of the class COVID-19 or Normal. The VGG-19 based DTL model summary, detailing the layers and the parameters in each layer of the model, is shown in Table 4. The fine-tuned VGG-19 based DTL model consists of 20,057,346 total parameters, with 32,962 of them made trainable while 20,024,384 were non-trainable. The VGG-19 DTL model was trained with a batch size of 10 over 40 epochs, using the Adam optimizer for weight updates and a categorical cross-entropy loss function with a learning rate of 1e-1. The performance of the proposed fine-tuned VGG-19 based DTL model was evaluated on 25% of the X-ray images. The output of the confusion matrix for the binary classification as obtained from the fine-tuned VGG-19 based DTL model is shown in Table 5. Figure 5 illustrates the training loss and accuracy and the validation loss and accuracy curves of the model. The validation accuracy obtained for the fine-tuned VGG-19 based DTL model was 98.00%, its recall was 95.95%, while its specificity stands at 100%. The values obtained for the precision, recall, F1-Score, Matthews Correlation Coefficient and Cohen's Kappa statistics metrics for the binary classification task using the VGG-19 based DTL model are given in Table 6.</ns0:p></ns0:div>
<ns0:div><ns0:head>Experiment C</ns0:head><ns0:p>The third experiment was performed on the multiclass (three-class) dataset, in which a DTL model based on a pre-trained VGG-16 model was trained to classify the X-ray images into the three classes of COVID-19, Viral Pneumonia or Normal, that is, to detect whether an X-ray image is simply of the class COVID-19, Viral Pneumonia or Normal. The VGG-16 based DTL model summary, detailing the layers and parameters in each layer of the model, is shown in Table 7. The fine-tuned VGG-16 based DTL model consists of 14,747,715 total parameters, with 33,027 of them made trainable while 14,714,688 were non-trainable. The VGG-16 DTL model was trained with a batch size of 10 over 40 epochs, using the Adam optimizer for weight updates and a categorical cross-entropy loss function with a learning rate of 1e-2. The performance of the proposed fine-tuned VGG-16 based DTL model was evaluated on 25% of the X-ray images.</ns0:p><ns0:p>The output of the confusion matrix for the three-class classification as obtained from the VGG-16 based DTL model is shown in Table 8. Figure 6 illustrates the training loss and accuracy and the validation loss and accuracy graphs of the proposed fine-tuned VGG-16 based DTL model. The validation accuracy, recall, specificity, precision, and F1-Score of the proposed fine-tuned VGG-16 based DTL model were also obtained. The validation accuracy obtained for the fine-tuned VGG-16 based DTL model was 93.85%, its recall was 97.98%, while its specificity stands at 94.69%. The obtained values for the precision, recall, F1-Score, Matthews Correlation Coefficient and Cohen's Kappa statistics metrics for the three-class classification task using the VGG-16 based DTL model are given in Table 9. In Experiment D, the validation accuracy obtained for the fine-tuned VGG-19 based DTL model was 92.92%, its recall was 95.95%, while its specificity stands at 89.68%. The obtained values for the precision, recall, F1-Score, Matthews Correlation Coefficient and Cohen's Kappa statistics metrics for the three-class classification task using the VGG-19 based DTL model are given in Table 12. Being the best performing model in this study, the fine-tuned VGG-16 DTL model was tested on the test dataset of 470 images. The test accuracy obtained for the model was 98%. The output results of the tests, as shown in Fig. 8 to Fig. 10, show how the fine-tuned VGG-16 DTL model classified and detected each of the images as either 'COVID-19', 'Viral Pneumonia', or 'Normal'. The level of confidence in the model classification is also shown. Figure 8 shows the sample images that were detected as 'COVID-19' along with the model's classification confidence values. Figure 9 shows the sample images that were detected as 'Viral Pneumonia' and the model's classification confidence values, while Fig. 10 shows the sample images that were detected as 'Normal' along with the model's classification confidence values.</ns0:p></ns0:div>
Of the COVID-19 sample test results in Fig. 8, only one showed a lower confidence level of 76.37%, while the others were above 94%. Similar results can be seen in Fig. 10 for the Normal classification, where the lowest confidence level is 78.92%. However, the lowest output for Viral Pneumonia is 96.21%, as shown in Fig. 9. These test results showed that the developed models could generalize and adapt to new data outside the training and validation dataset. Such test results are necessary to show the adaptability of the developed models when related data is considered.</ns0:p></ns0:div>
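The per-image confidence values reported above can be produced as in the following sketch; the class ordering and the test_images array are assumptions for illustration.

    import numpy as np

    CLASSES = ["COVID-19", "Normal", "Viral Pneumonia"]  # assumed label order

    probs = model.predict(test_images)   # test_images: array of shape (N, 224, 224, 3)
    for p in probs:
        idx = int(np.argmax(p))
        print(f"{CLASSES[idx]}: {100 * p[idx]:.2f}% confidence")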
<ns0:div><ns0:head>Evaluation of Results</ns0:head><ns0:p>The results obtained in this work were compared with thirteen other existing approaches in the literature. A few studies conducted before this study had used only twenty-five and fifty images in each class (Sethy & Behra, 2020; Hemdan et al., 2020; Narin et al., 2020), while nine of the benchmarked approaches used imbalanced data (Table 13). Generally, the problem with modeling imbalanced data is that it can lead to the inability of the model to generalize, or the model can be biased towards the class with a high number of data points. Hence, in this study, an equal number of images (1,300) was used for each category, and this is believed to have contributed to the increased accuracies of the proposed models. At the moment, creating an automated diagnostic tool for the detection of COVID-19 suffers from the drawback of the limited number of cases. To ensure the generalization of the models developed in this work, data augmentation was performed by setting the random image rotation to 15 degrees clockwise. The proposed new models were based on fine-tuning the VGG-16 and VGG-19 networks by constructing a new fully-connected layer head consisting of POOL => FC => SOFTMAX layers and appending it on top of VGG-16 and VGG-19; the convolutional weights of VGG-16 and VGG-19 were then frozen, such that only the FC layer head was trained. The fine-tuned models gave better results than other models that used the ordinary pre-trained VGG-16 and VGG-19 (Apostolopoulos & Mpesiana, 2020; Khalid & Youness, 2020). Complete results of the comparison with thirteen other existing results from the literature are presented in Table 13, with the proposed model recording the best performance accuracy. The closest performing model to the proposed model is the DarkCovidNet model (Tulin et al., 2020) with 98.08% accuracy, while the proposed DTL-based VGG-16 model has 99.23% accuracy, both in the binary classification task.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions and Future Work</ns0:head><ns0:p>Several researchers around the world are combining their efforts to collect data and develop solutions for the COVID-19 pandemic problem. Laboratory testing of suspected cases characterized by long waiting periods and an exponential increase in demand for tests has hitherto constituted a significant bottleneck globally. Hence, rapid diagnostic test kits are being developed, most of which are currently undergoing clinical validation and are yet to be adopted for routine use. The researchers suspect that the better performance of the VGG-16 DTL model might be attributed to the volume of data used in the experiments; that is, the depth of layers in the VGG-19 architecture may not have any significant effect on the performance when the dataset is small. This suspicion would be investigated, as a future work, when more COVID-19 data is available. Finally, the COVID-19 images from Chest CT scans are not readily available, unlike X-ray images, because of their high cost. Therefore, other future works would consider using Chest CT images to develop a more sensitive diagnostic tool for detecting viral pneumonia and COVID-19 variants. Further hyper-parameter tweaking would also be done to get more accurate results. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:58415:1:1:NEW 26 May 2021)Manuscript to be reviewed Computer Science<ns0:ref type='bibr' target='#b30'>Irfan et al. (2021)</ns0:ref> explored the contributions of hybrid deep neural networks (HDNNs), chest Xrays and computed tomography (CT) in the detection of COVID-19. The work employed X-ray imaging and CT to develop the HDNNs for predicting the early infection of COVID-19. The HDNNs were trained and tested on five thousand (5000) images collected from five different sources (public and open), comprising 57% males and 32% females, and 3500 infected and 1500 healthy controls within an age group of 38-55 years. The proportion of the test dataset to the training dataset was 20:80, and classification accuracy of 99% was achieved with the HDNNs on the test dataset. The results of the performed experiments showed that the new multi-model and multi-data approach achieved improved performance over the traditional machine learning models.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1; Figure 2; Figure 3</ns0:head><ns0:label>1, 2, 3</ns0:label><ns0:figDesc>Figure 1: The Architectural Workflow of the Proposed Fine-tuned VGG-16 based Deep Transfer Learning Model for COVID-19 Detection</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Four</ns0:head><ns0:label /><ns0:figDesc>Four different experiments were performed to classify radiological X-ray images using Deep Transfer Learning approaches. Two of the experiments (Experiments A and B) trained the two Deep Transfer Learning models (VGG-16 and VGG-19 based) on the binary-class X-ray image dataset (with COVID-19 and Normal classes). The other two experiments (Experiments C and D) trained the two Deep Transfer Learning models (VGG-16 and VGG-19 based) on the multiclass X-ray image dataset (with COVID-19, Viral Pneumonia, and Normal classes).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: The Training loss and accuracy with the validation loss and accuracy curves obtained for the fine-tuned VGG-16 based Deep Transfer Learning Model (For Binary Classification)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Table 5; Figure 5</ns0:head><ns0:label>5, 5</ns0:label><ns0:figDesc>Figure 5: The Training loss and accuracy with the validation loss and accuracy curves obtained for the fine-tuned VGG-19 based Deep Transfer Learning Model (For Binary Classification)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6: The Training loss and accuracy with the validation loss and accuracy curves obtained for the fine-tuned VGG-16 based Deep Transfer Learning Model (For Three-class Classification)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Table 9; Table 10</ns0:head><ns0:label>9, 10</ns0:label><ns0:figDesc>Table 9: The Precision, Recall, F1-Score, Matthews Correlation Coefficient and Cohen's Kappa statistics obtained for the classification task using the fine-tuned VGG-16 based DTL model (Three-class Classification). Experiment D: The fourth experiment was performed on the multiclass (three-class) dataset, in which a DTL model based on a pre-trained VGG-19 model was trained to classify the X-ray images into the three classes of COVID-19, Viral Pneumonia or Normal, that is, to detect whether an X-ray image is simply of the class COVID-19, Viral Pneumonia or Normal. The VGG-19 based DTL model summary detailing the layers and parameters in each layer of the model is shown in Table 10. The fine-tuned VGG-19 based DTL model consists of 20,057,411 total parameters, with 33,027 of them made trainable while 20,024,384 were non-trainable. The VGG-19 DTL model was trained with a batch size of 10 over 40 epochs, using the Adam optimizer for weight updates and a categorical cross-entropy loss function with a learning rate of 1e-1; the performance of the proposed fine-tuned VGG-19 based DTL model was evaluated on 25% of the X-ray images. Table 10: The layers and layer parameters of the proposed fine-tuned VGG-19 based DTL model (Three-class Classification). The output of the confusion matrix for the three-class classification as obtained from the VGG-19 based DTL model is shown in Table 11. Figure 7 illustrates the training loss and accuracy and the validation loss and accuracy graphs of the proposed fine-tuned VGG-19 based DTL model. The validation accuracy, recall, specificity, precision, and F1-Score of the proposed fine-tuned VGG-19 based DTL model were also obtained. Table 11: The Confusion Matrix of the Three-class classification task obtained from the fine-tuned VGG-19 based DTL model.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7: The Training loss and accuracy with the validation loss and accuracy curves obtained for the fine-tuned VGG-19 based Deep Transfer Learning Model (For Three-class Classification)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Table 12 :</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>The Precision, Recall, F1-Score, Matthews Correlation Coefficient and Cohen's Kappa statistics obtained for the classification task using the fine-tuned VGG-19 based DTL model (Three-class Classification). It was noted from the obtained confusion matrices and the computed performance evaluation metrics of the binary and three-class classification tasks that the fine-tuned VGG-16 based deep transfer learning model outperformed the fine-tuned VGG-19 based deep transfer learning model in the detection of COVID-19. Based on this, some tests were carried out on unlabeled images using the developed fine-tuned VGG-16 multi-classification model. The tests were carried out to obtain an impartial evaluation of the final model. Some results of the tests are shown in Fig. 8 to Fig. 10.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8: COVID-19 sample test results with the predicted level of confidence value</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 9 :</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9: Viral Pneumonia sample test results with a predicted level of confidence value</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure 10 :</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10: Normal sample test results with the predicted level of confidence value</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head /><ns0:label /><ns0:figDesc>This paper proposed a solution using a Deep Learning Convolutional Neural Network model to classify a real-life COVID-19 dataset of chest X-ray images into three classes: COVID-19, Viral Pneumonia and Normal categories. Two experiments were performed in which the VGG-16 and VGG-19 Convolutional Neural Networks (CNN) with Deep Transfer Learning (DTL) were implemented in a Jupyter Notebook using the Python programming language. Experimental results showed that the pre-trained VGG-16 DTL model classified COVID-19 data better than the VGG-19 based DTL model. The fine-tuned VGG-16 and VGG-19 models produced classification accuracies of 99.23% and 98.00%, respectively, for binary classification, and 93.85% and 92.92% for multiclass classification. The proposed model, therefore, outperformed existing methods in terms of accuracy. Moreover, the fine-tuned VGG-16 and VGG-19 models have MCCs of 0.98 and 0.96, respectively, in the binary classification, and 0.91 and 0.89 for multiclass classification. These results showed that there are strong positive correlations between the models' predictions and the true labels. In the two classification tasks (binary and three-class), it was observed that the fine-tuned VGG-16 DTL model had stronger positive correlations in the MCC metric than the fine-tuned VGG-19 DTL model. The VGG-16 DTL model has a Kappa value of 0.98 as against 0.96 for the VGG-19 DTL model in the binary classification task, while in the three-class classification problem, the VGG-16 DTL model has a Kappa value of 0.91 as against 0.89 for the VGG-19 DTL model. This result is in agreement with the trend observed in the MCC metric. The findings of this study have a high potential of increasing the prediction accuracy for COVID-19 disease, which would be of immense benefit to the medical field and the entire human populace, as it could help save many lives from untimely death.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>The Precision, Recall, F1-Score, Matthews Correlation Coefficient and Cohen's Kappa statistics obtained for the classification task using the fine-tuned VGG-16 based DTL model (Binary Classification)</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The layers and layer parameters of the proposed fine-tuned VGG-19 based DTL model (Binary Classification)</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>The Precision, Recall, F1-Score, Matthews Correlation Coefficient and Cohen's Kappa statistics obtained for the classification task using the fine-tuned VGG-19 based DTL model (Binary Classification)</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>The layers and layer parameters of the proposed fine-tuned VGG-16 based DTL model (Three-class Classification)</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>The Confusion Matrix of the Three-class classification task obtained from the fine-tuned VGG-16 based DTL model</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 13 :</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Comparison of the proposed COVID-19 diagnostic methods with other deep learning methods developed using radiology images.</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "S/N
COMMENTS
ACTION
1.
Batch size, reason for the images to be downsized, number of training epochs).
i. The batch size, number of training epochs can be found in line 553 and 592.
ii. The reason for downsizing the images can be found in line 465
2.
The binary classification results generated with confusion matrices must report the Matthews correlation coefficient (MCC) values as well. The discussion of the results should mention and comment the MCC's obtained by the classifiers.
The binary classification results for VGG-16 and 19 DTL models generated with confusion matrices, using some standard metrics such as accuracy, specificity, precision, recall (sensitivity), F1-Score, Matthews Correlation Coefficient and Cohen’s Kappa statistics can be found in Table 3 (line 577 to 583) and Table 6 (line 615 to 621).
3.
The abstract should report the results measured with the MCC, and not with the accuracy.
The MCC and Kappa results can be found in line 35 to 43
4.
The resolution of Figure 1 must be improved.
The resolutions of all figures have been enhanced.
5.
In Tables 2, 4, 6, and 8, the authors should add more statistical rates such as the Matthews correlation coefficient (MCC), true negative rate, negative predictive value, accuracy, and Cohen’s kappa.
The statistical rates are added in Tables 2, 3, 5, 6, 6, 9, and 12.
6.
The abstract should not report the learning rate values of the artificial neural networks.
The learning rate values of the artificial neural networks had been removed.
7.
The authors sometimes write “X-rays” and sometimes write “x-rays” Please unify
“X-rays” had been unified in the document
8.
The URL of the dataset on FigShare on line 425 should be replaced with a reference to the same link.
The URL of the dataset on FigShare has been replaced with the reference Fayemiwo et al. (2021a).
9.
Add Objectives of the paper at the end of Introduction. Add Organization of the paper.
The objective and the organization of the paper can be found in lines 112 to 118.
10.
No need for any sub-heading “RELATED WORKS”. Just take one heading “Literature review' and merge everything there. At the end of Literature review, highlight in 9-15 lines what overall technical gaps are observed in existing works that led to the design of the proposed methodology.
i. The sub-heading “Related Works” has been removed and the heading “Literature Review” is now adopted to cover all discussion regarding reviews.
ii. At the end of Literature review, the technical gaps in existing works that led to the design of the proposed methodology can be found in lines 376 to 381.
11.
Add the Methodology aspect to this paper- Your proposed model, Algorithm or Flowchart of the Data Analysis.
The methodology aspect to this paper showing the Flowchart of the experimental algorithm for the DTL models based on the VGG-16 and VGG-19 networks is shown in Figure 4 (line 521)
12.
Add future scope to this paper
The future scope/work of this paper can be found in lines 783 to 791)
13.
Add the following references to this paper:
1. Zivkovic, M., Bacanin, N., Venkatachalam, K., Nayyar, A., Djordjevic, A., Strumberger, I., & Al-Turjman, F. (2021). COVID-19 cases prediction by using hybrid machine learning and beetle antennae search approach. Sustainable Cities and Society, 66, 102669.
2. Kumar, A., Sharma, K., Singh, H., Srikanth, P., Krishnamurthi, R., & Nayyar, A. Drone-Based Social Distancing, Sanitization, Inspection, Monitoring, and Control Room for COVID-19. Artificial Intelligence and Machine Learning for COVID-19, 153.
3. Devi, A., & Nayyar, A. Perspectives on the Definition of Data Visualization: A Mapping Study and Discussion on Coronavirus (COVID-19) Dataset. Emerging Technologies for Battling Covid-19: Applications and Innovations, 223.
4. Sharma, K., Singh, H., Sharma, D. K., Kumar, A., Nayyar, A., & Krishnamurthi, R. Dynamic Models and Control Techniques for Drone Delivery of Medications and Other Healthcare Items in COVID-19. Emerging Technologies for Battling Covid-19: Applications and Innovations, 1.
5. Alzubi, J., Nayyar, A., & Kumar, A. (2018, November). Machine learning from theory to algorithms: an overview. In Journal of physics: conference series (Vol. 1142, No. 1, p. 012012). IOP Publishing.
Some selected references from the recommendations have been added to the literature review.
14.
Please add block diagram of the proposed research
Architectural Workflows can be found in Figures 2 and 3 in lines 514 and 517
15.
Please add photo/photos of application of the proposed research
photos of application can be found in Figures 9, 10 and 11
16.
References should be 2018-2021 Web of Science about 50% or more
The references has been put to date
17.
Conclusion: point out what are you done
What was done can be found in lines 760 to 782
18.
The authors included multiple datasets (Qatar University, University of Dhaka and Mendeley dataset). I wonder if the batch size and the number of training epochs vary from one dataset to another. Are they application dependent?
The multiple datasets collated were put together in a folder, and the batch size and the number of training epochs on the datasets were consistent during code implementation. This can be found in lines 553 to 592.
19.
I noticed that the X-ray images were resized to 224×224 pixels? How was the size decided? I wonder if the accuracies on different image sizes were computed
The size of the X-ray images in 224×224 pixels is the default input size of images in the VGG network
20.
A reference is needed for OpenCV in line 427
This can be found in line 463
" | Here is a paper. Please give your review comments after reading it. |
168 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Natural language text-based navigation dialog/query systems provide users with a natural way of communicating with maps and navigation software. If interfaced with a speech-recognition front-end, they provide a hands-free interface, which is helpful while driving or on the go. Natural language understanding of such navigational queries is an essential task; it extracts the point of interest (POI) destination from the dialogue (slot filling) and also determines the user's interest regarding that POI (intent determination). Joint modeling of intent determination and slot filling provides better results with a smaller amount of labeled data; therefore, we chose the attention-based encoder-decoder model, which jointly models intent determination and slot filling. There is not a lot of work done in Roman Urdu regarding natural language understanding for navigational queries, which is why we have gathered and performed experiments on a Roman Urdu navigational dialogue dataset. The input to the natural language understanding model is in the form of word embeddings. We compared different models (FastText, ELMo, BERT, and XLNet) for creating word embeddings for the Roman Urdu navigational dataset, and used each of these as input to the attention-based encoder-decoder model. Word embeddings created using ELMo and FastText provide a significantly better F1 score for our navigational Roman Urdu dataset in comparison to the other models.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>A navigation dialogue/query system <ns0:ref type='bibr' target='#b27'>(Zheng et al., 2017)</ns0:ref> is a salient use case in the task oriented dialogue systems domain. Its input is a navigational query which is written/spoken when a user is driving or on the go. The input mode can be text as well as speech. This navigational query might include a particular point of interest (POI) as destination and the user's intent regarding that POI, like directions to the POI or its distance. Retrieving the POI from the input navigation query or utterance and determining the user's intent are the tasks of the natural language understanding (NLU) module <ns0:ref type='bibr' target='#b26'>(Yao et al., 2013)</ns0:ref> <ns0:ref type='bibr' target='#b16'>(Mesnil et al., 2015)</ns0:ref> of the task oriented dialogue system. The natural language understanding module of any dialogue system includes three tasks, which are domain detection, slot tagging and intent determination.</ns0:p><ns0:p>The output of this module includes the intent and slots of the input utterance; these are called dialogue states. Dialogue states are used to query our knowledge base or an external database, which in return gives some output. In the case of a navigation oriented dialogue system, the slots may contain a destination point and the user's intent could be finding directions to that destination.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> shows the pipeline framework of a navigation oriented dialogue system. It can be seen that NLU is an important step in the task oriented dialogue system; therefore the accuracy of natural language understanding greatly affects the output of the whole dialogue system.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref> shows the natural language understanding process for an input utterance. The intent determination task is similar to a multi-class classification task. Neural network based approaches commonly used for multi-class classification have been used for the task of intent determination, and have outperformed the standard statistical approaches. In <ns0:ref type='bibr' target='#b20'>(Ravuri and Stolcke, 2015)</ns0:ref> recurrent neural networks have been applied along with an LSTM based utterance classifier and are found to be effective. In <ns0:ref type='bibr' target='#b14'>(Lee and Dernoncourt, 2016)</ns0:ref>, the authors have proposed to use recurrent neural networks and convolutional neural networks for short text classification; here they also take the context into account by considering utterances prior to the current utterance. The slot tagging task is more challenging than intent determination. It is similar to sequence classification, where the classifier's job is to determine the semantic tags for the sub-sequences in the utterance. First examples of recurrent neural networks applied to the slot tagging task are <ns0:ref type='bibr' target='#b26'>(Yao et al., 2013)</ns0:ref> <ns0:ref type='bibr' target='#b16'>(Mesnil et al., 2015)</ns0:ref>, and they achieved higher accuracy in comparison to standard statistical approaches like support vector machines and conditional random fields.
For a navigational dialogue dataset, <ns0:ref type='bibr' target='#b27'>(Zheng et al., 2017)</ns0:ref> have proposed two separate RNN based models for slot tagging and intent determination. They implemented their models on the CU-move in-vehicle dialogue corpus, obtaining accuracies of 98.24% and 99.60% for slot tagging and intent determination respectively. The tasks of slot tagging and determining the intent of an input utterance are somewhat different from each other. During the pre-deep learning era they were modeled using separate approaches, like support vector machines for intent determination and conditional random fields for slot tagging. Now, using deep learning, it is possible to solve both tasks jointly using a single model. <ns0:ref type='bibr' target='#b10'>(Hakkani-Tür et al., 2016)</ns0:ref> proposed a joint model for intent determination, slot tagging and domain detection using the RNN-LSTM architecture. The input of this model is the user utterances while the output includes the domain, intent and slots. The main principle of joint modeling is similar to that of sequence to sequence modeling <ns0:ref type='bibr' target='#b22'>(Sutskever et al., 2014)</ns0:ref> or the neural conversational model <ns0:ref type='bibr' target='#b24'>(Vinyals and Le, 2015)</ns0:ref>, because the last hidden layer of the neural network contains the semantic information of the whole input, giving us intent and domain information. <ns0:ref type='bibr' target='#b15'>(Liu and Lane, 2016)</ns0:ref> proposed an LSTM based encoder-decoder.</ns0:p></ns0:div>
<ns0:div><ns0:p>The model is attention based with a bidirectional encoder and a unidirectional decoder. This model jointly models the intent determination task and the slot tagging task. Similarly, <ns0:ref type='bibr' target='#b8'>(Goo et al., 2018)</ns0:ref> also proposed an attention-based encoder-decoder architecture, but they included a slot gate. The purpose of the slot gate is to make use of the intent based context vector while determining the slot labels for an utterance. The slot gate based model outperformed the simple attention-based encoder-decoder architecture for joint modeling of slot tagging and intent determination. <ns0:ref type='bibr' target='#b13'>(Hardalov et al., 2020)</ns0:ref> proposed joint intent detection and slot tagging built upon a pre-trained BERT language model. It first determines the intent distribution vector by adding an additional pooling layer to get a hidden representation of the entire input utterance, then it obtains the predictions for each token in the utterance using the BERT language model. Both of these are used as input to predict the slots of an input utterance. The above mentioned methods utilize the intent to determine the slots. Conversely, it can also be beneficial to use the determined slots for intent determination. <ns0:ref type='bibr' target='#b18'>(Peng et al., 2020)</ns0:ref> proposed an interactive two-pass decoding network, a joint slot tagging and intent determination model. It uses a first-pass decoder to determine an explicit representation for the first task, which it then uses as an input for the second-pass decoder to determine the results of the second task. This model takes full advantage of both the determined intents and slots, so that it can achieve bidirectional conversion between the two tasks. Joint modeling provides a major advantage over separate models for slot tagging and intent determination: it provides higher accuracy with a smaller amount of labeled data. Therefore, in our work, we have implemented a joint model as proposed by <ns0:ref type='bibr' target='#b15'>(Liu and Lane, 2016)</ns0:ref>.</ns0:p><ns0:p>When it comes to natural language understanding of navigational dialogues, there is not a lot of work done in the Urdu language; this is especially true for Roman Urdu and for deep learning techniques. Roman Urdu here refers to writing the Urdu language using transliteration in the Roman (English) alphabet, as opposed to the standard way of writing Urdu in Arabic script (with some extra letters). Examples of Roman Urdu sentences can be seen in Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref>. Roman Urdu has been popularized in the last 2 decades due to the increased use of Urdu writing on the internet and mobile phones using the standard English keyboard.</ns0:p><ns0:p>Though there are examples of deep learning applied to Roman Urdu text, those are in NLP domains other than natural language understanding - like sentiment analysis <ns0:ref type='bibr' target='#b7'>(Ghulam et al., 2018)</ns0:ref>, <ns0:ref type='bibr' target='#b21'>(Shakeel and Karim, 2020)</ns0:ref> and Roman Urdu to Urdu transliteration <ns0:ref type='bibr' target='#b0'>(Alam and ul Hussain, 2017)</ns0:ref>. In this research, we are going to work on natural language understanding of a Roman Urdu navigational dialogue dataset.</ns0:p>
<ns0:p>The input sequence to the natural language understanding model is converted into word embeddings, as can be seen in Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>. Word embeddings are distributed vector representations of words in a document, which capture the semantic and syntactic meanings of these words. There are mainly two types: context-independent and context-dependent word embedding methods. The Word2Vec <ns0:ref type='bibr' target='#b17'>(Mikolov et al., 2013)</ns0:ref> model was the first neural network based model which maps words to their distributed representations while capturing their syntactic and semantic meaning. An extension to the Word2Vec model, FastText, was proposed by <ns0:ref type='bibr' target='#b2'>(Bojanowski et al., 2017)</ns0:ref>; it is better at predicting and recognizing out of vocabulary words in comparison to the Word2Vec model. A major drawback of the FastText and Word2Vec models is that both are context independent. In context independent methods, the order of the words does not affect the resulting word embedding. To address this issue, deep learning based models have been introduced. ELMO was proposed by <ns0:ref type='bibr' target='#b19'>(Peters et al., 2018)</ns0:ref>; the model architecture includes a bidirectional LSTM and a CNN. It has the ability to capture word meanings with changing context. Google introduced BERT <ns0:ref type='bibr' target='#b4'>(Devlin et al., 2019)</ns0:ref>, which produces embeddings in a similar manner to ELMO. BERT is based on a deep learning architecture known as the transformer <ns0:ref type='bibr' target='#b23'>(Vaswani et al., 2017)</ns0:ref>. The transformer is an encoder-decoder based model which includes multi-head attention in both the encoder and decoder layers. XLNET <ns0:ref type='bibr' target='#b25'>(Yang et al., 2019)</ns0:ref> is an extension to BERT. It is also based on the transformer, but it introduces permutation language modeling, which predicts its tokens in random order rather than left to right as in BERT. <ns0:ref type='bibr' target='#b6'>(Ghannay et al., 2020)</ns0:ref> have studied the effect of different word embedding approaches on the natural language understanding output. They have compared both context-independent and context-dependent approaches and then used these embeddings as an input to a bidirectional LSTM for natural language understanding. The datasets that they have used include both large and small corpora. Results have clearly shown that the embeddings based on the larger datasets had better accuracy compared to smaller corpora. In this paper we are going to compare different word embedding methods; these are FastText, ELMO, BERT and XLNET. We are also investigating transformer based approaches. The semantic representations produced by these approaches will be evaluated, and it will be observed how these semantic representations affect the accuracy of the joint intent detection and slot tagging model. The main objectives of our paper are: 1) Comparison of word embeddings created by different methods for the joint slot tagging and intent determination model, and their effect on the F1-score. 2) Natural language understanding of navigational dialogues in Roman Urdu using a joint slot tagging and intent determination model.</ns0:p><ns0:p>The rest of the paper is organized as follows. Section 2 explains different concepts such as natural language understanding and word embeddings, as well as various machine learning models used for their determination. Section 3 introduces the data set, the experimental methodology and the results obtained.</ns0:p><ns0:p>Finally, Section 4 provides a conclusion and future prospects.</ns0:p></ns0:div>
<ns0:div><ns0:head>METHODS Natural Language Understanding Model</ns0:head><ns0:p>A joint slot tagging and intent determination model will be used. The reasoning behind using a joint model rather than separate models for the slot tagging and intent determination tasks is that the joint model provides higher accuracy with a small number of labeled sentences in the training data. We are going to use an LSTM based encoder decoder model with an attention mechanism and aligned inputs <ns0:ref type='bibr' target='#b15'>(Liu and Lane, 2016)</ns0:ref>.</ns0:p><ns0:p>Attention based Encoder-Decoder model: The encoder-decoder architecture including attention and aligned inputs, as shown in Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>, provides higher accuracy for the slot tagging and intent determination tasks <ns0:ref type='bibr' target='#b15'>(Liu and Lane, 2016)</ns0:ref>. The model is an LSTM based encoder-decoder. LSTM is used as the basic recurrent network unit because it models long term dependencies better than the simple RNN. The model includes the BLSTM <ns0:ref type='bibr' target='#b9'>(Graves and Schmidhuber, 2005)</ns0:ref> encoder; it reads the input word sequence x = (x_1, x_2, x_3, \ldots, x_T) in the forward and backward directions. It generates the hidden state h_f while reading the input sequence in the forward direction and the hidden state h_b while reading the input sequence in the backward direction (opposite to the original order of the input word sequence), for time step i. The hidden state of the encoder h_i at time step i is computed by the concatenation of the h_f and h_b hidden states,</ns0:p><ns0:formula xml:id='formula_0'>h_i = [h_f, h_b]<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>The model includes two decoders: one for the intent determination task and one for the slot tagging task. The slot tagging decoder output includes the labels y = (y_1, y_2, y_3, \ldots, y_T) for each of the words in the input sequence. The intent determination decoder only produces a single output, which is the intent of the whole input word sequence. The slot tagging decoder is LSTM based. The decoder state s_i at time step i is a function of the previous output y_{i-1}, the previous decoder state s_{i-1}, the aligned encoder hidden state h_i and the context vector c_i computed via attention <ns0:ref type='bibr' target='#b1'>(Bahdanau et al., 2015)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_1'>s_i = f(y_{i-1}, s_{i-1}, h_i, c_i)<ns0:label>(2)</ns0:label></ns0:formula><ns0:formula xml:id='formula_2'>c_i = \sum_{j=1}^{T} \alpha_{i,j} h_j<ns0:label>(3)</ns0:label></ns0:formula><ns0:formula xml:id='formula_3'>\alpha_{i,j} = \frac{\exp g(s_{i-1}, h_j)}{\sum_{k=1}^{T} \exp g(s_{i-1}, h_k)}<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>The context vector c_i is the weighted sum of the hidden encoder states. To find the weights to assign to these hidden states, a feed forward network g is trained. This network uses the previous decoder state and all the encoder hidden states to calculate the attention weights. The intent determination decoder state is determined by the context vector c_intent and the initial decoder hidden state s_0, where s_0 carries information about the entire input word sequence.</ns0:p></ns0:div>
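For concreteness, the attention step of Eqs. (2)-(4) can be sketched in a few lines of PyTorch. This is a minimal illustrative sketch rather than the authors' implementation; the tensor dimensions (a 200-unit decoder attending over 400-dimensional bidirectional encoder states) and the two-layer shape of the scoring network g are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    """Computes the weights alpha_{i,j} of Eq. (4) and the context
    vector c_i of Eq. (3) from the previous decoder state and the
    encoder hidden states."""
    def __init__(self, dec_dim, enc_dim, att_dim):
        super().__init__()
        # The feed-forward scoring network g of Eq. (4).
        self.g = nn.Sequential(
            nn.Linear(dec_dim + enc_dim, att_dim),
            nn.Tanh(),
            nn.Linear(att_dim, 1),
        )

    def forward(self, s_prev, enc_states):
        # s_prev: (batch, dec_dim)      -- decoder state s_{i-1}
        # enc_states: (batch, T, enc_dim) -- encoder states h_1..h_T
        T = enc_states.size(1)
        s_rep = s_prev.unsqueeze(1).expand(-1, T, -1)
        scores = self.g(torch.cat([s_rep, enc_states], dim=-1)).squeeze(-1)
        alpha = F.softmax(scores, dim=-1)                          # Eq. (4)
        c = torch.bmm(alpha.unsqueeze(1), enc_states).squeeze(1)   # Eq. (3)
        return c, alpha

# Toy check: batch of 2, T = 5 encoder steps.
att = AdditiveAttention(dec_dim=200, enc_dim=400, att_dim=128)
c, alpha = att(torch.randn(2, 200), torch.randn(2, 5, 400))
assert c.shape == (2, 400) and torch.allclose(alpha.sum(-1), torch.ones(2))
```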
<ns0:div><ns0:head>Word Embeddings</ns0:head><ns0:p>Word embedding models map the words to distributed representations which capture the semantic, contextual meaning of the words. For this work, word embeddings have been created using four different methods. These are described below.</ns0:p></ns0:div>
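As an illustration of how a context-independent embedding such as FastText is consumed downstream, the following minimal sketch looks up a word vector with the fasttext Python package. The model file name cc.ur.300.bin (the pretrained Urdu vectors distributed at fasttext.cc) is an assumption; the paper only states that a pretrained FastText Urdu model with 300-dimensional vectors was used, and the example token is a hypothetical Roman Urdu word.

```python
import fasttext

# Load pretrained Urdu subword vectors (file name assumed).
model = fasttext.load_model("cc.ur.300.bin")

# Subword n-grams let FastText return a vector even for
# unseen spellings of a word.
vec = model.get_word_vector("rasta")
print(vec.shape)  # (300,)
```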
<ns0:div><ns0:head>Dataset</ns0:head><ns0:p>We have created and used a navigational utterances dataset in Roman Urdu. Example sentences from this dataset can be seen in Figure 3. To our knowledge, such a dataset based upon navigational dialogue in Roman Urdu has not been collected before. Similar datasets exist in other languages, especially in English, like the multi turn, multi domain dialogue dataset <ns0:ref type='bibr' target='#b25'>(Yang et al., 2019)</ns0:ref>, Atis <ns0:ref type='bibr' target='#b3'>(Bungeroth et al., 2008)</ns0:ref>, and CU move <ns0:ref type='bibr' target='#b12'>(Hansen et al., 2005)</ns0:ref>. Roman Urdu is more commonly used among Pakistani people for short text messaging (SMS) and on social media platforms, in comparison to either the English language or the original Urdu script. Furthermore, Pakistani street addresses available from the Google API are also in Roman Urdu. Therefore, if we want to create an online text based dialogue agent, people will find it much easier to communicate with it in Roman Urdu rather than in the English language or the original Urdu script. Our system can be interfaced with a speech recognition based front-end to create a navigational dialogue system to help drivers on the road. There are a large number of drivers with limited literacy who are more comfortable using a navigational dialogue system in the Urdu language rather than English.</ns0:p><ns0:p>The dataset was collected from Pakistani university students, with ages between 18 and 22 years. This group of subjects was chosen because these students are tech savvy and frequent users of maps and navigation software. Their opinion (training set examples) would give a good estimate of an average user of text or voice based maps applications. These questions were related to the issues that they face while driving or generally looking for locations in an unfamiliar area. After the collection of the dataset, the next step is pre-processing. The main issue with Roman Urdu is that everyone has their own spelling style; this would cause problems for creating word embeddings of the dataset. To mitigate this problem, lexical normalization is applied to the dataset. The next step is dataset annotation, i.e. assigning the intent to each utterance and slot labels to each of the words in an utterance, as can be seen in Table 1.</ns0:p></ns0:div>
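To make the annotation format concrete, the sketch below encodes the Table 1 example as a single training record. The attached intent label is illustrative; navigate-directions is one of the intent names mentioned later in the paper.

```python
# One annotated training example in IOB format, mirroring Table 1.
example = {
    "utterance": ["Directions", "from", "Lahore", "to", "Islamabad"],
    "slots":     ["B-directions", "O", "B-fromloc", "O", "B-toloc"],
    "intent":    "navigate-directions",  # illustrative intent label
}

# Each token must carry exactly one slot tag.
assert len(example["utterance"]) == len(example["slots"])
```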
<ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>We have trained and tested our models on the Roman Urdu navigational dialogues dataset. Details of the experimental setup are given below.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Evaluation Metrics</ns0:head><ns0:p>The evaluation metric used to evaluate the model is the F1 score, which is the harmonic mean of precision and recall. In terms of our dataset, the intent determination model assigns each sentence an intent (navigate-directions, navigate-time, navigate-search-directions).</ns0:p><ns0:p>Precision determines the proportion of correctly determined intents out of all the predicted intents. The data is unevenly distributed; therefore we cannot rely only on the number of correctly predicted classes. If we only rely on precision, our results will only focus on the commonly present intents in the dataset (like navigate-loc.search), and not on the other important intents (like navigate-time). This is why the recall measure is needed, as it determines how many different intents are correctly classified. If precision is high, the recall may be low. An ideal model would have both of these metrics balanced. Therefore, the F1 score has been used, as it balances the trade-off between precision and recall.</ns0:p><ns0:p>Furthermore, it works really well when it comes to an unevenly distributed multi-class dataset like ours.</ns0:p><ns0:p>Word embeddings have been created on our dataset using the four above mentioned models. Each of these embeddings is then used as an input to the attention based encoder decoder model. The model determines both the intent and the slots of an input utterance. Given below, we have compared the F1 scores of the models for each of the input word embeddings, for both intent determination and slot tagging.</ns0:p><ns0:p>However, it could not perform as well as it did for the intent determination task, because the dataset is too small for the task of slot tagging. Transformer based models are generally much better with larger datasets. If we look at the F1 plot for slot tagging, ELMO based word embeddings also provided much better accuracy in earlier epochs and also had the highest validation F1 score of 82.11. ELMO creates distributed word representations using deep learning models. It is character based like FastText, therefore it is good at context based modeling and at modeling out-of-vocabulary words. <ns0:ref type='bibr' target='#b6'>(Ghannay et al., 2020)</ns0:ref> also studied the effect of word embeddings on NLU. They have compared GloVe, FastText, and ELMO. For the slot tagging and intent determination tasks they used a bidirectional LSTM encoder decoder model. Their experimental results have shown that word embeddings created using larger out of domain datasets yield better results in comparison to smaller datasets. Their results have also shown that even for the larger out of domain datasets the embeddings created using ELMO provided the highest score. They did not compare the transformer based models. In our research work, we have also explored transformer based word embedding approaches. Furthermore, we have also used the attention mechanism in the slot tagging and intent determination tasks.</ns0:p></ns0:div>
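A minimal sketch of how these scores can be computed with scikit-learn is given below, on toy intent predictions. The paper does not state its averaging mode; weighted averaging is assumed here because it suits an unevenly distributed multi-class dataset, and note that slot tagging F1 is often computed at the span level (conlleval-style) instead.

```python
from sklearn.metrics import f1_score, precision_score, recall_score

# Toy intent predictions over labels named in the paper.
y_true = ["navigate-directions", "navigate-time", "navigate-loc.search",
          "navigate-loc.search", "navigate-directions"]
y_pred = ["navigate-directions", "navigate-loc.search", "navigate-loc.search",
          "navigate-loc.search", "navigate-time"]

p = precision_score(y_true, y_pred, average="weighted", zero_division=0)
r = recall_score(y_true, y_pred, average="weighted", zero_division=0)
f1 = f1_score(y_true, y_pred, average="weighted", zero_division=0)
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```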
<ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>In this paper we have used a joint slot tagging and intent determination model for for determining the slots and intent of navigational queries in Roman Urdu. We have used different approaches for creating word embeddings of our dataset. We wanted to determine how the word embeddings created using different approaches will effect the results of slot tagging and intent determination models. Word embeddings were created using the both context independent and dependent methods. The experimental results have shown that for the intent determination task the word embeddings created using the XLNET provided much better F1 score. XLNET is a transformer based model and more effective at capturing the relationships and dependencies among words in an utterance in comparison to other approaches. For the task of slot tagging, word embeddings created using XLNET and FastText provided much better results. FastText has the ability to cater to rare/out of vocabulary words much better because it creates representations for words not present at training time. ELMO also provided the highest validation score for the task of slot tagging.</ns0:p><ns0:p>ELMO is based on Bidirectional LSTM and CNN, and provides much better context based representation for the resulting embeddings. Future work in this direction is suggested to focus on gathering larger data sets, as the performance of some methods like BERT could be more pronounced on large datasets. Also, having more demographic variation in the data gathering subjects could lead to newer insights.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Pipeline framework of navigation oriented dialogue system.</ns0:figDesc><ns0:graphic coords='3,141.73,63.78,413.58,175.56' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Block diagram of natural language understanding process.</ns0:figDesc><ns0:graphic coords='3,141.73,450.91,413.59,121.88' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Example sentences from Roman Urdu navigation dataset, along with their regular Urdu equivalent and English translation</ns0:figDesc><ns0:graphic coords='6,141.73,63.78,413.59,76.83' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Attention based encoder decoder for the joint modeling of slot tagging and intent determination</ns0:figDesc><ns0:graphic coords='6,141.73,544.74,413.59,138.72' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Evaluation F1 score (%) plot diagram (A) for intent determination and (B) for slot tagging</ns0:figDesc><ns0:graphic coords='9,141.73,63.78,413.56,206.78' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Confusion matrix (A) for intent determination (XLNET) and (B) for slot tagging (FastText)</ns0:figDesc><ns0:graphic coords='9,141.73,476.70,413.58,210.31' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Sentence</ns0:cell><ns0:cell>Context</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Directions B-directions</ns0:cell></ns0:row><ns0:row><ns0:cell>from</ns0:cell><ns0:cell>O</ns0:cell></ns0:row><ns0:row><ns0:cell>Lahore</ns0:cell><ns0:cell>B-fromloc</ns0:cell></ns0:row><ns0:row><ns0:cell>to</ns0:cell><ns0:cell>O</ns0:cell></ns0:row><ns0:row><ns0:cell>Islamabad</ns0:cell><ns0:cell>B-toloc</ns0:cell></ns0:row></ns0:table><ns0:note>. The slot labels are assigned based on the IOB (Inside Outside and Beginning) format, which is a common NER (named entity recognition) format. An example of an annotated utterance is given below. 21 distinct intent labels and 29 distinct slot labels have been assigned.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Example of the navigational query in IOB format</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Training Details</ns0:head><ns0:label /><ns0:figDesc>Attention based encoder decoder: In this model, LSTM is used as the basic RNN unit. The number of units in the LSTM cell is set to 200. The number of layers for the LSTM is set to 1. The dropout rate is 0.5 and the learning rate is 0.001. The maximum norm is set to 5. The model has been trained for 50 and 100 epochs.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>FastText: A pretrained FastText Urdu model has been used for creating word embeddings. The size of the word embeddings is 300.</ns0:cell></ns0:row></ns0:table><ns0:note>ELMO: We used a publicly available pretrained ELMO model. It has 2 LSTM layers with 1024 hidden states for each layer and a character based word representation vector of size 512. BERT: The bert-base-multilingual-cased model has been used. It is trained on 104 languages including Urdu. This model is 12 layered, with 768 hidden states and 12 heads. XLNET: The xlnet-base-cased model is used. This model is 12 layered, and has 768 hidden states and 12 heads.</ns0:note></ns0:figure>
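A minimal sketch of extracting contextual embeddings from the checkpoints named above (bert-base-multilingual-cased; xlnet-base-cased works the same way) with the Hugging Face transformers library is shown below. The use of this particular library is an assumption, and the example utterance is a hypothetical Roman Urdu navigational query.

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-multilingual-cased"  # or "xlnet-base-cased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

# Hypothetical Roman Urdu query: "show the way from Lahore to Islamabad".
enc = tokenizer("Lahore se Islamabad ka rasta batao", return_tensors="pt")
with torch.no_grad():
    out = model(**enc)

# One 768-dimensional contextual vector per subword token.
print(out.last_hidden_state.shape)  # (1, num_subword_tokens, 768)
```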
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Intent determination: In Table 2, F1 scores of the model for intent determination are given after 100 epochs. From the evaluation results it can be seen that the word embeddings based on the XLNET model have outperformed the other methods for the task of intent determination, with the evaluation F1 score being 84.00. Validation and Evaluation F1 score (%) at 100 epochs for intent determination</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell cols='2'>Validation Evaluation</ns0:cell></ns0:row><ns0:row><ns0:cell>FastText</ns0:cell><ns0:cell>80.00</ns0:cell><ns0:cell>76.00</ns0:cell></ns0:row><ns0:row><ns0:cell>ELMO</ns0:cell><ns0:cell>82.00</ns0:cell><ns0:cell>77.00</ns0:cell></ns0:row><ns0:row><ns0:cell>BERT</ns0:cell><ns0:cell>76.00</ns0:cell><ns0:cell>71.00</ns0:cell></ns0:row><ns0:row><ns0:cell>XLNET</ns0:cell><ns0:cell>84.00</ns0:cell><ns0:cell>84.00</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Slot tagging: In Table 3, F1 scores of the model for slot tagging are given after 100 epochs. Looking at the evaluation F1 scores for all the models, it can be seen that both XLNET and FastText based word embeddings have higher F1 scores in comparison to the other models, with evaluation F1 scores of 81.81 and 82.24 respectively.</ns0:cell></ns0:row><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell cols='2'>Validation Evaluation</ns0:cell></ns0:row><ns0:row><ns0:cell>FastText</ns0:cell><ns0:cell>76.11</ns0:cell><ns0:cell>82.24</ns0:cell></ns0:row><ns0:row><ns0:cell>ELMO</ns0:cell><ns0:cell>82.11</ns0:cell><ns0:cell>79.74</ns0:cell></ns0:row><ns0:row><ns0:cell>BERT</ns0:cell><ns0:cell>78.33</ns0:cell><ns0:cell>79.57</ns0:cell></ns0:row><ns0:row><ns0:cell>XLNET</ns0:cell><ns0:cell>79.59</ns0:cell><ns0:cell>81.81</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Validation and Evaluation F1 score (%) at 100 epochs for slot tagging. DISCUSSION: Our dataset was human labeled training data. One problem with this type of human labeled dataset is that it is prone to errors. Another issue with Roman Urdu is that there are no standardized spellings; different writers may use different spellings for the same word. Even though we have normalized each word to one spelling, there are a few words having more than one spelling. This leads to poor generalization capability for word embedding models.</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "
Department of Computing
School of electrical engineering
& Computer science,
National University of Science
& Technology (NUST)
H-12, Islamabad
https://nust.edu.pk/

11th March 2021
Dear Editors,
First of all, we would like to thank the reviewers for their kind and generous comments.
As for the changes, we have made all the changes pointed out by the reviewers.
We believe that the manuscript is now suitable for publication in PeerJ.
Regards,
Javeria Hassan
Student of Computer Science Masters
On behalf of all authors.
Reviewer 1
Basic Reporting
1. Please double-check the grammar and language to make sure the delivery is clear, such as the sentence at line 43 – 45 needs to rearrange.
2. Please double-check the grammar, lots of Singular nouns are missing their articles.
3. Check the line number; the line numbers are break after line 122.
We have proofread our paper and rephrased the sentences to remove all the grammatical errors that were present; we have also resolved the error related to lines 43–45.
Regarding the line numbers: they were breaking after the equations; we have resolved that error as well.
Experimental design
a. The dataset introduction should be in the method part, which would give the audience a clearer idea of how the experiment is designed.
b. Please give justifications for the selection of the datasets. Is it the only available one, or is there a reason that this specific dataset is selected? Is it based on the property of the dataset or based on the method?
c. If you are using F-score as an estimation for the performance, please explain how they work and the possible merit and drawback of using F-score as an estimation method other than other methods?
d. Please give a more detailed description and comparison of results from different methods and from different epochs. Is this outcome your prediction, and what are the possible causes for the difference between results? It is not presented clearly in this paper how the different methods would impact the F1 Score.
The dataset introduction is added in the method section. In the dataset introduction we have also added the justifications behind choosing Roman Urdu text and we also explained why we chose navigation-oriented dialogue.
In the results section we have added a paragraph named Evaluation Metrics, in which we explain the evaluation metric that we used, the F1 score. This paragraph covers the F1 score, its merits and drawbacks, and how it is determined for our task.
To make the results much clearer, we have included the intent determination and slot tagging results separately. We have also included the confusion matrix and the F1 score plot against each epoch for both the intent determination and slot tagging tasks. We added the confusion matrix because we wanted to give a clearer idea of the basis on which the F1 score determines the results, and why it differs between methods.
Validity of the findings
This data and results in this paper are robust and sound. However, the justification of why this dataset is brought into use needs to be clearer.
First, thank you for the kind comment. We have provided the justifications regarding the dataset that we used in the dataset introduction. These justifications clearly convey the reason why we specifically created the navigation-oriented dataset in Roman Urdu and used it for this specific purpose.
Reviewer 2
Basic reporting
The idea of the paper is sound, and need be investigated more however few suggestions are there.
1 abstract needs to be rewritten with respect to findings.
2 add latest literature in this area.
3 evaluation of proposed method is not clear therefore authors need to provide confusion matrix to understand the recognition rate of the system.
Therefore, I recommend accepting the paper after incorporating above comments.
First, thank you for the kind comment. We have rewritten our abstract in terms of results based on the intent determination and slot tagging tasks.
We have also added the latest literature related to both word embeddings and natural language understanding.
We have added the confusion matrix for the slot tagging and intent determination tasks separately for the methods which provided the best F1 scores in both respective tasks.
Experimental design
Experiments need to be explained more.
We have included an explanation of how the F1 score works in our results section, and incorporated some discussion of why certain methods provide better results than others.
Validity of the findings
Need to justify the results with latest literature.
At the end of our discussion section, we have added a comparison of our research with the latest research papers that present work similar to ours. In the discussion, we explained how our research is different, and which models we included in our research that were not included in theirs.
Comments for the author
Authors are suggested to proofread from native English speaker.
We went back through our paper and encountered some grammatical mistakes. We have resolved all the major grammatical mistakes and rephrased our paper to make the delivery much clearer.
" | Here is a paper. Please give your review comments after reading it. |
169 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Navigation based task-oriented dialogue systems provide users with a natural way of communicating with maps and navigation software. Natural language understanding (NLU) is the first step in a task-oriented dialogue system. It extracts the important entities (slot tagging) from the user's utterance and determines the user's objective (intent determination). Word embeddings are the distributed representations of the input sentence, and encompass the sentence's semantic and syntactic representations. We created the word embeddings using different methods like FastText, ELMO, BERT and XLNET, and studied their effect on the natural language understanding output.</ns0:p><ns0:p>Experiments are performed on the Roman Urdu navigation utterances dataset. The results show that for the intent determination task XLNET based word embeddings outperform other methods, while for the task of slot tagging FastText and XLNET based word embeddings have much better accuracy in comparison to other approaches.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>A navigation dialogue/query system <ns0:ref type='bibr' target='#b26'>(Zheng et al., 2017)</ns0:ref> is a salient use case in the task oriented dialogue systems domain. Its input is a navigational query which is written/spoken when a user is driving or walking. The input mode can be text as well as speech. This navigational query might include a particular point of interest (POI) as destination and the user's intent regarding that POI, like directions to the POI or its distance. Retrieving the POI from the input navigation query/utterance and determining the user's intent are the tasks of the natural language understanding (NLU) module <ns0:ref type='bibr' target='#b25'>(Yao et al., 2013)</ns0:ref> <ns0:ref type='bibr' target='#b16'>(Mesnil et al., 2015)</ns0:ref>. This module is a part of the task oriented dialogue system. The natural language understanding module of any dialogue system includes three tasks, which are domain detection, slot tagging and intent determination. The output of this module includes the intent and slots of the input utterance; these are called dialogue states. Dialogue states are used to query our knowledge base or an external database and return some sort of output. In the case of a navigation oriented dialogue system, the slots may contain a destination point and the user's intent could be finding directions to that destination.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> shows the pipeline framework of a navigation oriented dialogue system. It can be seen that NLU is a central step in the task oriented dialogue system; therefore the accuracy of natural language understanding greatly affects the output of the whole dialogue system.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> shows the natural language understanding process for an input utterance. The intent determination task is similar to a multi-class classification task. The slot tagging task is generally more challenging than intent determination. It is similar to sequence classification, where the classifier's job is to determine the semantic tags for the sub-sequences in the utterance. The tasks of slot tagging and determining the intent of an input utterance are somewhat different from each other. During the pre-deep learning era they were modeled using separate approaches, like support vector machines for intent determination and conditional random fields for slot tagging. Now, using deep learning, it is possible to solve both tasks jointly using a single model. <ns0:ref type='bibr' target='#b10'>(Hakkani-Tür et al., 2016)</ns0:ref> proposed a joint model for intent determination, slot tagging and domain detection using the RNN-LSTM architecture. The input of this model is the user utterances while the output includes the domain, intent and slots. The main principle of joint modeling is similar to that of sequence-to-sequence modeling <ns0:ref type='bibr' target='#b21'>(Sutskever et al., 2014)</ns0:ref> or the neural conversational model <ns0:ref type='bibr' target='#b23'>(Vinyals and Le, 2015)</ns0:ref>, because the last hidden layer of the neural network contains the semantic information of the whole input, giving us intent and domain information. <ns0:ref type='bibr' target='#b15'>(Liu and Lane, 2016)</ns0:ref> proposed an LSTM based encoder-decoder. The model is attention based with a bidirectional encoder and a unidirectional decoder. This model jointly models the intent determination task and the slot tagging task.
Similarly, <ns0:ref type='bibr' target='#b8'>(Goo et al., 2018)</ns0:ref> also proposed an attention-based encoder-decoder architecture, but they included a slot gate. The purpose of the slot gate is to make use of the intent based context vector while determining the slot labels for an utterance. The slot gate based model outperformed the simple attention-based encoder-decoder architecture for joint modeling of slot tagging and intent determination. <ns0:ref type='bibr' target='#b13'>(Hardalov et al., 2020)</ns0:ref> proposed joint intent detection and slot tagging built upon a pre-trained BERT language model. It first determines the intent distribution vector by adding an additional pooling layer to get a hidden representation of the entire input utterance, then it obtains the predictions for each token in the utterance using the BERT language model. Both of these are used as input to predict the slots of an input utterance. The above mentioned methods utilize the intent to determine the slots. Conversely, it can also be beneficial to use the determined slots for intent determination. <ns0:ref type='bibr' target='#b18'>(Peng et al., 2020)</ns0:ref> proposed an interactive two-pass decoding network, a joint slot tagging and intent determination model. It uses a first-pass decoder to determine an explicit representation for the first task, and then uses this representation as an input for the second-pass decoder to determine the results of the second task. This model takes full advantage of both the determined intents and slots, so that it can achieve bidirectional conversion between these two tasks. Joint modeling provides a major advantage in comparison to separate models for slot tagging and intent determination: it provides higher accuracy with a smaller amount of labeled data. Therefore, in our work, we have implemented a joint model as proposed by <ns0:ref type='bibr' target='#b15'>(Liu and Lane, 2016)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:p>When it comes to natural language understanding of navigational dialogues, there is not a lot of work done in the Urdu language. This is especially true for Roman Urdu and deep learning techniques. Roman Urdu here refers to writing the Urdu language using transliteration in the Roman (English) alphabet, as opposed to the standard way of writing Urdu in Arabic script (with some extra letters). Examples of Roman Urdu sentences can be seen in Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>. Roman Urdu has been popularized in the last 2 decades due to the increased use of Urdu writing on the internet and mobile phones using the standard English keyboard. Though there are examples of deep learning applied to Roman Urdu text, those are in NLP domains other than natural language understanding - like sentiment analysis <ns0:ref type='bibr' target='#b7'>(Ghulam et al., 2018)</ns0:ref>, <ns0:ref type='bibr' target='#b20'>(Shakeel and Karim, 2020)</ns0:ref> and Roman Urdu to Urdu transliteration <ns0:ref type='bibr' target='#b0'>(Alam and ul Hussain, 2017)</ns0:ref>. In this research, we are going to work on natural language understanding of a Roman Urdu navigational dialogue dataset. The input sequence to the natural language understanding model is converted into word embeddings, as can be seen in Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>. Word embeddings are the distributed vector representations of words in a document, which capture the semantic and syntactic meanings of these words. There are mainly two types: context-independent and context-dependent word embedding methods. The Word2Vec <ns0:ref type='bibr' target='#b17'>(Mikolov et al., 2013)</ns0:ref> model was the first neural network based model which maps words to their distributed representations while capturing their syntactic and semantic meaning. An extension to the Word2Vec model, FastText, was proposed by <ns0:ref type='bibr' target='#b2'>(Bojanowski et al., 2017)</ns0:ref>; it is better at predicting and recognizing out-of-vocabulary words in comparison to the Word2Vec model. A major drawback of the FastText and Word2Vec models is that both are context-independent. In context-independent methods, the order of the words does not affect the resulting word embedding. To take advantage of the context information, deep learning based models have been introduced. ELMO was proposed by <ns0:ref type='bibr' target='#b19'>(Peters et al., 2018)</ns0:ref>; the model architecture includes a bidirectional LSTM and a CNN. It has the ability to capture word meanings with changing context. Google introduced BERT <ns0:ref type='bibr' target='#b4'>(Devlin et al., 2019)</ns0:ref>, which produces embeddings in a similar manner to those of ELMO. BERT is based on a deep learning architecture known as the transformer <ns0:ref type='bibr' target='#b22'>(Vaswani et al., 2017)</ns0:ref>. The transformer is an encoder-decoder based model which includes multi-head attention in both the encoder and decoder layers. XLNET <ns0:ref type='bibr' target='#b24'>(Yang et al., 2019)</ns0:ref> is an extension to BERT. It is also based on the transformer, but it introduces permutation language modeling, which predicts its tokens in random order rather than left to right as in BERT. <ns0:ref type='bibr' target='#b6'>(Ghannay et al., 2020)</ns0:ref> have studied the effect of different word embedding approaches on the natural language understanding output.
They have compared both context-independent and context-dependent approaches and then used these embeddings as an input to a bidirectional LSTM for natural language understanding.</ns0:p><ns0:p>The datasets that they have used include both large and small corpora. Results have clearly shown that the embeddings based on the larger datasets had better accuracy compared to smaller corpora. In this paper we are going to compare different word embedding methods; these are FastText, ELMO, BERT and XLNET. We are also investigating transformer based approaches. The semantic representations produced by these approaches will be evaluated, and it will be observed how these semantic representations affect the accuracy of the joint intent detection and slot tagging model. The main objectives of our paper are: 1) Comparison of word embeddings created by different methods for the joint slot tagging and intent determination model, and their effect on the F1-score. 2) Natural language understanding of navigational dialogues in Roman Urdu using a joint slot tagging and intent determination model.</ns0:p><ns0:p>The rest of the paper is organized as follows. Section 2 explains different concepts such as natural language understanding and word embeddings, as well as various machine learning models used for their determination. Section 3 introduces the data set, the experimental methodology and the results obtained.</ns0:p><ns0:p>Finally, Section 4 provides a conclusion and future prospects.</ns0:p></ns0:div>
<ns0:div><ns0:head>METHODS Natural Language Understanding Model</ns0:head><ns0:p>A joint slot tagging and intent determination model will be used. The reasoning behind using this joint model is that it provides better accuracy with a smaller number of labeled sentences in the training data. We are going to use an LSTM based encoder decoder model with an attention mechanism and aligned inputs <ns0:ref type='bibr' target='#b15'>(Liu and Lane, 2016)</ns0:ref>.</ns0:p><ns0:p>Attention based Encoder-Decoder model: The encoder-decoder architecture including attention and aligned inputs, as shown in Figure 4, provides higher accuracy for the slot tagging and intent determination tasks (Liu and Lane, 2016). The model is an LSTM based encoder-decoder. LSTM is used as the basic recurrent network unit because it models long-term dependencies better than the simple RNN. The model includes the BLSTM (Graves and Schmidhuber, 2005) encoder; it reads the input word sequence x = (x_1, x_2, x_3, \ldots, x_T) in the forward and backward directions. It generates the hidden state h_f while reading the input sequence in the forward direction and the hidden state h_b while reading the input sequence in the backward direction (i.e. opposite to the original order of the input word sequence), for the time step i. The hidden state of the encoder h_i at time step i is computed by the concatenation of the h_f and h_b hidden states,</ns0:p><ns0:formula xml:id='formula_0'>h_i = [h_f, h_b]<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>The model includes two decoders: one for the intent determination task and one for the slot tagging task. The slot tagging decoder output includes the labels y = (y_1, y_2, y_3, \ldots, y_T) for each of the words in the input sequence. The intent determination decoder produces a single output, which is the intent of the whole input word sequence. The slot tagging decoder is LSTM based. The decoder state s_i at time step i is a function of the previous output y_{i-1}, the previous decoder state s_{i-1}, the aligned encoder hidden state h_i and the context vector c_i computed via attention <ns0:ref type='bibr' target='#b1'>(Bahdanau et al., 2015)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_1'>s_i = f(y_{i-1}, s_{i-1}, h_i, c_i)<ns0:label>(2)</ns0:label></ns0:formula><ns0:formula xml:id='formula_2'>c_i = \sum_{j=1}^{T} \alpha_{i,j} h_j<ns0:label>(3)</ns0:label></ns0:formula><ns0:formula xml:id='formula_3'>\alpha_{i,j} = \frac{\exp g(s_{i-1}, h_j)}{\sum_{k=1}^{T} \exp g(s_{i-1}, h_k)}<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>The context vector c_i is the weighted sum of the hidden encoder states. To find the weights to assign to these hidden states, a feed forward network g is trained. This network uses the previous decoder state and all the encoder hidden states to calculate the attention weights. The intent determination decoder state is determined by the context vector c_intent and the initial decoder hidden state s_0, where s_0 carries information about the entire input word sequence.</ns0:p></ns0:div><ns0:div><ns0:head>Dataset</ns0:head><ns0:p>We have created and used a navigational utterances dataset in Roman Urdu. Example sentences from this dataset can be seen in Figure 3. To our knowledge, such a dataset based upon navigational dialogue in Roman Urdu has not been collected before. Similar datasets exist in other languages, especially in English, like the multi-turn, multi-domain dialogue dataset <ns0:ref type='bibr' target='#b5'>(Eric and Manning, 2017)</ns0:ref>, Atis <ns0:ref type='bibr' target='#b3'>(Bungeroth et al., 2008)</ns0:ref>, and CU move <ns0:ref type='bibr' target='#b12'>(Hansen et al., 2005)</ns0:ref>. Roman Urdu is more commonly used among Pakistani people for short text messaging (SMS) and on social media platforms, in comparison to either the English language or the original Urdu script. Furthermore, Pakistani street addresses available from the Google API are also in Roman Urdu. Therefore, if we want to create an online text based dialogue agent, people will find it much easier to communicate with it in Roman Urdu rather than in the English language or the original Urdu script. Our system can be interfaced with a speech recognition based front-end to create a navigational dialogue system to help drivers on the road. There are a large number of drivers with limited English language skills who would be more comfortable using a navigational dialogue system in the Urdu language rather than English. The dataset was collected from Pakistani university students, with ages between 18 and 22 years.
This group of subjects was chosen because these students are tech savvy and frequent users of maps and navigation software. Their opinion (training set examples) would give a good estimate of an average user of text or voice based maps applications.</ns0:p></ns0:div>
<ns0:div><ns0:p>These questions were related to the issues that they face while driving or generally looking for locations in an unfamiliar area. After the collection of the dataset, the next step is pre-processing. The main issue with Roman Urdu is that everyone has their own spelling style; this would cause problems for creating word embeddings of the dataset. To mitigate this problem, lexical normalization is applied to the dataset. The next step is dataset annotation, i.e. assigning the intent to each utterance and slot labels to each of the words in an utterance, as can be seen in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. The slot labels are assigned based on the IOB (Inside Outside and Beginning) format, which is a common NER (named entity recognition) format. An example of an annotated utterance is given below. 21 distinct intent labels and 29 distinct slot labels have been assigned.</ns0:p></ns0:div>
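The paper states that lexical normalization was applied but not how it was implemented; the sketch below, with a hypothetical variant map, only illustrates the idea of collapsing spelling variants to one canonical form.

```python
# A minimal dictionary-based normalizer; the variant map is hypothetical.
CANONICAL = {
    "raasta": "rasta", "rastah": "rasta",   # assumed spelling variants
    "bataao": "batao", "bataow": "batao",
}

def normalize(tokens):
    """Map each token to its canonical spelling, if one is known."""
    return [CANONICAL.get(t.lower(), t.lower()) for t in tokens]

print(normalize(["Raasta", "bataao"]))  # -> ['rasta', 'batao']
```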
<ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>We have trained and tested our models on the Roman Urdu navigational dialogues dataset. Details of the experimental setup are given below.</ns0:p></ns0:div>
<ns0:div><ns0:head>Training Details</ns0:head><ns0:p>Attention based encoder decoder: In this model, LSTM is used as the basic RNN unit. The number of units in the LSTM cell is set to 200. The number of layers for the LSTM is set to 1. The dropout rate is 0.5 and the learning rate is 0.001. The maximum norm is set to 5. The model has been trained for 50 and 100 epochs.</ns0:p><ns0:p>FastText: A pretrained FastText Urdu model has been used for creating word embeddings. The size of the word embeddings is 300.</ns0:p><ns0:p>ELMO: We used a publicly available pretrained ELMO model. It has 2 LSTM layers with 1024 hidden states for each layer and a character based word representation vector of size 512.</ns0:p><ns0:p>BERT: The bert-base-multilingual-cased model has been used. It is trained on 104 languages including Urdu. This model is 12 layered, with 768 hidden states and 12 heads.</ns0:p><ns0:p>XLNET: The xlnet-base-cased model is used. This model is 12 layered, and has 768 hidden states and 12 heads.</ns0:p></ns0:div>
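Putting the stated hyperparameters together, a minimal PyTorch sketch of the encoder setup could look as follows. The 300-dimensional input follows the FastText setting, and the choice of the Adam optimizer is an assumption, since the paper gives the learning rate but not the optimizer.

```python
import torch
import torch.nn as nn

# Hyperparameters as stated in Training Details.
HIDDEN, LAYERS, DROPOUT, LR, MAX_NORM, EMB_DIM = 200, 1, 0.5, 1e-3, 5.0, 300

# Bidirectional LSTM encoder over 300-dim word embeddings.
encoder = nn.LSTM(input_size=EMB_DIM, hidden_size=HIDDEN,
                  num_layers=LAYERS, batch_first=True, bidirectional=True)
dropout = nn.Dropout(DROPOUT)
optimizer = torch.optim.Adam(encoder.parameters(), lr=LR)  # optimizer assumed

# In each training step, gradients would be clipped to the stated
# maximum norm before the optimizer update:
# torch.nn.utils.clip_grad_norm_(encoder.parameters(), MAX_NORM)
```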
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Evaluation Metrics</ns0:head><ns0:p>The evaluation metric used to evaluate the model is the F1 score, which is the harmonic mean of precision and recall. In terms of our dataset, the intent determination model assigns each sentence an intent (navigate-directions, navigate-time, navigate-search-directions).</ns0:p><ns0:p>Precision determines the proportion of correctly determined intents out of all the predicted intents. The data is unevenly distributed; therefore we cannot rely only on the number of correctly predicted classes. If we only rely on precision, our results will only focus on the commonly present intents in the dataset (like navigate-loc.search), and not on the other important intents (like navigate-time). This is why the recall measure is needed, as it determines how many different intents are correctly classified. If precision is high, the recall may be low. An ideal model would have both of these metrics balanced. Therefore, the F1 score has been used, as it balances the trade-off between precision and recall. Furthermore, it works really well when it comes to an unevenly distributed multi-class dataset like ours.</ns0:p><ns0:p>Word embeddings have been created on our dataset using the four above mentioned models. Each of these embeddings is then used as an input to the attention based encoder-decoder model. The model determines both the intent and the slots of the input utterances. Given below is the comparison of the F1 scores of the models for each of the input word embeddings, for both intent determination and slot tagging.</ns0:p></ns0:div>
<ns0:div><ns0:p>Intent determination: In Table 2, F1 scores of the intent determination models are given after 100 epochs. From the evaluation results it can be seen that the F1 score for the word embeddings based on the XLNET model has outperformed the other methods for intent determination, with the evaluation F1 score being 84.00. For the task of intent determination, there are 21 distinct classes. The model assigns each utterance an intent from those distinct classes. If we look at the F1 score (%) plot in Figure 5(A), it can be clearly seen that the word embeddings which were created using the XLNET model have the highest F1 (%) of 84.00. Figure 6(A) contains the confusion matrix, based on the intent classes predicted using the word embeddings created using XLNET. The diagonal of the confusion matrix shows the number of classes predicted correctly. XLNET based word embeddings have provided much better results for the intent determination task, because XLNET is much better at capturing the contextual information present in an utterance. XLNET is a bidirectional transformer. If we look at the F1 (%) plot in Figure 5(A), it can be seen that in comparison to BERT (which is also transformer based), XLNET performs much better for this task. XLNET uses permutation language modeling, and predicts all the tokens in random order. This helps it to better understand the bidirectional relationships among the words, in comparison to BERT which only predicts the 15% of tokens that are masked.</ns0:p></ns0:div>
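A minimal sketch of producing such a confusion matrix with scikit-learn, on toy predictions over three of the paper's intent labels, is shown below; correct predictions fall on the diagonal, as in Figure 6(A).

```python
from sklearn.metrics import confusion_matrix

# Toy intent predictions; labels come from the paper's intent set.
labels = ["navigate-directions", "navigate-time", "navigate-loc.search"]
y_true = ["navigate-directions", "navigate-time", "navigate-loc.search",
          "navigate-directions"]
y_pred = ["navigate-directions", "navigate-loc.search", "navigate-loc.search",
          "navigate-directions"]

# Rows are true labels, columns are predicted labels.
print(confusion_matrix(y_true, y_pred, labels=labels))
```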
<ns0:div><ns0:p>datasets. Their results have shown that, even for the larger out-of-domain datasets, the embeddings created using ELMO provided the highest score; however, they did not compare transformer-based models. In our research work, we have also explored transformer-based word embedding approaches, and we have additionally used the attention mechanism in the slot tagging and intent determination tasks.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>In this paper we have used a joint slot tagging and intent determination model for determining the slots and intent of navigational queries in Roman Urdu. We used different approaches for creating word embeddings of our dataset, in order to determine how embeddings created using different approaches affect the results of the slot tagging and intent determination models. Word embeddings were created using both context-independent and context-dependent methods. The experimental results have shown that, for the intent determination task, the word embeddings created using XLNET provided a much better F1 score; XLNET is a transformer-based model and is more effective at capturing the relationships and dependencies among words in an utterance than the other approaches. For the task of slot tagging, word embeddings created using XLNET and FastText provided much better results. FastText caters to rare/out-of-vocabulary words much better because it can create representations for words not present at training time. ELMO provided the highest validation score for the task of slot tagging; ELMO is based on a Bidirectional LSTM and CNN and provides a much better context-based representation in the resulting embeddings. Future work in this direction should focus on gathering larger datasets, as the performance of some methods like BERT could be more pronounced on large datasets. Also, having more demographic variation in the data-gathering subjects could lead to newer insights.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Pipeline framework of navigation oriented dialogue system.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Block diagram of natural language understanding process.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Attention based Encoder-Decoder model Encoder-decoder architecture including attention and aligned inputs as shown in Figure 4 provides higher accuracy for slot tagging and intent determination tasks (Liu and Lane, 2016). The model is an LSTM based encoder-decoder. LSTM is used as the basic recurrent network unit because it models long-term dependencies better than the simple RNN. The model includes the BLSTM (Graves and Schmidhuber, 2005) encoder; it reads the input word sequence $x = (x_1, x_2, x_3, \ldots, x_T)$</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Example sentences from Roman Urdu navigation dataset, along with their regular Urdu equivalent and English translation</ns0:figDesc><ns0:graphic coords='5,141.73,446.69,413.59,76.83' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Attention based encoder-decoder for the joint modeling of slot tagging and intent determination</ns0:figDesc><ns0:graphic coords='6,141.73,328.43,413.59,138.72' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>and frequent users of maps and navigation software. Their opinion (training set examples) would give a good estimate of an average user of text or voice based maps applications. These questions were related</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Slot tagging: For the task of slot tagging, there were 29 distinct classes. Our model assigned a slot tag to each word in an utterance. If we look at the F1 scores (%) plot in figure 5(B), it can be clearly seen that the word embeddings created using the FastText model and XLNET performed really well, with F1 scores of 82.24 and 81.81. Figure 6(B) contains the confusion matrix based on the slot tags predicted using the word embeddings created with FastText. The F1 score depends on the number of classes correctly identified, and FastText is better at recognizing out-of-vocabulary words. The FastText model represents the</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 5 .Figure 6 .</ns0:head><ns0:label>56</ns0:label><ns0:figDesc>Figure 5. Evaluation F1 score (%) graphs (A) for intent determination and (B) for slot tagging</ns0:figDesc><ns0:graphic coords='9,141.73,63.78,413.58,453.44' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Example of the navigational query in IOB format</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Context</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Directions B-directions</ns0:cell></ns0:row><ns0:row><ns0:cell>from</ns0:cell><ns0:cell>O</ns0:cell></ns0:row><ns0:row><ns0:cell>Lahore</ns0:cell><ns0:cell>B-fromloc</ns0:cell></ns0:row><ns0:row><ns0:cell>to</ns0:cell><ns0:cell>O</ns0:cell></ns0:row><ns0:row><ns0:cell>Islamabad</ns0:cell><ns0:cell>B-toloc</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Validation and Evaluation F1 scores (%) after 100 epochs of intent determination. Slot tagging: In Table 3, the F1 score of the model for slot tagging is given after 100 epochs. Looking at the evaluation F1 scores for all the models, it can be seen that both XLNET and FastText based word embeddings have higher F1 scores in comparison to the other models, with evaluation F1 scores being 82.11 and 82.24 respectively.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell cols='2'>Validation Evaluation</ns0:cell></ns0:row><ns0:row><ns0:cell>FastText</ns0:cell><ns0:cell>80.00</ns0:cell><ns0:cell>76.00</ns0:cell></ns0:row><ns0:row><ns0:cell>ELMO</ns0:cell><ns0:cell>82.00</ns0:cell><ns0:cell>80.00</ns0:cell></ns0:row><ns0:row><ns0:cell>BERT</ns0:cell><ns0:cell>76.00</ns0:cell><ns0:cell>71.00</ns0:cell></ns0:row><ns0:row><ns0:cell>XLNET</ns0:cell><ns0:cell>84.00</ns0:cell><ns0:cell>84.00</ns0:cell></ns0:row><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell cols='2'>Validation Evaluation</ns0:cell></ns0:row><ns0:row><ns0:cell>FastText</ns0:cell><ns0:cell>76.11</ns0:cell><ns0:cell>82.24</ns0:cell></ns0:row><ns0:row><ns0:cell>ELMO</ns0:cell><ns0:cell>82.11</ns0:cell><ns0:cell>79.74</ns0:cell></ns0:row><ns0:row><ns0:cell>BERT</ns0:cell><ns0:cell>78.33</ns0:cell><ns0:cell>79.57</ns0:cell></ns0:row><ns0:row><ns0:cell>XLNET</ns0:cell><ns0:cell>79.59</ns0:cell><ns0:cell>81.81</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Validation and Evaluation F1 scores (%) after 100 epochs of slot tagging. DISCUSSION: Our dataset was human-labeled training data. One problem with this type of human-labeled dataset is that it is prone to errors. Another issue with Roman Urdu is that there are no standardized spellings; different writers may use different spellings for the same word. Even though we have normalized each word to one spelling, there are still a few words with more than one spelling. This leads to poor generalization capability for word embedding models.</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "
Department of Computing
School of Electrical Engineering
& Computer Science,
National University of Science
& Technology (NUST)
H-12, Islamabad
https://nust.edu.pk/ 25th April 2021
Dear Editors,
First of all, we would like to thank the reviewers, for their kind and generous comments.
As for the changes, we have made all the changes pointed out by the reviewers.
We now believe that the manuscript is suitable for publication in PeerJ.
Regards,
Javeria Hassan
Student of Computer Science Masters
On behalf of all authors.
Reviewer 1
Basic reporting
It can be seen that after the modification, the language level and quality of this paper have been improved. But in our opinion, this paper still has some small shortcomings that need to be revised.
1. The 'Abstract' part of the paper needs to be condensed. Too much technical background.
2. This paper has introduced many references in the 'Introduction' section, but I think what should be introduced is the relationship between these literatures and your research, rather than simply listing the work done by these literatures.
3. The quality of Figure 5 needs to be improved. It is suggested that the meaning of each curve be intuitively expressed in the picture.
I hope the author can seriously modify the article and polish the language in the article. After the quality is further improved, it can be considered for publication.
1. We have made Abstract more concise and to the point by removing certain technical details from the abstract.
2. We have also made the introduction more concise by removing the references in our literature review that were not directly related to the methods and techniques that we have used.
3. For Figure 5, which includes plot diagrams for the intent detection and slot filling tasks, we have included a legend in the figure, which will help the reader better understand the meaning of each curve in the plot.
We have tried to modify and improve the written text as suggested for publication.
" | Here is a paper. Please give your review comments after reading it. |
170 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>In recent years, the traditional approach to spatial image steganalysis has shifted to Deep Learning (DL) techniques, which have improved the detection accuracy while combining feature extraction and classification in a single model, usually a Convolutional Neural Network (CNN). The main contribution from researchers in this area is new architectures that further improve the detection accuracy. Nevertheless, the preprocessing and partition of the database influence the overall performance of the CNN. This paper presents the results achieved by novel steganalysis networks (Xu-Net, Ye-Net, Yedroudj-Net, SR-Net, Zhu-Net, and GBRAS-Net) using different combinations of image and filter normalization ranges, various database splits, a diverse composition of the training mini-batches, different activation functions for the preprocessing stage, as well as an analysis of the activation maps and of how to report accuracy. These results demonstrate how sensitive steganalysis systems are to changes in any stage of the process and how important it is for researchers in this field to register and report their work thoroughly. We also propose a set of recommendations for the design of experiments in steganalysis with DL.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>In the context of cryptography and information hiding, steganography refers to hiding messages in digital multimedia files <ns0:ref type='bibr' target='#b8'>(Hassaballah, 2020;</ns0:ref><ns0:ref type='bibr' target='#b9'>Hassaballah et al., 2021)</ns0:ref> and steganalysis consists of detecting whether a file has a hidden message or not <ns0:ref type='bibr' target='#b22'>(Reinel et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b30'>Tabares-Soto et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b5'>Chaumont, 2020)</ns0:ref>.</ns0:p><ns0:p>In digital image steganography, a message can be hidden by changing the value of some pixels in the image (spatial domain, see Fig. <ns0:ref type='figure'>1</ns0:ref>) <ns0:ref type='bibr' target='#b7'>(Hameed et al., 2019)</ns0:ref> or by modifying the coefficients of a frequency transform (frequency domain), while remaining invisible to the human eye. Some of the steganographic algorithms in the spatial domain are HUGO <ns0:ref type='bibr' target='#b19'>(Pevny et al., 2010b)</ns0:ref>, HILL <ns0:ref type='bibr' target='#b16'>(Li et al., 2014)</ns0:ref>, MiPOD <ns0:ref type='bibr' target='#b26'>(Sedighi et al., 2016)</ns0:ref>, S-UNIWARD <ns0:ref type='bibr' target='#b13'>(Holub et al., 2014)</ns0:ref>, and WOW <ns0:ref type='bibr' target='#b12'>(Holub and Fridrich, 2012)</ns0:ref>. In general, a sensitivity analysis refers to the assessment of how the output of a system, or in this case the performance of a model, is influenced by its inputs <ns0:ref type='bibr' target='#b21'>(Razavi et al., 2021)</ns0:ref>: not only training data, but model hyper-parameters, preprocessing operations, and design choices as well. Besides assuring the quality of a model <ns0:ref type='bibr' target='#b24'>(Saltelli et al., 2019)</ns0:ref>, sensitivity analysis can provide an important tool for reporting reproducible results, by explaining the conditions under which those results were achieved <ns0:ref type='bibr' target='#b21'>(Razavi et al., 2021)</ns0:ref>. In its simplest form, it consists of varying each of the inputs over its possible values and evaluating the results achieved.</ns0:p><ns0:p>Given the accelerated growth of DL techniques for steganalysis, measuring how factors such as image and filter normalization, database partition, the composition of training mini-batches, and activation function can affect the development and performance of algorithms for steganographic image detection is essential. This research was motivated by the lack of detailed documentation of the experimental set-up, the difficulty of reproducing the CNNs, and the variability of reported results. This paper describes the results of a thorough experimentation process in which different CNN architectures were tested under different scenarios to determine how the training conditions affect the results. Similarly, this paper presents an analysis of how researchers can select the products to report, aiming to deliver reproducible and consistent results. These issues are essential to assess the sensitivity of DL algorithms to different training settings and will ultimately contribute to a further understanding of the problems applied to steganalysis and how to approach them.</ns0:p><ns0:p>The paper has the following order: Section 'Materials and Methods' describes the database, CNN architectures, experiments, training and hyper-parameters, hardware and resources. Section 'Results' presents the quantitative results found for each of the scenarios.
Section 'Discussion' discusses the results presented in terms of their relationship and effect on steganalysis systems. Lastly, Section 'Conclusions' presents the conclusions of the paper.</ns0:p></ns0:div>
<ns0:div><ns0:head>MATERIALS AND METHODS</ns0:head></ns0:div>
<ns0:div><ns0:head>Database</ns0:head><ns0:p>The database used for the experiments was Break Our Steganographic System (BOSSBase 1.01) <ns0:ref type='bibr' target='#b1'>(Bas et al., 2011)</ns0:ref>. This database consists of 10,000 cover images of 512 × 512 pixels in a Portable Gray Map (PGM) format (8 bits grayscale). For this research, similar to the process presented by Tabares-Soto et al. (2021), the following operations were performed on the images:</ns0:p><ns0:p>• All images were resized to 256 × 256 pixels.</ns0:p><ns0:p>• Each corresponding steganographic image was created for each cover image using S-UNIWARD <ns0:ref type='bibr' target='#b13'>(Holub et al., 2014)</ns0:ref> and WOW <ns0:ref type='bibr' target='#b12'>(Holub and Fridrich, 2012)</ns0:ref> with payload 0.4 bpp. The implementation of these steganographic algorithms was based on the open-source tool named Aletheia <ns0:ref type='bibr' target='#b15'>(Lerch, 2020)</ns0:ref> and the Digital Data Embedding Laboratory at Binghamton University <ns0:ref type='bibr' target='#b33'>(University, 2015)</ns0:ref>.</ns0:p><ns0:p>• The images were divided into training, validation, and test sets. The size of each group varied according to the experiment.</ns0:p></ns0:div>
<ns0:div><ns0:head>Default partition</ns0:head><ns0:p>After the corresponding steganographic images are generated, the BOSSBase 1.01 database contains 10,000 pairs of images (cover and stego), divided into 4,000 pairs for training, 1,000 pairs for validation, and 5,000 pairs for testing.</ns0:p></ns0:div>
<ns0:div><ns0:head>CNN Architectures</ns0:head><ns0:p>The CNN architectures used in this research, except for GBRAS-Net, were modified according to the strategy described in Tabares- <ns0:ref type='bibr'>Soto et al. (2021)</ns0:ref> to improve the performance of the networks regarding convergence, stability of the training process, and detection accuracy. The modifications involved the following: a preprocessing stage with 30 SRM filters and a modified TanH activation with range [−3, 3],</ns0:p><ns0:p>Spatial Dropout before the convolutional layers, Absolute Value followed by Batch Normalization after the convolutional layers, Leaky ReLU activation in convolutional layers, and a classification stage with three fully connected layers <ns0:ref type='bibr' target='#b4'>(Bravo Ortíz et al., 2021)</ns0:ref>. Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref> shows two of the six CNN architectures used for the experiments.</ns0:p></ns0:div>
<ns0:div><ns0:head>Complexity of CNNs</ns0:head><ns0:p>There are two dimensions to calculate the computational complexity of a CNN, spatial and temporal. The spatial complexity calculates the disk size that the model will occupy after being trained (parameters and feature maps). The time complexity allows calculating the floating-point operations (FLOPs) that the network must perform <ns0:ref type='bibr' target='#b10'>(He and Sun, 2015)</ns0:ref>. Equation 1 is used to calculate the temporal complexity of a CNN and Equation 2 is used to calculate the spatial complexity.</ns0:p><ns0:formula xml:id='formula_0'>\mathrm{Time} \sim O\left(\sum_{l=1}^{D} M_l^2 \cdot K_l^2 \cdot C_{l-1} \cdot C_l\right) \quad (1) \qquad \mathrm{Space} \sim O\left(\sum_{l=1}^{D} K_l^2 \cdot C_{l-1} \cdot C_l + \sum_{l=1}^{D} M_l^2 \cdot C_l\right) \quad (2)</ns0:formula><ns0:p>Where: $D$ = number of convolutional network layers (depth); $l$ = convolutional layer where the convolution is being performed; $M_l$ = size of one side of the feature map in the $l$-th convolutional layer; $K_l$ = size of one side of the kernel applied on the $l$-th convolutional layer; $C_{l-1}$ = number of channels of each convolution kernel at the input of the $l$-th convolutional layer; $C_l$ = number of convolution kernels at the output of the $l$-th convolutional layer.</ns0:p><ns0:p>It is important to clarify that for spatial complexity, the first summation calculates the total size of the network parameters and the second summation calculates the size of the feature maps. In Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>, the spatial and temporal complexities of the CNNs considered in this sensitivity analysis can be observed.</ns0:p></ns0:div>
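The two complexity measures can be sketched directly from Equations 1 and 2; the layer specifications below are hypothetical placeholders, not the actual layers of the networks in Table 2.

```python
# (M_l, K_l, C_in, C_out) per layer; illustrative values only.
layers = [
    (256, 5, 1, 30),
    (256, 3, 30, 30),
    (128, 3, 30, 64),
]

# Equation 1: multiply-accumulate count summed over the D layers.
time_ops = sum(M**2 * K**2 * c_in * c_out for (M, K, c_in, c_out) in layers)

# Equation 2: parameter count plus feature-map sizes, summed over the D layers.
params = sum(K**2 * c_in * c_out for (M, K, c_in, c_out) in layers)
fmaps = sum(M**2 * c_out for (M, K, c_in, c_out) in layers)
space = params + fmaps

print(f"Time ~ {time_ops:,} ops, Space ~ {space:,} values "
      f"({params:,} parameters + {fmaps:,} feature-map entries)")
```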
<ns0:div><ns0:head>SRM filters normalization</ns0:head><ns0:p>As for image normalization, the SRM filter values shown in Fig. <ns0:ref type='figure' target='#fig_4'>3</ns0:ref> impact network performance. To evaluate the effect of different filter values, experiments were performed without normalization and with normalization by a factor of 1/12, which caused the filter values to fall in the range [−1, 2/3].</ns0:p></ns0:div>
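A minimal sketch of both normalization schemes discussed in this paper (the global 1/12 factor here, and the per-filter max-absolute-value scaling mentioned for the original GBRAS-Net), assuming NumPy; the kernel values below are random placeholders for the filters of Fig. 3.

```python
import numpy as np

# Placeholder for the 30 5x5 high-pass SRM kernels of Fig. 3.
srm_filters = np.random.randint(-12, 9, size=(30, 5, 5)).astype(np.float32)

# Global scaling by 1/12 maps the original integer range [-12, 8] to [-1, 2/3].
normalized = srm_filters / 12.0

# Per-filter scaling by each kernel's maximum absolute value.
per_filter = srm_filters / np.abs(srm_filters).max(axis=(1, 2), keepdims=True)
```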
<ns0:div><ns0:head>CNN input</ns0:head><ns0:p>To speed up the learning process and avoid issues with GPU memory limitation, CNN optimization is performed over batches of images rather than the complete training set. This dataset division means that the distribution of image classes within each batch of images affects the learning process. Three different image input approaches were tested to demonstrate the effect of the amount of stego and cover images in a batch on the learning process and determine the best way to input the images to the network. The first one involved putting all of the cover images, followed by all the stego images; the second one alternated cover and stego images. The third one applied a random approach.</ns0:p></ns0:div>
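A minimal sketch of the three input compositions, assuming `covers` and `stegos` are aligned NumPy arrays; the function names are ours, not taken from the released code.

```python
import numpy as np

def usual(covers, stegos):
    # All cover images first, followed by all stego images.
    data = np.concatenate([covers, stegos])
    labels = np.concatenate([np.zeros(len(covers)), np.ones(len(stegos))])
    return data, labels

def ordered(covers, stegos):
    # Alternate cover and stego images: c0, s0, c1, s1, ...
    data = np.empty((2 * len(covers), *covers.shape[1:]), dtype=covers.dtype)
    data[0::2], data[1::2] = covers, stegos
    labels = np.tile([0.0, 1.0], len(covers))
    return data, labels

def random_mix(covers, stegos, seed=0):
    # Random positions of cover and stego images (labels permuted together).
    data, labels = usual(covers, stegos)
    idx = np.random.default_rng(seed).permutation(len(data))
    return data[idx], labels[idx]
```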
<ns0:div><ns0:head>Database partition</ns0:head><ns0:p>Dividing the database into three sets is good practice for artificial intelligence applications: the training set to adjust network parameters, the validation set to tune network hyper-parameters, and the test set to perform the final evaluation of the CNN performance. There is a default partition (see 'Default partition') which most researchers in the field use. As part of the experimentation process developed in this research, the CNNs were tested using three additional database partitions, as follows (amounts in image pairs; a splitting sketch follows the list):</ns0:p><ns0:p>• Train: 2,500, Validation: 2,500, and Test: 5,000.</ns0:p><ns0:p>• Train: 4,000, Validation: 3,000, and Test: 3,000.</ns0:p><ns0:p>• Train: 8,000, Validation: 1,000, and Test: 1,000.</ns0:p></ns0:div>
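A minimal sketch of the pair-wise splitting, with placeholder indices standing in for the 10,000 BOSSBase 1.01 pairs; splitting at the pair level keeps a cover and its stego in the same set, which avoids leakage between sets.

```python
def split_pairs(pairs, n_train, n_valid, n_test):
    # Splits (cover, stego) pairs so both images of a pair stay in the same set.
    assert n_train + n_valid + n_test == len(pairs)
    return (pairs[:n_train],
            pairs[n_train:n_train + n_valid],
            pairs[n_train + n_valid:])

pairs = list(range(10_000))  # placeholder for the 10,000 image pairs
splits = {
    "default": (4_000, 1_000, 5_000),
    "A": (2_500, 2_500, 5_000),
    "B": (4_000, 3_000, 3_000),
    "C": (8_000, 1_000, 1_000),
}
partitions = {name: split_pairs(pairs, *s) for name, s in splits.items()}
```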
<ns0:div><ns0:head>Activation function of the preprocessing stage</ns0:head><ns0:p>The preprocessing stage, which consists of a convolutional layer with 30 SRM filters, involves an activation function that affects model performance on specific steganographic algorithms. As part of the experimentation process, four different activation functions were tested: 3 × TanH, 3 × HardSigmoid, 3 × Sigmoid, and 3 × Softsign.</ns0:p></ns0:div>
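A minimal sketch of these scaled activations in TensorFlow/Keras (the framework used in this paper); the preprocessing layer shown is illustrative and omits the fixed SRM kernel initialization.

```python
import tensorflow as tf

def scaled(act, factor=3.0):
    # Builds an activation such as 3*tanh(x) or 3*sigmoid(x).
    return lambda x: factor * act(x)

activations = {
    "3xTanH": scaled(tf.math.tanh),
    "3xHardSigmoid": scaled(tf.keras.activations.hard_sigmoid),
    "3xSigmoid": scaled(tf.math.sigmoid),
    "3xSoftsign": scaled(tf.math.softsign),
}

# Illustrative preprocessing layer: 30 kernels with a scaled activation; in the
# paper these kernels would be initialized with the fixed SRM filters.
prep = tf.keras.layers.Conv2D(30, 5, padding="same",
                              activation=activations["3xTanH"])
```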
<ns0:div><ns0:head>Activation maps analysis</ns0:head><ns0:p>The output of a particular layer of a CNN is known as an activation map, indicating how well the architecture performs feature extraction. This paper presents a comparative analysis of the activation maps generated by a cover, a stego, and a 'cover-stego' image in a trained model. By comparing them, it is possible to see the differences between them.</ns0:p></ns0:div>
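A minimal sketch of how such activation maps can be extracted from a trained Keras model; the layer names are hypothetical.

```python
import numpy as np
import tensorflow as tf

def activation_maps(model, image, layer_names):
    # Builds a probe sub-model that outputs the feature maps of the requested
    # layers for one grayscale image.
    outputs = [model.get_layer(name).output for name in layer_names]
    probe = tf.keras.Model(inputs=model.input, outputs=outputs)
    batch = image[np.newaxis, ..., np.newaxis].astype(np.float32)
    return probe(batch)

# Hypothetical usage comparing cover, stego, and the steganographic noise:
# maps_cover = activation_maps(model, cover, ["conv2d_3"])
# maps_stego = activation_maps(model, stego, ["conv2d_3"])
# noise = stego.astype(np.float32) - cover.astype(np.float32)  # "cover-stego"
```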
<ns0:div><ns0:head>Accuracy reporting in steganalysis</ns0:head><ns0:p>One of the characteristics of CNN training in steganalysis is the unstable accuracy and loss values between epochs, leading to highly variable results and training curves. Consequently, an abnormally high accuracy value can be achieved at a given time during the training process. Although it is valid to report the best accuracy for comparison purposes, having more data allows a better understanding of the CNN. For example, in this paper, model accuracy was evaluated using the mean and standard deviation of the top five results from training, validation, and testing.</ns0:p></ns0:div>
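A minimal sketch of this reporting scheme; `history` stands for a hypothetical Keras History object.

```python
import numpy as np

def top5_report(accuracies):
    # Best value plus mean and standard deviation of the five best epochs.
    top5 = np.sort(np.asarray(accuracies))[-5:]
    return top5.max(), top5.mean(), top5.std()

# best, mean, std = top5_report(history.history["val_accuracy"])
```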
<ns0:div><ns0:head>Training and hyper-parameters</ns0:head><ns0:p>The training batch size was set to 64 images for Xu-Net, Ye-Net, and Yedroudj-Net, and 32 for SR-Net, Zhu-Net, and GBRAS-Net. The number of training epochs needed to reach convergence is 100, except for Xu-Net, which uses 150 epochs. The spatial dropout rate was 0.1 in all layers. Batch normalization had a momentum of 0.2, an epsilon of 0.001, and a renorm momentum of 0.4. The stochastic gradient descent optimizer momentum was 0.95, and the learning rate was initialized to 0.005. Except for GBRAS-Net, all layers used a glorot normal initializer and L2 regularization for weights and bias. The GBRAS-Net architecture is trained with the Adam optimizer, with the following configuration: a learning rate of 0.001, a beta 1 of 0.9, a beta 2 of 0.999, a decay of 0.0, and an epsilon of 1e−08. Its convolutional layers, except the first (preprocessing) layer, use a glorot uniform kernel initializer. The CNN uses a categorical cross-entropy loss for the two classes, and the metric used is accuracy. Batch Normalization is configured as in the other CNNs. In the original network, the 30 high-pass SRM filters are each normalized by their maximum absolute value. The same padding is used on all layers. As shown in Fig. <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>, the predictions performed in the last part of the architecture directly use a Softmax activation function.</ns0:p></ns0:div>
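The optimizer settings quoted above can be sketched in TensorFlow/Keras as follows; this is illustrative and omits the per-layer initializers and regularization.

```python
import tensorflow as tf

# SGD settings used for Xu-Net, Ye-Net, Yedroudj-Net, SR-Net, and Zhu-Net.
sgd = tf.keras.optimizers.SGD(learning_rate=0.005, momentum=0.95)

# Adam settings used for GBRAS-Net (a decay of 0.0 is the Keras default).
adam = tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9,
                                beta_2=0.999, epsilon=1e-08)

# model.compile(optimizer=adam, loss="categorical_crossentropy",
#               metrics=["accuracy"])
```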
<ns0:div><ns0:head>Hardware and resources</ns0:head><ns0:p>As </ns0:p></ns0:div>
<ns0:div><ns0:head>RESULTS</ns0:head></ns0:div>
<ns0:div><ns0:head>Image normalization</ns0:head><ns0:p>Image normalization is a typical operation in digital image processing that affects the performance of a CNN. Different types of normalization processes were performed on the images (cover and stego) of BOSSBase 1.01 with WOW 0.4 bpp; a sketch of the tested pixel-range mappings follows.</ns0:p></ns0:div>
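A minimal sketch of the tested pixel-range mappings, assuming NumPy and a placeholder batch of 8-bit grayscale images.

```python
import numpy as np

def to_range(images, low, high):
    # Linearly maps 8-bit pixel values from [0, 255] to [low, high].
    return images.astype(np.float32) / 255.0 * (high - low) + low

x = np.random.randint(0, 256, size=(4, 256, 256))  # placeholder cover/stego batch
ranges = [(0, 255), (-12, 8), (0, 1), (-1, 1), (-0.5, 0.5)]
normalized = {r: to_range(x, *r) for r in ranges}
```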
<ns0:div><ns0:head>SRM Filters Normalization</ns0:head><ns0:p>The SRM filters have an impact on the performance of CNNs for steganalysis; therefore, filter normalization was performed by multiplying by 1/12. For each entry in Table <ns0:ref type='table' target='#tab_6'>4</ns0:ref>, the image normalization, the distribution of classes within each batch of images, and the data partition were the same as in 'Image normalization'; additionally, SRM filter normalization was done by multiplying by 1/12.</ns0:p></ns0:div>
<ns0:div><ns0:head>CNN input</ns0:head><ns0:p>The three CNN input distributions mentioned in the 'CNN input' experiment were applied, namely usual (i.e., inputting all the cover images first, followed by all the stego images), random (random positions of cover and stego images), and ordered (alternating cover and stego images). The three cases demonstrate that the distribution of classes within each batch of images affects the learning process. The following experiment (see Table <ns0:ref type='table' target='#tab_8'>5</ns0:ref>) was performed using Ye-Net for S-UNIWARD 0.4 bpp, image pixel values in the range [0,255] (original pixel values), with no SRM filter normalization, and a default data partition (see 'Default partition').</ns0:p></ns0:div>
<ns0:div><ns0:head>Database partition</ns0:head><ns0:p>In artificial intelligence, databases are divided into training, validation, and testing sets. For steganalysis, a default data partition is used (see 'Default partition'). Table <ns0:ref type='table'>6</ns0:ref> and Fig. <ns0:ref type='figure' target='#fig_10'>6</ns0:ref> show the results of the different data partitions with S-UNIWARD 0.4 bpp.</ns0:p></ns0:div>
<ns0:div><ns0:p>Table <ns0:ref type='table' target='#tab_9'>7</ns0:ref> and Fig. <ns0:ref type='figure' target='#fig_12'>7</ns0:ref> show the results of the different data partitions with WOW 0.4 bpp.</ns0:p></ns0:div>
<ns0:div><ns0:head>Activation maps for cover, stego, and steganographic noise images</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_2'>12</ns0:ref> shows the ROC curves with Confidence Interval (CI) for the WOW steganography algorithm. The BOSSBase 1.01 database was used to train the model. These curves correspond to the model presented in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> for GBRAS-Net. The ROC curves show the relationship between the false positive and true positive rates and report the Area Under Curve (AUC) values; higher values indicate that the images were better classified by the computational model, which, in turn, depends on the steganography algorithm and payload.</ns0:p></ns0:div>
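A minimal sketch of the ROC/AUC computation behind such curves, assuming scikit-learn; the labels and scores below are random placeholders, not model outputs.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                    # 0 = cover, 1 = stego
y_score = np.clip(0.5 * y_true + rng.random(1000), 0, 1)  # placeholder scores

fpr, tpr, _ = roc_curve(y_true, y_score)
print(f"AUC = {auc(fpr, tpr):.3f}")
```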
<ns0:div><ns0:head>Accuracy reporting in steganalysis</ns0:head><ns0:p>The results of the experiment are shown with a data distribution consisting of 8,000, 1,000, and 1,000 pairs.</ns0:p><ns0:p>Regarding image and SRM filter normalization, as shown in Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref>, the effectiveness of a normalization range depends on the selected CNN, such that SRM normalization (see Table <ns0:ref type='table' target='#tab_6'>4</ns0:ref>) can generate completely different results.</ns0:p><ns0:p>The image normalization experiment demonstrates essential aspects of this analysis. For example, considering the Xu-Net architecture in Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref>, the best result is obtained using images with the original values of the database (i.e., in the range 0 to 255). Given this, one could conclude that there is no need for image normalization in any architecture; however, a different result is observed with the Zhu-Net architecture, which achieves its best result using the normalization of the pixels from −12 to 8 (inspired by the minimum and maximum values of the original SRM filters). We recommend using the original pixel values as the first option because it is the best option for most of the CNNs.</ns0:p><ns0:p>When considering the combination of image normalization and filter normalization, the results can be different. For example, for the SR-Net architecture from Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref>, the normalization of the pixels between</ns0:p></ns0:div>
<ns0:div><ns0:p>Regarding the variability of accuracies, Table <ns0:ref type='table' target='#tab_8'>5</ns0:ref> shows the results for the CNN input experiment. In the database partition experiment, the architectures' detection accuracy improved as the training set increased and the test set decreased. Furthermore, if the test dataset is reduced considerably, performance on future cases can be affected. In response, recent investigations use the BOWS 2 dataset since it contains more information; consequently, with a bigger dataset, the data partition can provide more information for training and testing, which can enhance performance. A small test set may be an inadequate representation of the distribution of the images that the network must classify in a production setting; thus, a higher detection accuracy with this partition may not lead to a helpful improvement. It is better to choose the models based on the results obtained on test data; for this reason, a good representation or quantity of test data is also important.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_10'>8</ns0:ref> shows that using different activation functions implies changes in performance. In Ye-Net for WOW and S-UNIWARD with 3 × TanH, an average accuracy of 84.2% is achieved, and with 3 × HardSigmoid, an average accuracy of 83.9%. However, for WOW, the best overall result is given by the 3 × HardSigmoid activation function. For a model intended to detect several steganographic algorithms, it is better to use 3 × TanH, as shown by the average accuracy values.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_16'>11</ns0:ref> shows that the activation maps from the stego image differ from those of the cover image, which indicates a higher activation of the convolutional layer in the presence of the steganographic noise. Moreover, by comparing the activation maps, it is clear that a good learning process was achieved by extracting relevant features and focusing on borders and texture changes in the images, where the steganographic algorithms are known to embed most of the information. The analysis of the activation maps is an effective tool for researchers to evaluate the learning process and gain an understanding of the features that the CNN recognizes as relevant for the steganalysis task. This shows that GBRAS-Net has an excellent ability to discriminate between images with and without hidden content.</ns0:p><ns0:p>The design of CNN networks allows capturing steganographic content. The first layer (preprocessing), which contains the filters, is responsible for enhancing this noise while decreasing the content of the input image (see Fig. <ns0:ref type='figure' target='#fig_16'>11</ns0:ref>, Cover and Stego columns of the SRM filters row). The Cover-Stego column in Fig. <ns0:ref type='figure' target='#fig_16'>11</ns0:ref> shows the noise. Adaptive steganography does its job well in adapting to the image content; as seen in the image, it embeds at hard-to-detect edges and places.</ns0:p><ns0:p>As proposed here (Table <ns0:ref type='table' target='#tab_11'>9</ns0:ref>), the main advantage of this accuracy reporting is being able to determine the consistency of the results based not only on the final value or the best one.</ns0:p></ns0:div>
<ns0:div><ns0:p>• Recommendation 1: measure CNN sensitivity to data and SRM filter normalizations.</ns0:p><ns0:p>• Recommendation 2: measure CNN sensitivity to data distributions.</ns0:p><ns0:p>• Recommendation 3: measure CNN sensitivity to data splits.</ns0:p><ns0:p>• Recommendation 4: measure CNN sensitivity to activation functions in the preprocessing stage.</ns0:p><ns0:p>• Recommendation 5: show activation maps of cover, stego, and steganographic noise images.</ns0:p><ns0:p>• Recommendation 6: report the top five best epochs with their accuracies and standard deviation.</ns0:p><ns0:p>Finally, the contributions of this paper are listed at a general level:</ns0:p><ns0:p>• Sensitivity of the steganographic-image detection accuracy to different normalizations of the image pixels, over six CNN architectures (see Table <ns0:ref type='table' target='#tab_6'>3 and Figure 4</ns0:ref>).</ns0:p><ns0:p>• Sensitivity of the detection accuracy to different normalizations of the SRM filters in the preprocessing stage, over six CNN architectures (see Table <ns0:ref type='table' target='#tab_8'>4 and Figure 5</ns0:ref>).</ns0:p><ns0:p>• Sensitivity of the detection accuracy to feeding the CNNs with different distribution orders of the dataset during training (see Table <ns0:ref type='table' target='#tab_8'>5</ns0:ref>).</ns0:p><ns0:p>• Sensitivity of the detection accuracy to the partition of the image set into training, validation, and test (see Tables <ns0:ref type='table' target='#tab_11'>6 and 7 and Figures 8, 9, and 10</ns0:ref>).</ns0:p><ns0:p>• Sensitivity of the detection accuracy to different activation functions in the preprocessing stage during training (see Table <ns0:ref type='table' target='#tab_10'>8</ns0:ref>).</ns0:p><ns0:p>• The importance of analyzing the activation maps of the different convolutional layers to design new CNN architectures and understand their behavior (see Figure <ns0:ref type='figure' target='#fig_16'>11</ns0:ref>).</ns0:p><ns0:p>• The importance of reporting the average and standard deviation of the detection accuracies to support the results reported in the experiments (see Table <ns0:ref type='table' target='#tab_11'>9</ns0:ref>).</ns0:p><ns0:p>Some possible limitations of the current work, which was developed under the clairvoyant scenario, come from the nature and characteristics of the database: the use of images with fixed resolutions, the specific cameras used to take the pictures, the bit depth of the images, and the fact that all the experiments were performed in the spatial domain. As future work, it is proposed to study the weaknesses and problems of each state-of-the-art CNN in order to design new architectures and computational elements for steganalysis, taking the papers <ns0:ref type='bibr' target='#b34'>(Vasan et al., 2020)</ns0:ref> and <ns0:ref type='bibr' target='#b2'>(Bhattacharya et al., 2021)</ns0:ref> as a reference.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>As shown by the results presented in this paper, steganalysis detection systems are susceptible to changes in any stage of the process. Factors such as image and filter normalization ranges, database partition, the composition of training mini-batches, and the activation function in the preprocessing stage affect the CNN performance to the point that they determine its success. With this in mind, we present the analysis of the activation maps of the convolutions for GBRAS-Net as a valuable tool to assess the CNN training process and its ability to extract distinctive features between cover and stego images. Understanding the behavior of steganalysis systems is key to designing strategies and computational elements to overcome their limitations and improve their performance. For example, taking Ye-Net as a reference with the WOW steganographic algorithm at 0.4 bpp on the BOSSBase 1.01 database, and taking into account the original values of the SRM filters (-12 to 8): using the values of each pixel without any modification (0 to 255) results in an accuracy of 84.8% in the detection of steganographic images, while applying a normalization of the image pixels between 0 and 1 generates a result of 72.7% (see Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref>). If we instead normalize the values of those filters, with the same characteristics mentioned above, we obtain results of 82.6% and 69.6%, respectively (see Table <ns0:ref type='table' target='#tab_6'>4</ns0:ref>). Now with the same CNN and performing different</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>D = number of convolutional network layers (depth); l = convolutional layer where the convolution process is being performed; M_l = the size of one side of the feature map in the l-th convolutional layer; K_l = the size of one side of the kernel applied on the l-th convolutional layer; C_{l-1} = number of channels of each convolution kernel at the input of the l-th convolutional layer; C_l = number of convolution kernels at the output of the l-th convolutional layer</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Convolutional Neural Network Architectures. (A) Xu-Net. And (B) GBRAS-Net.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>impact network performance. To evaluate the effect of different filter values, experiments were performed without normalization and with normalization by a factor of 1/12, which caused filter values to be in the range [−1, 2/3].</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. The 30 SRM filters values</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>previously described in Tabares-Soto et al. (2021), the architectures and experiment implementations used Python 3.8.1 and TensorFlow <ns0:ref type='bibr' target='#b0'>(Abadi et al., 2015)</ns0:ref> 2.2.0 in a workstation running Ubuntu 20.04 LTS as an operating system. The computer runs a GeForce RTX 2080 Ti (11 GB), CUDA Version 11.0, an AMD Ryzen 9 3950X 16-Core Processor, and 128 GB of RAM. The remaining implementations used the Google Colaboratory platform in an environment with a Tesla P100 PCIe (16 GB) or TPUs, CUDA Version 10.1, and 25.51 GB RAM. GPUs and TPUs accelerate deep learning models. Accessing TPUs is done from Google Colaboratory; once there, the models are adjusted to work with the TPUs. For example, for the GBRAS-Net model, on an 11GB Nvidia RTX 2080Ti GPU (local computer), one epoch takes 229 seconds, whereas with the TPU configured in Google Colaboratory, the epoch needs only 52 seconds. That is, it runs more than three times faster. For the Ye-Net model, on a 16GB Tesla P100 GPU (Google Colaboratory), one epoch took approximately 44 seconds, whereas with the TPU, it only takes 12 seconds. This verifies that the use of TPUs helps the experiments run more efficiently. It is important to note that in Google Colaboratory, we can open several notebooks and use different accounts, which helps reduce experiment times considerably. To achieve a correct training of the CNN, batch sizes must be chosen for each model according to the hardware accelerator employed.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>BOSSBase 1.01 with WOW 0.4 bpp. Training and validation were performed with Xu-Net, Ye-Net, Yedroudj-Net, SR-Net, Zhu-Net, and GBRAS-Net CNNs (see Fig. 2 for Xu-Net and GBRAS-Net), with default data partition (see 'Default partition') and no SRM filters normalization. Furthermore, the distribution of image classes within each image batch was done based on a random distribution of the training images (i.e., random positions of cover and stego images), a usual distribution of the validation images (i.e., inputting all the cover images first, then all the stego images), and a usual distribution of the test images.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Accuracy curves for SR-Net, Zhu-Net and GBRAS-Net CNN with WOW 0.4 bpp and image normalization. (A) 0 to 255. (B) -12 to 8. (C) 0 to 1. (D) -1 to 1. (E) -0.5 to 0.5</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figures 8 Figure 5 .</ns0:head><ns0:label>85</ns0:label><ns0:figDesc>Figures 8, 9, and 10 show the accuracy curves of SR-Net, Zhu-Net, and GBRAS-Net CNN with</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>partition'), image pixel values in the range [0,255], with no SRM filter normalization, and distribution of classes within each batch of images based on a random distribution of the training images and usual</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Boxplots for the S-UNIWARD experiments. This figure shows different data partitions experiments for novel CNN architectures. Train, Validation, Test: (A) 4,000, 1,000, 5,000. (B) 2,500, 2,500, 5,000. (C) 4,000, 3,000, 3,000. (D) 8,000, 1,000, 1,000.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>Fig. 11. The activation maps correspond to cover, stego, and steganographic noise images.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Boxplots for the WOW experiments. This figure shows different data partitions experiments for novel CNN architectures. Train, Validation, Test: (A) 4,000, 1,000, 5,000. (B) 2,500, 2,500, 5,000. (C) 4,000, 3,000, 3,000. (D) 8,000, 1,000, 1,000.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>of images, analyzed in the GBRAS-Net and Xu-Net architectures using BOSSBase 1.01, image pixel values in the range [0,255], with no SRM filter normalization, and distribution of classes within each batch of images based on a random distribution of the training images and usual distributions of the validation and test images. Table 9 shows the results of accuracy reporting. The model accuracy was evaluated using the mean and standard deviation of the top five results achieved by the CNN during training, validation, and testing. DISCUSSION: This study presents results obtained from testing different combinations of image and filter normalization ranges, various database partitions, a diverse composition of training mini-batches, and different activation functions for the preprocessing stage, as well as an analysis of the activation maps of the convolutions and of how to report accuracy when training six CNN architectures applied to image steganalysis in the spatial domain. The experiments proposed here show highly variable results, indicating the importance of detailed documentation and reports derived from novel work in this field.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 8 .Figure 9 .</ns0:head><ns0:label>89</ns0:label><ns0:figDesc>Figure 8. Accuracy curves of SR-Net with S-UNIWARD and WOW 0.4 bpp. This figure shows different data partitions for each row. Train, Validation, Test: (A) 4,000, 1,000, 5,000. (B) 2,500, 2,500, 5,000. (C) 4,000, 3,000, 3,000. (D) 8,000, 1,000, 1,000.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. Accuracy curves of GBRAS-Net with S-UNIWARD and WOW 0.4 bpp. This figure shows different data partitions for each row. Train, Validation, Test: (A) 4,000, 1,000, 5,000. (B) 2,500, 2,500, 5,000. (C) 4,000, 3,000, 3,000. (D) 8,000, 1,000, 1,000.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11. Activation maps of convolutional layers in GBRAS-Net architecture trained with WOW 0.4 bpp. This figure shows the Input image, the first convolutional layer or preprocessing layer with SRM Filters, and the last three convolutional layers.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head>Figures 8 Figure 12 .</ns0:head><ns0:label>812</ns0:label><ns0:figDesc>Figures 8, 9, and 10 show that a smaller training set produces highly variable validation and test curves, while a bigger training set generates smoother curves. Furthermore, these curves show how the validation curve can sometimes be higher or lower than the training curve. For this reason, it is better to</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Accuracy percentage of models for S-UNIWARD and WOW steganographic algorithms, with payloads of 0.2 and 0.4 bpp. The bold entries indicate the best results.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Year -Algorithm</ns0:cell><ns0:cell>S-UNIWARD 0.2 bpp</ns0:cell><ns0:cell>S-UNIWARD 0.4 bpp</ns0:cell><ns0:cell>WOW 0.2 bpp</ns0:cell><ns0:cell>WOW 0.4 bpp</ns0:cell></ns0:row><ns0:row><ns0:cell>2020 -GBRAS-Net</ns0:cell><ns0:cell>73.6</ns0:cell><ns0:cell>87.1</ns0:cell><ns0:cell>80.3</ns0:cell><ns0:cell>89.8</ns0:cell></ns0:row><ns0:row><ns0:cell>2019 -Zhu-Net</ns0:cell><ns0:cell>71.4</ns0:cell><ns0:cell>84.5</ns0:cell><ns0:cell>76.9</ns0:cell><ns0:cell>88.1</ns0:cell></ns0:row><ns0:row><ns0:cell>2018 -SR-Net</ns0:cell><ns0:cell>67.7</ns0:cell><ns0:cell>81.3</ns0:cell><ns0:cell>75.5</ns0:cell><ns0:cell>86.4</ns0:cell></ns0:row><ns0:row><ns0:cell>2018 -Yedroudj-Net</ns0:cell><ns0:cell>63.5</ns0:cell><ns0:cell>77.4</ns0:cell><ns0:cell>72.3</ns0:cell><ns0:cell>85.1</ns0:cell></ns0:row><ns0:row><ns0:cell>2017 -Ye-Net</ns0:cell><ns0:cell>60.1</ns0:cell><ns0:cell>68.7</ns0:cell><ns0:cell>66.9</ns0:cell><ns0:cell>76.7</ns0:cell></ns0:row><ns0:row><ns0:cell>2016 -Xu-Net</ns0:cell><ns0:cell>60.9</ns0:cell><ns0:cell>72.7</ns0:cell><ns0:cell>67.5</ns0:cell><ns0:cell>79.3</ns0:cell></ns0:row><ns0:row><ns0:cell>2015 -Qian-Net</ns0:cell><ns0:cell>53.7</ns0:cell><ns0:cell>69.1</ns0:cell><ns0:cell>61.4</ns0:cell><ns0:cell>70.7</ns0:cell></ns0:row><ns0:row><ns0:cell>2012 -SRM+Ensemble classifier</ns0:cell><ns0:cell>63.4</ns0:cell><ns0:cell>75.3</ns0:cell><ns0:cell>63.5</ns0:cell><ns0:cell>74.5</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Spatial and temporal complexity of the CNNs used to perform the steganalysis process.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>CNN</ns0:cell><ns0:cell>Total number of parameters for training</ns0:cell><ns0:cell>Spatial complexity In MegaBytes</ns0:cell><ns0:cell>Temporal Complexity In GigaFLOPS</ns0:cell></ns0:row><ns0:row><ns0:cell>Xu-Net</ns0:cell><ns0:cell>87,830</ns0:cell><ns0:cell>0.45</ns0:cell><ns0:cell>2.14</ns0:cell></ns0:row><ns0:row><ns0:cell>Ye-Net</ns0:cell><ns0:cell>88,586</ns0:cell><ns0:cell>0.43</ns0:cell><ns0:cell>5.77</ns0:cell></ns0:row><ns0:row><ns0:cell>Yedroudj-Net</ns0:cell><ns0:cell>252,459</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>12.51</ns0:cell></ns0:row><ns0:row><ns0:cell>SR-Net</ns0:cell><ns0:cell>4,874,074</ns0:cell><ns0:cell>19.00</ns0:cell><ns0:cell>134.77</ns0:cell></ns0:row><ns0:row><ns0:cell>Zhu-Net</ns0:cell><ns0:cell>10,233,770</ns0:cell><ns0:cell>39.00</ns0:cell><ns0:cell>3.07</ns0:cell></ns0:row><ns0:row><ns0:cell>GBRAS-Net</ns0:cell><ns0:cell>166,598</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>5.92</ns0:cell></ns0:row><ns0:row><ns0:cell>Experiments</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Image normalization</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>Image normalization is a typical operation in digital image processing that changes the ranges of the pixel values to match the operating region of the activation function. The most used bounds for CNN training are 0 to 255, when the values are integers, and 0 to 1 with floating-point values. The selection of this range affects performance and, depending on the application, one or the other is preferred. The following ranges were tested to demonstrate these effects:</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Image normalization and best test accuracy for CNNs with WOW 0.4 bpp using BOSSBase 1.01. The bold entries indicate the best results.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Image</ns0:cell><ns0:cell>Test accuracy</ns0:cell><ns0:cell>Test accuracy</ns0:cell><ns0:cell>Test accuracy</ns0:cell><ns0:cell>Test accuracy</ns0:cell><ns0:cell>Test accuracy</ns0:cell><ns0:cell>Test accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell>normalization</ns0:cell><ns0:cell>Xu-Net [%]</ns0:cell><ns0:cell>Ye-Net [%]</ns0:cell><ns0:cell>Yedroudj-Net [%]</ns0:cell><ns0:cell>SR-Net [%]</ns0:cell><ns0:cell>Zhu-Net [%]</ns0:cell><ns0:cell>GBRAS-Net [%]</ns0:cell></ns0:row><ns0:row><ns0:cell>[0, 255]</ns0:cell><ns0:cell>82.6</ns0:cell><ns0:cell>84.8</ns0:cell><ns0:cell>85.5</ns0:cell><ns0:cell>84.8</ns0:cell><ns0:cell>84.2</ns0:cell><ns0:cell>88.4</ns0:cell></ns0:row><ns0:row><ns0:cell>[−12, 8]</ns0:cell><ns0:cell>78.7</ns0:cell><ns0:cell>81.6</ns0:cell><ns0:cell>81.5</ns0:cell><ns0:cell>83.6</ns0:cell><ns0:cell>84.9</ns0:cell><ns0:cell>86.5</ns0:cell></ns0:row><ns0:row><ns0:cell>[0, 1]</ns0:cell><ns0:cell>65.9</ns0:cell><ns0:cell>72.7</ns0:cell><ns0:cell>51.0</ns0:cell><ns0:cell>50.5</ns0:cell><ns0:cell>78.0</ns0:cell><ns0:cell>84.4</ns0:cell></ns0:row><ns0:row><ns0:cell>[−1, 1]</ns0:cell><ns0:cell>51.4</ns0:cell><ns0:cell>76.3</ns0:cell><ns0:cell>52.1</ns0:cell><ns0:cell>75.9</ns0:cell><ns0:cell>79.7</ns0:cell><ns0:cell>84.4</ns0:cell></ns0:row><ns0:row><ns0:cell>[−0.5, 0.5]</ns0:cell><ns0:cell>64.2</ns0:cell><ns0:cell>76.2</ns0:cell><ns0:cell>50.6</ns0:cell><ns0:cell>50.2</ns0:cell><ns0:cell>77.7</ns0:cell><ns0:cell>85.1</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>shows the best test accuracy results with different image normalizations in the convolutional neural networks with WOW 0.4 bpp. Figure 4, under the title 'Image Normalization' shows the accuracy curves of SR-Net, Zhu-Net, and GBRAS-Net CNNs with WOW 0.4 bpp for different image normalizations.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>SRM filters and image normalization and best test accuracy for CNNs with WOW 0.4 bpp using BOSSBase 1.01. The bold entries indicate the best results.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Image</ns0:cell><ns0:cell>Test accuracy</ns0:cell><ns0:cell>Test accuracy</ns0:cell><ns0:cell>Test accuracy</ns0:cell><ns0:cell>Test accuracy</ns0:cell><ns0:cell>Test accuracy</ns0:cell><ns0:cell>Test accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell>normalization</ns0:cell><ns0:cell>Xu-Net [%]</ns0:cell><ns0:cell>Ye-Net [%]</ns0:cell><ns0:cell>Yedroudj-Net [%]</ns0:cell><ns0:cell>SR-Net [%]</ns0:cell><ns0:cell>Zhu-Net [%]</ns0:cell><ns0:cell>GBRAS-Net [%]</ns0:cell></ns0:row><ns0:row><ns0:cell>[0, 255]</ns0:cell><ns0:cell>79.7</ns0:cell><ns0:cell>82.6</ns0:cell><ns0:cell>81.6</ns0:cell><ns0:cell>82.8</ns0:cell><ns0:cell>84.9</ns0:cell><ns0:cell>87.1</ns0:cell></ns0:row><ns0:row><ns0:cell>[−12, 8]</ns0:cell><ns0:cell>50.8</ns0:cell><ns0:cell>75.9</ns0:cell><ns0:cell>52.2</ns0:cell><ns0:cell>50.4</ns0:cell><ns0:cell>78.9</ns0:cell><ns0:cell>83.4</ns0:cell></ns0:row><ns0:row><ns0:cell>[0, 1]</ns0:cell><ns0:cell>50.4</ns0:cell><ns0:cell>69.6</ns0:cell><ns0:cell>50.2</ns0:cell><ns0:cell>81.4</ns0:cell><ns0:cell>50.0</ns0:cell><ns0:cell>83.6</ns0:cell></ns0:row><ns0:row><ns0:cell>[−1, 1]</ns0:cell><ns0:cell>50.1</ns0:cell><ns0:cell>66.8</ns0:cell><ns0:cell>50.2</ns0:cell><ns0:cell>81.2</ns0:cell><ns0:cell>50.8</ns0:cell><ns0:cell>84.6</ns0:cell></ns0:row><ns0:row><ns0:cell>[−0.5, 0.5]</ns0:cell><ns0:cell>50.2</ns0:cell><ns0:cell>63.1</ns0:cell><ns0:cell>50.0</ns0:cell><ns0:cell>81.5</ns0:cell><ns0:cell>50.0</ns0:cell><ns0:cell>85.5</ns0:cell></ns0:row></ns0:table><ns0:note>Table4shows the best test accuracy result with a different image and filter normalization in the CNNs with WOW 0.4 bpp. Figure5, under the title 'SRM Filters Normalization' shows the accuracy curves for SR-Net, Zhu-Net, and GBRAS-Net CNNs with WOW 0.4 bpp and a different image and filter normalization.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>CNN input and best validation accuracy for Ye-Net with S-UNIWARD 0.4 bpp using BOSSBase 1.01.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='2'>Training image</ns0:cell><ns0:cell cols='3'>Validation image</ns0:cell><ns0:cell cols='2'>Best validation</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>distribution</ns0:cell><ns0:cell /><ns0:cell cols='2'>distribution</ns0:cell><ns0:cell>accuracy[%]</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Random</ns0:cell><ns0:cell /><ns0:cell cols='2'>Usual</ns0:cell><ns0:cell>83.4</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Random</ns0:cell><ns0:cell /><ns0:cell cols='2'>Random</ns0:cell><ns0:cell>83.9</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Order</ns0:cell><ns0:cell /><ns0:cell cols='2'>Order</ns0:cell><ns0:cell>84.1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Usual</ns0:cell><ns0:cell /><ns0:cell cols='2'>Usual</ns0:cell><ns0:cell>84.3</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Order</ns0:cell><ns0:cell /><ns0:cell cols='2'>Usual</ns0:cell><ns0:cell>84.1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Order</ns0:cell><ns0:cell /><ns0:cell cols='2'>Random</ns0:cell><ns0:cell>83.2</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Random</ns0:cell><ns0:cell /><ns0:cell cols='2'>Order</ns0:cell><ns0:cell>83.9</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Usual</ns0:cell><ns0:cell /><ns0:cell cols='2'>Random</ns0:cell><ns0:cell>83.8</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Usual</ns0:cell><ns0:cell /><ns0:cell cols='2'>Order</ns0:cell><ns0:cell>84.8</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='10'>Table 6. Different data partitions and best test accuracy for CNNs with S-UNIWARD 0.4 bpp</ns0:cell></ns0:row><ns0:row><ns0:cell cols='10'>using BOSSBase 1.01. Corresponding to train, validation and test, respectively: (A) 4,000, 1,000,</ns0:cell></ns0:row><ns0:row><ns0:cell cols='9'>5,000. (B) 2,500, 2,500, 5,000. (C) 4,000, 3,000, 3,000. 
And (D) 8,000, 1,000, 1,000.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>CNN</ns0:cell><ns0:cell>Distribution</ns0:cell><ns0:cell cols='3'>Accuracy on test [%] Best Mean SD</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell>Distribution</ns0:cell><ns0:cell cols='3'>Accuracy on test [%] Best Mean SD</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>A</ns0:cell><ns0:cell>79.7</ns0:cell><ns0:cell>79.0</ns0:cell><ns0:cell>0.51</ns0:cell><ns0:cell /><ns0:cell>A</ns0:cell><ns0:cell>77.0</ns0:cell><ns0:cell>76.5</ns0:cell><ns0:cell>0.32</ns0:cell></ns0:row><ns0:row><ns0:cell>Xu-Net</ns0:cell><ns0:cell>B C</ns0:cell><ns0:cell>78.4 79.6</ns0:cell><ns0:cell>77.3 79.4</ns0:cell><ns0:cell>1.01 0.17</ns0:cell><ns0:cell>SR-Net</ns0:cell><ns0:cell>B C</ns0:cell><ns0:cell>73.3 77.7</ns0:cell><ns0:cell>73.1 77.5</ns0:cell><ns0:cell>0.23 0.20</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>D</ns0:cell><ns0:cell>85.0</ns0:cell><ns0:cell>84.4</ns0:cell><ns0:cell>0.33</ns0:cell><ns0:cell /><ns0:cell>D</ns0:cell><ns0:cell>87.5</ns0:cell><ns0:cell>87.4</ns0:cell><ns0:cell>0.14</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>A</ns0:cell><ns0:cell>81.1</ns0:cell><ns0:cell>80.5</ns0:cell><ns0:cell>0.53</ns0:cell><ns0:cell /><ns0:cell>A</ns0:cell><ns0:cell>82.6</ns0:cell><ns0:cell>82.5</ns0:cell><ns0:cell>0.09</ns0:cell></ns0:row><ns0:row><ns0:cell>Ye-Net</ns0:cell><ns0:cell>B C</ns0:cell><ns0:cell>77.2 81.2</ns0:cell><ns0:cell>76.8 80.9</ns0:cell><ns0:cell>0.41 0.21</ns0:cell><ns0:cell>Zhu-Net</ns0:cell><ns0:cell>B C</ns0:cell><ns0:cell>81.2 81.2</ns0:cell><ns0:cell>80.5 80.7</ns0:cell><ns0:cell>0.35 0.34</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>D</ns0:cell><ns0:cell>86.8</ns0:cell><ns0:cell>86.0</ns0:cell><ns0:cell>0.63</ns0:cell><ns0:cell /><ns0:cell>D</ns0:cell><ns0:cell>86.9</ns0:cell><ns0:cell>86.7</ns0:cell><ns0:cell>0.13</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>A</ns0:cell><ns0:cell>81.8</ns0:cell><ns0:cell>81.1</ns0:cell><ns0:cell>0.47</ns0:cell><ns0:cell /><ns0:cell>A</ns0:cell><ns0:cell>82.8</ns0:cell><ns0:cell>82.1</ns0:cell><ns0:cell>0.59</ns0:cell></ns0:row><ns0:row><ns0:cell>Yedroudj-Net</ns0:cell><ns0:cell>B C</ns0:cell><ns0:cell>78.5 80.7</ns0:cell><ns0:cell>77.5 80.1</ns0:cell><ns0:cell>0.91 0.51</ns0:cell><ns0:cell>GBRAS-Net</ns0:cell><ns0:cell>B C</ns0:cell><ns0:cell>80.8 81.7</ns0:cell><ns0:cell>79.5 81.5</ns0:cell><ns0:cell>1.19 0.15</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>D</ns0:cell><ns0:cell>86.3</ns0:cell><ns0:cell>85.5</ns0:cell><ns0:cell>0.63</ns0:cell><ns0:cell /><ns0:cell>D</ns0:cell><ns0:cell>89.1</ns0:cell><ns0:cell>88.3</ns0:cell><ns0:cell>0.45</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Different data partitions and best test accuracy for CNNs with WOW 0.4 bpp using BOSSBase 1.01.</ns0:figDesc><ns0:table /><ns0:note>Another important experiment that we can present is what happens when the value that multiplies the preprocessing activation function is changed. For WOW, multiplying by 5, 8, 13, 21 the best results are: 82.9%, 83.0%, 82.3%, and 83.6% respectively. For S-UNIWARD, multiplying by 5, 8, 13, 21 the best results are: 85.0%, 84.3%, 85.1%, and 84.8% respectively.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Effect of the activation function on two steganographic algorithms (WOW and S-UNIWARD) using the Ye-Net architecture, trained on TPU with 200 epochs and a batch size of 64.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Activation function</ns0:cell><ns0:cell>WOW (Epoch) Accuracy</ns0:cell><ns0:cell>S-UNIWARD (Epoch) Accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell>3 × TanH</ns0:cell><ns0:cell>(119) 85.0</ns0:cell><ns0:cell>(196) 83.3</ns0:cell></ns0:row><ns0:row><ns0:cell>3 × HardSigmoid</ns0:cell><ns0:cell>(162) 86.0</ns0:cell><ns0:cell>(188) 81.8</ns0:cell></ns0:row><ns0:row><ns0:cell>3 × Sigmoid</ns0:cell><ns0:cell>(170) 85.5</ns0:cell><ns0:cell>(198) 81.8</ns0:cell></ns0:row><ns0:row><ns0:cell>3 × Softsign</ns0:cell><ns0:cell>(154) 85.5</ns0:cell><ns0:cell>(163) 81.2</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Accuracy report structure. Presenting the results in this manner allows one to understand how the model behaves in a specific experiment. With normalized SRMs, as shown in Table 4, the SR-Net CNN reaches an accuracy of up to 81.5%. However, as the normalization experiments show, GBRAS-Net is the architecture that best adapts to changes in data normalizations and distributions. We recommend making use of this new architecture.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='4'>Xu-Net: Train=8,000, Valid=1,000, Test=1,000</ns0:cell><ns0:cell cols='4'>GBRAS-Net: Train=8,000, Valid=1,000, Test=1,000</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>S-UNIWARD 0.4 bpp, best 5% accuracies</ns0:cell><ns0:cell cols='4'>S-UNIWARD 0.4 bpp, best 5% accuracies</ns0:cell></ns0:row><ns0:row><ns0:cell>Train</ns0:cell><ns0:cell>Epoch</ns0:cell><ns0:cell>Valid</ns0:cell><ns0:cell>Test</ns0:cell><ns0:cell>Train</ns0:cell><ns0:cell>Epoch</ns0:cell><ns0:cell>Valid</ns0:cell><ns0:cell>Test</ns0:cell></ns0:row><ns0:row><ns0:cell>83.4</ns0:cell><ns0:cell>149</ns0:cell><ns0:cell>77.6</ns0:cell><ns0:cell>83.3</ns0:cell><ns0:cell>88.4</ns0:cell><ns0:cell>99</ns0:cell><ns0:cell>82.6</ns0:cell><ns0:cell>87.3</ns0:cell></ns0:row><ns0:row><ns0:cell>83.1</ns0:cell><ns0:cell>144</ns0:cell><ns0:cell>77.0</ns0:cell><ns0:cell>85.0</ns0:cell><ns0:cell>88.1</ns0:cell><ns0:cell>90</ns0:cell><ns0:cell>81.2</ns0:cell><ns0:cell>86.5</ns0:cell></ns0:row><ns0:row><ns0:cell>82.9</ns0:cell><ns0:cell>145</ns0:cell><ns0:cell>77.4</ns0:cell><ns0:cell>83.5</ns0:cell><ns0:cell>87.9</ns0:cell><ns0:cell>96</ns0:cell><ns0:cell>81.6</ns0:cell><ns0:cell>86.8</ns0:cell></ns0:row><ns0:row><ns0:cell>82.7</ns0:cell><ns0:cell>147</ns0:cell><ns0:cell>76.7</ns0:cell><ns0:cell>83.4</ns0:cell><ns0:cell>87.9</ns0:cell><ns0:cell>86</ns0:cell><ns0:cell>80.8</ns0:cell><ns0:cell>85.8</ns0:cell></ns0:row><ns0:row><ns0:cell>82.7</ns0:cell><ns0:cell>148</ns0:cell><ns0:cell>77.3</ns0:cell><ns0:cell>83.1</ns0:cell><ns0:cell>87.8</ns0:cell><ns0:cell>87</ns0:cell><ns0:cell>83.0</ns0:cell><ns0:cell>88.1</ns0:cell></ns0:row><ns0:row><ns0:cell>82.9</ns0:cell><ns0:cell>mean</ns0:cell><ns0:cell>77.2</ns0:cell><ns0:cell>83.6</ns0:cell><ns0:cell>88.0</ns0:cell><ns0:cell>mean</ns0:cell><ns0:cell>81.8</ns0:cell><ns0:cell>86.9</ns0:cell></ns0:row><ns0:row><ns0:cell>0.32</ns0:cell><ns0:cell>standard deviation</ns0:cell><ns0:cell>0.35</ns0:cell><ns0:cell>0.77</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell>standard
deviation</ns0:cell><ns0:cell>0.94</ns0:cell><ns0:cell>0.85</ns0:cell></ns0:row><ns0:row><ns0:cell>Valid</ns0:cell><ns0:cell>Epoch</ns0:cell><ns0:cell>Test</ns0:cell><ns0:cell>Train</ns0:cell><ns0:cell>Valid</ns0:cell><ns0:cell>Epoch</ns0:cell><ns0:cell>Test</ns0:cell><ns0:cell>Train</ns0:cell></ns0:row><ns0:row><ns0:cell>77.6</ns0:cell><ns0:cell>143</ns0:cell><ns0:cell>83.9</ns0:cell><ns0:cell>81.6</ns0:cell><ns0:cell>83.0</ns0:cell><ns0:cell>87</ns0:cell><ns0:cell>88.1</ns0:cell><ns0:cell>87.8</ns0:cell></ns0:row><ns0:row><ns0:cell>77.6</ns0:cell><ns0:cell>149</ns0:cell><ns0:cell>83.3</ns0:cell><ns0:cell>83.4</ns0:cell><ns0:cell>82.9</ns0:cell><ns0:cell>94</ns0:cell><ns0:cell>87.9</ns0:cell><ns0:cell>87.6</ns0:cell></ns0:row><ns0:row><ns0:cell>77.4</ns0:cell><ns0:cell>145</ns0:cell><ns0:cell>83.5</ns0:cell><ns0:cell>82.9</ns0:cell><ns0:cell>82.9</ns0:cell><ns0:cell>71</ns0:cell><ns0:cell>88.3</ns0:cell><ns0:cell>86.5</ns0:cell></ns0:row><ns0:row><ns0:cell>77.4</ns0:cell><ns0:cell>150</ns0:cell><ns0:cell>84.1</ns0:cell><ns0:cell>82.2</ns0:cell><ns0:cell>82.8</ns0:cell><ns0:cell>77</ns0:cell><ns0:cell>88.2</ns0:cell><ns0:cell>85.8</ns0:cell></ns0:row><ns0:row><ns0:cell>77.3</ns0:cell><ns0:cell>130</ns0:cell><ns0:cell>84.0</ns0:cell><ns0:cell>82.3</ns0:cell><ns0:cell>82.6</ns0:cell><ns0:cell>99</ns0:cell><ns0:cell>87.3</ns0:cell><ns0:cell>88.4</ns0:cell></ns0:row><ns0:row><ns0:cell>77.5</ns0:cell><ns0:cell>mean</ns0:cell><ns0:cell>83.7</ns0:cell><ns0:cell>82.5</ns0:cell><ns0:cell>82.8</ns0:cell><ns0:cell>mean</ns0:cell><ns0:cell>87.9</ns0:cell><ns0:cell>87.2</ns0:cell></ns0:row><ns0:row><ns0:cell>0.12</ns0:cell><ns0:cell>standard deviation</ns0:cell><ns0:cell>0.36</ns0:cell><ns0:cell>0.70</ns0:cell><ns0:cell>0.15</ns0:cell><ns0:cell>standard deviation</ns0:cell><ns0:cell>0.42</ns0:cell><ns0:cell>1.06</ns0:cell></ns0:row><ns0:row><ns0:cell>Test</ns0:cell><ns0:cell>Epoch</ns0:cell><ns0:cell>Train</ns0:cell><ns0:cell>Valid</ns0:cell><ns0:cell>Test</ns0:cell><ns0:cell>Epoch</ns0:cell><ns0:cell>Train</ns0:cell><ns0:cell>Valid</ns0:cell></ns0:row><ns0:row><ns0:cell>85.0</ns0:cell><ns0:cell>144</ns0:cell><ns0:cell>83.1</ns0:cell><ns0:cell>77.0</ns0:cell><ns0:cell>89.1</ns0:cell><ns0:cell>84</ns0:cell><ns0:cell>87.2</ns0:cell><ns0:cell>82.5</ns0:cell></ns0:row><ns0:row><ns0:cell>84.4</ns0:cell><ns0:cell>141</ns0:cell><ns0:cell>82.7</ns0:cell><ns0:cell>76.7</ns0:cell><ns0:cell>88.3</ns0:cell><ns0:cell>71</ns0:cell><ns0:cell>86.5</ns0:cell><ns0:cell>82.9</ns0:cell></ns0:row><ns0:row><ns0:cell>84.4</ns0:cell><ns0:cell>131</ns0:cell><ns0:cell>82.2</ns0:cell><ns0:cell>77.3</ns0:cell><ns0:cell>88.2</ns0:cell><ns0:cell>77</ns0:cell><ns0:cell>85.8</ns0:cell><ns0:cell>82.8</ns0:cell></ns0:row><ns0:row><ns0:cell>84.3</ns0:cell><ns0:cell>133</ns0:cell><ns0:cell>81.0</ns0:cell><ns0:cell>76.3</ns0:cell><ns0:cell>88.1</ns0:cell><ns0:cell>87</ns0:cell><ns0:cell>87.8</ns0:cell><ns0:cell>83.0</ns0:cell></ns0:row><ns0:row><ns0:cell>84.2</ns0:cell><ns0:cell>140</ns0:cell><ns0:cell>82.2</ns0:cell><ns0:cell>77.1</ns0:cell><ns0:cell>87.9</ns0:cell><ns0:cell>76</ns0:cell><ns0:cell>82.7</ns0:cell><ns0:cell>82.2</ns0:cell></ns0:row><ns0:row><ns0:cell>84.4</ns0:cell><ns0:cell>mean</ns0:cell><ns0:cell>82.2</ns0:cell><ns0:cell>76.9</ns0:cell><ns0:cell>88.3</ns0:cell><ns0:cell>mean</ns0:cell><ns0:cell>86.0</ns0:cell><ns0:cell>82.7</ns0:cell></ns0:row><ns0:row><ns0:cell>0.33</ns0:cell><ns0:cell>standard deviation</ns0:cell><ns0:cell>0.79</ns0:cell><ns0:cell>0.39</ns0:cell><ns0:cell>0.45</ns0:cell><ns0:cell>standard 
deviation</ns0:cell><ns0:cell>1.98</ns0:cell><ns0:cell>0.34</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_12'><ns0:head /><ns0:label /><ns0:figDesc>To obtain these results, as the architectures are trained, a model is saved at each epoch. With these models, the accuracies are then obtained on the datasets, which makes it possible to know which models are the best. With this accuracy</ns0:figDesc><ns0:table /><ns0:note>reporting mode, when a specific experiment is presented, whoever reproduces it will see the range of results to expect. As shown here, the sensitivity of deep learning is significant in this problem, which can lead to a reproduced CNN not obtaining the same result as reported. With all the information shown in this work for spatial image steganalysis using deep learning, we propose a set of recommendations for the design of experiments, listed below:</ns0:note></ns0:figure>
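As an illustration of this per-epoch reporting mode, the following minimal Python sketch (our own addition, not code from the paper, with synthetic numbers standing in for the real per-epoch accuracies) ranks saved epochs by training accuracy, keeps the best 5%, and reports the mean and standard deviation, in the spirit of Table 9.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical (train, valid, test) accuracy per saved epoch, for 200 epochs.
acc = rng.uniform(0.75, 0.90, size=(200, 3))

k = max(1, int(0.05 * len(acc)))              # best 5% of epochs
best = acc[np.argsort(acc[:, 0])[::-1][:k]]   # ranked by training accuracy
print("mean (train, valid, test):", best.mean(axis=0))
print("std  (train, valid, test):", best.std(axis=0))
```

The same ranking can be repeated with the validation or test column as the sort key, which is how the three blocks of Table 9 are obtained.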
</ns0:body>
" | "May 18, 2021
Dr. Mamoun Alazab
Academic Editor, Peerj Computer Science
Dear Dr. Mamoun Alazab:
First of all, thank you for the valuable comments from the reviewers. We have followed the suggestions and made changes according to their advice. Below are our answers to the reviewers' comments:
Reviewer 1 (Anonymous)
Basic reporting
The language needs polishing and a full revision by a native English speaker is needed.
Thank you for your recommendation. We decided to send the paper for language review by a native speaker.
Experimental design
Methods described with sufficient detail
Thank you for your comment.
Validity of the findings
Conclusion should highlight the achievements. Here it is more or less similar to the Abstract.
Thank you for your recommendation. We added a paragraph at the end of the first part of the conclusions which highlights the major achievements of this paper.
Comments for the Author
The paper presents a sensitivity analysis of CNN in steganalysis ((Xu-Net, Ye-Net, Yedroudj-Net, SR-Net, Zhu-Net, and GBRAS-Net)) , and offers recommendations to take into account when performing experiments in image steganalysis in the spatial domain. It is a very important work, since it shows how CNN architectures can be affected by multiple factors, from CNNs in their internal composition to the effect of databases.
I suggest the authors bring the following points into their consideration to revise the manuscript accordingly.
1. Delete the details in the caption of Figure 1 as it is already explained in the text.
Thank you for your appreciation. We removed the caption from figure 1.
2. In Table 1. I recommend mentioning what the acronym EC (Ensemble classifier) refers to. Nowhere in the document it is mentioned.
Thank you for your appreciation. We changed the word 'EC' to 'Ensemble Classifier' in table 1.
3. In Table 7 it would be interesting to also show some results for 3xTanH using other values to multiply the function, like 5, 8, 13, 21 for example.
Thank you for your appreciation. These experiments are interesting, so we performed them for both WOW and S-UNIWARD, using the Ye-Net CNN with BOSSBase 1.01 data. We added text under “Activation function of the preprocessing stage” with these results. Lines 285-288
4. Show the difference in training times, when using GPU, and when using TPU. This experiment could be presented for GBRAS-Net for example. This would also show the importance of using TPU's.
Thank you for your appreciation. We added text after 'Hardware and resources' (lines 225-234) that shows the difference in training times when using a GPU and when using a TPU.
5. Check, because the text of the link effectively leads to the repository, it cannot be done directly by clicking (possibly due to the generation of the Review PDF because it includes the 353 in the link, but it is still good to verify). It is also appreciated that the repository has been included.
Thank you for your appreciation. We have corrected the error.
6. In the 'Discussion' section the expression epoch to epoch I do not consider correct, I propose to use each epoch
Thank you for your appreciation. We changed the word 'epoch to epoch' to 'each epoch'.
7. Conclusion should highlight the achievements. Here it is more or less similar to the Abstract.
Thank you for your recommendation. We added a paragraph at the end of the first part of the conclusions which highlights the major achievements of this paper.
8. Finally, citing the following will enrich the literature and be very useful for readers:
-Deep learning in steganography and steganalysis, 2020
-Digital Media Steganography:Principles, Algorithms, and Advances, Elsevier, 2020 , ISBN: 9780128194386,
-A Novel Image Steganography Method for Industrial Internet of Things Security, IEEE Transactions on Industrial Informatics, 2021
-An adaptive image steganography method based on histogram of oriented gradient and PVD-LSB techniques,IEEE Access
Thank you for your appreciation. We have added these references to the document because we noted that they were important in the context of this paper. Lines 32,36 and 44
Reviewer 2 (Anonymous)
Basic reporting
1. Discuss the main motivations of the current work.
Thank you for your appreciation. We add the main motivations for writing this paper in the penultimate paragraph of the introduction.
2. List out the main contributions of the current work.
Thank you for your recommendation. We have added a list of the main contributions at the end of the discussion.
3. Summarize the drawbacks of existing works in the form of a table.
4. Some of the recent works such as the following can be discussed in the paper: 'Image-Based malware classification using ensemble of CNN architectures (IMCEC), Deep learning and medical image processing for coronavirus (COVID-19) pandemic: A survey, Hand gesture classification using a novel CNN-crow search algorithm'.
Thank you for your appreciation. We consider this as future work and added it to the last paragraph of the discussion.
4. Resolution of the images has to be improved.
Thank you for your appreciation. The images were redistributed to improve the visualization; it is important to clarify that the vectorized images do not have problems when zooming in.
5. Compare the current work with recent state-of-the-art.
Thank you for your appreciation. We believe that it is difficult to compare with the state of the art because there is no article with a sensitivity analysis. Table 1 of the paper contains state-of-the-art networks for steganalysis to facilitate understanding of the paper's objective.
6. Discuss the limitations of the current work.
Thank you for your appreciation. We added text in the discussion section. Lines 399-404
7. Present the computational complexity of the current work.
Thanks for your recommendation. We created a subsection entitled 'Complexity of CNNs' within the 'MATERIALS AND METHODS' section. In this subsection we explain how to calculate the spatial and temporal complexity of a CNN, in addition to the calculations of the spatial and temporal complexity for the CNNs worked on in this paper. The results can be seen in Table 2
Experimental design
Satisfactory.
Thank you for your appreciation.
Validity of the findings
Satisfactory.
Thank you for your appreciation.
Reviewer 3 (Anonymous)
Basic reporting
no comment
Experimental design
no comment
Validity of the findings
no comment
Comments for the Author
In this manuscript, Reinel et al applied different CNN architectures to solve an important question faced by spatial image steganalysis, i.e., steganographic images accuracy. The authors first provided an excellent overview of the field and then the working principles behind the different algorithms. An impressive test accuracy was reached, demonstrating the promise of the GBRAS-Net method. Overall, the paper is well written, and I recommend its publication after the authors address the following comments:
1. What is the uncertainty of the CNN’s predictions? Specifically, what are the error bars in accuracies reported in Table 5 and Table 6. And what is the confidence band in accuracy curves (Fig. 4 and Fig. 5).
Thank you for your appreciation. We added text in ‘Activation maps for cover, stego, and steganographic noise images’ (lines 296-301), and we added Figure 12, which shows the ROC curves with Confidence Interval (CI) for the WOW steganography algorithm for GBRAS-Net. Regarding Tables 5 and 6, we added boxplots in the new Figures 6 and 7 for a better understanding by the reader.
2. Some numbers in Fig. 4 and Fig. 5 cannot be seen and the figure should be replaced.
Thank you for your appreciation. Figure 4 was split into two parts, and Figure 5 into three, in order to better display the content of the graphs. It is important to clarify that the vectorized images do not have problems when zooming in.
3. Although CNN architecture seems to be applied with care, it would be helpful to assign physical meanings to feature and model selections beyond simple black box. More discussions in this aspect will add confidence to the paper.
Thank you for your appreciation. We added text in the discussion section that allows a better understanding of steganographic content. Lines 362-366
" | Here is a paper. Please give your review comments after reading it. |
171 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Wireless networks face challenges in the efficient utilization of bandwidth due to the paucity of resources and lack of management, which may result in undesired congestion. The cognitive radio (CR) paradigm can bring efficiency, better utilization of bandwidth, and appropriate management of limited resources. While the CR paradigm is an attractive choice, CRs selfishly compete to acquire and utilize the available bandwidth, which may ultimately result in inappropriate power levels and cause degradation in the Quality of Service of the network. A cooperative game theoretic approach can ease the problem of spectrum sharing and power utilization in a hostile and selfish environment. We focus on the challenge of congestion control that results from inadequate and uncontrolled channel access and utilization of resources. The Nash equilibrium of a cooperative congestion game is examined by considering the cost basis, which is embedded in the utility function.</ns0:p><ns0:p>The proposed algorithm inhibits the utility, which leads to a decrease in the aggregate cost and to the maximization of the global function. Cost dominance is a pivotal agent for cooperation among CRs that results in efficient power allocation. The simulation results show a reduction in power utilization due to improved management in cognitive radio resource allocation.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>In this modern era of high-speed communications, users and designers face the challenge of efficient spectrum utilization, primarily due to its scarcity. In general, the usage of the wireless radio spectrum is governed by allocating licenses to the primary users (PUs). In many scenarios, the allocated wireless bands are not fully used, which provides an opportunity to further improve spectral utilization. While the cognitive radio paradigm may have eased the problem of spectrum utilization, its deployment brings forth certain critical issues in radio resource management. In cognitive radio networks, if certain conditions are met, the PUs can opportunistically share their allocated bandwidth with secondary users (SUs), or unlicensed users. When the secondary users sense the spectrum holes to transmit their information, a competition begins with the peers in the network to use the resources. This induces antagonism among the SUs, which sometimes results in a hostile environment that seriously hampers the efficient utilization of the spectrum. Since there is no central coordinator in cognitive radio networks, all SUs selfishly try to maximize their throughput. Assuming the restriction of each node having a single radio transceiver, according to Law et al. only one channel can be accessed by each SU in the network. As the nodes make decisions independently, every SU aims for the best possible channel, which motivates them to switch to lucrative channels. The SUs start to behave like sheep, which results in frequent switching of channels and causes rapid changes in SIR levels for all the users. The throughput of an SU depends upon the number of SUs sharing a spectrum with desired power levels, which ultimately results in congestion. Power control algorithms are very effective in controlling interference and throughput, but in the absence of a central authority this becomes challenging. In addition, successive waveform adaptation mechanisms in code division multiple access (CDMA) networks can be employed to maintain the SIR threshold, which results in better SIR levels and improved resource sharing. Cognitive radios, therefore, need to manage the benefits and costs of channel switching and spectrum sharing together, along with energy utilization. A cognitive radio network is a specialized technology that provides opportunities for spectrum sharing in competitive environments; it compels the users, in a game theoretic environment, to coordinate, which assists in mitigating the effects of conflicts in spectrum sharing. In order to improve the performance of a cognitive radio network, it is imperative to introduce mechanisms that alleviate the adverse effects caused by uncontrolled interference.</ns0:p><ns0:p>The focus of this paper is on cooperative spectrum sharing that considers the amount of interference each node faces due to neighboring nodes, and the cooperation problem is managed with a game theoretic approach. The cooperative congestion game proves to be a useful tool in resolving the adverse effects created by the selfish behavior of CRs within the network. The congestion game helps the CRs make better decisions for network stability. These decisions, when analyzed from a Nash equilibrium perspective, prove to be cost effective for the CRs.
A unique cost function is deduced based on the SINR and the channel switching cost. The inverse SINR game results in a better response of the CRs that converges quickly to a unique Nash equilibrium. The power levels are adjusted by keeping the derivative of the utility function at zero. The proposed inverse algorithm converges quickly and helps to accommodate a greater number of users within the network. This method is useful in mitigating congestion within the available bandwidth.</ns0:p><ns0:p>The proposed work can be applied to many practical systems that are based on cognitive radios. The application to cognitive radio wireless sensor networks (CR-WSNs) is interesting: the performance of power allocation and spectrum management can be enhanced by using the presented congestion game model. The proposed algorithm can also be used in emergency CR networks and public safety communications that use white space. Further applications include portable cognitive emergency networks, medical body area networks (MBANs), and vehicular networks.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1'>RELATED WORK</ns0:head><ns0:p>In pursuit of the best channel in the wireless spectrum, CRs are equipped with hardware that allows them to switch quickly between channels. The primary reason to seek and switch to another channel is to mitigate the effects of interference, as mentioned in Southwell et al. Several parameters contribute to the interference in a wireless network, including the transmission power and the choice of signature waveforms. In order to reduce interference, many adaptation algorithms are employed that use suitable waveforms on the channel chosen by an SU. According to Ulukus and Yates, the greedy interference avoidance (IA) algorithms adapt the waveform codes sequentially. An iterative methodology to manage orthogonal sequences is proposed in Anigstein and Anantharam. The MMSE algorithms depend upon stochastic receiver measurements; hence their convergence must be examined. Welch Bound Equality is achieved in Anigstein and Anantharam by distributed algorithms, but convergence to an optimal sequence set cannot be assured. The stability of the eigen-iterative interference avoidance (IA) technique under the addition and deletion of nodes in a CDMA system is discussed in Rose et al. However, a convergence-speed experiment is not performed for the greedy IA algorithms, and only a single receiver is assumed for the experiments, as multiple receivers showed unstable behavior. The IA techniques dealing with distributed signature sequences are further examined by Popescu and Rose, Sung and Leung, Popescu and Rose, and Ulukus and Yates, where adaptations for multiple CR receivers and asynchronous CDMA systems are discussed.</ns0:p><ns0:p>In wireless communication systems, power control is applied to compensate for fast fading and time-varying channel characteristics and to minimize battery power consumption, especially in CDMA systems. Most power control algorithms are focused on the QoS of CRs. However, the nature of CRs shows the dependency of the power allocation decision on the interference levels each CR receiver faces. Interference temperature as a critical decision maker within a cognitive radio network is introduced in Haykin. The evolutionary issue discussed is the trust factor of the other users that interfere with cognitive radios. The RAI selection process is proposed as a non-cooperative congestion game model in which users share a common set of resources. However, the practical implementation of the RAI policy is tricky, as the exact cost is not incurred by the mobile users to make their migration decisions; the users rely on measurement-based cost estimation. Moreover, simultaneous migrations proved to be damaging for the process, and a pure Nash equilibrium is not assured. A local congestion game is formulated in Xu et al. that is proved to be an exact potential game. A spatial best response dynamic (SBRD) is proposed to achieve a Nash equilibrium based on local information. The potential function reflects the collision levels within the network and can converge at any NE point, global or local. Thus, the NE may lead to sub-optimal network throughput, and the optimal NE point remains a challenging task to achieve. Different utility functions have been proposed for cognitive radio networks in recent research. A utility function based on the ratio between user throughput and transmission power is proposed in Saraydar et al., along with linear pricing terms. The QoS is analytically established as the utility in a non-cooperative power control game. The NE achieved is not efficient.
However, the NE that exists is not necessarily unique.</ns0:p><ns0:p>The proposed algorithm in Kim adapts its transmission power levels to the constant change in the network environment in order to control the co-channel interference, whereas its convergence in real-world scenarios is yet to be analyzed. A distributed power control through reinforcement learning is proposed in Zhou et al. that requires no information about channel interference and power strategies among users. The recent work on cognitive radio resource allocation is mostly based on non-cooperative game theoretic frameworks. A chaos-based game is formulated in Al Talabani et al., whose cost function depends on the power vector and SIR values. The chaotic variable trades off power against SIR. The power consumption of the proposed algorithm is less than that of traditional algorithms, at the expense of a 1-3 percent drift from the average SINR. However, the effect of interference on the primary user still needs to be studied.</ns0:p>
<ns0:div><ns0:p>Section 3 presents the system model and the proposed game model that leads to the Nash equilibrium. The proposed game algorithm is presented in Section 4. The simulation results are discussed in Section 5, along with a comparison with an exact potential game; the existence of the Nash equilibrium is also proved. Section 6 presents the conclusion and future perspectives of this work.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>CONGESTION GAMES</ns0:head><ns0:p>Congestion games are useful tools in game theory when it comes to resource sharing Rosenthal. Rosenthal proposed the congestion game model in 1973, and Monderer and Shapley later proved that every congestion game is isomorphic to an exact potential game. The payoff function of each user in a congestion game depends on the choice of resources it makes and the number of users sharing that resource. The payoff function of an exact potential game can be modeled as a cost or latency function in a congestion game. The cost function induces a negative effect of congestion. This effect dominates with an increase in the number of players sharing the same resource. Furthermore, by establishing a global function, a pure Nash equilibrium can be achieved. A congestion game can be defined as a tuple (I, R, (S_i)_{i∈I}, (U_r)_{r∈R}), where I = {1, 2, . . . , N} denotes the set of N players, R is the finite set of available resources, (S_i)_{i∈I} is the strategy set of each player i ∈ I such that S_i is a subset of R, and (U_r)_{r∈R} is the payoff (cost) function associated with the resources players opt for as their strategy. The payoff (cost) function depends on the total number of players sharing the same resource. In general, players in congestion games aim to maximize their payoff function, or equivalently to minimize the total cost, in order to achieve the Nash equilibrium.</ns0:p></ns0:div>
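The tuple definition above can be made concrete with a minimal Python sketch (our own illustration, not code from the paper; the toy cost function and all names are hypothetical). It enumerates a tiny congestion game, evaluates Rosenthal's exact potential, and checks which strategy profiles are pure Nash equilibria.

```python
import itertools

# A toy congestion game: each player picks one resource (channel); the cost
# of a resource depends only on how many players chose it (here: load^2).
players = [0, 1, 2]
resources = ["ch1", "ch2"]

def cost(load):
    return load ** 2  # per-player cost on a resource carrying 'load' users

def loads(profile):
    return {r: sum(1 for c in profile if c == r) for r in resources}

def player_cost(profile, i):
    return cost(loads(profile)[profile[i]])

def rosenthal_potential(profile):
    # Phi = sum over resources of sum_{k=1..load(r)} cost(k). A unilateral
    # deviation changes Phi by exactly the deviator's cost change, which is
    # why every congestion game is an exact potential game.
    return sum(sum(cost(k) for k in range(1, n + 1)) for n in loads(profile).values())

def is_pure_nash(profile):
    for i in players:
        for r in resources:
            if r != profile[i]:
                dev = list(profile)
                dev[i] = r
                if player_cost(tuple(dev), i) < player_cost(profile, i):
                    return False
    return True

for profile in itertools.product(resources, repeat=len(players)):
    print(profile, "potential:", rosenthal_potential(profile), "NE:", is_pure_nash(profile))
```

The game studied in this paper is richer (strategies include both channels and signature waveforms, and powers are continuous), but the same potential-game structure underlies its convergence arguments.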
<ns0:div><ns0:head n='2.1'>COOPERATIVE CONGESTION GAMES</ns0:head><ns0:p>Congestion games deal with both cooperative and non-cooperative players. In game theory, cooperative games focus on the joint actions that players take and the resultant collective payoff. The congestion externalities, or the cooperative factors, involved in the process may result in a non-optimal equilibrium. However, the equilibrium can be socially optimal, regardless of the fixed parameters affecting utilities, if the cost increases with an increasing number of players Milchtaich. Thus, the need for cooperation is evident for the optimal sharing of resources. In this work, the utility of each player decreases as the size of the set of players sharing the same resource increases. The players are heterogeneous, as they achieve different payoffs by opting for the same choice of resource.</ns0:p><ns0:p>In this paper, we propose that the socially optimal Nash equilibrium of a cooperative congestion game is achieved through the negative utility function, considered as the cost paid by each user. This leads to the ultimate minimization of the aggregate cost, interpreted as the maximization of the global function.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>THE SYSTEM MODEL</ns0:head><ns0:p>The CR network consists of multiple transmit and receive node pairs. The SINR of each node depends on the correlation of its waveform with the waveforms of other users sharing the same spectrum, their transmit power levels, and the spectrum characteristics. The waveforms of nodes are represented by signal space characteristics that show nearly orthogonal signal dimensions (either in frequency, time, or spreading waveforms) Anigstein and Anantharam. One of the important aspects that we consider in this paper is the reduction of the inverse SIR through efficient spectrum sharing based on the correlation between the waveforms of users sharing that spectrum, along with power optimization. Pseudo-random sequences are taken as they hold various properties of white noise with minimum auto- and cross-correlation. The base data pulses directly multiply with the pseudo-random sequences, and each resultant waveform pulse represents a chip. The resultant waveform signals are non-overlapping rectangular pulses of amplitude +1 and −1 Rappaport. Consider a network that consists of N cognitive radios, which are distributed randomly in the deployment area. K transmission frequency bands are available in the network, where K < N. The spectrum sharing of the CRs is modeled as a congestion game in normal form, G = (N, {S}, U_{i∈N}). The strategy space of the users is S = (S_1 × S_2 × .... × S_N). Here, S_{i∈N} is the strategy set of player i that consists of two subsets: ch_i = {ch_1, ch_2, ...., ch_K}, the set of available channels within the network, and S_i = {s_1, s_2, ...., s_N}, the set of signature sequences, ∀i ∈ {1, 2, ...., N}. The utility function of player i is expressed as U_{i∈N}, which also includes the cost function.</ns0:p></ns0:div>
<ns0:div><ns0:p>In the proposed game model, the inverse signal to interference ratio can be expressed as:</ns0:p><ns0:formula xml:id='formula_0'>\gamma_i = \frac{(s_i^H s_j)(s_j^H s_i)\, p_j h_{ji}^2}{p_i h_{ii}^2} \quad (1)</ns0:formula><ns0:p>where s_i and s_j are the signature sequences of the nodes and s_i^H and s_j^H are the transposes of these sequences. p_i is the transmit power of node i, which is adaptable at each iteration. The link gain between the nodes in the network is represented as h_{ij}. The gains remain constant, as the network topology is fixed for simplicity. The model can easily be applied to a dynamic network to make it more practical; the Random Waypoint mobility model is suitable for this purpose. If CRs compete for unlicensed bands, their fully cooperative behavior is considered. This cooperation helps them to maintain stable network conditions even in a dynamic network. The dynamic topology helps in motivating the CRs to cooperate to achieve better utility, as given by Wang et al. Hence, the proposed model can work efficiently under dynamic conditions; however, it takes a little while to reach the desired results.</ns0:p><ns0:p>The utility of a player comprises the benefit of minimum correlation with other players sharing the same channel and the cost of choosing that channel.</ns0:p><ns0:formula xml:id='formula_1'>U_i(s_i, s_{-i}) = B_i(s_i, s_{-i}) - C_i(s_i, s_{-i})<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where s_{-i} is the strategy set of all the players except player i, which can be denoted as:</ns0:p><ns0:formula xml:id='formula_2'>s_{-i} = (s_1 \times s_2 \times \dots \times s_{i-1} \times s_{i+1} \times \dots \times s_N)</ns0:formula><ns0:p>Here, B_i defines the benefit a user attains for a particular chosen strategy and C_i is the cost.</ns0:p></ns0:div>
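A short numerical sketch of Eq. (1) may help fix the notation (our own illustration under the model's assumptions; the chip sequences, powers, and gains below are hypothetical values, and real-valued signatures are assumed, consistent with the +1/−1 chips of the system model).

```python
import numpy as np

def inverse_sir(s_i, s_j, p_i, p_j, h_ii, h_ji):
    # Eq. (1): cross-correlation energy between the signatures, scaled by the
    # interferer's received power over the desired link's received power.
    # Lower values mean less interference for node i.
    rho = float(s_i @ s_j) ** 2  # (s_i^H s_j)(s_j^H s_i) for real chip vectors
    return rho * p_j * h_ji**2 / (p_i * h_ii**2)

# Hypothetical unit-norm chip sequences in a 3-dimensional signal space.
s_i = np.array([1.0, -1.0, 1.0]) / np.sqrt(3)
s_j = np.array([1.0, 1.0, -1.0]) / np.sqrt(3)
print(inverse_sir(s_i, s_j, p_i=0.5, p_j=0.5, h_ii=0.9, h_ji=0.3))
```

With orthogonal signatures the cross-correlation term vanishes and gamma_i goes to zero, which is exactly the direction the waveform adaptation of the next section pushes toward.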
<ns0:div><ns0:head n='4'>INVERSE UTILITY CONGESTION GAME</ns0:head><ns0:p>The purpose of the spectrum sharing congestion game is to reach a suitable utility level at which the network achieves Nash equilibria. The utility function might not be maximized, as the cost of spectrum sharing and of adapting to a new suitable channel is involved. The cost function of a player i is:</ns0:p><ns0:formula xml:id='formula_3'>C_i(S_i, S_{-i}) = b_p \gamma_i(s_i, s_{-i}) + c_s p_i \quad (3)</ns0:formula><ns0:p>where b_p is the battery power of the node i transmitter, \gamma_i(s_i, s_{-i}) is the inverse signal to interference ratio of player i at some particular channel, and c_s is the channel switching cost. The channel switching cost increases as the player keeps shifting from one channel to another in search of the optimal result. Hence, the cumulative switching cost can be defined as:</ns0:p><ns0:formula xml:id='formula_4'>C_{switch} = \begin{cases} +1, & \text{if } ch_{i,iter+1} \neq ch_{i,iter} \\ 0, & \text{otherwise} \end{cases}<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>The cost increases by a factor of 1 every time a player switches its strategy from one channel to another, but if the channel remains the same at the next iteration, the switching cost increment is 0. The benefit of choosing a channel and sharing it with other users is in terms of the minimum cross-correlation, that is,</ns0:p><ns0:formula xml:id='formula_5'>B_i(S_i, S_{-i}) = -\frac{s_i^H X s_i}{p_i} \quad (5)</ns0:formula><ns0:p>The sequential congestion game is played iteratively. The users make their choices after analyzing the interference faced at their particular channel and on the other channels. The interference faced by each user depends on the correlation and the transmit powers of the users sharing the same channel. The waveforms of users sharing the same channel are replaced by the eigenvector corresponding to the smallest eigenvalue of the correlation matrix X Menon et al.. The iterative game helps in reaching the minimum-correlation set of each player, which increases the benefit function by reducing the interference at each channel. The utility function of the proposed game model is derived after substituting (3) and (5) in (2) as:</ns0:p><ns0:formula xml:id='formula_6'>U_i(S_i, S_{-i}) = -\frac{s_i^H X s_i}{p_i} - \left( b_p \gamma_i(s_i, s_{-i}) + c_s p_i \right)<ns0:label>(6)</ns0:label></ns0:formula><ns0:formula xml:id='formula_7'>U_i(S_i, S_{-i}) = -\frac{s_i^H X s_i}{p_i} - b_p \frac{(s_i^H s_j)(s_j^H s_i)\, p_j h_{ji}^2}{p_i^2 h_{ii}^2} - c_s p_i \quad (7)</ns0:formula><ns0:p>The users occupy the channel where they receive the maximum utility, whereas the cost paid in terms of consumed power motivates the users to cooperate and opt for the band suitable for the network. The users then optimize their power according to the current channel conditions.</ns0:p><ns0:p>The game can be extended to simultaneous iterative play. However, the users then need to maintain a history chart in order to learn the behavior of the other players. The history of opponents is helpful in making better decisions in the next iteration.</ns0:p></ns0:div>
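The per-iteration logic described above can be illustrated with a compact Python sketch (our own addition, not code from the paper; numpy is assumed and all names are hypothetical). It evaluates the utility of Eq. (6) and performs the waveform replacement by the minimum-eigenvalue eigenvector of the co-channel correlation matrix X mentioned above.

```python
import numpy as np

def utility(s_i, X, gamma_i, p_i, b_p, c_s):
    # Eq. (6): benefit of Eq. (5) minus the cost of Eq. (3).
    benefit = -float(s_i @ X @ s_i) / p_i
    cost = b_p * gamma_i + c_s * p_i
    return benefit - cost

def adapted_waveform(X):
    # Replace the user's signature with the eigenvector of X associated with
    # the smallest eigenvalue, i.e. the least-interfered signal dimension.
    eigvals, eigvecs = np.linalg.eigh(X)  # eigenvalues in ascending order
    return eigvecs[:, 0]

def updated_switch_cost(c_s, ch_prev, ch_new):
    # Eq. (4): the cumulative switching cost grows by 1 per channel change.
    return c_s + 1 if ch_new != ch_prev else c_s
```

Here X would be built from the powers and signatures of the co-channel users, for example X = sum of p_j * outer(s_j, s_j) over the neighbors j sharing the channel; eigh is applicable because such an X is symmetric.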
<ns0:div><ns0:head n='4.1'>EFFECT OF COST ON UTILITY FUNCTION</ns0:head><ns0:p>The utility function of the proposed game model is specified in (2). As discussed above, each user measures its benefit and cost at each channel sequentially and then opts for the channel with the maximum utility. The utility function of each user keeps increasing as the game starts, since the CRs are selfish and opt for their best possible resource. After a certain number of iterations, the cost factor starts to dominate the players' benefit, since the channel switching cost increments after every single change. This cost dominance is considered an influencing factor for CRs to cooperate, resulting in a cooperative congestion game.</ns0:p><ns0:p>The dominance of the cost function forces the CR users to settle for suitable utilities. This leads to negative utility function values; the cost starts to increase and the benefit decreases. However, at a certain point of equilibrium, the players reach their best possible spectrum with minimum interference.</ns0:p><ns0:p>The negative resultant utility function models the inverse utility congestion game.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>INVERSE POWER CONTROL ALGORITHM</ns0:head><ns0:p>Most power-control algorithms are focused on cellular networks, where satisfying the QoS constraint is a stringent requirement. In CR networks, transmitters increase power to cope with channel impairments and increasing levels of interference in an inconsiderate and competitive manner. Within the spectrum sharing framework, a network strongly discourages the secondary users from transmitting with arbitrarily high power, which interferes with the QoS of the primary users. Hence, to achieve the target SINR, \sigma_i, the secondary user power is required to be kept low. In order to find the change in utility with respect to the power of a node, we take the derivative of (7):</ns0:p><ns0:formula xml:id='formula_8'>\frac{dU_i(s_i, s_{-i})}{dp_i} = \frac{s_i^H X s_i}{p_i^2} + \frac{2 b_p (s_i^H s_j)(s_j^H s_i)\, p_j h_{ji}^2}{p_i^3 h_{ii}^2} + \frac{c_s}{p_i^2} \quad (8)</ns0:formula><ns0:p>Putting the derivative equal to zero,</ns0:p><ns0:formula xml:id='formula_9'>\frac{dU_i(s_i, s_{-i})}{dp_i} = 0 \quad (9)</ns0:formula><ns0:formula xml:id='formula_10'>\frac{s_i^H X s_i}{p_i^2} + \frac{2 b_p (s_i^H s_j)(s_j^H s_i)\, p_j h_{ji}^2}{p_i^3 h_{ii}^2} + \frac{c_s}{p_i^2} = 0 \quad (10)</ns0:formula><ns0:formula xml:id='formula_11'>\frac{1}{p_i^2} \left( s_i^H X s_i + \frac{2 b_p (s_i^H s_j)(s_j^H s_i)\, p_j h_{ji}^2}{p_i h_{ii}^2} + c_s \right) = 0 \quad (11)</ns0:formula><ns0:formula xml:id='formula_12'>s_i^H X s_i + c_s = -\frac{2 b_p (s_i^H s_j)(s_j^H s_i)\, p_j h_{ji}^2}{p_i h_{ii}^2} \quad (12)</ns0:formula><ns0:formula xml:id='formula_13'>p_i h_{ii}^2 \left( s_i^H X s_i + c_s \right) = -2 b_p (s_i^H s_j)(s_j^H s_i)\, p_j h_{ji}^2 \quad (13)</ns0:formula><ns0:formula xml:id='formula_13b'>p_i = \frac{-2 b_p (s_i^H s_j)(s_j^H s_i)\, p_j h_{ji}^2}{h_{ii}^2 \left( s_i^H X s_i + c_s \right)} \quad (14)</ns0:formula></ns0:div>
<ns0:div><ns0:p>Assuming orthogonal signature sequences, s_i^H X s_i reaches 1, and the change in the total power p_i of a user i at time t is:</ns0:p><ns0:formula xml:id='formula_14'>p_i^{t+1} = \frac{-2 b_p\, p_j h_{ji}^2}{h_{ii}^2 (1 + c_s)}<ns0:label>(15)</ns0:label></ns0:formula><ns0:p>We know that SIR_i reaches its maximum at the minimum correlation between the sequences of the users. Ideally, the signature sequences are considered orthogonal to each other, which means (s_i^H s_j)(s_j^H s_i) = 1. Therefore,</ns0:p><ns0:formula xml:id='formula_15'>SIR_{i,max} = \sigma_i = \frac{p_i h_{ii}^2}{p_j h_{ji}^2} \quad (16)</ns0:formula><ns0:p>The updated power at each iteration can be calculated as:</ns0:p><ns0:formula xml:id='formula_17'>P_i^{t+1} = \frac{SIR_{i,max}}{SIR_i} P_i \quad (17)</ns0:formula><ns0:formula xml:id='formula_19'>P_i^{t+1} = \sigma_i \gamma_i P_i \quad (18)</ns0:formula><ns0:p>After substituting (1), (15) and (16) in (18), we get</ns0:p><ns0:formula xml:id='formula_21'>P_i^{t+1} = \frac{-2 b_p (s_i^H s_j)(s_j^H s_i)\, p_j h_{ji}^2}{h_{ii}^2 (1 + c_s)}<ns0:label>(19)</ns0:label></ns0:formula><ns0:p>Hence, the suitable power of each user at each iteration is calculated by P_i^{t+1}.</ns0:p></ns0:div>
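The iterative update of Eqs. (17)-(18) can be sketched in a few lines of Python (our own illustration, not the paper's code; the two-user gains, cross-correlation energy, and target SIR below are hypothetical values chosen only to show the mechanics).

```python
import numpy as np

def power_update(p_i, gamma_i, sigma_i):
    # Eqs. (17)-(18): scale the current power by target over achieved SIR;
    # since gamma_i is the *inverse* SIR, this equals sigma_i * gamma_i * p_i.
    return sigma_i * gamma_i * p_i

# Two-user illustration: h[i][j] is the gain from transmitter j to receiver i,
# rho the residual cross-correlation energy between the two signatures.
p = np.array([1.0, 1.0])
h = np.array([[0.9, 0.2], [0.25, 0.85]])
rho, sigma = 0.1, 2.0

for _ in range(30):
    gamma = np.array([rho * p[1] * h[0, 1]**2 / (p[0] * h[0, 0]**2),
                      rho * p[0] * h[1, 0]**2 / (p[1] * h[1, 1]**2)])
    p = power_update(p, gamma, sigma)

# With no noise floor (R_zz is ideally assumed 0 in this model), the powers
# scale down monotonically, matching the decreasing powers seen in Figure 5.
print(p)
```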
<ns0:div><ns0:head n='4.3'>GLOBAL FUNCTION</ns0:head><ns0:p>The utility of each user influences the strategy set s_{-i} of its opponents in the game. This influence is intended to minimize the interference within the network. The impact of the utility of each user is projected in the form of a global function. In the proposed game framework, the global function is the negative sum of the utilities of the users.</ns0:p><ns0:formula xml:id='formula_22'>P_s = -\sum_{i=1,\, i \neq j}^{N} \left[ \max_{s_i \in S} U_i(s_i, s_{-i}) \right]<ns0:label>(20)</ns0:label></ns0:formula><ns0:p>The convergence of the global function shows social stability in the CR network, where each user operates at its best possible channel without creating any hindrance to the transmissions of the other users, or even of the PU (in the case of underlay systems).</ns0:p></ns0:div>
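As a small companion to Eq. (20), the following sketch (our own addition; the tolerance and window are hypothetical choices) computes the global function from the users' best-response utilities and tests its convergence over recent iterations.

```python
def global_function(best_utilities):
    # Eq. (20): negative sum of the users' maximized utilities.
    return -sum(best_utilities)

def has_converged(history, tol=1e-6, window=5):
    # Declare social stability once P_s stops changing over a recent window.
    if len(history) < window:
        return False
    recent = history[-window:]
    return max(recent) - min(recent) < tol
```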
<ns0:div><ns0:head n='5'>SIMULATION RESULTS</ns0:head><ns0:p>The inverse utility congestion game is evaluated using a variable number of CRs. The network considered is of area 50 m x 50 m with 50 nodes, uniformly distributed to share K = 4 available channels. After iteratively playing the game, the utilities of the players with 3 signal space dimensions are shown in Figure <ns0:ref type='figure'>1</ns0:ref>. The utility of the nodes starts to increase as each node acts selfishly and tries to maximize its own payoff. This selfish behavior increases the cost of the nodes and ultimately leads to negative utility. Users keep changing their channels until they reach an optimum point of utility. The convergence of the channel allocation process is shown in Figure <ns0:ref type='figure'>2</ns0:ref>. It is observed that users select their suitable channels after a few iterations, but the signature sequences take time to converge: the correlation between waveforms reaches its minimum level with a delay. The change is so small that it hardly influences the channel decision made by each user; however, it ultimately converges to its optimal point, as shown in Figure <ns0:ref type='figure'>2</ns0:ref>. The inverse signal to interference ratio within the network is the inverse of the potential function, as can be seen in Figure <ns0:ref type='figure' target='#fig_8'>4</ns0:ref>. This reduction depicts the minimum interference within the network and the optimum power levels of each CR. Since the nodes agree to cooperate, the transmission power of each user starts reducing, as shown in Figure <ns0:ref type='figure' target='#fig_9'>5</ns0:ref>.</ns0:p><ns0:p>A quick comparison of convergence is shown in Table 1.</ns0:p></ns0:div>
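For readers who wish to reproduce the overall loop, a compact driver can be sketched as follows (our own simplified illustration, not the authors' simulation code: for brevity it keeps powers fixed and omits the b_p term, deferring the power step to the earlier power-control sketch; all parameters are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_channels, dim = 10, 4, 3
channel = rng.integers(0, n_channels, n_users)
S = rng.choice([-1.0, 1.0], size=(dim, n_users)) / np.sqrt(dim)
p = np.ones(n_users)
c_s = np.zeros(n_users)  # cumulative switching costs, Eq. (4)

for it in range(100):
    for i in range(n_users):
        utilities = []
        best_wave = []
        for ch in range(n_channels):
            others = [j for j in range(n_users) if j != i and channel[j] == ch]
            # Co-channel correlation matrix (tiny ridge keeps eigh well posed).
            X = sum(p[j] * np.outer(S[:, j], S[:, j]) for j in others) + 1e-9 * np.eye(dim)
            s = np.linalg.eigh(X)[1][:, 0]        # min-eigenvalue eigenvector
            best_wave.append(s)
            utilities.append(-float(s @ X @ s) / p[i] - c_s[i] * p[i])
        best = int(np.argmax(utilities))
        if best != channel[i]:
            c_s[i] += 1
            channel[i] = best
        S[:, i] = best_wave[best]
```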
<ns0:div><ns0:head n='5.1'>CONGESTION GAME ISOMORPHIC TO POTENTIAL GAME</ns0:head><ns0:p>As described above, a congestion game is isomorphic to a potential game. We observe the results of the inverse utility game without cost, which can be considered a potential game model, as shown in Figure <ns0:ref type='figure' target='#fig_12'>9</ns0:ref> and Figure <ns0:ref type='figure'>10</ns0:ref>.</ns0:p><ns0:p>Similarly, the inverse SIR of the network remains unstable for a long period of time, since the selfish nodes in the CR network do not consider the cost initially. Later, the nodes must bear the cost in terms of a destabilized network and excessive interference at each channel, which enforces them towards cooperation. This process results in system delay, and convergence is not always guaranteed. Figure <ns0:ref type='figure' target='#fig_12'>9</ns0:ref> shows the unstable trend of the inverse SIR until convergence is achieved.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.2'>EXISTENCE OF NASH EQUILIBRIUM</ns0:head><ns0:p>Players cannot improve further in terms of utility when they reach the optimum point, which is called the Nash equilibrium. The round-robin best response iteration, φ, is the convergence point and is known as the eigen-iteration. On the other hand, despite being a convergent strategy, the set of eigenvectors of R_rr (the set of vectors corresponding to the minimum eigenvalues of the correlation matrix) might not be considered the most advantageous tactic; this can be clarified through the following lemma presented in Hicks et al.</ns0:p></ns0:div>
<ns0:div><ns0:head>Lemma 1</ns0:head><ns0:p>Let s_i ∈ S ∀i ∈ N, and let φ be the best response dynamic. If s ∈ φ ∀i ∈ {1, 2, ..., N}, then s_i is an eigenvector of R_rr = s p s^T + R_zz.</ns0:p><ns0:p>Here, R_zz is the covariance matrix of the additive Gaussian noise, which is ideally assumed to be 0. Convergence to a NE under the best response dynamic requires the following conditions:</ns0:p><ns0:p>1. At every iteration, all of the strategy points s_i should be present in a compact set, s_i ⊂ S.</ns0:p><ns0:p>2. There is a continuous function P : S → R such that (a) if s_i ∈ S is not a NE, then P(S*) > P(S) for any S* ∈ φ(S); and (b) if s ∈ S, then either the algorithm terminates or, for any S* ∈ φ(S), P(S*) ≥ P(S).</ns0:p><ns0:p>3. The above conditions 2(a) and 2(b) are fulfilled by using the best response iteration of the function P(.).</ns0:p><ns0:p>Let φ be an upper semi-continuous correspondence on a compact space having a closed graph; then the third condition of the above theorem is fulfilled. Since all the conditions are fulfilled, it can be argued that under the best response dynamics the potential games as well as the congestion games converge to a NE. Therefore, it can be concluded from the discussion above that the proposed model of the inverse utility congestion game also shows evidence of best response convergence. Furthermore, due to this best response convergence, the QoS is assured at a better level and there is also an iterative decrease in interference, although at the cost of the maximum individual utility. Though the utilities of the users are the best possible ones and not the maximum, each CR user still gets a better payoff at each iteration than at the previous iterations, which in the end leads to an appropriate payoff for each user and becomes the convergent point of the game.</ns0:p><ns0:p>The utility function is a concave function of the strategies and has a unique maximum. The convergence to a NE is present at the maximum of the global function; as discussed above, it is bounded and continuously differentiable. Users get the best possible set of waveforms and appropriate sub-channels at the minimum level of the inverse signal to interference ratio function.</ns0:p><ns0:p>As far as the complexity of the algorithm is concerned, it is executed locally by every user; therefore, the process involves sensing and making decisions on channel selection, power allocation, and the choice of waveforms independently. This requires constant time at the user node to execute the algorithm, with a complexity of O(1).</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>CONCLUSION</ns0:head><ns0:p>In this paper, an inverse utility congestion game is proposed to mitigate interference and congestion in a CR network. The game model is designed to formulate the power optimization problem through a waveform adaptation process for the efficient utilization of the available resources. An intelligible algorithm is applied for efficient spectrum sharing of the users that possibly takes less time to reach the Nash equilibrium. The target SIR is achieved; however, a time lapse can be seen in achieving the nearly orthogonal waveform correlation matrix for convergence. The utility is not maximum but at a suitable level for each node.</ns0:p><ns0:p>In this paper, a comparison of the inverse utility congestion game with and without the cost effect is also analyzed. The Nash equilibrium is also achieved in the latter scenario, but the system experiences delay in convergence. Hence, the cost dominance is helpful in attaining cooperation among nodes. This cooperation leads to congestion mitigation in scarce bandwidth resources, which could be useful in multiple practical systems.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>CONCLUSION</ns0:head><ns0:p>In this paper, an Inverse Utility Congestion game is proposed to mitigate interference and congestion in CR network. The game model is designed to formulate the power optimization problem through waveform adaptation process for the efficient utilization of available resources. An intelligible algorithm is applied for efficient spectrum sharing of the users that possibly takes less time in reaching Nash equilibrium. The target SIR is achieved; however, the time lapse can be seen in the achieving nearly orthogonal waveform correlation matrix for the convergence. The utility is not maximum but at the suitable level for each node.</ns0:p><ns0:p>In this paper, a comparison of the inverse utility congestion game with and without cost effect is also analyzed. The Nash equilibrium is also achieved in later scenario, but the system experiences delay in convergence. Hence, the cost dominance is helpful in attaining cooperation among nodes. This cooperation leads to congestion mitigation in scarce bandwidth resources that could be useful in multiple practical systems.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>separable game-theoretic framework for distributed power and sequence control in CDMA systems is modeled in Sung et al. It is established that if the equilibrium of the sequence control sub game exists 2/14 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54661:1:2:NEW 19 Feb 2021) Manuscript to be reviewed Computer Science only then the equilibrium of the joint control game exists. Hence the convergence of joint control game is an open problem to be solved. The game theoretic analysis of wireless ad hoc networks is discussed in Srivastava et al., and potential games have attracted large audience due to the existence of at least one Nash Equilibrium. The congestion game model approach for resource allocation is discussed in Ibrahim et al. and Xu et al. The Radio Access Interfaces (RAI) selection process is proposed as a</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>A new SIR-based cost function for a game theoretic power control algorithm is proposed in Alpcan et al. The cost function depends logarithmically on the SIR and is linear in power. However, the unique Nash equilibrium exists only for a subset of the total number of active users. Another cost function, based on the weighted sum of linear power and the squared SIR error, is proposed in Koskie and Gajic. The static Nash equilibrium is achieved with low individual power levels by compromising on the SIR values. Similarly, in new findings, spectrum sharing is investigated in Xiao et al. for moving vehicles in heterogeneous vehicular networks (HVNs) consisting of macrocells and Roadside Units (RSUs) with Cognitive Radio (CR) technology. A non-cooperative game theoretical strategy selection algorithm is designed based on regret matching. The utility function depends on the number of RSUs within the network, the time of the vehicle's presence (taken constant), and the number of vehicles present within the range of an RSU. The focus of the investigation is on the correlated equilibrium, which includes the set of Nash equilibria. However, the study does not consider the handover issues that arise when a vehicle user moves across the coverage of RSUs, which may cause failure of the vehicle-to-RSU connection, especially in highly dynamic vehicular scenarios. A chance-constrained power control in a CR network is proposed where the channel gains are uncertain. The sum utility is maximized subject to outage probability constraints of the CRs and PUs. To achieve a convex problem, a protection function is formulated by the Bernstein approach in Zhao et al. Because of the uncertainty, the sum utility is reduced compared to other solutions. The design specifications of the optimal strategy space, including power, speed, and network information, are introduced in Mohammed et al. A non-cooperative game is formulated for this purpose. An energy efficient power control algorithm is proposed in Zhou et al., in which protection margins for the SINR and a time-varying interference threshold are introduced. However, channel gain disturbances can be seen in the simulation results. The intelligent reflecting surface (IRS) is a new technology applied in cognitive radios for interference minimization, presented in Zhou et al. An IRS is employed in the proposed system to assist the SUs in data transmission in the multiple-input multiple-output (MIMO) CR system. The performance gain can be increased by increasing the number of the IRS's phase shifts or by deploying IRSs at optimal locations. However, IRS technology needs an extensive study from the game theoretic perspective. The rest of the paper is organized as follows. The congestion games are briefly discussed in Section 2.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>The convergence of the global function of the network is shown in Figure 3. The equilibrium point at which users within the network coordinate with each other and the network becomes stable is the maximum of the global function. The stability ensures the existence of a unique Nash equilibrium. Here the noticeable trend is the switching of the convergence point twice, near the 100th iteration. From a game theoretic perspective, it could be the shifting of a local maximum to the global one, but this trend needs further study.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 1. Figure 2.</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 1. Utility of 50 CRs</ns0:figDesc><ns0:graphic coords='9,183.09,63.78,330.86,198.42' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 3.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Convergence of Global Function (P s ) with 50 Users</ns0:figDesc><ns0:graphic coords='10,183.09,63.78,330.87,198.43' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 4.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Inverse Signal to Interference Ratio (ISIR) with 50 Users</ns0:figDesc><ns0:graphic coords='10,183.09,298.21,330.86,198.42' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 5.</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Power of 50 CRs at each Iteration</ns0:figDesc><ns0:graphic coords='11,183.09,63.78,330.86,198.43' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 6.</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Convergence of Channel Allocation with 10 Users</ns0:figDesc><ns0:graphic coords='11,183.09,436.00,330.86,198.43' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 7. Figure 8.</ns0:head><ns0:label>78</ns0:label><ns0:figDesc>Figure 7. Power of CRs with 10 Users</ns0:figDesc><ns0:graphic coords='12,183.09,63.78,330.87,198.43' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 9.</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Convergence of CR Channel Allocation Process without Cost Factor</ns0:figDesc><ns0:graphic coords='13,183.09,63.78,330.86,198.43' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>Table 1 with a chaotic optimization method of power control given in Al Talabani et al. and a non-cooperative game theoretic approach in Heterogeneous Vehicular Networks (HVNs) with correlated equilibrium presented by Xiao et al. The proposed Inverse Power Control Algorithm (IPC) noticeably accommodates a greater number of users with minimum interference than other algorithms in the literature. This prevents the wastage of bandwidth of the licensed band when it is free for CR use. Although the power of each user slightly decreases as the cost is dominant, this helps in minimizing interference between users. Also, the proposed algorithm is simpler as compared to the current state-of-the-art algorithms. Figures 6, 7 and 8 show the</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>The strategy set $s_i \in S$ is considered as NE if</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Convergence of Algorithms</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>No. of Users Accommodated</ns0:cell><ns0:cell>Convergence</ns0:cell><ns0:cell>Convergence Unit</ns0:cell></ns0:row><ns0:row><ns0:cell>Chaotic Optimization of Power Control</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>50th Iteration</ns0:cell><ns0:cell>Power and Information rate</ns0:cell></ns0:row><ns0:row><ns0:cell>Non-Cooperative HVN Game</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>20th Iteration</ns0:cell><ns0:cell>Average Utility</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed IPC</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>18th Iteration</ns0:cell><ns0:cell>Utility, Power</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "
Department of Computer Sciences,
Quaid-i-Azam University,
Islamabad, Pakistan.
Tel: +92 3365420733
https://qau.edu.pk
ghazanfar@qau.edu.pk
31 January, 2021.
Dear Editor,
Cunhua Pan
Academic Editor, PeerJ Computer Science
Re: manuscript # CS-2020:10:54661:1:0:NEW
Title: “A Game Theoretic Power Control and Spectrum Sharing Approach Using Cost Dominance in Cognitive Radio Networks”
The authors would like to thank you for providing the opportunity to revise and improve this manuscript. We also thank the anonymous reviewers for their valuable feedback and suggestions, which greatly helped in improving the quality of the paper. We greatly appreciate the opportunity offered to us by the editor and the reviewers to clearly present the research contributions. We have examined all the general and specific comments provided by the reviewers and accordingly made the necessary changes in the revised paper to address their comments and suggestions. We hope that the revisions in the manuscript and our accompanying responses will be sufficient to make our manuscript suitable for publication in PeerJ.
Ghazanfar Farooq Siddiqui
Assistant Professor
Department of Computer Sciences
Quaid-i-Azam University, Islamabad.
On behalf of all authors.
Responses to the Reviewers’ Remarks
Reviewer: 1
Basic reporting
This paper has studied a game theoretic cost dominant approach for cognitive radio networks. In general, this paper is interesting and good but could be improved. Some typos should be corrected. For example, in line 218, there should be a space between 'benefit' and 'Wang. Line 266: 'Figure2'->'Figure 2'. The title and abstract may be rephrased. Related work section is very long and could be more concise.
Authors’ response: We are grateful to the Reviewer for careful reading of the original version of the paper and for the helpful comments/suggestions, which have significantly improved it. All the corrections are now incorporated in the revised manuscript.
in line 218, there should be a space between 'benefit' and 'Wang.
This typo is corrected (now line 210). We have also examined the entire manuscript and removed any remaining typos. This careful examination by the reviewer assisted us in greatly improving the quality of the manuscript.
Line 266: 'Figure2'->'Figure 2'.
We thank the reviewer for thoroughly examining the manuscript. The pointed-out typo is corrected in the revised manuscript as seen in line 258.
The title and abstract may be rephrased.
As suggested by the reviewer, the title of this manuscript is revised. We thank the reviewer for providing this valuable feedback on improving the title of this manuscript.
Revised title: “A Game Theoretic Power Control and Spectrum Sharing Approach Using Cost
Dominance in Cognitive Radio Networks”
The abstract is also revised in order to make it clear and understandable for the readers. This is indeed an important aspect of the paper that required an improvement and we are thankful to the reviewer for suggesting revision in this area.
The revised abstract: “The wireless networks face challenges in efficient utilization of bandwidth due to paucity of resources and lack of management, which may result in undesired congestion. The cognitive radio (CR) paradigm can bring efficiency, better utilization of bandwidth, and appropriate management of limited resources. While the CR paradigm is an attractive choice, the CRs selfishly compete to acquire and utilize available bandwidth that may ultimately result in inappropriate power levels, causing degradation in Quality of Service of the network. A cooperative game theoretic approach can ease the problem of spectrum sharing and power utilization in a hostile and selfish environment. We focus on the challenge of congestion control that results from inadequate and uncontrolled channel access and utilization of resources. The Nash Equilibrium of a cooperative congestion game is examined by considering the cost basis, which is embedded in the utility function. The proposed algorithm inhibits the utility, which leads to the decrease in aggregate cost and global function maximization. The cost dominance is a pivotal agent for cooperation in CRs that results in efficient power allocation. The simulation results show reduction in power utilization due to improved management in cognitive radio resource allocation.”
Related work section is very long and could be more concise.
The related work section is now more concise after revision. The revised related work presents the necessary background to support this manuscript.
Below, we provide further details of how we have considered each of the points raised by the Reviewer.
Experimental design
Experimental design is OK but could be improved. Some figure could be made clearer, for example, Fig. 2, 5, 6 are difficult to read.
Authors’ response:
In order to make the figures clearer, we have revised the graphs with a reduced number of iterations. The number of iterations is reduced from the original value of 200 to 120, since there is no activity beyond this range and it does not provide any useful information.
We thank the reviewer for this valuable feedback, which improved the clarity of the figures. The size of the figures in the entire manuscript is also adjusted for a clearer view and better understanding. In the revised manuscript, Fig. 6 is now Fig. 9.
Validity of the findings
The power control in cognitive radio has been studied extensively in the past. The authors should state clear what the contribution their paper has brought compared to the existing state-of-the-art. The authors are also suggested to compare their findings with the existing solutions in the literature
and show the improvement. Also, it is better you can analyze more on the convergence and complexity theoretically of the algorithms.
Authors’ response:
The main objective of this manuscript in terms of performance is the quick convergence time and reduced complexity, while keeping the decisions of the users independent. The improvement in convergence time can be seen in Table 1 as compared to other schemes. In Section 5, a comparison is presented with a chaotic optimization of power control algorithm (Table 1, Al Talabani et al.) and a non-cooperative game theoretic approach in HVNs with correlated equilibrium presented by Xiao et al.
The proposed IPC (Inverse Power Control Algorithm) noticeably accommodates a greater number of users with minimum interference than other algorithms in the literature. This prevents the wastage of bandwidth of the licensed band when it is free for CR use. Although the power of each user slightly decreases since the cost is dominant, this helps in minimizing interference between users and does not much influence their transmission. Also, the proposed algorithm is simpler as compared to the current state-of-the-art algorithms.
The algorithm is locally executed by every user; therefore, the process involves sensing and making decisions on channel selection, power allocation and choice of waveforms independently. This requires constant time at the user node to execute the algorithm, with a complexity of O(1). The analysis of convergence is presented in the added Section 5.2, ‘Existence of Nash Equilibrium.’
We are thankful to the reviewer for the valuable suggestions related to the comparison with the state-of-the-art prevailing algorithms.
Comments for the Author
It is better the authors can show how the proposed solution can be applied in the practical system. It seems just the theoretical analysis conducted in the current version.
We are thankful for this suggestion. We hope that the substantial revisions we have made based on your insightful comments and suggestions will improve the manuscript. The practical applications of the proposed algorithm are now presented in the last paragraph of the Introduction.
“The proposed work can be applied to many practical systems that are based on cognitive radios. The application to cognitive radio wireless sensor network (CR-WSN) is interesting. The performance in power allocation and spectrum management can be enhanced by using the presented congestion/power control game model. The proposed algorithm can also be used in emergency CR networks and public safety communications that use white space. Further applications include portable cognitive emergency network, medical body area networks (MBAN), and Vehicular networks.”
Reviewer 2:
Basic reporting
English grammar and sentence structure need to be rechecked. There are many irregularities in writing, such as:
1) The first sentence in abstract, 'In wireless networks, poor communication, and congestion results from an increase in demand of already scares bandwidth resources.'
2) The third sentence in abstract, 'While the CR paradigm is an attractive choice, the CRs selfishly compete to acquire and utilize available bandwidth that may ultimately results in power allocation,
causing degradation in Quality of Service of the network'
3) Page 5, line 34, 'In cognitive radio networks the PU'----> “In cognitive radio networks, the PU”, line 41, what is 'Law et al..'? The same problem appears on line 46.
4) Line 190, 'In this paper we propose...'-->'In this paper, we propose...'
Authors’ response: We are thankful to the reviewer for the valuable suggestions. The corrections have been made in the revised version of the paper. The revised abstract is as follows:
“The wireless networks face challenges in efficient utilization of bandwidth due to paucity of resources and lack of management, which may result in undesired congestion. The cognitive radio (CR) paradigm can bring efficiency, better utilization of bandwidth, and appropriate management of limited resources. While the CR paradigm is an attractive choice, the CRs selfishly compete to acquire and utilize available bandwidth that may ultimately result in inappropriate power levels, causing degradation in Quality of Service of the network. A cooperative game theoretic approach can ease the problem of spectrum sharing and power utilization in a hostile and selfish environment. We focus on the challenge of congestion control that results from inadequate and uncontrolled channel access and utilization of resources. The Nash Equilibrium of a cooperative congestion game is examined by considering the cost basis, which is embedded in the utility function. The proposed algorithm inhibits the utility, which leads to the decrease in aggregate cost and global function maximization. The cost dominance is a pivotal agent for cooperation in CRs that results in efficient power allocation. The simulation results show reduction in power utilization due to improved management in cognitive radio resource allocation.”
The typos are carefully examined and removed.
Experimental design
1) The research question is defined. The numerical results and relevant code are provided. However, more clearly and prominently statements should be given on how research is different from or outperforms the existing research works such as the papers 'A. Al Talabani, A. Nallanathan and H. X. Nguyen, 'A Novel Chaos Based Cost Function for Power Control of Cognitive Radio Networks,' in IEEE Communications Letters, vol. 19, no. 4, pp. 657-660, April 2015, doi: 10.1109/LCOMM.2014.2385068', 'Z. Xiao et al., 'Spectrum Resource Sharing in Heterogeneous Vehicular Networks: A Noncooperative Game-Theoretic Approach With Correlated Equilibrium,' in IEEE Transactions on Vehicular Technology, vol. 67, no. 10, pp. 9449-9458, Oct. 2018, doi: 10.1109/TVT.2018.2855683.
2) New technology is developed recently, for example the Intelligent Reflecting Surface (IRS), the CR throughput can be further improved with assistance of IRS. The authors should mention relevant research work as the latest research background, such as the paper 'Lei Zhang et al., Intelligent Reflecting Surface Aided MIMO Cognitive Radio Systems, in IEEE Transactions on Vehicular Technology, vol. 69, no. 10, pp. 11445-11457, Oct. 2020, doi: 10.1109/TVT.2020.3011308'.
Authors’ response: We are thankful for the suggested studies. We have added them at the appropriate places during the revision, as mentioned in the Related Work section:
“The recent work on cognitive radio resource allocation is mostly based on non-cooperative game theoretic frameworks. The chaos-based game is formulated in Al Talabani et al., whose cost function is dependent on the power vector and SIR values. The chaotic variable is a trade-off between power and SIR. The power consumption of the proposed algorithm is less than traditional algorithms at the expense of 1-3 percent drift from average SINR. However, the effect of interference on the primary user still needs to be studied.”
“Similarly, in new findings, spectrum sharing is investigated in Xiao et al. for moving vehicles in heterogeneous vehicular networks (HVNs) consisting of the macrocells and the Roadside Units (RSUs) with Cognitive Radio (CR) technology. The non-cooperative game theoretical strategy selection algorithm is designed based on regret matching. The utility function depends on the number of RSUs within the network, the time of the vehicle’s presence (taken as constant), and the number of vehicles present within the range of an RSU. The focus of the investigation is on the correlated equilibrium, which includes the set of Nash equilibria. However, the study does not consider the handover issues that arise when a vehicle user moves across the coverage of RSUs, which may cause failure of the vehicle-to-RSU connection, especially in highly dynamic vehicular scenarios.”
Also, “Intelligent reflecting surface is a new technology applied in cognitive radios for interference minimization, presented in Zhou et al… An IRS is employed in the proposed system to assist SUs for data transmission in the multiple-input multiple-output (MIMO) CR system. The performance gain can be increased by increasing the number of the IRS’s phase shifts or deploying them at optimal locations. However, IRS technology needs an extensive study from the game theoretic perspective.”
The comparison table and discussion have been provided in the results section.
As in Section 5: “A quick comparison in Table 1 is made with a chaotic optimization of power control given in Al Talabani et al. and a non-cooperative game theoretic approach in HVNs with correlated equilibrium presented by Xiao et al…
The proposed IPC (Inverse Power Control Algorithm) noticeably accommodates a greater number of users with minimum interference than other algorithms in the literature. This prevents the wastage of bandwidth of the licensed band when it is free for CR use. Although the power of each user slightly decreases since the cost is dominant, this helps in minimizing interference between users and does not much influence their transmission. Also, the proposed algorithm is simpler as compared to the current state-of-the-art algorithms.”
Comments for Author
A congestion game model is proposed in this paper to mitigate interference and congestion in CR network. However, some places that may confuse the reader require careful explanation by the author.
1) The result of the game is equivalent to minimize the inverse SIR, why not directly aim to maximize the SIR? It seems that they have a same Nash equilibrium.
Authors’ response: Thank you for giving us a chance to explain our work. The effect of minimizing the inverse SIR or maximizing the SIR is the same. In the proposed game model, the inverse SIR is used in formulating the utility function (power, waveform, and inverse SIR) and therefore must be minimized. The inverse SIR is also used to determine the cost in the utility function. It is discussed in the section ‘Cooperative Congestion Games’:
“In this paper we propose that socially optimal Nash equilibrium of cooperative congestion game is achieved by the negative utility function considered as the cost paid by each user. This leads to the ultimate minimization of aggregate cost interpreted as the global function maximization.”
Also, in section ‘Effect of Cost on Utility’: “The dominance of cost function enforces the CR users to reach for the suitable utilities instead of maximum. This leads to the negative utility function values; the cost starts to increase and benefit decreases. However, at a certain point of equilibrium the players reach their best possible spectrum with minimum interference. Negative resultant utility function models the inverse utility congestion game.”
2) In equations (6) and (7), the first terms after the equal signs, which is the benefit of minimum correlation with other players, should be positive (without '-') from the perspective of formula 3. Why do you add '-'?
Authors’ response: We are grateful for the thorough study of our paper and the suggestions. Equation 3 is the cost a cognitive radio pays for the particular strategy it chooses, whereas equations 6 and 7 can be seen from the perspective of equation 5, which shows the benefit of the user. The first term of both equations 6 and 7 is given in equation 5.
3) Please explain the unstable state when the number of iterations is 100 in Figure 2.
Authors’ response: The unique Nash Equilibrium is achieved at the 100th iteration as explained in the results section: “It is observed that users select their suitable channels after a few iterations, but the signature sequences take time to converge. The correlation between waveforms reaches its minimum level with delay, since the change is so small that it hardly influences the channel decision made by each user. However, it ultimately converges to its optimal point as shown in Figure 2.”
Also, it is added here, “The stability ensures the existence of a unique Nash Equilibrium. Here the noticeable trend is the switching of convergence point from 65th iteration to 100th iteration. In game theoretic perspective it could be the shifting of local maxima to global one. But trends need further study.”
4) The first occurrence of mathematical symbols should be explained, such as 'B_i()' and 'C_i()' in equation (2).
Authors’ response: We are thankful for your suggestion. The changes have been made in the revised manuscript.
Please accept our sincere gratitude for your thorough review and invaluable time.
THANK YOU AGAIN FOR YOUR INVALUABLE TIME AND INPUT!
" | Here is a paper. Please give your review comments after reading it. |
172 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Wireless networks face challenges in efficient utilization of bandwidth due to paucity of resources and lack of central management, which may result in undesired congestion. The cognitive radio (CR) paradigm can bring efficiency, better utilization of bandwidth, and appropriate management of limited resources. While the CR paradigm is an attractive choice, the CRs selfishly compete to acquire and utilize available bandwidth, which may ultimately result in inappropriate power levels, causing degradation in the network's Quality of Service (QoS). A cooperative game theoretic approach can ease the problem of spectrum sharing and power utilization in a hostile and selfish environment. We focus on the challenge of congestion control that results from inadequate and uncontrolled channel access and utilization of resources. The Nash Equilibrium (NE) of a cooperative congestion game is examined by considering the cost basis, which is embedded in the utility function. The proposed algorithm inhibits the utility, which leads to the decrease in aggregate cost and global function maximization. The cost dominance is a pivotal agent for cooperation in CRs that results in efficient power allocation. Simulation results show reduction in power utilization due to improved management in cognitive radio resource allocation.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>In this modern era of high-speed communications, users and designers face the challenge of efficient spectrum utilization, primarily due to its scarcity. In general, the usage of the wireless radio spectrum is governed by allocating licenses to the primary users (PUs). In many scenarios, the allocated wireless bands are not fully used, which provides an opportunity to further improve spectral utilization. While the cognitive radio paradigm may have eased the problem of spectrum utilization, its deployment brings forth certain critical issues in radio resource management. In cognitive radio networks, if certain conditions are met, the PUs can opportunistically share their allocated bandwidth with secondary users (SUs) or unlicensed users. When the secondary users sense spectrum holes to transmit their information, competition with peers in the network begins for the use of the resources. This induces antagonism among the SUs, which sometimes results in a hostile environment that seriously hampers the efficient utilization of the spectrum. Since there is no central coordinator in cognitive radio networks, all SUs selfishly try to maximize their throughput. Assuming the restriction of each node having a single radio transceiver, according to <ns0:ref type='bibr' target='#b9'>Law et al. (2012)</ns0:ref> only one channel can be accessed by each SU in the network. As the nodes make decisions independently, every SU aims for the best possible channel, which motivates them to switch to lucrative channels. The SUs start to behave like sheep, which results in frequent switching of channels and causes rapid changes in signal to interference and noise ratio (SINR) levels for all the users. The throughput of a SU depends upon the number of SUs sharing the spectrum and their power levels, which ultimately results in congestion. Power control algorithms are very effective in controlling interference and throughput, but in the absence of a central authority this becomes challenging.</ns0:p><ns0:p>In addition, successive waveform adaptation mechanisms in a code division multiple access (CDMA) network can be employed to maintain the signal to interference ratio (SIR) threshold, which results in better SIR levels and improved resource sharing. Cognitive radios, therefore, need to collectively manage both the benefits and costs of channel switching and spectrum sharing along with energy utilization. A cognitive radio network is a specialized technology that provides opportunities for spectrum sharing in competitive environments; a game theoretic setting compels the users to coordinate, which assists in mitigating the effects of conflicts in spectrum sharing. In order to improve the performance of a cognitive radio network, it is imperative to introduce mechanisms that alleviate the adverse effects caused by uncontrolled interference.</ns0:p><ns0:p>The focus of this paper is on cooperative spectrum sharing that considers the amount of interference each node faces due to neighboring nodes, while the cooperation problem is managed with a game theoretic approach. The cooperative congestion game proves to be a useful tool in resolving the adverse effects created by the selfish behavior of CRs within the network. The congestion game helps the CRs to make better decisions for network stability. These decisions, when analyzed from a Nash Equilibrium perspective, prove to be cost-effective for CRs.
A unique cost function is deduced based on SIR and channel switching cost. The inverse signal to interference ratio game results in a better response of the CRs, which converge quickly to a unique Nash Equilibrium. The power levels are adjusted by setting the derivative of the utility function to zero. The proposed inverse algorithm converges quickly and helps to accommodate a greater number of users within the network. This method is useful in mitigating congestion within the available bandwidth.</ns0:p><ns0:p>The proposed work can be applied to many practical systems that are based on cognitive radios. The application to a cognitive radio wireless sensor network (CR-WSN) is interesting. The performance in power allocation and spectrum management can be enhanced by using the presented congestion game model. The proposed algorithm can also be used in emergency CR networks and public safety communications that use white space. Further applications include portable cognitive emergency networks, medical body area networks (MBAN), and vehicular networks in reference to <ns0:ref type='bibr' target='#b28'>Xiao et al. (2018)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>In the pursuit of acquiring the best channel in the wireless spectrum, the CRs are geared with hardware that assists in quickly switching between the channels. The primary reason to seek and switch to another channel is to mitigate the effects of interference, as mentioned in <ns0:ref type='bibr' target='#b21'>Southwell et al. (2012)</ns0:ref>. Several parameters contribute to the interference in a wireless network, including transmission power and the choice of signature waveforms. In order to reduce interference, many adaptation algorithms are employed that use suitable waveforms on the channel opted for by a SU. According to <ns0:ref type='bibr' target='#b25'>Ulukus and Yates (2001a)</ns0:ref>, the greedy interference avoidance (IA) algorithms adapt the waveform codes sequentially. An iterative methodology to manage orthogonal sequences is proposed in <ns0:ref type='bibr' target='#b2'>Anigstein and Anantharam (2003)</ns0:ref>. The minimum mean squared error (MMSE) algorithms depend upon stochastic receiver measurements; hence, the convergence must be examined. Welch Bound Equality is achieved in <ns0:ref type='bibr' target='#b2'>Anigstein and Anantharam (2003)</ns0:ref> by distributed algorithms, but convergence to the optimal sequence set cannot be assured. The stability of the eigen-iterative IA technique under the addition and deletion of nodes in a CDMA system is discussed in <ns0:ref type='bibr' target='#b18'>Rose et al. (2002)</ns0:ref>. However, the convergence-speed experiment is not performed for greedy IA algorithms. Only a single receiver is assumed for the experiments, as multiple receivers showed unstable behavior. The IA techniques, dealing with the distributed signature sequences, are further examined and discussed by <ns0:ref type='bibr' target='#b15'>Popescu and Rose (2003)</ns0:ref>, <ns0:ref type='bibr' target='#b23'>Sung and Leung (2003)</ns0:ref>, <ns0:ref type='bibr' target='#b16'>Popescu and Rose (2004)</ns0:ref>, and <ns0:ref type='bibr' target='#b26'>Ulukus and Yates (2001b)</ns0:ref> for multiple CR receivers and adaptations in asynchronous CDMA systems.</ns0:p><ns0:p>In wireless communication systems, power control is applied to compensate for fast fading and time-varying channel characteristics, and to minimize battery power consumption, especially in CDMA systems.</ns0:p><ns0:p>Most of the power control algorithms are focused on the QoS of CRs. However, the nature of CRs shows the dependency of the power allocation decision on the interference levels each CR receiver faces. Interference temperature as a critical decision maker within a cognitive radio network is introduced in <ns0:ref type='bibr' target='#b4'>Haykin (2005)</ns0:ref>.</ns0:p><ns0:p>The evolutionary issue discussed is the trust factor of other users that interfere with cognitive radios within the network. The separable game-theoretic framework for distributed power and sequence control in
CDMA systems is modeled in <ns0:ref type='bibr' target='#b24'>Sung et al. (2006)</ns0:ref>. It is established that only if the equilibrium of the sequence control sub-game exists does the equilibrium of the joint control game exist. Hence, the convergence of the joint control game is an open problem to be solved. The game theoretic analysis of wireless ad hoc networks is discussed in <ns0:ref type='bibr' target='#b22'>Srivastava et al. (2005)</ns0:ref>, and potential games have attracted a large audience due to the existence of at least one Nash Equilibrium. The congestion game model approach for resource allocation is discussed in <ns0:ref type='bibr' target='#b6'>Ibrahim et al. (2010)</ns0:ref> and <ns0:ref type='bibr' target='#b29'>Xu et al. (2012)</ns0:ref>. The radio access interfaces (RAI) selection process is proposed as a non-cooperative congestion game model, where users share a common set of resources. However, the practical implementation of the RAI policy is tricky, as the exact cost to the mobile users is not available when they make their migration decisions; the users rely on measurement-based cost estimation. Similarly, simultaneous migrations proved to be damaging for the process, and a Pure Nash Equilibrium is not assured. A local congestion game is formulated in <ns0:ref type='bibr' target='#b29'>Xu et al. (2012)</ns0:ref> that is proved to be an exact potential game. Spatial best response dynamic (SBRD) is proposed to achieve Nash Equilibrium based on local information. The potential function reflects the collision levels within the network and can converge at any Nash Equilibrium point, global or local. Thus, the Nash Equilibrium may lead to sub-optimal network throughput, and the optimal Nash Equilibrium point remains challenging to achieve. Different utility functions have been proposed in cognitive radio networks in recent research. The utility function based on the ratio between user throughput and transmission power is proposed in <ns0:ref type='bibr' target='#b20'>Saraydar et al. (2002)</ns0:ref>, along with linear pricing terms. The QoS is analytically treated as the utility in a non-cooperative power control game. The Nash Equilibrium achieved is not efficient; moreover, the Nash Equilibrium is not necessarily unique.</ns0:p><ns0:p>The proposed algorithm in <ns0:ref type='bibr' target='#b7'>Kim (2011)</ns0:ref> adapts its transmission power levels to the constantly changing network environment to control the co-channel interference, whereas convergence in real-world scenarios is yet to be analyzed. A distributed power control through reinforcement learning is proposed in <ns0:ref type='bibr' target='#b32'>Zhou et al. (2011)</ns0:ref> that requires no information about channel interference and the power strategies of other users.</ns0:p><ns0:p>The recent work on cognitive radio resource allocation is mostly based on non-cooperative game theoretic frameworks. The chaos-based game is formulated in Al <ns0:ref type='bibr' target='#b0'>Talabani et al. (2014)</ns0:ref>, whose cost function is dependent on the power vector and SINR values. The chaotic variable is a trade-off between power and SINR.</ns0:p><ns0:p>The power consumption of the proposed algorithm is less than traditional algorithms at the expense of a 1-3 percent drift from the average SINR.
However, the effect of interference on the primary user still needs to be studied.</ns0:p><ns0:p>A new SIR-based cost function for a game theoretic power control algorithm is proposed in <ns0:ref type='bibr' target='#b1'>Alpcan et al. (2002)</ns0:ref>. The cost function depends on the SIR logarithmically and is linear in power. However, the unique Nash Equilibrium exists only for a subset of the total number of active users.</ns0:p><ns0:p>Another cost function based on a weighted sum of linear power and SIR squared error is proposed in <ns0:ref type='bibr' target='#b8'>Koskie and Gajic (2005)</ns0:ref>. The static Nash Equilibrium is achieved with low individual power levels by compromising on SIR values. Similarly, in new findings, spectrum sharing is investigated in <ns0:ref type='bibr' target='#b28'>Xiao et al. (2018)</ns0:ref> for moving vehicles in heterogeneous vehicular networks (HVNs) consisting of the macrocells and the roadside units (RSUs) with cognitive radio (CR) technology. The non-cooperative game theoretical strategy selection algorithm is designed based on regret matching. The utility function depends on the number of RSUs within the network, the time of the vehicle's presence (taken as constant), and the number of vehicles present within the range of an RSU. The focus of the investigation is on the correlated equilibrium, which includes the set of Nash Equilibria. However, the study does not consider the handover issues that arise when a vehicle user moves across the coverage of RSUs, which may cause failure of the vehicle-to-RSU connection, especially in highly dynamic vehicular scenarios.</ns0:p><ns0:p>A chance-constraint power control in a CR network is proposed by the Bernstein approach in <ns0:ref type='bibr' target='#b30'>Zhao et al. (2019)</ns0:ref>, where channel gains are uncertain. An attempt is made to maximize the sum utility with outage probability constraints of CRs and PUs. In order to achieve a convex problem, a protection function is formulated. Because of the uncertainty, the sum utility is reduced as compared to other solutions.</ns0:p><ns0:p>The design specifications of an optimal strategy space, including power, speed, and network information, are introduced in <ns0:ref type='bibr' target='#b13'>Mohammed et al. (2019)</ns0:ref>. A non-cooperative game is formulated for this purpose. An energy efficient power control algorithm is proposed in <ns0:ref type='bibr' target='#b31'>Zhou et al. (2019)</ns0:ref>, in which protection margins for SINR and a time-varying interference threshold are introduced. However, channel gain disturbances can be seen in the simulation results. Intelligent reflecting surface (IRS) is a new technology applied in cognitive radios for interference minimization, presented in <ns0:ref type='bibr' target='#b31'>Zhou et al. (2019)</ns0:ref>. An IRS is employed in the proposed system to assist SUs for data transmission in the multiple-input multiple-output (MIMO) CR system.</ns0:p>
<ns0:p>The performance gain can be increased by increasing the number of the IRS's phase shifts or deploying IRSs at optimal locations. However, IRS technology needs an extensive study from the game theoretic perspective.</ns0:p><ns0:p>The congestion games are briefly discussed in the next section. The rest of the paper consists of the system model and the proposed game model that leads to the Nash Equilibrium. The proposed game algorithm and simulation results are discussed in the following sections, along with the comparison to an exact potential game. The existence of the Nash Equilibrium is also proved. Conclusion and future perspectives of this work are also presented.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONGESTION GAMES</ns0:head><ns0:p>The congestion game is a useful tool in game theory when it comes to resource sharing. <ns0:ref type='bibr' target='#b19'>Rosenthal (1973)</ns0:ref> proposed the congestion game model in game theory for the first time, which was then followed by <ns0:ref type='bibr' target='#b14'>Monderer and Shapley (1996)</ns0:ref>. Monderer and Shapley proved that every congestion game is isomorphic to an exact potential game. The payoff function of each user in a congestion game depends on the choice of resources it makes and the number of users sharing that resource. The payoff function of an exact potential game can be modeled as a cost or latency function in a congestion game. The cost function associates a negative effect with congestion. This effect dominates with an increase in the number of players sharing the same resource. Furthermore, by establishing a global function, a pure Nash Equilibrium can be achieved.</ns0:p><ns0:p>Congestion games can be defined as a tuple $(I, R, (S_i)_{i \in I}, (U_r)_{r \in R})$, where $I = \{1, 2, \ldots, N\}$ denotes the set of $N$ players, $R$ is the finite set of available resources, $(S_i)_{i \in I}$ is the strategy set of each player $i \in I$ such that $S_i$ is a subset of $R$, and $(U_r)_{r \in R}$ is the payoff function associated with the resources players opt for as their strategies. The payoff function depends on the total number of players sharing the same resource. In general, players in congestion games aim to maximize their payoff function or minimize the total cost to achieve the Nash Equilibrium.</ns0:p></ns0:div>
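To make the tuple definition above concrete, the following minimal sketch (the resource names, payoff shape, and player count are illustrative assumptions, not the paper's model) implements a load-dependent payoff and best-response dynamics, which in a congestion game converge to a pure Nash Equilibrium:

```python
import random

# Minimal congestion game sketch: N players each pick one resource from R;
# a player's payoff decreases with the number of players on the same resource.
N, R = 6, ["r1", "r2", "r3"]                  # hypothetical players and resources
choices = {i: random.choice(R) for i in range(N)}

def payoff(player, choices):
    # Load = how many players (including this one) share the chosen resource.
    load = sum(1 for c in choices.values() if c == choices[player])
    return 1.0 / load                          # assumed decreasing payoff in load

# Best-response dynamics: each player moves to the resource that maximizes its
# payoff given the others' choices; Rosenthal's result guarantees convergence.
for _ in range(20):
    for i in range(N):
        choices[i] = max(R, key=lambda r: payoff(i, {**choices, i: r}))
print(choices)   # a balanced (equilibrium) assignment of players to resources
```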
<ns0:div><ns0:head>Cooperative Congestion Games</ns0:head><ns0:p>Congestion games deal with both the cooperative and non-cooperative players. In game theory, the cooperative games focus on the joint actions that players make and the resultant collective payoffs.</ns0:p><ns0:p>The congestion externalities or the cooperative factors are involved in the process that may result in non-optimal equilibrium. However, according to <ns0:ref type='bibr' target='#b12'>Milchtaich (2004)</ns0:ref> the equilibrium could be socially optimal, regardless of the fixed parameters affecting utilities, if the cost increases with increasing number of players. Thus, the need for cooperation is evident for optimal sharing of resources. In this work, the utility of each player decreases as the size of players set sharing the same resource increases. The players are heterogeneous as they achieve different payoffs by opting for the same choice of resource.</ns0:p><ns0:p>In this paper, we propose that socially optimal Nash Equilibrium of cooperative congestion game is achieved by the negative utility function considered as the cost paid by each user. This leads to the ultimate minimization of aggregate cost interpreted as the global function maximization.</ns0:p></ns0:div>
<ns0:div><ns0:head>THE SYSTEM MODEL</ns0:head><ns0:p>The CR network consists of multiple transmit and receive node pairs. The SINR of each node in a CDMA system depends on the correlation with the waveforms of other users sharing the same spectrum, their transmit power levels, and the spectrum characteristics. Waveforms of nodes are represented by the signal space characteristics that show nearly orthogonal signal dimensions (either in frequency, time, or spreading waveforms), as mentioned by <ns0:ref type='bibr' target='#b2'>Anigstein and Anantharam (2003)</ns0:ref>. One of the important aspects that we consider in this paper is the reduction in the inverse signal to interference ratio (ISIR) by using efficient spectrum sharing based on the correlation between the waveforms of users sharing that spectrum, along with power optimization. Pseudo random sequences are taken as they hold various properties of white noise with minimum auto- and cross-correlation. The base data pulses directly multiply with the pseudo random sequences, and each resultant waveform pulse represents a chip. The resultant waveform signals are non-overlapping rectangular pulses of amplitude +1 and -1, <ns0:ref type='bibr' target='#b17'>Rappaport et al. (1996)</ns0:ref>. Consider a network that consists of $N$ cognitive radios, which are distributed randomly in the deployment area. $K$ transmission frequency bands are available in the network, where $K < N$.</ns0:p><ns0:p>The spectrum sharing of the CRs is modeled as a normal form congestion game, $G = (N, \{S\}, U_{i \in N})$.</ns0:p><ns0:p>The strategy space of the users is $S = (S_1 \times S_2 \times \ldots \times S_N)$. Here $S_{i \in N}$ is the strategy set of player $i$, which consists of two subsets: $ch_i = \{ch_1, ch_2, \ldots, ch_k\}$, the set of available channels within the network, and $S_i = \{s_1, s_2, \ldots, s_N\}$, the set of signature sequences $\forall i \in \{1, 2, \ldots, N\}$. The utility function of player $i$ is expressed as $U_{i \in N}$, which also includes the cost function. In the proposed game model, the ISIR can be expressed as:</ns0:p><ns0:formula xml:id='formula_0'>\gamma_i = \frac{s_i^H s_j s_j^H s_i \, p_j h_{ji}^2}{p_i h_{ii}^2} \qquad (1)</ns0:formula><ns0:p>where $s_i$, $s_j$ are the signature sequences of the nodes and $s_i^H$, $s_j^H$ are the transposes of these sequences. $p_i$ is the transmit power of node $i$, which is adaptable at each iteration. The link gain between the nodes in the network is represented as $h_{ij}$. However, the gain remains constant, as the network topology is fixed for simplicity. The model can easily be applied to a dynamic network to make it more practical; the random waypoint mobility model is suitable for this purpose. If CRs compete for unlicensed bands, their fully cooperative behavior is considered. This cooperation helps them to maintain stable network conditions even in a dynamic network. The dynamic topology helps in motivating the CRs to cooperate to achieve better utility, as given by <ns0:ref type='bibr' target='#b27'>Wang et al. (2010)</ns0:ref>. Hence, the proposed model can work efficiently under dynamic conditions; however, it takes more time to get the desired results.</ns0:p><ns0:p>The utility of a player comprises the benefit of minimum correlation with other players sharing the same channel and the cost of choosing that channel.</ns0:p><ns0:formula xml:id='formula_1'>U_i(s_i, s_{-i}) = B_i(s_i, s_{-i}) - C_i(s_i, s_{-i}) \qquad (2)</ns0:formula><ns0:p>where $s_{-i}$ is the strategy set of all the players except player $i$, which can be denoted as:</ns0:p><ns0:formula xml:id='formula_2'>s_{-i} = (s_1 \times s_2 \times \ldots \times s_{i-1} \times s_{i+1} \times \ldots \times s_N).</ns0:formula><ns0:p>Here, $B_i$ defines the benefit a user attains for a particular choice of strategy and $C_i$ is the cost.</ns0:p></ns0:div>
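As a numerical illustration of equation (1), the following sketch computes the ISIR of node i against a single interferer j; all sequences, powers, and link gains below are assumed values for demonstration, not the paper's simulation parameters:

```python
import numpy as np

# Sketch of the ISIR in equation (1) for one interferer j; values are illustrative.
s_i = np.array([+1, -1, +1]) / np.sqrt(3)   # signature sequence of node i (unit norm)
s_j = np.array([+1, +1, -1]) / np.sqrt(3)   # signature sequence of node j
p_i, p_j = 1.0, 1.0                          # transmit powers (assumed)
h_ii, h_ji = 1.0, 0.5                        # link gains (assumed, fixed topology)

corr = float(s_i @ s_j) ** 2                 # s_i^H s_j s_j^H s_i for real sequences
gamma_i = corr * (p_j * h_ji**2) / (p_i * h_ii**2)   # equation (1)
print(gamma_i)   # a small gamma_i (low ISIR) means a high SIR for node i
```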
<ns0:div><ns0:head>INVERSE UTILITY CONGESTION GAME</ns0:head><ns0:p>The purpose of the spectrum sharing congestion game is to reach a suitable utility level at which the network achieves Nash Equilibria. The utility function might not be maximized, as the cost of spectrum sharing and the adaptation of a new suitable channel is involved. The cost function of a player $i$ is:</ns0:p><ns0:formula xml:id='formula_3'>C_i(s_i, s_{-i}) = \frac{b_p \gamma_i(s_i, s_{-i}) + c_s}{p_i} \qquad (3)</ns0:formula><ns0:p>where $b_p$ is the battery power of node $i$'s transmitter, $\gamma_i(s_i, s_{-i})$ is the inverse signal to interference ratio of player $i$ at some particular channel, and $c_s$ is the channel switching cost. The channel switching cost increases as the player keeps on shifting from one channel to another in search of the optimal result.</ns0:p><ns0:p>Hence, the cumulative switching cost can be defined as:</ns0:p><ns0:formula xml:id='formula_4'>C_{switch} = \begin{cases} +1, & \text{if } ch_{i,iter+1} \neq ch_{i,iter} \\ 0, & \text{otherwise} \end{cases}<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>The cost increases by a factor of 1 every time a player switches its strategy from one channel to another. But if the channel remains the same at the next iteration, the switching cost becomes 0. The benefit of choosing a channel and sharing it with other users is in terms of the minimum cross-correlation, that is,</ns0:p><ns0:formula xml:id='formula_5'>B_i(s_i, s_{-i}) = -\frac{s_i^H X s_i}{p_i} \qquad (5)</ns0:formula><ns0:p>The sequential congestion game is played iteratively. The users make their choices after analyzing the interference faced at their particular channel and on the other channels. The interference faced by each user depends on the correlation and the transmit power of the users sharing the same channel. The waveforms of users sharing the same channel are replaced by the eigenvector corresponding to the smallest eigenvalue of the correlation matrix $X$, as narrated by <ns0:ref type='bibr' target='#b11'>Menon et al. (2005)</ns0:ref>. The iterative game helps in reaching the minimum correlation set of each player, which increases the benefit function by reducing the interference at each channel. The utility function of the proposed game model is derived after substituting (<ns0:ref type='formula'>3</ns0:ref>) and (<ns0:ref type='formula'>5</ns0:ref>) in (2) as:</ns0:p><ns0:formula xml:id='formula_6'>U_i(s_i, s_{-i}) = -\frac{s_i^H X s_i}{p_i} - \frac{b_p \gamma_i(s_i, s_{-i}) + c_s}{p_i} \qquad (6) \\ U_i(s_i, s_{-i}) = -\frac{s_i^H X s_i}{p_i} - \frac{b_p s_i^H s_j s_j^H s_i \, p_j h_{ji}^2}{p_i^2 h_{ii}^2} - \frac{c_s}{p_i} \qquad (7)</ns0:formula><ns0:p>The users occupy the channel where they receive maximum utility, whereas the cost paid in terms of consumed power motivates the users to cooperate and opt for a suitable band for the network. The users then optimize their power according to the current channel conditions.</ns0:p><ns0:p>The game can be extended to simultaneous iterative play. However, the users then need to maintain a history chart in order to learn the behavior of the other players. The history of opponents is helpful in making better decisions in the next iteration.</ns0:p></ns0:div>
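The per-channel utility of equation (7) can be sketched directly; the correlation matrix X and all parameter values below are assumptions for illustration, not the paper's simulation settings:

```python
import numpy as np

# Utility of player i on its current channel, following equation (7).
def utility(s_i, s_j, p_i, p_j, h_ii, h_ji, X, b_p, c_s):
    benefit = -(s_i @ X @ s_i) / p_i                           # equation (5)
    corr = float(s_i @ s_j) ** 2                               # s_i^H s_j s_j^H s_i
    isir_term = b_p * corr * p_j * h_ji**2 / (p_i**2 * h_ii**2)
    return benefit - isir_term - c_s / p_i                     # equation (7)

s_i = np.array([1.0, 0.0]); s_j = np.array([0.6, 0.8])
X = np.outer(s_j, s_j)          # correlation matrix of the co-channel user(s)
print(utility(s_i, s_j, p_i=1.0, p_j=1.0, h_ii=1.0, h_ji=0.5, X=X, b_p=1.0, c_s=1.0))
```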
<ns0:div><ns0:head>Effect of Cost on Utility Function</ns0:head><ns0:p>The utility function is specified in (2) of the proposed game model. As discussed above, each user measures its benefit and cost at each channel sequentially and then opts for the channel with maximum utility. The utility function of each user keeps increasing as the game starts, since the CRs are selfish and opt for their best possible resource. After a certain number of iterations, the cost factor starts to dominate the players' benefit. The channel switching cost increments after every single change. The cost dominance is considered an influencing factor for CRs to cooperate, resulting in a cooperative congestion game.</ns0:p><ns0:p>The dominance of the cost function forces the CR users to settle for suitable utilities. This leads to negative utility function values; the cost starts to increase and the benefit decreases. However, at a certain point of equilibrium, the players reach their best possible spectrum choice with minimum interference.</ns0:p><ns0:p>The negative resultant utility function models the inverse utility congestion game.</ns0:p></ns0:div>
<ns0:div><ns0:head>Inverse Power Control Algorithm</ns0:head><ns0:p>Most power-control algorithms are focused on cellular networks, where satisfying the QoS constraint is a stringent requirement. In CR networks, transmitters increase power to cope with channel impairments and increasing levels of interference in an inconsiderate and competitive manner. Within the spectrum sharing framework, a network strongly opposes the secondary users transmitting with arbitrarily high power and interfering with the QoS of the primary users. Hence, to achieve the target SINR, $\sigma_i$, the secondary user power is required to be kept low. In order to find the change in utility with respect to the power of a node, we take the derivative of (7):</ns0:p><ns0:formula xml:id='formula_7'>\frac{dU_i(s_i, s_{-i})}{dp_i} = \frac{s_i^H X s_i}{p_i^2} + \frac{2 b_p s_i^H s_j s_j^H s_i \, p_j h_{ji}^2}{p_i^3 h_{ii}^2} + \frac{c_s}{p_i^2} \qquad (8)</ns0:formula><ns0:p>Putting the derivative equal to zero,</ns0:p><ns0:formula xml:id='formula_8'>\frac{dU_i(s_i, s_{-i})}{dp_i} = 0 \qquad (9) \\ \frac{s_i^H X s_i}{p_i^2} + \frac{2 b_p s_i^H s_j s_j^H s_i \, p_j h_{ji}^2}{p_i^3 h_{ii}^2} + \frac{c_s}{p_i^2} = 0 \qquad (10) \\ \frac{1}{p_i^2}\left( s_i^H X s_i + \frac{2 b_p s_i^H s_j s_j^H s_i \, p_j h_{ji}^2}{p_i h_{ii}^2} + c_s \right) = 0 \qquad (11) \\ s_i^H X s_i + c_s = -\frac{2 b_p s_i^H s_j s_j^H s_i \, p_j h_{ji}^2}{p_i h_{ii}^2} \qquad (12)</ns0:formula><ns0:formula xml:id='formula_10'>p_i h_{ii}^2 \left( s_i^H X s_i + c_s \right) = -2 b_p s_i^H s_j s_j^H s_i \, p_j h_{ji}^2 \qquad (13)</ns0:formula><ns0:formula xml:id='formula_11'>p_i = \frac{-2 b_p s_i^H s_j s_j^H s_i \, p_j h_{ji}^2}{h_{ii}^2 (s_i^H X s_i + c_s)}<ns0:label>(14)</ns0:label></ns0:formula><ns0:p>Assuming orthogonal signature sequences, $s_i^H X s_i$ reaches 1, and the change in the total power $p_i$ of a user $i$ at time $t$ is:</ns0:p><ns0:formula xml:id='formula_12'>p_i^{t+1} = \frac{-2 b_p p_j h_{ji}^2}{h_{ii}^2 (1 + c_s)}<ns0:label>(15)</ns0:label></ns0:formula><ns0:p>We know that $SIR_i$ reaches its maximum at the minimum correlation between the sequences of the users. Ideally, the signature sequences are considered orthogonal to each other, which means $s_i^H s_j s_j^H s_i = 1$. Therefore,</ns0:p><ns0:formula xml:id='formula_13'>SIR_{i,max} = \sigma_i = \frac{p_i h_{ii}^2}{p_j h_{ji}^2}<ns0:label>(16)</ns0:label></ns0:formula><ns0:p>The updated power at each iteration can be calculated as:</ns0:p><ns0:formula xml:id='formula_15'>P_i^{t+1} = \frac{SIR_{i,max}}{SIR_i} P_i<ns0:label>(17)</ns0:label></ns0:formula><ns0:formula xml:id='formula_17'>P_i^{t+1} = \sigma_i \gamma_i P_i \qquad (18)</ns0:formula><ns0:p>After substituting (1), (<ns0:ref type='formula' target='#formula_12'>15</ns0:ref>) and (<ns0:ref type='formula' target='#formula_13'>16</ns0:ref>) in (18), we get</ns0:p><ns0:formula xml:id='formula_18'>P_i^{t+1} = \frac{-2 b_p s_i^H s_j s_j^H s_i \, p_j h_{ji}^2}{h_{ii}^2 (1 + c_s)}<ns0:label>(19)</ns0:label></ns0:formula><ns0:p>Hence, the suitable power of each user at each iteration is calculated by $P_i^{t+1}$.</ns0:p></ns0:div>
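The update rule of equations (17)-(19) lends itself to a simple iterative sketch; the target SIR, powers, gains, and correlation value below are assumed for illustration only:

```python
# Iterative power update of equation (17): P_i(t+1) = (SIR_max / SIR_i) * P_i(t).
# Values are illustrative; in the paper, SIR_i comes from the waveform game.
sigma_i = 2.0                    # target (maximum) SIR, assumed
p_i, p_j = 1.5, 1.0              # current powers
h_ii, h_ji = 1.0, 0.5            # link gains
corr = 0.1                       # s_i^H s_j s_j^H s_i after waveform adaptation

for t in range(10):
    sir_i = (p_i * h_ii**2) / (corr * p_j * h_ji**2)   # inverse of equation (1)
    p_i = (sigma_i / sir_i) * p_i                       # equation (17)
print(p_i)   # settles where SIR_i equals the target sigma_i
```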
<ns0:div><ns0:head>Global Function</ns0:head><ns0:p>The utility of each user influences the strategy set $s_{-i}$ of the opponents in the game. This influence is made to minimize the interference within the network. The impact of the utility of each user is projected in the form of a global function. In the proposed game framework, the global function is the negative sum of the utilities of each user.</ns0:p><ns0:formula xml:id='formula_19'>P_s = -\sum_{i=1, i \neq j}^{N} \left[ \max_{s_i \in S} U_i(s_i, s_{-i}) \right]<ns0:label>(20)</ns0:label></ns0:formula><ns0:p>The convergence of the global function shows social stability in the CR network, where each user operates at its best possible channel without creating any hindrance to the transmission of other users or even the PU (in the case of underlay systems).</ns0:p></ns0:div>
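A sketch of the global function in equation (20); the `best_utilities` list here is hypothetical, standing in for each user's maximized utility over its strategies:

```python
# Global function of equation (20): negative sum of each user's maximized utility.
def global_function(best_utilities):
    return -sum(best_utilities)

# In the inverse utility game the utilities become negative at equilibrium, so
# the global function is positive, and its convergence signals network stability.
print(global_function([-0.8, -1.1, -0.9]))   # -> 2.8
```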
<ns0:div><ns0:head>SIMULATION RESULTS</ns0:head><ns0:p>The inverse utility congestion game is evaluated using a variable number of CRs. The network considered has an area of 50m x 50m with 50 nodes, uniformly distributed to share K = 4 available channels. After iteratively playing the game, the utilities of the players with three signal space dimensions are shown in Figure <ns0:ref type='figure'>1</ns0:ref>.</ns0:p><ns0:p>The utility of the nodes starts to increase as each node acts selfishly and tries to maximize its own payoff.</ns0:p><ns0:p>This selfish behavior increases the cost of the nodes and ultimately leads to negative utility. Users keep changing their channels until they reach an optimum point of utility. The convergence of the channel allocation process is shown in Figure <ns0:ref type='figure'>2</ns0:ref>. It is observed that users select their suitable channels after a few iterations, but the signature sequences take time to converge. The correlation between waveforms reaches its minimum level with delay, since the change is so small that it hardly influences the channel decision made by each user. However, it ultimately converges to its optimal point, as shown in Figure <ns0:ref type='figure'>2</ns0:ref>.</ns0:p></ns0:div>
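The described setup can be reproduced schematically as follows (a sketch only: uniform node placement over the stated 50m x 50m area with K = 4 channels; the random seed and initial powers are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, side = 50, 4, 50.0                          # 50 CRs, 4 channels, 50m x 50m area
positions = rng.uniform(0.0, side, size=(N, 2))   # uniformly distributed nodes
channels = rng.integers(0, K, size=N)             # initial channel of each CR
powers = np.ones(N)                               # initial transmit powers (assumed)

# Each iteration of the game would now let every CR evaluate equation (7) on
# every channel, move to the best one, and update its power via equation (17).
print(np.bincount(channels, minlength=K))         # initial load per channel
```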
<ns0:div><ns0:p>The convergence of the global function of the network is shown in Figure <ns0:ref type='figure'>3</ns0:ref>. The equilibrium point at which users within the network coordinate with each other and the network becomes stable is the maximum of the global function. The stability ensures the existence of a unique Nash Equilibrium.</ns0:p><ns0:p>Here the noticeable trend is the switching of the convergence point twice, near the 100th iteration. From a game theoretic perspective, it could be the shifting of a local maximum to the global one, but this trend needs further study.</ns0:p><ns0:p>According to <ns0:ref type='bibr' target='#b31'>Zhou et al. (2019)</ns0:ref>, the global function of the congestion game is isomorphic to the potential function of exact potential games.</ns0:p></ns0:div><ns0:div><ns0:head>Figure 1. Utility of 50 CRs</ns0:head><ns0:p>The steady state of the function is the pure Nash Equilibrium, where no further change in the strategy of a user can be beneficial. In other words, no unilateral deviation from the equilibrium point gives an incentive to any user. The global function shows the overall throughput of the network. The reduction in ISIR within the network is the inverse of the potential function, as can be seen in Figure <ns0:ref type='figure'>4</ns0:ref>. This reduction depicts the minimum interference within the network and the optimum power levels of each CR. Since they agree to cooperate, the transmission power of each user starts reducing, as shown in Figure <ns0:ref type='figure'>5</ns0:ref>.</ns0:p><ns0:p>The behavior of the cognitive nodes is further clarified in Figure <ns0:ref type='figure' target='#fig_7'>6</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_8'>7</ns0:ref>. The pattern of five users shows how they switch between channels and opt for the suitable utility.</ns0:p><ns0:p>A quick comparison of convergence is shown in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> with a chaotic optimization method of power control given by Al Talabani et al. (2014) and a non-cooperative game theoretic approach in HVNs with correlated equilibrium presented by Xiao et al. (2018).</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 1. Utility of 50 CRs</ns0:head><ns0:p>The steady state of the function is pure Nash Equilibrium, where no further change in strategy of a user can be beneficial. In other words, no unilateral deviation from the equilibrium point gives incentive to any user. Global function shows the overall throughput of the network. The reduction in ISIR within the network is the inverse of potential function that can be seen in Figure <ns0:ref type='figure'>4</ns0:ref>. This reduction depicts the minimum interference within the network and optimum power levels of each CR. Since they agree to cooperate, transmission power of each user starts reducing, as shown in Figure <ns0:ref type='figure'>5</ns0:ref>.</ns0:p><ns0:p>The behavior of the cognitive nodes is more clarified in Figure <ns0:ref type='figure' target='#fig_7'>6</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_8'>7</ns0:ref>. The pattern of five users show how they switch between channels and opt the suitable utility.</ns0:p><ns0:p>A quick comparison of convergence is shown in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> with a chaotic optimization method of power control given by Al Manuscript to be reviewed Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:note type='other'>Computer Science</ns0:note></ns0:div>
<ns0:div><ns0:head>Congestion Game Isomorphic to Potential Game</ns0:head><ns0:p>As described above, the congestion game is isomorphic to a potential game. We observe the results of the inverse utility game without cost, which can be considered a potential game model, as shown in Figure <ns0:ref type='figure'>11</ns0:ref> and Figure <ns0:ref type='figure'>12</ns0:ref>.</ns0:p><ns0:p>Similarly, the ISIR of the network remains unstable for a long period of time, since the selfish nodes in the CR network do not consider the cost initially. Later, the nodes must bear the cost in terms of a destabilized network.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>S_i = {s_1, s_2, ..., s_N}, the set of signature sequences ∀i ∈ {1, 2, ..., N}. The utility function of player i is expressed as U_{i ∈ N}, which also includes the cost function.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>each channel. The utility function of the proposed game model is derived after substituting (</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 2. Figure 3.</ns0:head><ns0:label>2, 3</ns0:label><ns0:figDesc>Figure 2. Convergence of CR Channel Allocation Process with 50 Users</ns0:figDesc><ns0:graphic coords='10,141.74,63.78,413.57,255.12' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 4. Figure 5.</ns0:head><ns0:label>4, 5</ns0:label><ns0:figDesc>Figure 4. Inverse Signal to Interference Ratio (ISIR) with 50 Users</ns0:figDesc><ns0:graphic coords='11,141.74,63.78,413.57,255.12' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 6.</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Utility of 5 CRs</ns0:figDesc><ns0:graphic coords='12,141.74,63.78,413.57,255.12' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 7.</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Convergence of CR Channel Allocation Process of 5 Users</ns0:figDesc><ns0:graphic coords='12,141.74,375.08,413.57,255.12' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 8. Figure 9.</ns0:head><ns0:label>8, 9</ns0:label><ns0:figDesc>Figure 8. Convergence of Channel Allocation with 10 Users</ns0:figDesc><ns0:graphic coords='13,141.74,63.78,413.57,255.12' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 10. Figure 11.</ns0:head><ns0:label>10, 11</ns0:label><ns0:figDesc>Figure 10. Utility of 10 CRs</ns0:figDesc><ns0:graphic coords='14,141.74,63.78,413.57,255.12' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='9,141.74,160.76,413.57,255.12' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='15,141.74,63.78,413.57,255.12' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Convergence of Algorithms</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Algorithm</ns0:cell><ns0:cell>Accommodated Users</ns0:cell><ns0:cell>Convergence Point</ns0:cell><ns0:cell>Convergence Unit</ns0:cell></ns0:row><ns0:row><ns0:cell>Chaotic Optimization of Power Control</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>50th Iteration</ns0:cell><ns0:cell>Power and Information Rate</ns0:cell></ns0:row><ns0:row><ns0:cell>Non-Cooperative HVN Game</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>20th Iteration</ns0:cell><ns0:cell>Average Utility</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed IPC</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>9th Iteration</ns0:cell><ns0:cell>Utility and Power</ns0:cell></ns0:row></ns0:table><ns0:note>The proposed inverse power control (IPC) algorithm accommodates noticeably more users with minimum interference than the other algorithms in the literature. This prevents the wastage of</ns0:note></ns0:figure>
</ns0:body>
" | "Department of Electronics,
Quaid-i-Azam University,
Islamabad, Pakistan.
https://qau.edu.pk
3rd May, 2021.
Dear Editor,
Cunhua Pan
Academic Editor, PeerJ Computer Science
Re: manuscript #CS-2020:10:54661:1:2:REVIEW
Title: “A Game Theoretic Power Control and Spectrum Sharing Approach Using Cost
Dominance in Cognitive Radio Networks”
We thank the reviewers for their generous comments on the manuscript, and we are pleased to
inform you that we have revised the manuscript in light of the reviewers' comments. The
reviewers' recommendations were extremely useful, and we have addressed all of them in the
revised manuscript. Please see below for our responses to each comment. We hope that the
revisions in the manuscript and our accompanying responses will be sufficient to make our
manuscript suitable for publication in PeerJ.
Sundus Naseer
Student of the Department of Electronics
On behalf of all authors, “A Game Theoretic Power Control and Spectrum Sharing Approach
Using Cost Dominance in Cognitive Radio Networks”
Authors’ response: We are grateful to the Reviewer for the careful reading of the revised
version of the paper and for the helpful comments and suggestions, which have further improved it.
All the corrections are now incorporated in the revised manuscript. Our replies to the
reviewers follow.
Reviewer: 1
Basic reporting
The format of the paper should be improved.
The format has been improved according to the format described by the journal. This examination
by the reviewer helped us greatly improve the quality of the manuscript. We now comply with
the journal template.
The figures are not clear.
We thank the reviewer for carefully examining the figures. In the revised manuscript, the figures
have been revised for clarity and proper representation of the results. The size of the figures has
also been increased, and the legends have been revised. In addition, we have regenerated the figures
with fewer iterations and a reduced number of users to show the trends and results more clearly.
Introduction should be section I?
As suggested by the reviewer, the section numbers have been corrected. We thank the reviewer for
providing this valuable feedback. The formatting follows the guidelines provided by the
journal template.
In Simulation section, section 5, second paragraph, there is “Figure. ??”
The correction has been made. We thank the reviewer for pointing out this error in the
manuscript.
Experimental design
Simulation part could be improved. The figures are difficult to read.
To make the figures clearer, we have revised the graphs with a reduced number of
iterations. By reducing the number of iterations, the figures now depict more details for clear
analysis. The convergence patterns are shown in Figure 6 and Figure 7 with fewer
users, keeping all other conditions of the system the same. By reducing the number of users, the
trends become clearly visible and the results are easier to understand. We thank the reviewer for this
valuable feedback, which improved the clarity of the figures. The size of the figures throughout the
manuscript has also been adjusted for a clear view and better understanding.
In Simulation section, section 5, second paragraph, there is ''Figure. ??''
The correction has been made. We thank the reviewer for carefully examining the
manuscript and pointing out these important corrections.
Validity of the findings
Please improve your simulation part. Some figures are not clear, and it is difficult to read.
We are thankful to the reviewer for the valuable suggestions related to the simulation part and
the figures. The convergence patterns are shown in Figure 6 and Figure 7 with fewer
users, keeping all other conditions of the system the same. This suggestion greatly helped us
improve the manuscript and make it easier to understand.
Comments for the Author
The paper should be future improved. For example, the figures are not clear. Introduction
Section should be Section I? There are several typos in the paper.
We are thankful for this suggestion. We hope that the substantial revisions we have made based
on your insightful comments and suggestions will improve the manuscript. The entire manuscript
has been examined, and any typos present have been removed. The paper has been further improved
by making the figures clearer. The correction related to the section numbers has also been
incorporated in the revised manuscript, and the journal's format is followed.
Reviewer 2:
Basic reporting
My questions have been answered appropriately. The quality of the revised paper has been
improved. However, the grammar and typos should be checked again through the whole article.
For example, in the third sentence of the abstract, '.... that may ultimately results in
inappropriate....' should be '...that may ultimately result in inappropriate...”
Authors’ response:
We are thankful to the reviewer for the valuable suggestions. The corrections have been made in
the revised version of the paper. The entire manuscript has been examined, and any typos or
grammatical mistakes have been removed.
Experimental design
No comment
Validity of the findings
No comment
Please accept our sincere gratitude for your thorough review and invaluable time.
THANK YOU AGAIN FOR YOUR INVALUABLE TIME AND INPUT!
" | Here is a paper. Please give your review comments after reading it. |
173 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Understanding the concept of simple interest is essential in financial mathematics because it establishes the basis to comprehend complex conceptualizations. Nevertheless, students often have problems learning about simple interest. This paper aims to introduce a prototype called 'simple interest computation with mobile augmented reality' (SICMAR) and evaluate its effects on students in a financial mathematics course. The research design comprises four stages: i) planning; ii) hypotheses development; iii) software development; and iv) design of data collection instruments. The planning stage explains the problems that students confront to learn about simple interest. In the second stage, we present the twelve hypotheses tested in the study. The stage of software development discusses the logic implemented for SICMAR functionality. In the last stage, we design two surveys and two practice tests to assess students. The pre-test survey uses the attention, relevance, confidence, and satisfaction (ARCS) model to assess students' motivation in a traditional learning setting. The post-test survey assesses motivation, technology usage with the technology acceptance model (TAM), and prototype quality when students use SICMAR. Also, students solve practice exercises to assess their achievement. One hundred three undergraduates participated in both sessions of the study. The findings revealed the direct positive impact of SICMAR on students' achievement and motivation. Moreover, students expressed their interest in using the prototype because of its quality. In summary, students consider SICMAR as a valuable complementary tool to learn simple interest topics.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>The economic factor is involved in practically all the processes of making decisions. Therefore, to avoid making wrong financial decisions, it is recommendable to know how money is obtained, managed, invested, and optimized. The lack of these skills could be solved by completing a financial education course <ns0:ref type='bibr' target='#b17'>(Carpena & Zia, 2020)</ns0:ref>.</ns0:p><ns0:p>Financial education must start at an early stage. <ns0:ref type='bibr' target='#b10'>Berry, Karlan, & Pradhan (2018)</ns0:ref> and <ns0:ref type='bibr' target='#b53'>Sun et al. (2020)</ns0:ref> demonstrated how financial education helped prevent problems such as having low credit scores or defaulting on a loan. Because of the relevance of financial education, the United States of America included various finance courses as a part of the primary school curriculum <ns0:ref type='bibr' target='#b55'>(Urban et al., 2020)</ns0:ref>. Other countries such as China <ns0:ref type='bibr' target='#b25'>(Ding, Lu & Ye, 2020)</ns0:ref>, Ghana <ns0:ref type='bibr' target='#b10'>(Berry, Karlan, & Pradhan, 2018)</ns0:ref>, Hong Kong <ns0:ref type='bibr' target='#b27'>(Feng, 2020)</ns0:ref>, and India <ns0:ref type='bibr' target='#b17'>(Carpena & Zia, 2020</ns0:ref>) successfully adopted this trend; however, success cannot be generalized. As pointed out by <ns0:ref type='bibr' target='#b5'>Arceo & Villagómez (2017)</ns0:ref> and <ns0:ref type='bibr' target='#b13'>Bruhn, Lara, & McKenzie (2014)</ns0:ref>, underdeveloped countries such as Mexico reported minimum benefits due to the inclusion of financial education in schools.</ns0:p><ns0:p>To obtain insights about why students show no interest in financial education, we performed monitoring of undergraduates enrolled in financial mathematics courses at four public northern Mexican universities. All the students, no matter what program they are enrolled in, must take a financial mathematics course because it is mandatory within the school curriculum. Therefore, we monitored students from the accounting, administration, business, and engineering fields. As a result, we detected three problems: i) students lack mathematical skills; ii) sometimes the techniques used by the professors to teach the basics are boring, and iii) students do not comprehend the basics such as simple and compound interest, which are fundamental to sound financial education.</ns0:p><ns0:p>In financial mathematics, interest is calculated as simple interest or compound interest; the former determines how much interest to apply to a principal balance, whereas the latter is the addition of interest to the principal sum of a loan or deposit <ns0:ref type='bibr' target='#b32'>(Hastings, 2015)</ns0:ref>. <ns0:ref type='bibr' target='#b0'>Abylkassymova et al. (2020)</ns0:ref> and <ns0:ref type='bibr' target='#b12'>Blue & Grootenboer (2019)</ns0:ref> focused their research on seeking alternative methods to solve students' difficulties in understanding the basic concepts explained in a financial mathematics course. The most used options are individualized explanations outside of class time, multimedia material, computer simulations, and information and communications technologies (ICTs). Nevertheless, there are still opportunities to propose teaching-learning strategies to help students comprehend financial education basics. This paper assesses mobile augmented reality (MAR) technology as an alternative learning strategy to comprehend simple interest topics. 
<ns0:ref type='bibr'>Gutiérrez et al. (2016)</ns0:ref> and <ns0:ref type='bibr' target='#b19'>Chen (2019)</ns0:ref> defined MAR as 'a real-time direct or indirect view of a real-world environment that has been augmented by adding virtual computergenerated information to it.' In summary, mobile augmented reality is a novel way of superimposing digital content into the real context.</ns0:p><ns0:p>Akçayır & Akçayır (2017) and <ns0:ref type='bibr' target='#b6'>Arici et al. (2019)</ns0:ref> explained the benefits of mobile augmented reality in educational settings, especially for mathematics. The benefits include student achievement increase, autonomy facilitation (self-learning), generation of positive attitudes to the educational activity, commitment, motivation, knowledge retention, interaction, collaboration, and availability for all.</ns0:p><ns0:p>Motivated by MAR advantages and the problems detected regarding financial education, this paper aims to develop the simple interest computation with mobile augmented reality (SICMAR) prototype and to assess its effects in an undergraduate financial mathematics course. The study, divided into pre and post-test, was designed to assess students' motivation, achievement, technology acceptance, and prototype quality.</ns0:p><ns0:p>The main contributions of the paper are summarized below.</ns0:p><ns0:p>1. We explain the details to develop the SICMAR prototype.</ns0:p><ns0:p>2. We offer a proposal to assess students' motivation, achievement, technology acceptance, and SICMAR quality in a real educational setting.</ns0:p><ns0:p>3. We explain the facts to support that SICMAR could be a valuable complementary tool to learn about simple interest.</ns0:p><ns0:p>The rest of the paper is organized as follows. Section 2 discusses related work about AR to support mathematical learning. In Section 3, the basis to develop SICMAR and the surveys created are described. The results obtained from tests and the corresponding discussion are shown in Section 4. Finally, conclusions are outlined in Section 5.</ns0:p></ns0:div>
<ns0:div><ns0:head>Learning Mathematics with Augmented Reality</ns0:head><ns0:p>Many research studies have been published on AR usage for educational purposes. Interested readers can consult the works by <ns0:ref type='bibr' target='#b2'>Akçayır & Akçayır (2017)</ns0:ref>; <ns0:ref type='bibr' target='#b50'>Saltan & Arslan (2017)</ns0:ref>; <ns0:ref type='bibr' target='#b28'>Garzón, Pavón & Baldiris (2019)</ns0:ref> and <ns0:ref type='bibr' target='#b6'>Arici et al. (2019)</ns0:ref> to obtain a comprehensive overview of the educative fields that have been addressed.</ns0:p><ns0:p>As <ns0:ref type='bibr' target='#b6'>Arici et al. (2019)</ns0:ref> explained, most AR works focused on sciences such as medicine, physics, history, arts, and astronomy; however, the social fields, laws, and business are less addressed. <ns0:ref type='bibr' target='#b32'>Ibáñez & Delgado-Klos (2018)</ns0:ref> presented a literature review of AR to support science, technology, engineering, and mathematics (STEM) learning. <ns0:ref type='bibr' target='#b39'>Medina, Castro & Juárez (2019)</ns0:ref> and <ns0:ref type='bibr' target='#b24'>Demitriadou, Stavroulia & Lanitis (2020)</ns0:ref> presented comparisons between virtual reality and augmented reality for mathematics learning. In both studies, no significant difference was found between virtual and augmented reality technologies in contributing to mathematics learning. The work of <ns0:ref type='bibr' target='#b45'>Radianti et al. (2020)</ns0:ref> is recommended as a starting point if the readers want to explore the field of virtual reality in education.</ns0:p><ns0:p>For this study, we found papers related to augmented reality for mathematics teaching-learning. However, only studies published between 2013 and 2020 were considered. The query strings included 'mathematics,' 'financial mathematics,' 'augmented reality,' 'mobile augmented reality,' 'teaching,' 'education,' and 'learning.' Also, we use the Boolean operators 'OR,' 'AND' to mix multiple strings. We collected the papers from journals included in the journal citation reports (JCR) and manuscripts published in conferences through the Web of Science (WoS).</ns0:p><ns0:p>As a result, we detected 17 studies focused on learning mathematics inside formal and informal environments. Concerning the formal settings, the learners' education level ranges from preschool to undergraduate. The elementary level is the one in which more studies have been published. Moreover, geometry is the subject with more implementations. This is due to the ability of augmented reality to promote interaction and visualization with 2D and 3D objects. <ns0:ref type='bibr' target='#b49'>Salinas et al. (2013)</ns0:ref> tested the impact of AR on learning algebraic functions using 3D visualizations. The experience was assessed by 30 undergraduates from Mathematics I course. Likewise, <ns0:ref type='bibr' target='#b8'>Barraza, Cruz & Vergara (2015)</ns0:ref> used AR to help undergraduate students learn quadratic equations. The pilot study was conducted with 59 students at a Mexican school, and most comments obtained were positive. An AR app for mathematical analysis was presented by <ns0:ref type='bibr' target='#b20'>Coimbra, Cardoso & Mateus (2015)</ns0:ref>. Thirteen undergraduates participated in the experience, where most of them expressed 'classes should all be like this.' Regarding geometry, <ns0:ref type='bibr' target='#b30'>Gutíerrez et al. (2016)</ns0:ref> presented an AR system aimed at the learning of descriptive geometry. 
A positive impact on the spatial ability of 50 undergraduates was found. <ns0:ref type='bibr' target='#b44'>Purnama, Andrew & Galinium (2014)</ns0:ref> designed an AR tool to help elementary students learn the protractor's use. According to the students' responses, 92% found that the prototype makes the learning process faster than using a conventional method. <ns0:ref type='bibr' target='#b35'>Li et al. (2017)</ns0:ref> designed an augmented reality game for helping elementary students in the counting process. The two students who participated in the experience expressed that learn to count was easy using AR. Moreover, <ns0:ref type='bibr' target='#b54'>Tobar, Fabregat, & Baldiris (2015)</ns0:ref> and <ns0:ref type='bibr' target='#b18'>Cascales et al. (2017)</ns0:ref> explained the advantages of using mobile augmented reality to learn mathematics in elementary special education needs (SEN) contexts. <ns0:ref type='bibr' target='#b52'>Sommerauer and Muller (2014)</ns0:ref> conducted a pre-test and post-test with 101 participants at a mathematics exhibition. The aim was to measure the effect of AR on acquiring and retaining mathematical knowledge in an informal learning environment. The pre-test score captured previous knowledge regarding the mathematical exhibits, while the post-test captured the knowledge level after visiting the exhibition. The results revealed that visitors performed significantly better on post-test questions.</ns0:p><ns0:p>A summary of the features of the papers analyzed is shown in Table <ns0:ref type='table'>1</ns0:ref>. There are no signs of papers related to financial mathematics, neither for simple interest computation. Regarding the preferred software for implementing AR, Vuforia is the leader. The number of participants varies from 2 to 140. It seems that there is no consensus about the sample size to validate an AR study. Only five works presented assessments about students' motivation. Most of the work was concentrated on prototype perception and students' achievement. No work focused on assessing technology acceptance was found. Qualitative research is the most common theory-base employed, followed by the nonparametric Wilcoxon signed-rank test. Most of the technologies used to implement the prototypes were mobile devices, which evidences PCs are less preferred, and smart glasses are not yet used in academic scenarios, mainly due to the high cost. All the works introduced single-user-based applications because it is still complex to build collaborative applications. Based on the analysis conducted, our proposal's novelty relies on the field addressed (simple interest) and the constructs assessed in the same study (motivation, achievement, quality, and technology acceptance).</ns0:p></ns0:div>
<ns0:div><ns0:head>Research Design</ns0:head><ns0:p>In this research, we use a mixed-method to allow the synergistic usage of qualitative and quantitative data <ns0:ref type='bibr' target='#b46'>(Reeping et al., 2019)</ns0:ref>. Furthermore, this research is considered exploratory and descriptive. Exploratory because we investigate the problem in an early stage and obtain insights into what is happening. Descriptive, because we describe the features of the phenomenon studied. The systematic methodology to conduct the research comprises four stages: i) planning; ii) hypotheses development; iii) software development; and iv) design of data collection instruments.</ns0:p></ns0:div>
<ns0:div><ns0:head>Planning</ns0:head><ns0:p>After the conversation with three financial mathematics professors, the planning stage began. The professors agreed with us about the three problems detected during the monitoring. However, they mentioned the barriers that students face when learning simple interest: a) the problem is not analyzed, therefore, it is not understood; b) they confuse simple interest with compound interest and vice versa; c) the terms involved to solve the computation are wrongly cleared; d) the concepts such as principal, amount, interest rate, and time are misinterpreted; and e) conversions between time units are wrongly performed. Professors explained that around 70% of Mexican students commit at least one of the errors mentioned above. They also stated that simple interest knowledge is fundamental for mastering finances and understanding complex concepts such as compound interest, amortization tables, and annuities. Hence, students must comprehend the topic.</ns0:p><ns0:p>We ask professors for an explanation to understand how to compute simple interest. The explanation was based on the following example. When an individual borrows money, the lender expects to be paid back the loan amount plus an additional charge for using the money called interest. In contrast, when money is deposited in a bank, it pays the depositor to use the capital, also called interest.</ns0:p><ns0:p>Simple interest (I) represents the fee you pay on a loan or income you earn on deposits. In other words, simple interest represents the price of the money over a specific period. As shown in Eqs. ( <ns0:ref type='formula'>1</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_0'>2</ns0:ref>), there are two ways to compute simple interest. Furthermore, notice the four terms involved: i) Principal (P) is the original sum of money borrowed (also called present value); ii) Interest rate (r) is the amount charged on top of the principal for the use of assets (expressed as a percentage); iii) Time (t) represents the period of the financial operation; and iv) Amount (A) is the total accrued amount (principal plus interest), represents the future value of the financial operation <ns0:ref type='bibr' target='#b32'>(Hastings, 2015)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_0'>, 𝐼 = 𝑃𝑟𝑡 (1) 𝐼 = 𝐴 -𝑃.<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>The terms regarding principal, interest rate, time, and the amount can be cleared from Eqs. (1) and (2) as is depicted in Eqs. (3), (4), (5), and (6), respectively.</ns0:p><ns0:formula xml:id='formula_1'>𝑃 = 𝐼 𝑟𝑡 . (<ns0:label>3</ns0:label></ns0:formula><ns0:formula xml:id='formula_2'>) 𝑟 = 𝐼 𝑃𝑡 . (<ns0:label>4</ns0:label></ns0:formula><ns0:formula xml:id='formula_3'>) 𝑡 = 𝐼 𝑃𝑟 . (<ns0:label>5</ns0:label></ns0:formula><ns0:formula xml:id='formula_4'>) 𝐴 = 𝑃 + 𝐼.<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>Usually, the interest rate and time in Eqs. (1) to (6) are expressed in years. However, it could also be expressed in days, weeks, fortnights, months, bimesters, quarters, or semesters. For any calculation, if the period for r and t is defined in different units, then a conversion must be computed, which often causes mistakes.</ns0:p><ns0:p>To this end, we propose a prototype to support simple interest learning and design two surveys and two practice tests to assess the effects of using it with undergraduate students. The prototype is called 'simple interest computation with mobile augmented reality (SICMAR).'</ns0:p></ns0:div>
<ns0:div><ns0:head>Hypotheses Development</ns0:head><ns0:p>Our study assesses students' motivation when learning simple interest in traditional settings and with SICMAR. Also, we assess the students' achievement when learning in both settings through a test. Finally, we consider obtaining insights about SICMAR technology acceptance and quality. Thereby, we pose twelve hypotheses.</ns0:p></ns0:div>
<ns0:div><ns0:head>Students Motivation</ns0:head><ns0:p>Motivation affects what, how, and when the learners learn, and it is directly related to the development of students' attitudes and persistent efforts toward achieving a goal <ns0:ref type='bibr' target='#b37'>(Lin et al., 2021)</ns0:ref>. Motivation is an activity that must be performed to i) attract and sustain students' attention (A); ii) define the relevance (R) of a content students need to learn; iii) help students to believe they succeed in making efforts (gain confidence (C)); and iv) assist students in obtaining a sense of satisfaction (S) about their accomplishments in learning <ns0:ref type='bibr' target='#b14'>(Cabero-Almenara & Roig-Vila, 2019)</ns0:ref>. In this sense, Keller's ARCS model provides guidelines for designing and developing strategies to motivate students learning <ns0:ref type='bibr' target='#b36'>(Li & Keller, 2018)</ns0:ref>.</ns0:p><ns0:p>In previous studies, the ARCS model was used to observe if mobile augmented reality could be a resource that motivates students to learn anatomy and art (Cabero-Almenara & Roig-Vila, 2019), dimensional analysis <ns0:ref type='bibr' target='#b26'>(Estapa & Nadolny, 2015)</ns0:ref>, and geometry <ns0:ref type='bibr' target='#b33'>(Ibáñez et al. 2020)</ns0:ref>, obtaining promising results. Therefore, the present paper poses the following five hypotheses.</ns0:p><ns0:p>H 1 : There is a significant difference in students' attention scores in the pre-test and the post-</ns0:p></ns0:div>
<ns0:div><ns0:head>test.</ns0:head><ns0:p>H 2 : There is a significant difference in students' relevance scores in the pre-test and the post-</ns0:p></ns0:div>
<ns0:div><ns0:head>test.</ns0:head><ns0:p>H 3 : There is a significant difference in students' confidence scores in the pre-test and the post-test.</ns0:p><ns0:p>H 4 : There is a significant difference in students' satisfaction scores in the pre-test and the post-test.</ns0:p><ns0:p>H 5 : There is a significant difference in students' motivation scores in the pre-test and the post-test.</ns0:p></ns0:div>
<ns0:div><ns0:head>Students Achievement</ns0:head><ns0:p>Academic achievement is the extent to which a student has accomplished specific goals that focus on activities in instructional environments <ns0:ref type='bibr' target='#b9'>(Bernacki, Greene & Crompton, 2019)</ns0:ref>. In this paper, student achievement is related to how capable the students are when solving simple interest computation problems. The studies by <ns0:ref type='bibr' target='#b44'>Purnama, Andrew & Galinium (2014)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Nadolny (2015); <ns0:ref type='bibr' target='#b54'>Tobar, Fabregat & Baldiris (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b18'>Cascales et al. (2017), and</ns0:ref><ns0:ref type='bibr' target='#b33'>Ibáñez et al. (2020)</ns0:ref> show how the use of MAR can affect mathematics academic achievement positively. Therefore, we propose the following hypothesis.</ns0:p><ns0:p>H 6 : Students learning with SICMAR achieve higher scores in simple interest tests than students exposed to traditional learning.</ns0:p></ns0:div>
<ns0:div><ns0:head>Technology Acceptance</ns0:head><ns0:p>The technology acceptance model (TAM) was formulated by <ns0:ref type='bibr' target='#b21'>Davis (1989)</ns0:ref>. TAM suggested that the perceived ease of use (PEU) and the perceived usefulness (PU) are determinants to explain what causes the intention of a person to use (ITU) a technology. The perceived ease of use refers to the degree to which a person believes that using a system would be free from effort. The perceived usefulness refers to the degree to which the user believes that a system would improve his/her work performance. The intention to use is employed to measure the degree of technology acceptance <ns0:ref type='bibr' target='#b21'>(Davis, 1989)</ns0:ref>.</ns0:p><ns0:p>In previous research, TAM was used to examine the adoption of augmented reality technology in teaching using videos (Cabero-Almenara, Fernández & Barroso-Ozuna, 2019) and learning the Mayo language <ns0:ref type='bibr' target='#b41'>(Miranda et al. 2016)</ns0:ref>. However, no studies related to the use of TAM in mathematical settings were detected. In this paper, the TAM is extended with prototype quality variable to explain and predict the SICMAR usage. The aim is to study the relationships between quality, perceived ease of use, and perceived usefulness and their positive effects on students' intention to use SICMAR.</ns0:p><ns0:p>The family of statistical multivariant models that estimate the effect and the relationships between multiple variables is known as structural equation modeling (SEM) <ns0:ref type='bibr' target='#b3'>(Al-Gahtani, 2016)</ns0:ref>. Therefore, we use SEM to test the following hypotheses.</ns0:p><ns0:p>H 7 : Quality positively affects students' perceived usefulness of SICMAR.</ns0:p><ns0:p>H 8 : Quality positively affects students' perceived ease of use of SICMAR.</ns0:p><ns0:p>H 9 : Perceived ease of use positively affects students' perceived usefulness of SICMAR.</ns0:p><ns0:p>H 10 : Perceived ease of use positively affects students' intention to use SICMAR.</ns0:p><ns0:p>H 11 : Perceived usefulness positively affects students' intention to use SICMAR.</ns0:p><ns0:p>The words 'positively affect' mean that when the measured value of one variable increases, the related variable also increases.</ns0:p></ns0:div>
<ns0:div><ns0:head>SICMAR Quality</ns0:head><ns0:p>Software quality is the field of study that describes the desirable characteristics of a software product. Establishing a measure for the quality of the software is not an easy task. However, attributes such as design, usability, operability, security, compatibility, maintainability, and functionality can be considered to define metrics <ns0:ref type='bibr' target='#b22'>(Dalla et al., 2020)</ns0:ref>. When the quality of an AR PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table'>2020:09:52967:1:1:NEW 28 Feb 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>product is evaluated, questions such as how fast the system responds, how difficult it is to manipulate the system and markers, and to which extent the illumination affects marker recognition must be answered <ns0:ref type='bibr' target='#b23'>(De Paiva & Farinazzo, 2014)</ns0:ref>. In this paper, the quality was assessed considering the design and usability of SICMAR as recommended by <ns0:ref type='bibr' target='#b8'>Barraza, Cruz & Vergara (2015)</ns0:ref> and <ns0:ref type='bibr' target='#b43'>Pranoto et al. (2017)</ns0:ref>.</ns0:p><ns0:p>In the literature, no MAR works that report a hypothesized minimal mean value of quality that serves for comparison were found. Therefore, we propose the following procedure: i) determine the minimum and the maximum length of the Likert scale; ii) compute the scale range by subtracting (5-1=4) and dividing by five (4/5=0.80); iii) add the range to the least scale value to obtain the maximum. The ranges computed for a five-point Likert scale are 1--1.8--2.6--3.4--4.2--5. Results greater than 3.4 and less or equal to 4.2 are considered as much quality. Thus, the mean value of 3.8 was supposed to determine much quality. Moreover, as is explained in the results section, this value is the median obtained after experimentation. Hence, the following hypothesis is established.</ns0:p><ns0:p>H 12 : The mean value evaluated by students regarding the quality of SICMAR is greater than 3.8.</ns0:p></ns0:div>
<ns0:div><ns0:head>Software Development</ns0:head><ns0:p>We consider the cascade model to establish the stages to develop SICMAR. The cascade model is a linear procedure characterized by dividing the software development process into successive phases <ns0:ref type='bibr' target='#b48'>(Ruparelia, 2010)</ns0:ref>. The model encompasses five phases, including i) requirements; ii) design; iii) implementation; iv) testing; and v) maintenance. A visual representation of the SICMAR phases is shown in Fig. <ns0:ref type='figure'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Requirements</ns0:head><ns0:p>According to <ns0:ref type='bibr'>Billinghurts, Grasset & Looser (2005)</ns0:ref>, the physical components of the interface (inputs), the virtual visual and auditory display (outputs), and the interaction metaphors must be considered to build intuitive AR applications. Therefore, we determine five characteristics of the prototype to deal with the barriers faced by students: i) a set of markers will be used to determine the term to compute and the parameters involved (inputs); ii) 2D models will represent all the information needed for the calculations; iii) markers' movement will be used to observe the 2D models from different perspectives; iv) the calculation to solve will be defined with a combination of markers (touch manipulation metaphor); and v) we will employ virtual objects, such as text boxes, arrows, and images, to explain step-by-step calculations (outputs). A traditional computer application cannot offer all these features.</ns0:p><ns0:p>From the variety of information and communication technologies, PCs and mobile devices were considered to implement the prototype due to the high probability that a student has either one of them. The main differences between PCs and mobile devices are the display size, how it is manipulated, the processing power, bandwidth, and usage time. Portability, sensors included, and ease manipulation were considered to select mobile devices. Indeed, young users prefer mobile devices because they can be used anytime, carried from place to place, and connected to the Internet all day long. Moreover, recent studies have shown that almost 75% of AR works for educational settings were implemented on mobile devices obtaining satisfactory results <ns0:ref type='bibr' target='#b9'>(Bernacki, Greene & Crompton, 2019;</ns0:ref><ns0:ref type='bibr'>Cabero-Almenara et al., 2019)</ns0:ref>. AR applications are different from conventional applications that use a mouse and keyboard. Mobile augmented reality improves the user perception and interaction with the real world; a non-AR application cannot offer that feature. Also, MAR can turn a classic learning process into an engaging experience because students perceive learning as a game <ns0:ref type='bibr' target='#b6'>(Arici et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Regarding mobile gadgets, Android and iOS-based devices are the leaders. Nevertheless, we selected Android because i) it is the leading mobile operating system worldwide; ii) the price for publishing an app in the play store is much lesser than posting an app in the apple store; iii) the cost of Android-based devices is less than iOS-based devices; due to price, a student rarely has an iPhone; iv) it possesses a good support architecture and functional performance; v) the customization level offered makes it easy to use; vi) the assortment of the batteries' sizes overpowers the iPhone <ns0:ref type='bibr' target='#b34'>(Ivanov, Reznik, & Succi, 2018)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Design</ns0:head><ns0:p>There are various alternatives to implement AR solutions, including Wikitude, ARToolKit, Augumenta, Easy AR, HP Reveal, and Vuforia, offering exciting characteristics. Considering the analysis presented in Table <ns0:ref type='table'>1</ns0:ref> and based on the authors' experience, Vuforia Software Developer Kit (SDK) was selected. Vuforia is a robust platform that contains the libraries to implement the tasks related to AR, including real-time marker detection, recognition and tracking, and the computations for object superimposition. Unity 3D was employed to create the SICMAR visual environment and all the 2D virtual objects that will be superimposed on each marker.</ns0:p><ns0:p>SICMAR was designed based on the framework proposed by <ns0:ref type='bibr' target='#b8'>Barraza, Cruz, & Vergara (2015)</ns0:ref>.</ns0:p><ns0:p>In the rendering subsystem, two main tasks are executed: i) displaying the video acquired from the real world, and ii) rendering the 2D models. We designed a touch-based graphical user interface (GUI) to display the components and the video acquired from the mobile device. At the top of the GUI, we inserted two sections: i) input data and ii) calculate (output). The first show the input terms (markers) detected inside the scene, and the second shows the term the user wants to compute (see the upper left corner in Fig. <ns0:ref type='figure'>2</ns0:ref>). We used the Unity sprite renderer for rendering all the photorealistic images of the 2D models that will be superimposed inside the real-world video stream.</ns0:p><ns0:p>The context and world model subsystem includes the design of image targets (markers), the data about the interest points, and the 2D objects that are going to be used in the augmentation. We used the Brosvision marker generator to design the markers to represent each of the five terms explained in Eqs. (1) to (6). As shown in Fig. <ns0:ref type='figure'>3</ns0:ref>, the markers include lines, triangles, quadrilaterals, and at the center, a square with a letter corresponding to the simple interest term was added. Using Vuforia, we conducted a test of the contrast-based features (interest points) of the individual markers visible to the camera. All the markers earned five stars rating, which means they included excellent features for detection and tracking. Finally, four 2D objects were created for user interactions and augmentations, as shown in Fig. <ns0:ref type='figure'>2</ns0:ref>.</ns0:p><ns0:p>(a) To display information about the detected marker (input term).</ns0:p><ns0:p>(b) To capture the user inputs and determine if a term is handled as input or output.</ns0:p><ns0:p>(c) To display information about the time conversions.</ns0:p><ns0:p>(d) To display the calculation result (output term) or show an error.</ns0:p><ns0:p>Vuforia SDK was used to carry out the tracking subsystem. This subsystem exchanges marker tracking information with the rendering subsystem to superimpose the virtual 2D objects to the original scene displayed to the user.</ns0:p><ns0:p>Finally, the interaction subsystem collects and processes any input required by the user. A series of C# scripts were linked to the GUI objects. When a tap occurs on the screen, verification is carried out to determine if an element was touched. If the verification is valid, the search for a marker starts. If a marker is detected, then the corresponding method is invoked to carry out the task.</ns0:p></ns0:div>
<ns0:div><ns0:head>Implementation</ns0:head><ns0:p>The logic implemented to solve any of the Eqs. ( <ns0:ref type='formula'>1</ns0:ref>) to ( <ns0:ref type='formula' target='#formula_4'>6</ns0:ref>) is the following. The user taps the SICMAR icon to start the execution. The presentation screen is displayed, and the camera of the mobile device is turned on. When the user shows a valid marker in the front of the camera, it is recognized as the desired output. Then, the position, rotation, and perspective of the marker are computed, and the corresponding virtual object is superimposed accordingly to the view of the real scene. Next, the input checkbox is activated, and the prototype waits for the user to show the markers for input terms. When input markers are recognized, the text boxes to insert data are displayed, and the 2D objects are superimposed inside the real scene. Any marker different from the first selected can be used as input. The user must insert the data for each term with the keyboard of the device. Once the data was introduced, the input checkbox must be disabled to perform the computation. Immediately, verification is conducted to detect if the necessary data for the computation were inserted correctly. If there is any missing data, an error object is displayed, else the output calculated is presented. The process can be executed continuously.</ns0:p></ns0:div>
<ns0:div><ns0:head>Testing</ns0:head><ns0:p>An example of simple interest computation using SICMAR is shown in Fig. <ns0:ref type='figure'>2</ns0:ref>. If the user inserts r and t with different periods, then the associated conversions are computed. Notice that the result of simple interest computation is highlighted with the color blue. As shown in Fig. <ns0:ref type='figure'>4</ns0:ref>, the user selected quarters for r and fortnights for t. At the bottom of the screen, the value obtained from the conversion is displayed and explained for both periods. An example of students testing SICMAR is shown in Fig. <ns0:ref type='figure'>5</ns0:ref>.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:52967:1:1:NEW 28 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Maintenance Two experienced software developers tested our beta version of SICMAR. They recommended conducting modifications related to the color and size of the objects. We also performed modifications to the GUI, including object location changes and those related to interactivity. As explained in the discussion section, the students recommended performing additional modifications to SICMAR, which will be implemented soon.</ns0:p></ns0:div>
<ns0:div><ns0:head>Design of Data Collection Instruments</ns0:head><ns0:p>We designed two surveys to collect the data. The first serves to obtain information about students' motivation when the professor explained the simple interest topic using traditional materials (textbooks, slides, and whiteboards). The second survey gathers data about students' motivation when learning with SICMAR, technology acceptance, and prototype quality. Besides, we designed a data consent form and two five items tests to measure students' achievement.</ns0:p></ns0:div>
<ns0:div><ns0:head>The First Survey (Pre-test)</ns0:head><ns0:p>The first survey has two sections: the first includes items to collect students' general information, such as name, gender, and age, and the second includes items related to Keller's ARCS motivation model <ns0:ref type='bibr' target='#b36'>(Li & Keller, 2018)</ns0:ref>.</ns0:p><ns0:p>The instructional materials motivation survey (IMMS) assesses students' motivation based on the ARCS model that includes 36 items distributed as 12 items for (A), nine items for (R), nine items for (C), and six items for (S). Although IMMS was used and tested with a Cronbach α=0.96, it is long, and not all items are necessary, especially those measured in a negative or reverse way <ns0:ref type='bibr' target='#b19'>(Chen, 2019)</ns0:ref>. Therefore, the reduced IMMS (RIMMS) proposed by <ns0:ref type='bibr' target='#b38'>Loorbach et al. (2015)</ns0:ref> was employed. RIMMS comprises 12 five-point Likert scale items, three for each ARCS dimension. The original version was translated and adapted to the lesson of simple interest (see the left side of Table <ns0:ref type='table'>2</ns0:ref>). The minimum score on the RIMMS survey is 12, and the maximum is 60 with a midpoint of 36.</ns0:p><ns0:p>The items about attention measure the degree to which the professor's lesson attracts the learner's attention. We consider the organization, quality, and variety of the materials employed. On the other hand, we use the content and style of explanations to measure the lesson's relevance perceived by students. The items regarding confidence measure the degree to which the learner felt confident while completing the simple interest lesson. The final three items measure the degree to which the learner finds the lesson satisfactory and the intention to keep working.</ns0:p></ns0:div>
<ns0:div><ns0:head>The Second Survey (Post-test)</ns0:head><ns0:p>The second survey comprises four sections. The first includes items to collect students' general information. The second section includes the 12 RIMMS items of the first survey but adapted to assess students' motivation using SICMAR (see the right side of Table <ns0:ref type='table'>2</ns0:ref>).</ns0:p><ns0:p>The third section related to TAM comprises four items for perceived usefulness, five for perceived ease of use, and two for the intention to use (see Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>). The 11 items used a five-point Likert scale and were adapted from <ns0:ref type='bibr' target='#b41'>Miranda et al. (2016)</ns0:ref>. The minimum score for TAM is 11, and the maximum is 55 with a midpoint of 33. The four items regarding perceived usefulness measure the extent to which students believe that SICMAR would improve their performance in learning simple interest. The easiness that students perceived when using SICMAR is measured with the five items related to perceived ease of use (prototype manipulation employing the markers). The last two items measure the degree of acceptance when students use SICMAR.</ns0:p><ns0:p>The fourth section aims to gather information about SICMAR quality. The ten items based on the Likert five-point scale were adapted from <ns0:ref type='bibr' target='#b8'>Barraza, Cruz & Vergara (2015)</ns0:ref>. We collect information about SICMAR design (colors, size of the objects, velocity) and usability (results obtained) that together determined quality, as shown at the bottom of Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>The Practical Tests</ns0:head><ns0:p>Two financial mathematics professors helped us to design a set of practical exercises regarding simple interest computation. We divided the set of exercises to design two practical tests with five items each. The first test is applied after professor intervention, and the second after using SICMAR. Professors carefully reviewed both tests to ensure similar difficulty. In both tests, the first two items ask the students to compute simple interest. The last three questions are challenging because the terms to compute must be cleared from Eqs. ( <ns0:ref type='formula'>1</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_0'>2</ns0:ref>). The third question deals with principal computation, while the fourth and fifth deal with interest rate and time calculation, respectively. An example of two pre-test exercises is shown on the left column of Table <ns0:ref type='table'>S1</ns0:ref>, while the post-test examples are shown on the right column.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>The study was conducted in early March 2020. A classroom at a public university located in northern Mexico was used as an educational setting. One professor that participated in the planning stage organized the sessions that comprised the study. Both sessions were conducted with three days of difference.</ns0:p><ns0:p>The Mexican university where the study was conducted imposed three restrictions regarding the participation of the students and professor: i) all students enrolled in the financial mathematics course must participate; ii) the professor could only use the time established in the curriculum to offer explanations, and iii) only one session could be used to test SICMAR. Therefore, we decide to conduct a quasi-experimental study to establish a cause-and-effect relationship between independent and dependent variables.</ns0:p><ns0:p>A quasi-experimental study is characterized because the sample to study is not selected randomly, and control groups are not required. Instead, participants are assigned to the sample based on non-random criteria previously established (all the students in the financial mathematics course must participate). This study is also called a nonrandomized or pre-post intervention and is frequently employed to conduct research in the educative field <ns0:ref type='bibr' target='#b42'>(Otte et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Before the experiment, students did not have prior knowledge of the concepts related to simple interest. Students were informed about the research goal and that the data obtained will be treated with confidentiality and used only for academic purposes. Moreover, students completed a consent form regarding data use. Institute of Engineering and Technology of Universidad Autonoma de Ciudad Juarez emitted the approval to use the data and reviewed the consent form students filled out.</ns0:p><ns0:p>In the first session, which lasted two hours, the professor explained the simple interest lesson employing traditional materials. Students were then asked to realize a practice consisting of the pre-tests five test exercises and fill out the first survey. At the end of the first session, we request students to get an Android-based mobile device for the second session.</ns0:p><ns0:p>The second session lasted one and a half hours and started with an explanation about the use of SICMAR. Afterward, each student received a set of markers. Fortunately, all students brought the Android mobile device. Hence, smartphones with different features were used, which allow us to observe the variety of devices in which SICMAR can be executed. The average time to interact with the prototype was 39 minutes. Next, students were asked to realize the post-test practice consisting of five exercises and fill out the second survey. Students answered the surveys through the Internet (Microsoft forms) and practical exercises on a sheet of paper in both sessions.</ns0:p></ns0:div>
<ns0:div><ns0:head>Preliminary Data Analysis</ns0:head><ns0:p>Due to the restrictions imposed by the university, students were not divided into a control and experimental group. One hundred thirty-nine students enrolled in the financial mathematics course were surveyed. Data collected from the surveys was downloaded from Microsoft forms to create a database with IBM SPSS software. The responses obtained were minutely revised. The extreme values were not discarded, but 36 registers with incomplete information were identified. Therefore, the final sample comprises data from 103 students.</ns0:p><ns0:p>The sample size was deemed valid due to i) our sample almost doubled the mean (M=58.2) from the fourth column of Table <ns0:ref type='table'>1</ns0:ref>; and ii) the section related to ARCS is the biggest of our surveys; therefore, the rule of thumb that a sample should have at least five times as many observations as there are variables to be analyzed was fulfilled (5x12=60).</ns0:p><ns0:p>Of the 103 participants, n=59 (57.28%) were female, and n=44 (42.72%) were male. The participants' ages ranged from 18 to 30 years with a mean age (M=19.74, SD=1.93). We measure the internal reliability of the surveys with Cronbach's Alpha (α). A summary of the results is shown in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>. Values greater than 0.7 are accepted (good-excellent). The total item correlation computed does not reflect the necessity of eliminating any item. Therefore, the α value for R in pre-test ARCS was also accepted.</ns0:p></ns0:div>
<ns0:div><ns0:head>Assessment of Students Motivation with RIMMS</ns0:head><ns0:p>This part of the study allowed us to assess if a significant difference in motivation is obtained when comparing the professor's lesson and SICMAR. The mean and standard deviation for each item are displayed in Table <ns0:ref type='table'>2</ns0:ref>. All scores exceed the central value of the scale. Moreover, the greater mean values were always obtained with SICMAR. The minimum difference is observed for attention (4.14-3.95=0.19) and the maximum for relevance (4.38-3.87=0.51). The difference for the whole study is (4.17-3.87=0.3). The results for both motivation studies are plotted in Fig. <ns0:ref type='figure'>6</ns0:ref>.</ns0:p><ns0:p>Also, it was necessary to determine if the differences obtained are statistically significant. The normality test indicated that data from the survey were normally distributed. Therefore, the paired t-test with a 5% level of significance was calculated (t=-1.761 for attention; t=-6.120 for relevance, t=-2.281 for confidence, t=-2.877 for satisfaction, and t=-3.613 for ARCS). P-values less than or equal to 0.05 are considered significant, and values greater than 0.05 as nonsignificant. Considering the null hypothesis, 'there is no significant difference between pre-test and post-test scores':</ns0:p><ns0:p>• H 1 : Is rejected. We obtained p=0.081; therefore, the difference of 0.19 is not significant regarding attention (A).</ns0:p><ns0:p>• H 2 : Is accepted. There is statistical evidence (p<0.001) to support that with SICMAR, a significant difference of 0.51 on students' relevance (R) is obtained.</ns0:p><ns0:p>• H 3 : Is accepted. The difference of 0.21 (4.08-3.87) is significant regarding the confidence (C) dimension with p=0.025.</ns0:p><ns0:p>• H 4 : Is accepted. We obtained p=0.005; hence, the difference of 0.3 is significant regarding students' satisfaction (S).</ns0:p><ns0:p>The magnitude and significance of causal connections between variables can be estimated using path analysis. We perform a path analysis to compute total effects among ARCS four dimensions and determine students' motivation. The diagram in Fig. <ns0:ref type='figure'>7</ns0:ref> is the visual representation of the relationships between variables. The path coefficients (β) estimate the variance of the indicator that is accounted by the latent construct. The higher the value of β, the stronger the effect. The values calculated for the pre-test are shown above the arrows and the post-test values below the arrows. We also calculate the determination coefficients (R 2 ) to measure how close the data are to the fitted regression line (values must be greater than 0.2). The pre-test values are shown in the upper right corner and the lower right corner for the post-test.</ns0:p><ns0:p>It is noted from Fig. <ns0:ref type='figure'>7</ns0:ref> that a significant direct effect exists from A->R, from R->C, and C->S with a significance level of 5% for both tests. Hence, the hypothesis:</ns0:p><ns0:p>• H 5 : Is accepted. Students increased their motivation using SICMAR. The value M=4.17 obtained with SICMAR is greater than M=3.87 obtained with the professor's lesson. The difference of 0.3 is statistically significant (p<0.001), representing a motivation increase of 7.75%. In summary, the mean values and the path analysis corroborate the motivation increase.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2020:09:52967:1:1:NEW 28 Feb 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Assessment of Students' Achievement in Practice Tests</ns0:head><ns0:p>The professor reviewed the students' responses to assign the grades. An answer was marked correct only if both the result and the procedure used to obtain it were correct. Many students presented correct results but a wrong procedure; these cases were graded as incorrect. Since the test includes five items, each correct answer is worth 20 points; therefore, the final grade ranged from 0 to 100. A summary of the correct and incorrect responses for each test is shown in Table <ns0:ref type='table'>5</ns0:ref>.</ns0:p><ns0:p>We obtained (M=39.02, SD=28.88) for the pre-test and (M=66.60, SD=29.02) for the post-test. Hence, an increase of 70.68% was observed in post-test grades compared with the pre-test. For the post-test, 25 students obtained the maximum grade (100), and only four in the pre-test. Thirteen students (13.59%) obtained better grades on the pre-test than on the post-test. Moreover, 72 students obtained better scores on the post-test than on the pre-test, while 18 students obtained the same score on both tests. In both sessions, women obtained better grades, with pre-test values (M=44.06, SD=27.41) and post-test values (M=76.61, SD=20.41). The pre-test values obtained for men were (M=32.27, SD=30.56), and for the post-test (M=53.18, SD=31.38). The plot (box and whiskers) of the scores obtained by students is illustrated in Fig. <ns0:ref type='figure'>8</ns0:ref>.</ns0:p><ns0:p>The Kolmogorov-Smirnov test was employed to select the statistical analysis tool according to the data distribution. The results obtained with a 5% level of significance using SPSS for the pre-test were (Z=0.162, p<0.001, skewness=0.285, skewness standard error=0.238, kurtosis=-0.885, and kurtosis standard error=0.472). For the post-test, the results were (Z=0.192, p<0.001, skewness=-0.675, skewness standard error=0.238, kurtosis=-0.396, and kurtosis standard error=0.472). For the paired differences (pre-test vs. post-test), the results were (Z=0.109, p=0.004, skewness=-0.329, skewness standard error=0.238, kurtosis=0.242, and kurtosis standard error=0.472), meaning that normality is not satisfied. Thus, the paired Wilcoxon signed-rank test was utilized to determine whether the grade difference is significant (Z=-6.129, p<0.001, and medium effect size d=-0.427). Therefore:</ns0:p><ns0:p>• H 6 : Is accepted. On average, students answered 3.33 questions correctly with SICMAR and 1.95 with the professor's material. The difference of 1.38 is statistically significant (p<0.001).</ns0:p></ns0:div>
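<ns0:p>A minimal sketch of the same decision procedure, assuming hypothetical grade vectors in place of our SPSS data: test the differences for normality and, since normality fails, fall back to the paired Wilcoxon signed-rank test.</ns0:p>
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre = rng.choice([0, 20, 40, 60, 80, 100], size=103)   # hypothetical pre-test grades
post = rng.choice([0, 20, 40, 60, 80, 100], size=103)  # hypothetical post-test grades

diff = post - pre
z, p_norm = stats.kstest(diff, "norm", args=(diff.mean(), diff.std(ddof=1)))
# The paper reports Z=0.109, p=0.004 for the differences, so normality is rejected.

w, p = stats.wilcoxon(pre, post)  # non-parametric paired comparison
# Reported values: Z=-6.129, p<0.001, medium effect size d=-0.427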
<ns0:div><ns0:head>SICMAR Technology Acceptance Assessment</ns0:head><ns0:p>We used AMOS software to examine the effects between observed and latent variables and the validity of the proposed hypotheses. The variables and the relationships between them were established considering <ns0:ref type='bibr' target='#b41'>Miranda et al. (2016)</ns0:ref> and <ns0:ref type='bibr' target='#b31'>Hamidi & Chavoshi (2018)</ns0:ref>. The model in Fig. <ns0:ref type='figure'>9</ns0:ref> comprises four latent variables (spheres) and 21 observed variables (squares). The relationships are symbolized with unidirectional arrows. The latent variable of quality is independent because no arrow is connected to it; the remainder are dependent (at least one arrow is connected).</ns0:p><ns0:p>In structural equation modeling, only identified (over-, just-, or under-identified) models can be estimated. Identification is the act of formally stating a model. We conducted the identification by computing the degrees of freedom (DoF=184). When the DoF is greater than 0, the model has more information than parameters to estimate; therefore, our model is over-identified. Afterward, we calculated the sample variances and covariances to obtain the values that provide a reproduced matrix that best fits the observed matrix. A model fits the data well if the differences between observed and predicted values are small. For this purpose, we employed the maximum likelihood method.</ns0:p><ns0:p>A summary of the values obtained is shown in Table <ns0:ref type='table'>6</ns0:ref>. We expected χ 2 /DoF to range from 2 to 3, a GFI value near 1, and an RMR close to 0. Our model fulfills these conditions; therefore, we have a good-fitting model. Next, we computed the coefficients of determination (R 2 ) to measure the percentage of variance explained by the independent variables. The results are shown in Fig. <ns0:ref type='figure'>9</ns0:ref>. Values higher than 0.5 are considered good.</ns0:p><ns0:p>Then, we computed the standardized factor loadings and p-values for the observed variables. A summary of the results is shown in the third and fourth columns of Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>. All the relationships between observed and latent variables were accepted at the 1% significance level. For quality, the variables Q9 and Q10, related to the markers, were the most important. For the perceived ease of use, PEU2 and PEU4, which address familiarity with the technology and manipulation of the prototype controls, obtained the greater values. Regarding the perceived usefulness, PU3 and PU4, which refer to the usability of SICMAR for learning and remembering concepts, were the most important. The highest value was obtained for ITU1, where students expressed their interest in continuing to use SICMAR. Finally, we computed the path coefficients (β), the p-values, and the direct, indirect, and total effects between variables (see Table <ns0:ref type='table'>7</ns0:ref>). A direct effect is a relationship between one variable and another. An indirect effect is a relationship between two variables mediated by at least one other variable. The sum of direct and indirect effects determines the total effect. Each direct effect is represented with a β in Fig. <ns0:ref type='figure'>9</ns0:ref> and helps validate the hypotheses.</ns0:p><ns0:p>• H 7 : Is accepted.
The quality effect on the perceived usefulness has β=0.694 and p<0.05.</ns0:p><ns0:p>When quality increases by one standard deviation, the perceived usefulness goes up by 0.694 standard deviations, establishing a significant relationship with a confidence of 95%. Also, quality exerts an indirect effect on the intention to use when it passes through the perceived usefulness.</ns0:p><ns0:p>• H 8 : Is accepted. The quality effect on the perceived ease of use has β=0.902 and p<0.001. When quality increases by one standard deviation, the perceived ease of use goes up by 0.902 standard deviations, establishing a significant relationship with a confidence of 95%.</ns0:p><ns0:p>• H 9 : Is rejected. The perceived ease of use effect on the perceived usefulness has β=0.153 and p=0.562. Therefore, the direct effect is not significant, with a confidence of 95%.</ns0:p><ns0:p>• H 10 : Is rejected. The perceived ease of use effect on the intention to use has β=0.054 and p=0.699. Therefore, the direct effect is not significant, with a confidence of 95%.</ns0:p><ns0:p>• H 11 : Is accepted. The perceived usefulness effect on the intention to use has β=0.830 and p<0.001. When the perceived usefulness increases by one standard deviation, the intention to use goes up by 0.830 standard deviations, establishing a significant relationship with a confidence of 95%.</ns0:p><ns0:p>According to the results, the intention to use SICMAR is significantly affected by quality and perceived usefulness. Students expressed their intention to use SICMAR due to the total effect of 0.738 of quality on the intention to use, dominated by the path Quality->PU->ITU.</ns0:p></ns0:div>
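<ns0:p>As a quick cross-check of the identification and total-effect figures above, the following sketch reproduces both calculations from the numbers reported in the text. The count of 47 free parameters is an assumption implied by DoF=184, not a value taken from the AMOS output.</ns0:p>
# Identification: DoF = distinct variances/covariances minus free parameters
observed = 21
moments = observed * (observed + 1) // 2  # 231 distinct variances and covariances
free_params = 47                          # assumption implied by the reported DoF
dof = moments - free_params
print(dof)  # 184 > 0, so the model is over-identified

# Total effect of quality on the intention to use, from the reported betas
q_pu, q_peu, peu_pu, peu_itu, pu_itu = 0.694, 0.902, 0.153, 0.054, 0.830
total = q_pu * pu_itu + q_peu * peu_pu * pu_itu + q_peu * peu_itu
print(round(total, 3))  # ~0.739, matching the reported 0.738 up to rounding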
<ns0:div><ns0:head>SICMAR Quality Assessment</ns0:head><ns0:p>We conducted a study to determine whether students considered SICMAR a good-quality prototype. The scores obtained (M=3.93, SD=0.62) indicate that students consider SICMAR a good-quality prototype, as shown in Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>. The minimum value obtained (M=3.16) corresponds to item Q5: students considered the buttons small, so they need to be enlarged for easier manipulation. The next minimum corresponds to Q10 (M=3.56); hence, students could not easily manipulate the device and the markers simultaneously. The best results correspond to Q1 (M=4.45) and Q6 (M=4.40), which suggest that all the simple interest terms were included and that the response speed for computations was fast.</ns0:p><ns0:p>The quality data follow a normal distribution. Therefore, a one-sample t-test with a significance of 5% and a reference value of 3.8 was performed (t=2.126, p=0.036, and d=0.20):</ns0:p><ns0:p>• H 12 : Is accepted. A significant difference is obtained when comparing M=3.93 with the reference value (3.8). Also, as mentioned in the TAM study, quality influences the students' intention to use SICMAR.</ns0:p></ns0:div>
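<ns0:p>The one-sample comparison against the 3.8 reference value can be sketched as follows; the quality vector is a hypothetical stand-in for the 103 per-student quality means.</ns0:p>
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
quality = rng.normal(3.93, 0.62, 103)             # hypothetical per-student quality means

t, p = stats.ttest_1samp(quality, popmean=3.8)    # reference value 3.8
d = (quality.mean() - 3.8) / quality.std(ddof=1)  # one-sample Cohen's d
print(f"t={t:.3f}, p={p:.3f}, d={d:.2f}")         # paper reports t=2.126, p=0.036, d=0.20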
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>From the results obtained, we observed that mobile augmented reality can be applied to financial mathematics, with the benefit of improving users' perception of and interaction with the real world; a non-AR application cannot offer those features.</ns0:p><ns0:p>The findings of our motivation study are consistent with those reported in the papers of Table <ns0:ref type='table'>1</ns0:ref>. Mobile augmented reality changes the way students interact with the world. Students use their fingers to enter the values and the markers to define the inputs and outputs. As a result, students' motivation to learn simple interest increases. According to the professor, students became more engaged during the post-test session. This is due to the different and interactive way of presenting the information. Students determined that MAR could turn a classic learning process into an engaging experience (students perceived learning as a game). Based on Table <ns0:ref type='table'>2</ns0:ref>, the elements that increased students' motivation were: i) regarding relevance, the content and style of the SICMAR explanations; ii) regarding confidence, the organization of the information; iii) regarding satisfaction, the design of the prototype (the interactive representations of time conversions, the bi-dimensional models, and how marker interaction determined the calculation to be computed). However, students did not consider the contents of SICMAR sufficient to keep all their attention. The fact of using ICTs also influenced the results obtained. Moreover, the younger participants felt more motivated with SICMAR, as expected.</ns0:p><ns0:p>According to <ns0:ref type='bibr' target='#b38'>Loorbach et al. (2015)</ns0:ref>, confidence influences students' persistence and accomplishment; hence, it is crucial for motivation. In our post-test study, confidence (β=0.902) positively affects students' motivation. The main differences between our findings and the works by <ns0:ref type='bibr' target='#b26'>Estapa & Nadolny (2015)</ns0:ref>, <ns0:ref type='bibr' target='#b18'>Cascales et al. (2017), and</ns0:ref><ns0:ref type='bibr' target='#b33'>Ibáñez et al. (2020)</ns0:ref> are that our sample size is the largest, we used RIMMS instead of IMMS (with fewer items, students tire less), and we utilized path analysis. In summary, the statistical results indicate that students who used SICMAR significantly increased their motivation scores (by 7.75%) compared with the scores obtained in the professor's lesson.</ns0:p><ns0:p>As observed in Fig. <ns0:ref type='figure'>8</ns0:ref>, students performed better when answering the practice exercises using SICMAR than in the professor's lesson; an increase of 70.68% was observed. Regarding simple interest computation, students using SICMAR performed better on the first question but not on the second (see Table <ns0:ref type='table'>5</ns0:ref>). All the students with incorrect answers to these questions in the pre-test failed in the period conversion, whereas only ten students failed in the post-test due to period conversions. Mistakes such as not including the procedure to solve the problem, not copying the correct answer, and selecting the wrong markers were the most common.</ns0:p><ns0:p>Regarding questions 3-5, the performance increase of students using SICMAR is notable. In these questions, the terms needed to compute the solution must be cleared. All the students with incorrect answers in the pre-test failed due to conversions, whereas half of the incorrect responses in the post-test were due to conversions. The remaining mistakes were due to not including the procedure to solve the problem or not copying the correct answer.</ns0:p><ns0:p>The works by <ns0:ref type='bibr' target='#b26'>Estapa & Nadolny (2015)</ns0:ref>, <ns0:ref type='bibr' target='#b54'>Tobar et al. (2015)</ns0:ref>, and <ns0:ref type='bibr' target='#b33'>Ibáñez et al. (2020)</ns0:ref> reported on students' achievements; however, they measured the time taken to execute the tasks, unlike our proposal, which graded the answers to practice exercises. <ns0:ref type='bibr' target='#b44'>Purnama et al. (2014)</ns0:ref> reported an increase of 17% in the learning process; unfortunately, the way it was measured was never explained. <ns0:ref type='bibr' target='#b20'>Coimbra et al. (2015)</ns0:ref> presented only preliminary qualitative explanations about enhancing math learning. Therefore, we cannot provide comparisons against these literature works.</ns0:p><ns0:p>None of the works in Table <ns0:ref type='table'>1</ns0:ref> utilized the TAM; hence, we cannot offer comparisons. However, the path Quality->PU->ITU determined the students' intention to use SICMAR. Students considered the concepts explained, the calculation speed, the results, and the size and color of the texts displayed as the critical features determining quality.
Students considered SICMAR useful for learning, and it helped them remember the concepts related to simple interest. Finally, students deemed SICMAR's quality sufficient to keep using the prototype.</ns0:p></ns0:div>
<ns0:div><ns0:head>Lessons Learned</ns0:head><ns0:p>Our augmented reality educational prototype serves as an alternative tool for learning the simple interest topic, but it cannot replace the teacher. Professors will continue looking for tools to improve the teaching-learning process. However, teachers are often not willing to make the effort to create such tools, and frequently they do not have the computer skills to develop them, because a usable app is challenging to create. The software for rapidly creating augmented reality experiences does not offer all the resources needed to explain complex science topics.</ns0:p><ns0:p>Augmented reality causes enjoyment in students and a desire to repeat the experience. Although no complex 3D models were needed to represent the augmented reality in SICMAR, this alternative representation of real phenomena motivates students. Even though the literature asserts that augmented reality can be exploited in any field, we recommend choosing application areas where 3D models showing different views of the objects are needed. Even though 3D models are the basis of augmented reality, it is still difficult to explain how the computer-based models inserted into the real scene increase student achievement.</ns0:p><ns0:p>Some students mentioned that prolonged use of SICMAR slows down and warms up the device. We know that this problem is common when a mobile device is used for a considerable time. However, we are not sure whether this problem is accentuated by the execution of our prototype's complex routines. This invites developers to conduct a thorough review to detect routines whose processor usage can be optimized, to consider native development, or to test another coding framework.</ns0:p><ns0:p>Remarkably, we detected that students made mistakes in handling the five markers used to manipulate SICMAR. For example, they selected the principal marker when the correct one was the amount. In such cases, SICMAR cannot help students understand the stated problem or identify the concepts involved. The solution to the issue of problem understanding is still an open challenge. We recommend that developers avoid combinations of many markers to trigger augmented reality.</ns0:p><ns0:p>If professors and students had declined to test SICMAR, they would still think that paper, blackboards, books, slides, or computer-based content are the only resources for learning. With the findings obtained, we observed the potential of augmented reality for educational settings. School administrators must be convinced that augmented reality is an educational tool that they should provide for students. Moreover, schools must invest effort in implementing more resources based on augmented reality across the complete curriculum. With this, it will be possible: i) to observe the real impact of augmented reality on students; ii) to track the usability of the resources; iii) to detect the moment when interest is lost; iv) to establish whether the impact was due only to novelty; and v) to know what happens with students' concentration and cognitive load.</ns0:p></ns0:div>
<ns0:div><ns0:head>Limitations</ns0:head><ns0:p>After experimentation, we note some limitations of our study. Because we conducted a quasi-experimental study, our data might be biased; therefore, the reported findings should be replicated with an experimental study (using control and experimental groups). We cannot know whether students changed their behavior because they were aware of participating in a research study. Only one financial mathematics professor used SICMAR, so the positive comments regarding usability may change as more teachers become involved. Some students focused their attention on the application and not on the essential parts of the topic to learn. This phenomenon is known as the attention tunneling effect, which can explain why some students scored lower using SICMAR. Also, not all students felt comfortable using SICMAR, which suggests that using ICTs can be challenging for some people. Moreover, the issues related to gender were not analyzed in depth, although this is currently a trend in the AR field.</ns0:p></ns0:div>
<ns0:div><ns0:head>Future Research</ns0:head><ns0:p>Extensions of the proposed study include the improvement of the interaction environment, a larger sample of students, the measurement of cognitive load, the involvement of more financial mathematics teachers, and the implementation of other financial mathematics topics. It is desirable to study in depth the cases of students whose grades decreased using SICMAR; the worst case was a student who obtained a grade of 80 with the professor's lesson and zero with SICMAR. Also, in-depth analyses of the individual answers to the tests and the questionnaires will be conducted. A critical review of the interface design should be performed. Further research is necessary to determine the content that must be added to SICMAR to keep students' attention. Finally, it would be advisable to run a pilot study with Microsoft HoloLens to observe whether the possibility of not using the hands increases students' motivation and achievement.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In this paper, the SICMAR prototype, based on augmented reality, was introduced to verify its effects on the learning of simple interest by undergraduate students of financial mathematics. Concepts such as principal, amount, time, interest rate, and simple interest are considered fundamental to promoting students' financial education. SICMAR was tested in a real university setting to assess its quality, students' motivation using ARCS, students' achievement in answering practice exercises, and technology acceptance with an extended TAM. The results obtained from tests with 103 participants revealed that the undergraduate students were interested in using SICMAR frequently because of its quality, that they were motivated to learn the simple interest topics, and that they increased their achievement in answering practice exercises. All this leads to the conclusion that SICMAR is a valuable complementary tool for learning the concepts related to simple interest computation.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 1</ns0:head><ns0:p>Visual representation of the methodology to develop SICMAR.</ns0:p><ns0:p>The third and fourth sections of the second survey (post-test).</ns0:p><ns0:p>SICMAR TAM. Please select the number that best represents how you feel about SICMAR acceptance: 1=Strongly disagree, 2=Disagree, 3=Neutral, 4=Agree, 5=Strongly agree.</ns0:p></ns0:div>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>; Estapa and</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>Please think about each statement concerning the professor's lesson you have just participated and indicated how true it is. Give the answer that truly applies to you, not what you would like to be true or what you think others want to hear. Use the following values to indicate your response to each item: 1=Not true, 2=Slightly true, 3=Moderately true, 4=Mostly true, and 5=Very true. Please think about each statement concerning the SICMAR you have just used and indicated how true it is. Give the answer that truly applies to you, not what you would like to be true or what you think others want to hear. Use the following values to indicate your response to each item: 1=Not true, 2=Slightly true, 3=Moderately true, 4=Mostly true, and 5=Very true.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>General Data</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Name (s):</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>Surname:</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Age:</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Gender:</ns0:cell><ns0:cell>o (Male)</ns0:cell><ns0:cell /><ns0:cell>o (Female)</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>ARCS Professor</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>ARCS SICMAR</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Mean</ns0:cell><ns0:cell>SD</ns0:cell><ns0:cell /><ns0:cell>Mean</ns0:cell><ns0:cell>SD</ns0:cell></ns0:row><ns0:row><ns0:cell>Attention (A)</ns0:cell><ns0:cell>3.95</ns0:cell><ns0:cell>0.81</ns0:cell><ns0:cell>Attention (A)</ns0:cell><ns0:cell>4.14</ns0:cell><ns0:cell>0.81</ns0:cell></ns0:row><ns0:row><ns0:cell>A1. The quality of the materials used helped to hold my attention.</ns0:cell><ns0:cell>3.91</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>A1. The quality of the contents displayed helped to hold my attention.</ns0:cell><ns0:cell>4.19</ns0:cell><ns0:cell>0.93</ns0:cell></ns0:row><ns0:row><ns0:cell>A2. The way the information was</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>A2. The way the information was</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>organized helped keep my attention.</ns0:cell><ns0:cell>3.97</ns0:cell><ns0:cell>0.89</ns0:cell><ns0:cell>organized (buttons, menus) helped</ns0:cell><ns0:cell>4.09</ns0:cell><ns0:cell>0.90</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>keep my attention.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>A3. The variety of readings, exercises,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>A3. The variety of 2D models and</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>and illustrations helped keep my</ns0:cell><ns0:cell>3.98</ns0:cell><ns0:cell>1.04</ns0:cell><ns0:cell>interactions helped keep my attention</ns0:cell><ns0:cell>4.17</ns0:cell><ns0:cell>0.94</ns0:cell></ns0:row><ns0:row><ns0:cell>attention on the explanations.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>on the explanations.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Relevance (R)</ns0:cell><ns0:cell>3.87</ns0:cell><ns0:cell>0.74</ns0:cell><ns0:cell>Relevance (R)</ns0:cell><ns0:cell>4.38</ns0:cell><ns0:cell>0.70</ns0:cell></ns0:row><ns0:row><ns0:cell>R1. It is clear to me how the content of</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>R1. 
It is clear to me how the content of</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>this lesson is related to things I already</ns0:cell><ns0:cell>3.35</ns0:cell><ns0:cell>1.02</ns0:cell><ns0:cell>SICMAR is related to things I already</ns0:cell><ns0:cell>4.48</ns0:cell><ns0:cell>0.81</ns0:cell></ns0:row><ns0:row><ns0:cell>know.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>know.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>R2. The content and style of</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>R2. The content and style of</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>explanations convey the impression that being able to work with simple</ns0:cell><ns0:cell>4.05</ns0:cell><ns0:cell>0.92</ns0:cell><ns0:cell>explanations used by SICMAR convey the impression that being able to work</ns0:cell><ns0:cell>4.31</ns0:cell><ns0:cell>0.86</ns0:cell></ns0:row><ns0:row><ns0:cell>interest is worth it.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>with simple interest is worth it.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>R3. The content of this lesson will be useful to me.</ns0:cell><ns0:cell>4.22</ns0:cell><ns0:cell>0.89</ns0:cell><ns0:cell>R3. The content of SICMAR will be useful to me.</ns0:cell><ns0:cell>4.36</ns0:cell><ns0:cell>0.86</ns0:cell></ns0:row><ns0:row><ns0:cell>Confidence (C)</ns0:cell><ns0:cell>3.87</ns0:cell><ns0:cell>0.77</ns0:cell><ns0:cell>Confidence (C)</ns0:cell><ns0:cell>4.08</ns0:cell><ns0:cell>0.73</ns0:cell></ns0:row><ns0:row><ns0:cell>C1. As I worked with this lesson, I was</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>C1. As I worked with SICMAR, I was</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>confident that I could learn how to</ns0:cell><ns0:cell>4.12</ns0:cell><ns0:cell>0.91</ns0:cell><ns0:cell>confident that I could learn how to</ns0:cell><ns0:cell>4.07</ns0:cell><ns0:cell>0.88</ns0:cell></ns0:row><ns0:row><ns0:cell>compute simple interest well.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>compute simple interest well.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>C2. After working with this lesson for</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>C2. After working with SICMAR for a</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>a while, I was confident that I would be able to pass a test about simple</ns0:cell><ns0:cell>3.54</ns0:cell><ns0:cell>0.94</ns0:cell><ns0:cell>while, I was confident that I would be able to pass a test about simple</ns0:cell><ns0:cell>4.08</ns0:cell><ns0:cell>0.92</ns0:cell></ns0:row><ns0:row><ns0:cell>interest.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>interest.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>C3. The good organization of the</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>C3. The good organization of</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>content helped me be confident that I</ns0:cell><ns0:cell>3.96</ns0:cell><ns0:cell>0.83</ns0:cell><ns0:cell>SICMAR helped me be confident that</ns0:cell><ns0:cell>4.11</ns0:cell><ns0:cell>0.75</ns0:cell></ns0:row><ns0:row><ns0:cell>would learn about simple interest.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>I would learn about simple interest.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Satisfaction (S)</ns0:cell><ns0:cell>3.80</ns0:cell><ns0:cell>0.77</ns0:cell><ns0:cell>Satisfaction (S)</ns0:cell><ns0:cell>4.10</ns0:cell><ns0:cell>0.83</ns0:cell></ns0:row><ns0:row><ns0:cell>S1. 
I enjoyed working with this lesson</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>S1. I enjoyed working with SICMAR</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>so much that I was stimulated to keep</ns0:cell><ns0:cell>3.61</ns0:cell><ns0:cell>0.89</ns0:cell><ns0:cell>so much that I was stimulated to keep</ns0:cell><ns0:cell>3.92</ns0:cell><ns0:cell>0.93</ns0:cell></ns0:row><ns0:row><ns0:cell>on working.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>on working.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>S2. I really enjoyed working with this simple interest lesson.</ns0:cell><ns0:cell>3.85</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell>S2. I really enjoyed working with SICMAR.</ns0:cell><ns0:cell>4.07</ns0:cell><ns0:cell>0.92</ns0:cell></ns0:row><ns0:row><ns0:cell>S3. It was a pleasure to work with such a well-designed lesson.</ns0:cell><ns0:cell>3.95</ns0:cell><ns0:cell>0.82</ns0:cell><ns0:cell>S3. It was a pleasure to work with such a well-designed prototype.</ns0:cell><ns0:cell>4.31</ns0:cell><ns0:cell>0.89</ns0:cell></ns0:row><ns0:row><ns0:cell>ARCS</ns0:cell><ns0:cell>3.87</ns0:cell><ns0:cell>0.69</ns0:cell><ns0:cell>ARCS</ns0:cell><ns0:cell>4.17</ns0:cell><ns0:cell>0.66</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>Please select the number that best represents how do you feel about SICMAR quality: 1=Not at all, 2=A little, 3=Moderately, 4=Much, 5=Very much.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>SD</ns0:cell><ns0:cell>Standardized Factor Loadings</ns0:cell><ns0:cell>Hypotheses Interpretation</ns0:cell></ns0:row><ns0:row><ns0:cell>Perceived Usefulness (PU)</ns0:cell><ns0:cell>4.09</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>PU1. I could improve my learning performance by using SICMAR</ns0:cell><ns0:cell>3.97</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.762</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>PU2. I could enhance my simple interest proficiency by using SICMAR</ns0:cell><ns0:cell>3.99</ns0:cell><ns0:cell>0.97</ns0:cell><ns0:cell>0.771</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>PU3. I think SICMAR is useful for learning purposes.</ns0:cell><ns0:cell>4.25</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>0.820</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>PU4. Using SICMAR will be easy to remember the</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>concepts related to the calculation of simple</ns0:cell><ns0:cell>4.17</ns0:cell><ns0:cell>0.97</ns0:cell><ns0:cell>0.832</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>interest.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Perceived Ease of Use (PEU)</ns0:cell><ns0:cell>4.04</ns0:cell><ns0:cell>0.81</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>PEU1. I think SICMAR is attractive and easy to use</ns0:cell><ns0:cell>3.79</ns0:cell><ns0:cell>1.13</ns0:cell><ns0:cell>0.679</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>PEU2. Learning to use SICMAR was not a</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>problem for me due to my familiarity with the</ns0:cell><ns0:cell>4.32</ns0:cell><ns0:cell>0.97</ns0:cell><ns0:cell>0.805</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>technology used.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>PEU3. The marker detection was fast.</ns0:cell><ns0:cell>4.02</ns0:cell><ns0:cell>1.04</ns0:cell><ns0:cell>0.664</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>PEU4. The tasks related to the manipulation of controls were simple to execute.</ns0:cell><ns0:cell>3.92</ns0:cell><ns0:cell>1.04</ns0:cell><ns0:cell>0.817</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>PEU5. I was able to locate the areas for conversions and calculations quickly.</ns0:cell><ns0:cell>4.19</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.792</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>Intention to Use SICMAR (ITU)</ns0:cell><ns0:cell>4.38</ns0:cell><ns0:cell>0.82</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>ITU1. I want to use the app in the future if I have the opportunity.</ns0:cell><ns0:cell>4.28</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>0.925</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>ITU2. 
The main concepts of SICMAR can be used to learn other topics.</ns0:cell><ns0:cell>4.49</ns0:cell><ns0:cell>0.81</ns0:cell><ns0:cell>0.754</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>TAM</ns0:cell><ns0:cell>4.12</ns0:cell><ns0:cell>0.72</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>SICMAR Quality</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Quality questions</ns0:cell><ns0:cell>Mean</ns0:cell><ns0:cell>SD</ns0:cell><ns0:cell>Standardized Factor Loadings</ns0:cell><ns0:cell>Hypotheses Interpretation</ns0:cell></ns0:row><ns0:row><ns0:cell>Q1. SICMAR showed all the concepts explained by the teacher.</ns0:cell><ns0:cell>4.45</ns0:cell><ns0:cell>0.84</ns0:cell><ns0:cell>0.450</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>Q2. The results obtained with SICMAR were correct.</ns0:cell><ns0:cell>4.24</ns0:cell><ns0:cell>0.82</ns0:cell><ns0:cell>0.562</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>Q3. The colors used for conversions were adequate.</ns0:cell><ns0:cell>4.17</ns0:cell><ns0:cell>0.91</ns0:cell><ns0:cell>0.527</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>Q4. The texts and numbers displayed by SICMAR were legible.</ns0:cell><ns0:cell>4.13</ns0:cell><ns0:cell>0.94</ns0:cell><ns0:cell>0.627</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>Q5. The size of the buttons allowed the easy manipulation of SICMAR.</ns0:cell><ns0:cell>3.16</ns0:cell><ns0:cell>1.22</ns0:cell><ns0:cell>0.531</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>Q6. SICMAR velocity of response to carry out the calculations was fast.</ns0:cell><ns0:cell>4.40</ns0:cell><ns0:cell>0.85</ns0:cell><ns0:cell>0.528</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>Q7. The classroom illumination was adequate.</ns0:cell><ns0:cell>3.79</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.513</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>Q8. The manipulation of the electronic device I use was straightforward.</ns0:cell><ns0:cell>3.76</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.676</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>Q9. Markers' manipulation was easy.</ns0:cell><ns0:cell>3.65</ns0:cell><ns0:cell>1.05</ns0:cell><ns0:cell>0.747</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>Q10. The manipulation of the device in conjunction with the markers was easy.</ns0:cell><ns0:cell>3.56</ns0:cell><ns0:cell>1.06</ns0:cell><ns0:cell>0.703</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>Quality</ns0:cell><ns0:cell>3.93</ns0:cell><ns0:cell>0.62</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Cronbach's alpha values for both surveys.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Measurement</ns0:cell><ns0:cell>α</ns0:cell></ns0:row><ns0:row><ns0:cell>A</ns0:cell><ns0:cell>0.867</ns0:cell></ns0:row><ns0:row><ns0:cell>R</ns0:cell><ns0:cell>0.679</ns0:cell></ns0:row><ns0:row><ns0:cell>C</ns0:cell><ns0:cell>0.821</ns0:cell></ns0:row><ns0:row><ns0:cell>S</ns0:cell><ns0:cell>0.872</ns0:cell></ns0:row><ns0:row><ns0:cell>ARCS (pre-test)</ns0:cell><ns0:cell>0.934</ns0:cell></ns0:row><ns0:row><ns0:cell>A</ns0:cell><ns0:cell>0.847</ns0:cell></ns0:row><ns0:row><ns0:cell>R</ns0:cell><ns0:cell>0.776</ns0:cell></ns0:row><ns0:row><ns0:cell>C</ns0:cell><ns0:cell>0.814</ns0:cell></ns0:row><ns0:row><ns0:cell>S</ns0:cell><ns0:cell>0.889</ns0:cell></ns0:row><ns0:row><ns0:cell>ARCS (post-test)</ns0:cell><ns0:cell>0.931</ns0:cell></ns0:row><ns0:row><ns0:cell>PU</ns0:cell><ns0:cell>0.877</ns0:cell></ns0:row><ns0:row><ns0:cell>PEU</ns0:cell><ns0:cell>0.859</ns0:cell></ns0:row><ns0:row><ns0:cell>ITU</ns0:cell><ns0:cell>0.815</ns0:cell></ns0:row><ns0:row><ns0:cell>TAM</ns0:cell><ns0:cell>0.921</ns0:cell></ns0:row><ns0:row><ns0:cell>Quality</ns0:cell><ns0:cell>0.839</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Rebuttal Letter to Editor and Reviewers of PeerJ Computer Science
(First round of reviews)
February 16th, 2021
Paper title: 'Effects of Using Mobile Augmented Reality for Simple Interest Computation in a Financial Mathematics Course.'
Paper ID: 52967
Dear editor and reviewers:
The authors would like to thank you for the careful and thorough review of our manuscript and for providing us with comments and suggestions to improve its quality.
The reviewers agree that our paper is interesting. However, they offered many suggestions to improve it, including asking for a native English speaker to review the paper, reordering some sections, including more references regarding augmented reality, adding details about the model, the interface, and the hypotheses, discussing the findings more specifically, explaining the surveys and the participant selection method, and explaining how augmented reality contributes to the increase in achievement, among others.
Thanks to the reviewers' suggestions, our paper was improved. We have inserted new text, three figures (1, 6, and 8), and one Table (5). However, the paper length increased significantly. We believe that the manuscript is now suitable for publication in PeerJ Computer Science.
The following responses were prepared in a point-by-point fashion to explain how the reviewers' suggestions were addressed. We hope the reviewers will be satisfied with our responses to the comments and the recommendations offered to the original manuscript. Original reviewers' comments have been italicized and highlighted in black color and the authors' response in blue color. The changes in the reviewed manuscript are tracked using the 'compare function' in Microsoft Word.
Dr. Osslan Osiris Vergara Villegas
Universidad Autónoma de Ciudad Juarez
Head of the Computer Vision and Augmented Reality Laboratory
On behalf of all authors.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Reviewer 1
- The topic of your paper is extremely specific, yet it is quite interesting.
R. We thank the comment from the reviewer. However, as suggested by the reviewers, the paper's length was extended to include additional details of our study. All the new information is highlighted with the color green in the reviewed paper.
- You claim that students 'show no interest in financial education' but looked at 'undergraduates enrolled in financial mathematics courses'. Why should anyone without an interest in finance take a course in financial mathematics? As part of not directly related degree programs? Please explicate.
R. We apologize for the lack of this explanation. In México, most students have problems learning mathematics. Therefore, no matter what program students are enrolled in, they must take a financial mathematics course because it is mandatory within the school curriculum. We have added this explanation on page 2, lines 57-59.
- The hypotheses are fine. I do not think they should be placed in the introduction, though, where I would only expect the research question. I would develop the hypotheses in a section on research design.
R. We thank this suggestion. We have written and reviewed papers from different journals in which hypotheses are shown in the introduction. However, as suggested by the reviewer, we have moved the hypotheses to the new section called 'Research design,' particularly in the subsection named 'Hypotheses development.' Moreover, we have included explanations of how the hypotheses were developed (page 7, lines 211-303).
- Nice to explicate your contributions.
R. We appreciate the positive comment from the reviewer. We consider that highlighting the contributions of a paper using enumerations is a recommendable practice.
- Your literature study is too short, particularly concerning general work on the use of VR / AE in education. For the VR side, I recommend the following article as a starting point:
Radianti, J., Majchrzak, T. A., Fromm, J., and Wohlgenannt, I. (2020): A systematic review of immersive virtual reality applications for higher education: Design elements, lessons learned, and research agenda, Computers & Education, 147, 103778, Elsevier
I think you did a great job regarding related studies, but you could put more detail into drawing the 'broader picture' of related research activities.
R. The authors agree with the reviewer's comment that the broader picture must be presented. To address this suggestion, we have added references to four papers that summarize the use of augmented reality for educational purposes (page 4, lines 102-105). Even though it is not our aim to discuss VR works, we have added two references that compare augmented reality and virtual reality for mathematics learning (page 4, lines 109-112). We have also added and recommended the suggested reference Radianti et al. (2020) as a starting point to explore virtual reality in education (page 4, lines 112-114). Finally, we have added a brief discussion of the papers reviewed (page 4, lines 121-147).
- I am tempted to suggest renaming 'Materials & Methods' to 'Research Design', to include hypothesis building (possibly also adding justification for the different hypotheses), and to make it more a classical section on how the research is conceptualized and carried out.
R. This is a significant suggestion to improve the quality of our paper. We followed the PeerJ Computer Science template in which the standard sections are explained; that was the reason to include a section named 'Materials & methods.' However, as the reviewer suggested, we have changed this section's title to 'Research design' (page 5, line 163). Moreover, we have moved the hypotheses from the introduction to this section, particularly in a subsection named 'Hypotheses development' (page 7, line 211). Besides, an explanation of the basis considered to establish each hypothesis was inserted. Please observe all these changes on page 7, lines 211-303.
- I think it would be nice to get more details on the development. You may consider adding a model of the tool.
R. We agree that adding a model of the tool will be helpful. Hence, we have added the details and a picture of the model (Figure 1). Considering the cascade software development model, we divided the development subsection into the five stages depicted in Figure 1. Also, we have reorganized the information to explain how we addressed each stage of the cascade model. The new organization and all the explanations can be observed on page 9, lines 305-413.
- The survey detail should be described in more detail.
R. We apologize for the lack of details. We agree that explaining in detail what the surveys measure is imperative to have a readable paper. We have offered our survey's details on page 12, lines 430-440 and 444-445 for ARCS; on page 12, lines 446-453 for TAM; and on page 13, lines 454-457 for the quality. Furthermore, we have inserted the details regarding the practical test design in the new subsection called 'The Practical Tests' (page 13, lines 459-468).
- Selection of the participants should be described in more detail.
R. We appreciate this suggestion. We have added the details regarding participant selection. In summary, due to the university's restrictions, we could not select our sample randomly. Therefore, a quasi-experimental study was conducted. All the students enrolled in the financial mathematics course participated in the study (139 students). After analyzing the responses, 36 students were removed because they did not fill out all the survey items. Thus, the final sample comprises 103 students. The information can be observed on page 13, lines 475-486, and page 14, lines 506-511.
- The presentation of the results is fine.
R. We appreciate the positive comment from the reviewer.
- The discussion should be much extended. I suggest to at least have subsections that
-- discuss the lessons learned from your study and how this related to the literature and your expectations,
R. We agree with the suggestion. We have added a subsection called 'Lessons learned' to include the information requested. The new subsection can be observed on page 19, lines 720-756.
-- give implications for research and state possible contributions to theory, thereby generalizing your findings beyond its narrow domain,
R. We have inserted information about this on page 19, lines 721-726, and page 20, lines 727-734.
-- scrutinize implications for practice, such as advice for educators, and
R. We have added this information on page 20, lines 735-756.
-- name limitations of your work.
R. We have added a subsection about the 'Limitations' of our work. The new subsection can be consulted on page 20, lines 758-769.
- You may also give further research directions.
R. Thank you for this suggestion. We have added a subsection to explain the 'Future Research.' The new subsection is shown on page 21, lines 771-781.
- The reference list is surprisingly short for the topic at hand. I think you should check if you can relate to works that use AR for education, even if domains and settings are different. Quite possibly this can yield valuable insights.
R. Certainly, our list of references is short. To address this suggestion, we have added 22 references to our paper. It is important to highlight that all the inserted references were carefully selected and their inclusion justified. The references added are highlighted in the reference section (page 21, lines 796-948).
- Figures that are vectors by source must be inserted in vectorized form. This applies to Figure 1, Figure 8, and Figure 9.
R. We consider this a useful suggestion. We have vectorized Figures 7 and 9 (in the past version of the paper, Figures 8 and 9). We have also vectorized the new Figure 1. The files were uploaded as supplemental material (*.pdf).
- The tables are very nice and helpful.
R. We appreciate the positive feedback from the reviewer.
- The supplementary materials make a very good impression.
R. We thank the reviewer's comment.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Reviewer 2
- The English language is not clear enough. It contains grammatical or syntactic errors (highlighted in purple in attachment) or vague phrasings that are not sufficiently informative (highlighted in blue in attachment).
R. We regret there were problems with the language. A native English speaker has revised the paper to improve grammar and readability. However, we did not address a few blue or purple suggestions because we did not understand the errors highlighted.
- The set of references is extensive, however, the authors often do not clearly indicate which information or decision come from the literature (with or without adaptations) or are assumptions. For instance, the model in Figure 9 come from Hamidi & Chevoshi, with modification (omissions).
R. We apologize for this mistake. We have checked the paper to add the missing references when the information comes from the literature. On the other hand, the model in Fig 12 (previously Fig. 9) does not pertain to Hamidi & Chevoshi (2018); it also considers the model of our work published in 'Miranda, E., Vergara, O., Cruz, V., García, J., & Favela, J. (2016). Study on mobile augmented reality adoption for Mayo language learning. Mobile Information Systems, 2016, 1-15.' We have corrected the mistake by inserting both references, as shown on page 16, line 597.
- References to the literature are sometimes unspecified (e.g., 'recent studies' mentioned p.10 without citation).
R. We apologize for this error. We have checked for similar errors, and the correction was performed. The correction for this particular concern can be observed on page 10, lines 329-331.
- For instance, reference to Davis (1989) is missing: Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS quarterly, 319-340.
R. We appreciate the reviewer's observation. In all the manuscripts we write, it is our custom to avoid inserting old references. We prefer including references no older than six years. That was the reason for not inserting the Davis reference. Instead, we used Hamidi & Chavoshi (2018), which presents an in-depth explanation of the TAM. However, due to the suggestion, we have added the reference to Davis's work (page 8, line 255).
- Acronyms are used too often, and sometimes unnecessarily, without recapping their meaning. It makes it difficult to read the paper.
R. We thank this relevant comment. We have received recommendations from other journals regarding acronyms: 1. define abbreviations at first mention; 2. put the acronym in parentheses after the explanation; and 3. after the explanation, always use the acronym. That was the reason for always using abbreviations. However, in the reviewed version of our paper, we have tried to recap the acronyms' meanings frequently, as suggested.
- The acronym for Perceived Ease of Use is inconsistent (PEOU Vs PEU).
R. We apologize for this mistake. We have checked the manuscript to solve this error. Moreover, as suggested, most times, the acronyms meaning were recapped.
- Finally, null p-values are reported (p = 0.000) which is misleading and inexact (better say p < 0.001).
R. We apologize for this error. The software SPSS reports p=0.000 as output, which is undoubtedly misleading. We have corrected all these instances (page 15, lines 541, 562; page 16, lines 583, 585, 589, 593).
- Equations (1-6) are highly redundant, and may not reflect the equations used in practice, from a human perspective. Mentioning only equations (2) and (5-6) should be sufficient.
R. We aimed to provide a tool for learning simple interest topics, and terms such as principal, interest rate, time, and amount are involved in the computations. Therefore, the equations presented are the classical ones found in financial literacy. Sorry, but we do not understand what the reviewer refers to as 'redundant.' We cannot erase the equations because they are the basis for all the computations conducted with our prototype. However, to address this comment, we have changed the equations to their simplest representation. With this, the reader can understand how the terms are cleared. The equations can be observed on page 6, lines 196-201.
- The meaning of the equation terms should be stated in advanced, e.g., as an introduction to the topic at hand.
R. Thank you for this comment. We have reordered this part of the manuscript. Therefore, the meaning of the equations is stated in advance. This can be observed on page 6, lines 188-195.
- In particular, the term 'simple interest' is barely explained or understandable until the equations are shown.
R. We have added an explanation of simple interest in the introduction (page 3, lines 64-66). Moreover, we have added an example of interest on page 6, lines 184-187, and a definition on page 6, lines 188-189.
- Hypotheses are not sufficiently explained before being stated. For instance, it is unclear what 'positive effects' mean until SEM is mentioned. As well, Hypotheses 12 mentions a threshold of 3.8 without any justification. It seems arbitrary until it is explained later.
R. This is a valuable suggestion. We have reordered our manuscript. The hypotheses were moved from the introduction to the new subsection called 'Hypotheses development.' An explanation of the hypotheses was inserted, including what positive effects mean (page 8, lines 279-280). Moreover, an explanation of the process to define the threshold was inserted (page 9, lines 292-298). Please refer to subsection 'Hypotheses development' on page 7, lines 211-303.
- The experimental design relies on solid statistics, however it does not provide specific insights into what makes the interface successful or not. There is no comment on the interface design itself.
R. We thank this suggestion. As software developers, we always consider the basic rules to design successful interfaces. Based on the recommendations to our past publications related to augmented reality, we decided not to include all the details regarding the SICMAR interface. Specialized journals suggest not including the interface details because it could be a matter of an entire paper. However, to accomplish the reviewer's suggestion, we have added the information regarding interface design. First, we have explained the considerations to design an intuitive AR application (page 9, lines 313-315). Then, we discussed the set of features considered to make our interface successful (page 9, lines 315-321). The graphical user interface (GUI) design details were added on page 10, lines 355-359, and in Figure 2. Finally, the description of the objects used for interactions is included on page 11, lines 369-375, and in Figure 2.
- Furthermore, the UI design could be applied on simple computer screen, and Augmented Reality (AR) technology is not providing extra features. It may have a positive effect due to its novelty, but the interface may be more usable on simple computer screen. This research omit such consideration, hence it is not possible to claim that the positive results are due to using AR or using the SICMAR design. Similar positive impacts may be observed using simple computer screens, compared to learning with a teacher.
R. We appreciate this comment. We have discussed the set of five features considered to make our interface successful (page 9, lines 315-321). We have also explained that those features could not be offered by a 'simple computer screen' (page 9, lines 321-322). Moreover, we discussed the difference between mobile devices and PCs (page 9, lines 325-326, page 10, lines 327-329). Next, we have mentioned the differences between traditional applications and AR applications (page 10, lines 331-335). Finally, we have explained that the positive effects are not due to novelty but to the resources employed to explain the computations, including marker manipulation and 2D models (see the discussion section, page 18, lines 679-683, page 19, lines 715-716). On page 18, lines 669-671, we have explained that MAR could be used in financial mathematics due to user perception and interaction improvements. A non-augmented-reality application cannot offer that feature. Lastly, we could not assert that 'similar positive impacts may be observed using simple computer screens' because we have not performed such a study. However, from our experience, we believe that the results would not be similar since two different technologies are compared.
- Finally, the experimental design has a crucial flaw. Students are first exposed to pedagogical content with a teacher before they are again exposed to content on the same topic using SICMAR. No control group is set apart and exposed again to pedagogical content without using SICMAR. Hence the observed positive impacts may be due to the students' learning curve, not to the use of SICMAR. Similar positive impacts may be observed even if students had practiced again with a teacher.
R. The reviewer raised a crucial point, and we have added information to address this concern. On page 13, lines 475-480, we explained that the university where the study was conducted imposed several restrictions on our experiments: i) all the students (139) enrolled in the financial mathematics course must participate; ii) the professor can take only the original time defined in the curriculum to offer the lesson on simple interest; and iii) only one session could be used to test SICMAR. Due to these restrictions, a quasi-experiment was performed. We also explained that no control group is needed and that this kind of study is frequently employed in the educational field (page 13, lines 481-486). We agree that, as the reviewer commented, the good results regarding students' achievement might not be due to augmented reality; however, our findings suggest the opposite. On the other hand, we have not conducted a study to observe whether 'the positive impacts are due to the learning curve or practice.' Therefore, we cannot offer comments about this. Finally, all the scientific techniques employed to conduct experiments have associated advantages and disadvantages. We consider that not using control groups should not lead to discarding our research; a simple search of the literature returns many high-quality papers regarding augmented reality for education in which quasi-experiments were conducted.
- The authors make exaggerated claims and interpretations of their findings.
R. We apologize for this. We understand the reviewer's concern; however, we are surprised by the comment. Our daily professional activity includes reviewing the scientific manuscripts written by our students, and we always advise them not to overuse adjectives; therefore, we take care of this. We have reviewed the entire manuscript to avoid presenting exaggerated claims by erasing the qualifying adjectives and interpretations. The only 'exaggerated claim' that remains is that 'SICMAR is a valuable complementary tool to learn simple interest topics'; however, this was expressed by the students.
- It is most important to state the limitations of the study.
R. The study's limitations were mentioned in the section 'Limitations' on page 20, lines 758-769.
Besides limitations due to the experimental design, the data analysis has additional limitations.
- The text of the questionnaire contains English mistakes and is sometimes vague or difficult to read. The poor quality of the English language may have affected the study (e.g., misunderstanding, fatigue).
R. We have amended the errors related to English. Fortunately, we can confirm that the English errors did not affect the original study since it was administered in Spanish, the students' native language. Moreover, the Spanish versions of our surveys were reviewed by colleagues with more than ten years of experience in survey design and validation. Please see Table 3 to check the corrections of issues related to the English language. On the other hand, we have not corrected the English mistakes highlighted in Table 2; please consult Loorbach et al. (2015) to observe that our text is similar to that suggested in the manuscript where the original RIMMS was presented.
- The statistical analysis is extensive, and is sufficiently explained, but it remains quite difficult to read due to the phrasing and grammar.
R. We apologize for this mistake. We have corrected the errors related to grammar and phrasing.
- The individual data points could be plotted relatively easily, and would make it easier to review the findings.
R. We agree with the reviewer. We have used a box and whisker plot to show the motivation study results (Figure 6) and the achievement study results (Figure 8). The SICMAR quality and technology acceptance results were not plotted because pre- and post-tests were not conducted for them; therefore, a comparison is not needed.
- Answers to single questions (e.g., A1, A2, etc...) should be compared individually, i.e., answers with or without SICMAR.
R. We appreciate the reviewer's comment. We have added Figure 6 to allow individual comparisons for the motivation study; the box and whisker plot allows observing the differences with and without SICMAR. Moreover, Table 5 was added to show the results of the individual questions of the practice tests, and the results of the achievement study are shown in Figure 8, also using a box and whisker plot. Finally, we have added details in the 'Discussion' (page 18).
- The formula to calculate the 'Totals' in Tables 2, 3, 5 is not specified. Furthermore, it is incorrect to call these totals, as they are means (since their range remains within 1-5).
R. Sorry for this mistake. We have erased the 'total' names and moved the 'results' to the top of each section. The change can be observed in Tables 2 and 3.
- When analyzing the grades of the exam: the text of the exam is not provided.
R. Thanks for your suggestion. We reviewed other augmented reality manuscripts, and not all of them include the full set of questions. Hence, we have offered two examples for each test conducted. The questions and answers of the tests are provided in Table S1.
- The answers to individual questions are not analysed (e.g., to identify what is particularly difficult to learn).
R. In Table 5 and Figure 8, we have inserted information about the tests' individual questions. Also, we have added the information on page 19, lines 694-706.
- The progress of individual students is not analysed; just averages are provided. However, the authors mention that some student grades have decreased with SICMAR (how many? by how much?).
R. We thank the reviewer for pointing this out. We have added information about this on page 16, lines 574-580, and in Table 5 and Figure 8. However, we agree that an in-depth study of the individual progress of students is desirable; we have noted this in the 'future research' subsection (page 21, lines 774-778).
- This research investigates a promising technology that may enhance the learning process of students. However, the research design has crucial methodological limitations, and the findings are not specific enough to provide guidance or insights for future work (i.e., to inform the design of such tools, rather than observing general improvements in students' motivation or exam results).
R. We appreciate the reviewer's recommendation. We have added the subsection 'Lessons Learned' to address this suggestion (page 19, lines 720-756).
- To be more valuable, the research should be continued, e.g., with in-depth analyses of the answers to the exam and to the questionnaires, and by making a critical review of the interface design (i.e., in relation to the analysis of the exam and questionnaire).
R. We have added this suggestion to the future research subsection (page 21, lines 774-778).
" | Here is a paper. Please give your review comments after reading it. |
174 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Understanding the concept of simple interest is essential in financial mathematics because it establishes the basis to comprehend complex conceptualizations. Nevertheless, students often have problems learning about simple interest. This paper aims to introduce a prototype called 'simple interest computation with mobile augmented reality' (SICMAR) and evaluate its effects on students in a financial mathematics course. The research design comprises four stages: i) planning; ii) hypotheses development; iii) software development; and iv) design of data collection instruments. The planning stage explains the problems that students face when learning about simple interest. In the second stage, we present the twelve hypotheses tested in the study. The software development stage discusses the logic implemented for SICMAR functionality. In the last stage, we design two surveys and two practice tests to assess students. The pre-test survey uses the attention, relevance, confidence, and satisfaction (ARCS) model to assess students' motivation in a traditional learning setting. The post-test survey assesses motivation, technology usage with the technology acceptance model (TAM), and prototype quality when students use SICMAR. Also, students solve practice exercises to assess their achievement. One hundred three undergraduates participated in both sessions of the study. The findings revealed the direct positive impact of SICMAR on students' achievement and motivation. Moreover, students expressed their interest in using the prototype because of its quality. In summary, students consider SICMAR a valuable complementary tool to learn simple interest topics.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Economic factors are involved in practically all decision-making processes. Therefore, to avoid making wrong financial decisions, it is advisable to know how money is obtained, managed, invested, and optimized. The lack of these skills could be addressed by completing a financial education course <ns0:ref type='bibr' target='#b17'>(Carpena & Zia, 2020)</ns0:ref>.</ns0:p><ns0:p>Financial education must start at an early stage. <ns0:ref type='bibr' target='#b9'>Berry, Karlan, & Pradhan (2018)</ns0:ref> and <ns0:ref type='bibr' target='#b54'>Sun et al. (2020)</ns0:ref> demonstrated how financial education helped prevent problems such as having low credit scores or defaulting on a loan. Because of the relevance of financial education, the United States of America included various finance courses as a part of the primary school curriculum <ns0:ref type='bibr' target='#b56'>(Urban et al., 2020)</ns0:ref>. Other countries such as China <ns0:ref type='bibr' target='#b25'>(Ding, Lu & Ye, 2020)</ns0:ref>, Ghana <ns0:ref type='bibr' target='#b9'>(Berry, Karlan, & Pradhan, 2018)</ns0:ref>, Hong Kong <ns0:ref type='bibr' target='#b27'>(Feng, 2020)</ns0:ref>, and India <ns0:ref type='bibr' target='#b17'>(Carpena & Zia, 2020</ns0:ref>) successfully adopted this trend; however, success cannot be generalized. As pointed out by <ns0:ref type='bibr' target='#b4'>Arceo & Villagómez (2017)</ns0:ref> and <ns0:ref type='bibr' target='#b12'>Bruhn, Lara, & McKenzie (2014)</ns0:ref>, underdeveloped countries such as Mexico reported minimal benefits from the inclusion of financial education in schools.</ns0:p><ns0:p>To obtain insights into why students show no interest in financial education, we monitored undergraduates enrolled in financial mathematics courses at four public northern Mexican universities. All the students, no matter what program they are enrolled in, must take a financial mathematics course because it is mandatory within the school curriculum. Therefore, we monitored students from the accounting, administration, business, and engineering fields. As a result, we detected three problems: i) students lack mathematical skills; ii) sometimes the techniques used by the professors to teach the basics are boring; and iii) students do not comprehend basics such as simple and compound interest, which are fundamental to sound financial education.</ns0:p><ns0:p>In financial mathematics, interest is the cost of using the money of a person or an institution. If somebody borrows money, then interest must be paid. Conversely, if somebody lends money, then interest is earned. Interest is calculated as simple or compound; the former is a percentage of the principal amount of a loan, whereas the latter accrues and is added to the accumulated interest of previous periods and therefore includes interest on interest <ns0:ref type='bibr' target='#b32'>(Hastings, 2015)</ns0:ref>.</ns0:p><ns0:p><ns0:ref type='bibr' target='#b0'>Abylkassymova et al.
(2020)</ns0:ref> and <ns0:ref type='bibr' target='#b11'>Blue & Grootenboer (2019)</ns0:ref> focused their research on seeking alternative methods to solve students' difficulties in understanding the basic concepts explained in a financial mathematics course. The most used options are individualized explanations outside of class time, multimedia material, computer simulations, and information and communications technologies (ICTs). Nevertheless, there are still opportunities to propose teaching-learning strategies to help students comprehend financial education basics. This paper assesses mobile augmented reality (MAR) technology as an alternative learning strategy to comprehend simple interest topics. <ns0:ref type='bibr'>Gutiérrez et al. (2016)</ns0:ref> and <ns0:ref type='bibr' target='#b19'>Chen (2019)</ns0:ref> defined MAR as 'a real-time direct or indirect view of a real-world environment that has been augmented by adding virtual computergenerated information to it.' In summary, mobile augmented reality is a novel way of superimposing digital content into the real context.</ns0:p><ns0:p>Akçayır & Akçayır (2017) and <ns0:ref type='bibr' target='#b5'>Arici et al. (2019)</ns0:ref> explained the benefits of mobile augmented reality in educational settings, especially for mathematics. The benefits include student achievement increase, autonomy facilitation (self-learning), generation of positive attitudes to the educational activity, commitment, motivation, knowledge retention, interaction, collaboration, and availability for all.</ns0:p><ns0:p>Motivated by MAR advantages and the problems detected regarding financial education, this paper aims to develop the simple interest computation with mobile augmented reality (SICMAR) prototype and to assess its effects in an undergraduate financial mathematics course. The study, divided into pre and post-test, was designed to assess students' motivation, achievement, technology acceptance, and prototype quality.</ns0:p><ns0:p>The main contributions of the paper are summarized below.</ns0:p><ns0:p>1. We explain the details to develop the SICMAR prototype.</ns0:p><ns0:p>2. We offer a proposal to assess students' motivation, achievement, technology acceptance, and SICMAR quality in a real educational setting.</ns0:p><ns0:p>3. We explain the facts to support that SICMAR could be a valuable complementary tool to learn about simple interest.</ns0:p><ns0:p>The rest of the paper is organized as follows. Section 2 discusses related work about AR to support mathematical learning. In Section 3, the basis to develop SICMAR and the surveys created are described. The results obtained from tests and the corresponding discussion are shown in Section 4. Finally, conclusions are outlined in Section 5.</ns0:p></ns0:div>
<ns0:div><ns0:head>Learning Mathematics with Augmented Reality</ns0:head><ns0:p>Many research studies have been published on AR usage for educational purposes. Interested readers can consult the works by <ns0:ref type='bibr' target='#b2'>Akçayır & Akçayır (2017)</ns0:ref>; <ns0:ref type='bibr' target='#b51'>Saltan & Arslan (2017)</ns0:ref>; <ns0:ref type='bibr' target='#b28'>Garzón, Pavón & Baldiris (2019)</ns0:ref> and <ns0:ref type='bibr' target='#b5'>Arici et al. (2019)</ns0:ref> to obtain a comprehensive overview of the educational fields that have been addressed.</ns0:p><ns0:p>As <ns0:ref type='bibr' target='#b5'>Arici et al. (2019)</ns0:ref> explained, most AR works focus on fields such as medicine, physics, history, arts, and astronomy, whereas the social fields, law, and business are less addressed. <ns0:ref type='bibr' target='#b33'>Ibáñez & Delgado-Klos (2018)</ns0:ref> presented a literature review of AR to support science, technology, engineering, and mathematics (STEM) learning. <ns0:ref type='bibr' target='#b40'>Medina, Castro & Juárez (2019)</ns0:ref> and <ns0:ref type='bibr' target='#b24'>Demitriadou, Stavroulia & Lanitis (2020)</ns0:ref> presented comparisons between virtual reality and augmented reality for mathematics learning. In both studies, no significant difference was found between virtual and augmented reality technologies in contributing to mathematics learning. The work of <ns0:ref type='bibr' target='#b45'>Radianti et al. (2020)</ns0:ref> is recommended as a starting point if readers want to explore the field of virtual reality in education.</ns0:p><ns0:p>For this study, we searched for papers related to augmented reality for mathematics teaching-learning; only studies published between 2013 and 2020 were considered. The query strings included 'mathematics,' 'financial mathematics,' 'augmented reality,' 'mobile augmented reality,' 'teaching,' 'education,' and 'learning.' Also, we used the Boolean operators 'OR' and 'AND' to combine multiple strings. We collected the papers from journals included in the journal citation reports (JCR) and manuscripts published in conferences through the Web of Science (WoS).</ns0:p><ns0:p>As a result, we detected 17 studies focused on learning mathematics in formal and informal environments. Concerning the formal settings, the learners' education level ranges from preschool to undergraduate. The elementary level is the one at which most studies have been published. Moreover, geometry is the subject with the most implementations, owing to the ability of augmented reality to promote interaction and visualization with 2D and 3D objects. <ns0:ref type='bibr' target='#b50'>Salinas et al. (2013)</ns0:ref> tested the impact of AR on learning algebraic functions using 3D visualizations. The experience was assessed by 30 undergraduates from a Mathematics I course. Likewise, <ns0:ref type='bibr' target='#b7'>Barraza, Cruz & Vergara (2015)</ns0:ref> used AR to help undergraduate students learn quadratic equations. The pilot study was conducted with 59 students at a Mexican school, and most comments obtained were positive. An AR app for mathematical analysis was presented by <ns0:ref type='bibr' target='#b20'>Coimbra, Cardoso & Mateus (2015)</ns0:ref>. Thirteen undergraduates participated in the experience, and most of them expressed that 'classes should all be like this.' Regarding geometry, <ns0:ref type='bibr' target='#b30'>Gutiérrez et al. (2016)</ns0:ref> presented an AR system aimed at the learning of descriptive geometry.
A positive impact on the spatial ability of 50 undergraduates was found. <ns0:ref type='bibr' target='#b44'>Purnama, Andrew & Galinium (2014)</ns0:ref> designed an AR tool to help elementary students learn to use a protractor. According to the students' responses, 92% found that the prototype makes the learning process faster than a conventional method. <ns0:ref type='bibr' target='#b36'>Li et al. (2017)</ns0:ref> designed an augmented reality game for helping elementary students in the counting process. The two students who participated in the experience expressed that learning to count was easy using AR. Moreover, <ns0:ref type='bibr' target='#b55'>Tobar, Fabregat, & Baldiris (2015)</ns0:ref> and <ns0:ref type='bibr' target='#b18'>Cascales et al. (2017)</ns0:ref> explained the advantages of using mobile augmented reality to learn mathematics in elementary special education needs (SEN) contexts. <ns0:ref type='bibr' target='#b53'>Sommerauer and Muller (2014)</ns0:ref> conducted a pre-test and post-test with 101 participants at a mathematics exhibition. The aim was to measure the effect of AR on acquiring and retaining mathematical knowledge in an informal learning environment. The pre-test score captured previous knowledge regarding the mathematical exhibits, while the post-test captured the knowledge level after visiting the exhibition. The results revealed that visitors performed significantly better on post-test questions.</ns0:p><ns0:p>A summary of the features of the papers analyzed is shown in Table <ns0:ref type='table'>1</ns0:ref>. No papers related to financial mathematics, nor to simple interest computation, were found. Regarding the preferred software for implementing AR, Vuforia is the leader. The number of participants varies from 2 to 140, so there seems to be no consensus about the sample size needed to validate an AR study. Only five works presented assessments of students' motivation. Most of the work concentrated on prototype perception and students' achievement. No work focused on assessing technology acceptance was found. Regarding the theoretical basis employed, qualitative research was the most used (seven times), followed by the nonparametric Wilcoxon signed-rank test (four times). Most of the prototypes were implemented on mobile devices, which shows that PCs are less preferred and that smart glasses are not yet used in academic scenarios, mainly due to their high cost. All the works introduced single-user applications because it is still complex to build collaborative ones. Based on the analysis conducted, our proposal's novelty relies on the field addressed (simple interest) and the constructs assessed in the same study (motivation, achievement, quality, and technology acceptance).</ns0:p></ns0:div>
<ns0:div><ns0:head>Research Design</ns0:head><ns0:p>In this research, we used a mixed-methods approach to allow the synergistic usage of qualitative and quantitative data <ns0:ref type='bibr' target='#b46'>(Reeping et al., 2019)</ns0:ref>. Furthermore, this research is considered exploratory and descriptive: exploratory because we investigate the problem at an early stage and obtain insights into what is happening, and descriptive because we describe the features of the phenomenon studied. The systematic methodology to conduct the research comprises four stages: i) planning; ii) hypotheses development; iii) software development; and iv) design of data collection instruments.</ns0:p></ns0:div>
<ns0:div><ns0:head>Planning</ns0:head><ns0:p>The planning stage began after a conversation with three financial mathematics professors. The professors agreed with us about the three problems detected during the monitoring. Additionally, they mentioned the barriers that students face when learning simple interest: a) the problem is not analyzed and, therefore, not understood; b) students confuse simple interest with compound interest and vice versa; c) the terms involved in the computation are cleared (solved for) incorrectly; d) concepts such as principal, amount, interest rate, and time are misinterpreted; and e) conversions between time units are performed incorrectly. The professors explained that around 70% of Mexican students commit at least one of the errors mentioned above. They also stated that simple interest knowledge is fundamental for mastering finances and understanding complex concepts such as compound interest, amortization tables, and annuities. Hence, students must comprehend the topic.</ns0:p><ns0:p>We asked the professors for an explanation of how to compute simple interest. The explanation was based on the following example. When an individual borrows money, the lender expects to be paid back the loan amount plus an additional charge for using the money, called interest. In contrast, when money is deposited in a bank, the bank pays the depositor for using the capital; this payment is also called interest.</ns0:p><ns0:p>Simple interest (I) represents the fee you pay on a loan or the income you earn on deposits. In other words, simple interest represents the price of the money over a specific period. As shown in Eqs. (1) and (2), there are two ways to compute simple interest. Furthermore, notice the four terms involved: i) Principal (P) is the original sum of money borrowed (also called present value); ii) Interest rate (r) is a fraction/percentage of the principal charged per unit of time; iii) Time (t) represents the period over which the interest rate applies; and iv) Amount (A) is the total accrued amount (principal plus interest) and represents the future value of the financial operation <ns0:ref type='bibr' target='#b32'>(Hastings, 2015)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:formula>𝐼 = 𝑃𝑟𝑡.<ns0:label>(1)</ns0:label></ns0:formula><ns0:formula xml:id='formula_0'>𝐼 = 𝐴 - 𝑃.<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>The terms regarding principal, interest rate, time, and amount can be cleared from Eqs. (1) and (2), as depicted in Eqs. (3), (4), (5), and (6), respectively.</ns0:p><ns0:formula xml:id='formula_1'>𝑃 = 𝐼/𝑟𝑡.<ns0:label>(3)</ns0:label></ns0:formula><ns0:formula xml:id='formula_2'>𝑟 = 𝐼/𝑃𝑡.<ns0:label>(4)</ns0:label></ns0:formula><ns0:formula xml:id='formula_3'>𝑡 = 𝐼/𝑃𝑟.<ns0:label>(5)</ns0:label></ns0:formula><ns0:formula xml:id='formula_4'>𝐴 = 𝑃 + 𝐼.<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>In Eqs. (1) to (6), it is common to use years as the time unit. However, time could also be expressed in days, weeks, fortnights, months, bimesters, quarters, or semesters. For any calculation, if the period for r and t is defined in different units, then a conversion must be computed, which often causes mistakes.</ns0:p><ns0:p>To this end, we propose a prototype to support simple interest learning and design two surveys and two practice tests to assess the effects of using it with undergraduate students. The prototype is called 'simple interest computation with mobile augmented reality (SICMAR).'</ns0:p></ns0:div>
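To make Eqs. (1) to (6) concrete, the short Python sketch below computes each term and normalizes the rate and time periods to years, mirroring the conversions described above. It is only an illustration of the arithmetic, not part of the SICMAR code base (which is written in C# for Unity); the PERIODS_PER_YEAR table and the 360-day banker's year are our own assumptions for the example.

    # Illustrative sketch of Eqs. (1)-(6); not SICMAR's actual C# implementation.
    PERIODS_PER_YEAR = {
        "year": 1, "semester": 2, "quarter": 4, "bimester": 6,
        "month": 12, "fortnight": 24, "week": 52, "day": 360,  # banker's year assumed
    }

    def to_years(value, unit):
        """Convert a time span expressed in `unit` to years."""
        return value / PERIODS_PER_YEAR[unit]

    def simple_interest(P, r, t):   # Eq. (1): I = P*r*t
        return P * r * t

    def principal(I, r, t):         # Eq. (3): P = I/(r*t)
        return I / (r * t)

    def rate(I, P, t):              # Eq. (4): r = I/(P*t)
        return I / (P * t)

    def time_period(I, P, r):       # Eq. (5): t = I/(P*r)
        return I / (P * r)

    def amount(P, I):               # Eq. (6): A = P + I
        return P + I

    # Example: a 5% quarterly rate over 6 fortnights, both normalized to years.
    r_annual = 0.05 * PERIODS_PER_YEAR["quarter"]    # 0.20 per year
    t_years = to_years(6, "fortnight")               # 0.25 years
    I = simple_interest(1000.0, r_annual, t_years)   # 1000*0.20*0.25 = 50.0
    print(I, amount(1000.0, I))                      # 50.0 1050.0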
<ns0:div><ns0:head>Hypotheses Development</ns0:head><ns0:p>Our study assesses students' motivation when learning simple interest in traditional settings and with SICMAR. We also assess students' achievement in both settings through practice tests. Finally, we obtain insights into SICMAR technology acceptance and quality. Therefore, we pose twelve hypotheses.</ns0:p></ns0:div>
<ns0:div><ns0:head>Students Motivation</ns0:head><ns0:p>Motivation affects what, how, and when learners learn, and it is directly related to the development of students' attitudes and persistent efforts toward achieving a goal <ns0:ref type='bibr' target='#b38'>(Lin et al., 2021)</ns0:ref>. Motivating students is an activity that must be performed to i) attract and sustain students' attention (A); ii) define the relevance (R) of the content students need to learn; iii) help students believe they can succeed by making efforts (gain confidence (C)); and iv) assist students in obtaining a sense of satisfaction (S) about their accomplishments in learning <ns0:ref type='bibr' target='#b13'>(Cabero-Almenara & Roig-Vila, 2019)</ns0:ref>. In this sense, Keller's ARCS model provides guidelines for designing and developing strategies to motivate student learning <ns0:ref type='bibr' target='#b37'>(Li & Keller, 2018)</ns0:ref>.</ns0:p><ns0:p>In previous studies, the ARCS model was used to observe whether mobile augmented reality could be a resource that motivates students to learn anatomy and art (Cabero-Almenara & Roig-Vila, 2019), dimensional analysis <ns0:ref type='bibr' target='#b26'>(Estapa & Nadolny, 2015)</ns0:ref>, and geometry <ns0:ref type='bibr' target='#b34'>(Ibáñez et al. 2020)</ns0:ref>, obtaining promising results. Therefore, the present paper poses the following five hypotheses.</ns0:p><ns0:p>H 1 : There is a significant difference in students' attention scores in the pre-test and the post-test.</ns0:p><ns0:p>H 2 : There is a significant difference in students' relevance scores in the pre-test and the post-test.</ns0:p><ns0:p>H 3 : There is a significant difference in students' confidence scores in the pre-test and the post-test.</ns0:p><ns0:p>H 4 : There is a significant difference in students' satisfaction scores in the pre-test and the post-test.</ns0:p><ns0:p>H 5 : There is a significant difference in students' overall motivation (ARCS) scores in the pre-test and the post-test.</ns0:p></ns0:div>
<ns0:div><ns0:head>Students Achievement</ns0:head><ns0:p>Academic achievement is the extent to which a student has accomplished specific goals that focus on activities in instructional environments <ns0:ref type='bibr' target='#b8'>(Bernacki, Greene & Crompton, 2019)</ns0:ref>. In this paper, student achievement is related to how capable the students are when solving simple interest computation problems. The studies by <ns0:ref type='bibr' target='#b44'>Purnama, Andrew & Galinium (2014)</ns0:ref>; <ns0:ref type='bibr' target='#b26'>Estapa and Nadolny (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b55'>Tobar, Fabregat & Baldiris (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b18'>Cascales et al. (2017), and</ns0:ref><ns0:ref type='bibr' target='#b34'>Ibáñez et al. (2020)</ns0:ref> show how the use of MAR can affect mathematics academic achievement positively. Therefore, we propose the following hypothesis.</ns0:p><ns0:p>H 6 : Students learning with SICMAR achieve higher scores in simple interest tests than students exposed to traditional learning.</ns0:p></ns0:div>
<ns0:div><ns0:head>Technology Acceptance</ns0:head><ns0:p>The technology acceptance model (TAM) was formulated by <ns0:ref type='bibr' target='#b21'>Davis (1989)</ns0:ref>. TAM suggests that the perceived ease of use (PEU) and the perceived usefulness (PU) are determinants to explain what causes the intention of a person to use (ITU) a technology. The perceived ease of use refers to the degree to which a person believes that using a system would be free from effort. The perceived usefulness refers to the degree to which the user believes that a system would improve his/her work performance. The intention to use is employed to measure the degree of technology acceptance <ns0:ref type='bibr' target='#b21'>(Davis, 1989)</ns0:ref>.</ns0:p><ns0:p>In previous research, TAM was used to examine the adoption of augmented reality technology in teaching using videos (Cabero-Almenara, Fernández & Barroso-Ozuna, 2019) and learning the Mayo language <ns0:ref type='bibr' target='#b41'>(Miranda et al. 2016)</ns0:ref>. However, no studies related to the use of TAM in mathematical settings were detected. In this paper, the TAM is extended with a prototype quality variable to explain and predict SICMAR usage. The aim is to study the relationships between quality, perceived ease of use, and perceived usefulness and their positive effects on students' intention to use SICMAR.</ns0:p><ns0:p>The family of statistical multivariate models that estimate the effect and the relationships between multiple variables is known as structural equation modeling (SEM) <ns0:ref type='bibr' target='#b3'>(Al-Gahtani, 2016)</ns0:ref>. Therefore, we use SEM to test the following hypotheses.</ns0:p><ns0:p>H 7 : Quality positively affects students' perceived usefulness of SICMAR.</ns0:p><ns0:p>H 8 : Quality positively affects students' perceived ease of use of SICMAR.</ns0:p><ns0:p>H 9 : Perceived ease of use positively affects students' perceived usefulness of SICMAR.</ns0:p><ns0:p>H 10 : Perceived ease of use positively affects students' intention to use SICMAR.</ns0:p><ns0:p>H 11 : Perceived usefulness positively affects students' intention to use SICMAR.</ns0:p><ns0:p>The words 'positively affect' mean that when the measured value of one variable increases, the related variable also increases.</ns0:p></ns0:div>
<ns0:div><ns0:head>SICMAR Quality</ns0:head><ns0:p>Software quality is the field of study that describes the desirable characteristics of a software product. Establishing a measure for the quality of software is not an easy task. However, attributes such as design, usability, operability, security, compatibility, maintainability, and functionality can be considered to define metrics <ns0:ref type='bibr' target='#b22'>(Dalla et al., 2020)</ns0:ref>. When the quality of an AR product is evaluated, questions such as how fast the system responds, how difficult it is to manipulate the system and markers, and to what extent the illumination affects marker recognition must be answered <ns0:ref type='bibr' target='#b23'>(De Paiva & Farinazzo, 2014)</ns0:ref>. In this paper, the quality was assessed considering the design and usability of SICMAR, as recommended by <ns0:ref type='bibr' target='#b7'>Barraza, Cruz & Vergara (2015)</ns0:ref> and <ns0:ref type='bibr' target='#b43'>Pranoto et al. (2017)</ns0:ref>.</ns0:p><ns0:p>We found no MAR works in the literature that report a hypothesized minimal mean quality value that could serve for comparison. Therefore, we propose the following procedure: i) determine the minimum and the maximum of the Likert scale; ii) compute the interval width by subtracting the minimum from the maximum (5-1=4) and dividing by the number of points (4/5=0.80); and iii) repeatedly add this width, starting from the minimum scale value, to obtain the interval boundaries. The boundaries computed for a five-point Likert scale are 1--1.8--2.6--3.4--4.2--5. Results greater than 3.4 and less than or equal to 4.2 are considered good quality. Thus, the mean value of 3.8, the midpoint of this interval, was taken as the threshold for good quality. Moreover, as explained in the results section, this value is the median obtained after experimentation. Hence, the following hypothesis is established.</ns0:p><ns0:p>H 12 : The mean value evaluated by students regarding the quality of SICMAR is greater than 3.8.</ns0:p></ns0:div>
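The interval arithmetic behind the 3.8 threshold is easy to reproduce; the following few lines of Python are only an illustration of the procedure above, not part of the study's tooling.

    # Boundaries of a five-point Likert scale and the 'good quality' midpoint.
    lo, hi, points = 1, 5, 5
    step = (hi - lo) / points                          # (5-1)/5 = 0.8
    bounds = [round(lo + i * step, 1) for i in range(points + 1)]
    print(bounds)                                      # [1.0, 1.8, 2.6, 3.4, 4.2, 5.0]
    print((bounds[3] + bounds[4]) / 2)                 # midpoint of (3.4, 4.2] -> 3.8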
<ns0:div><ns0:head>Software Development</ns0:head><ns0:p>We consider the cascade model to establish the stages to develop SICMAR. The cascade model is a linear procedure characterized by dividing the software development process into successive phases <ns0:ref type='bibr' target='#b48'>(Ruparelia, 2010)</ns0:ref>. The model encompasses five phases: i) requirements; ii) design; iii) implementation; iv) testing; and v) maintenance. A visual representation of the SICMAR phases is shown in Fig. <ns0:ref type='figure'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Requirements</ns0:head><ns0:p>According to <ns0:ref type='bibr' target='#b10'>Billinghurst, Grasset & Looser (2005)</ns0:ref>, the physical components of the interface (inputs), the virtual visual and auditory display (outputs), and the interaction metaphors must be considered to build intuitive AR applications. Therefore, we determined five characteristics of the prototype to deal with the barriers faced by students: i) a set of markers will be used to determine the term to compute and the parameters involved (inputs); ii) 2D models will represent all the information needed for the calculations; iii) markers' movement will be used to observe the 2D models from different perspectives; iv) the calculation to solve will be defined with a combination of markers (touch manipulation metaphor); and v) virtual objects, such as text boxes, arrows, and images, will be employed to explain step-by-step calculations (outputs). A traditional computer application cannot offer all these features.</ns0:p><ns0:p>From the variety of information and communication technologies, PCs and mobile devices were considered to implement the prototype due to the high probability that a student has either one of them. The main differences between PCs and mobile devices are the display size, manipulation, processing power, bandwidth, and usage time. Portability, the sensors included, and ease of manipulation were the reasons for selecting mobile devices. Indeed, young users prefer mobile devices because they can be used anytime, carried from place to place, and connected to the Internet all day long. Moreover, recent studies have shown that almost 75% of AR works for educational settings were implemented on mobile devices, obtaining satisfactory results <ns0:ref type='bibr' target='#b8'>(Bernacki, Greene & Crompton, 2019;</ns0:ref><ns0:ref type='bibr'>Cabero-Almenara et al., 2019)</ns0:ref>. AR applications differ from conventional applications that use a mouse and keyboard. By superimposing virtual information, mobile augmented reality increases perception (virtual objects can be observed from different perspectives) and user interaction (objects are manipulated with the fingers) with the environment; a non-AR application cannot offer these features. Also, MAR can turn a classic learning process into an engaging experience because students perceive learning as a game <ns0:ref type='bibr' target='#b5'>(Arici et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Regarding mobile devices, Android and iOS-based devices are the leaders. Nevertheless, we selected Android because i) it is the leading mobile operating system worldwide; ii) the price for publishing an app in the Play Store is much lower than for posting an app in the App Store; iii) the cost of Android-based devices is lower than that of iOS-based devices; due to price, a student rarely has an iPhone; iv) it possesses a good support architecture and functional performance; v) the customization level offered makes it easy to use; and vi) the assortment of battery sizes surpasses the iPhone's <ns0:ref type='bibr' target='#b35'>(Ivanov, Reznik, & Succi, 2018)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Design</ns0:head><ns0:p>There are various alternatives to implement AR solutions, including Wikitude, ARToolKit, Augumenta, Easy AR, HP Reveal, and Vuforia, each offering interesting characteristics. Considering the analysis presented in Table <ns0:ref type='table'>1</ns0:ref> and based on the authors' experience, the Vuforia Software Developer Kit (SDK) was selected. Vuforia is a robust platform that contains the libraries to implement the tasks related to AR, including real-time marker detection, recognition and tracking, and the computations for object superimposition. Unity 3D was employed to create the SICMAR visual environment and all the 2D virtual objects that will be superimposed on each marker. SICMAR was designed based on the framework proposed by <ns0:ref type='bibr' target='#b7'>Barraza, Cruz, & Vergara (2015)</ns0:ref>. The framework comprises four subsystems: i) the rendering; ii) the context and world model; iii) the tracking; and iv) the interaction, which work together to create the mobile application.</ns0:p><ns0:p>In the rendering subsystem, two main tasks are executed: i) displaying the video acquired from the real world, and ii) rendering the 2D models. We designed a touch-based graphical user interface (GUI) to display the components and the video acquired from the mobile device. At the top of the GUI, we inserted two sections: i) input data and ii) calculate (output). The first shows the input terms (markers) detected inside the scene, and the second shows the term the user wants to compute (see the upper left corner in Fig. <ns0:ref type='figure'>2</ns0:ref>). We used the Unity sprite renderer for rendering all the photorealistic images of the 2D models that will be superimposed inside the real-world video stream.</ns0:p><ns0:p>The context and world model subsystem includes the design of image targets (markers), the data about the interest points, and the 2D objects that are going to be used in the augmentation. We used the Brosvision marker generator to design the markers representing each of the five terms explained in Eqs. (<ns0:ref type='formula'>1</ns0:ref>) to (<ns0:ref type='formula' target='#formula_4'>6</ns0:ref>). As shown in Fig. <ns0:ref type='figure'>3</ns0:ref>, the markers include lines, triangles, and quadrilaterals, and at the center, a square with a letter corresponding to the simple interest term was added. Using Vuforia, we conducted a test of the contrast-based features (interest points) of the individual markers visible to the camera. All the markers earned a five-star rating, which means they include excellent features for detection and tracking. Finally, four 2D objects were created for user interactions and augmentations, as shown in Fig. <ns0:ref type='figure'>2</ns0:ref>. The Vuforia SDK was used to carry out the tracking subsystem. This subsystem exchanges marker tracking information with the rendering subsystem to superimpose the virtual 2D objects onto the original scene displayed to the user.</ns0:p><ns0:p>Finally, the interaction subsystem collects and processes any input required by the user. A series of C# scripts were linked to the GUI objects. When a tap occurs on the screen, verification is carried out to determine if an element was touched. If the verification is valid, the search for a marker starts.
If a marker is detected, then the corresponding method is invoked to carry out the task.</ns0:p></ns0:div>
<ns0:div><ns0:head>Implementation</ns0:head><ns0:p>The logic implemented to solve any of Eqs. (<ns0:ref type='formula'>1</ns0:ref>) to (<ns0:ref type='formula' target='#formula_4'>6</ns0:ref>) is the following. The user taps the SICMAR icon to start the execution. The presentation screen is displayed, and the camera of the mobile device is turned on. When the user shows a valid marker in front of the camera, it is recognized as the desired output. Then, the position, rotation, and perspective of the marker are computed, and the corresponding virtual object is superimposed according to the view of the real scene. Next, the input checkbox is activated, and the prototype waits for the user to show the markers for the input terms. When input markers are recognized, the text boxes to insert data are displayed, and the 2D objects are superimposed inside the real scene. Any marker different from the first selected can be used as input. The user must insert the data for each term with the keyboard of the device. Once the data have been introduced, the input checkbox must be disabled to perform the computation. Immediately, verification is conducted to detect whether the necessary data for the computation were inserted correctly. If any data are missing, an error object is displayed; otherwise, the calculated output is presented. The process can be executed continuously.</ns0:p></ns0:div>
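The sketch below mirrors this control flow in platform-neutral Python; the actual prototype implements the logic in C# scripts attached to Unity GUI objects, so the function name and the dictionary of required terms here are illustrative stand-ins for the marker-recognition and validation events.

    # Simplified stand-in for SICMAR's computation flow (the real logic is C#/Unity).
    def solve(output_term, inputs):
        """Return the requested term, or None when input data are missing.

        `inputs` maps term names ('P', 'r', 't', 'I', 'A') to numbers, with r and t
        assumed to be already normalized to the same time unit.
        """
        required = {
            "I": [("P", "r", "t"), ("A", "P")],   # Eq. (1) or Eq. (2)
            "P": [("I", "r", "t")],               # Eq. (3)
            "r": [("I", "P", "t")],               # Eq. (4)
            "t": [("I", "P", "r")],               # Eq. (5)
            "A": [("P", "I")],                    # Eq. (6)
        }
        for terms in required[output_term]:
            if all(k in inputs for k in terms):   # completeness check before computing
                if output_term == "I" and terms == ("P", "r", "t"):
                    return inputs["P"] * inputs["r"] * inputs["t"]
                if output_term == "I":
                    return inputs["A"] - inputs["P"]
                if output_term == "P":
                    return inputs["I"] / (inputs["r"] * inputs["t"])
                if output_term == "r":
                    return inputs["I"] / (inputs["P"] * inputs["t"])
                if output_term == "t":
                    return inputs["I"] / (inputs["P"] * inputs["r"])
                if output_term == "A":
                    return inputs["P"] + inputs["I"]
        return None  # the GUI would display the error object in this case

    print(solve("I", {"P": 1000.0, "r": 0.20, "t": 0.25}))  # 50.0
    print(solve("t", {"I": 50.0, "P": 1000.0}))             # None: r is missing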
<ns0:div><ns0:head>Testing</ns0:head><ns0:p>An example of simple interest computation using SICMAR is shown in Fig. <ns0:ref type='figure'>2</ns0:ref>. If the user inserts r and t with different periods, then the associated conversions are computed. Notice that the result of the simple interest computation is highlighted in blue. As shown in Fig. <ns0:ref type='figure'>4</ns0:ref>, the user selected quarters for r and fortnights for t. At the bottom of the screen, the value obtained from the conversion is displayed and explained for both periods. An example of students testing SICMAR is shown in Fig. <ns0:ref type='figure' target='#fig_0'>5</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Maintenance</ns0:head><ns0:p>Two experienced software developers tested our beta version of SICMAR. They recommended modifications related to the color and size of the objects. We also modified the GUI, including changes to object locations and interactivity. As explained in the discussion section, the students recommended additional modifications to SICMAR, which will be implemented soon.</ns0:p></ns0:div>
<ns0:div><ns0:head>Design of Data Collection Instruments</ns0:head><ns0:p>We designed two surveys to collect the data. The first serves to obtain information about students' motivation when the professor explains the simple interest topic using traditional materials (textbooks, slides, and whiteboards). The second survey gathers data about students' motivation when learning with SICMAR, technology acceptance, and prototype quality. In addition, we designed a data consent form and two five-item tests to measure students' achievement. It is essential to highlight that Spanish was the language employed for the surveys and the whole experiment; therefore, English translations are presented in this paper.</ns0:p></ns0:div>
<ns0:div><ns0:head>The First Survey (Pre-test)</ns0:head><ns0:p>The first survey has two sections: the first includes items to collect students' general information, such as name, gender, and age, and the second includes items related to Keller's ARCS motivation model <ns0:ref type='bibr' target='#b37'>(Li & Keller, 2018)</ns0:ref>.</ns0:p><ns0:p>The instructional materials motivation survey (IMMS) assesses students' motivation based on the ARCS model that includes 36 items distributed as 12 items for (A), nine items for (R), nine items for (C), and six items for (S). Although IMMS was used and tested with a Cronbach α=0.96, it is long, and not all items are necessary, especially those measured in a negative or reverse way <ns0:ref type='bibr' target='#b19'>(Chen, 2019)</ns0:ref>. Therefore, the reduced IMMS (RIMMS) proposed by <ns0:ref type='bibr' target='#b39'>Loorbach et al. (2015)</ns0:ref> was employed. RIMMS comprises 12 five-point Likert scale items, three for each ARCS dimension. The original version was translated and adapted to the lesson of simple interest (see the left side of Table <ns0:ref type='table'>2</ns0:ref>). The minimum score on the RIMMS survey is 12, and the maximum is 60 with a midpoint of 36.</ns0:p><ns0:p>The items about attention measure the degree to which the professor's lesson attracts the learner's attention. We consider the organization, quality, and variety of the materials employed. On the other hand, we use the content and style of explanations to measure the lesson's relevance perceived by students. The items regarding confidence measure the degree to which the learner felt confident while completing the simple interest lesson. The final three items measure the degree to which the learner finds the lesson satisfactory and the intention to keep working.</ns0:p></ns0:div>
<ns0:div><ns0:head>The Second Survey (Post-test)</ns0:head><ns0:p>The second survey comprises four sections. The first includes items to collect students' general information. The second section includes the 12 RIMMS items of the first survey, but adapted to assess students' motivation using SICMAR (see the right side of Table <ns0:ref type='table'>2</ns0:ref>).</ns0:p><ns0:p>The third section, related to TAM, comprises four items for perceived usefulness, five for perceived ease of use, and two for the intention to use (see Table <ns0:ref type='table' target='#tab_1'>3</ns0:ref>). The 11 items use a five-point Likert scale and were adapted from <ns0:ref type='bibr' target='#b41'>Miranda et al. (2016)</ns0:ref>. The minimum score for TAM is 11, and the maximum is 55, with a midpoint of 33. The four items regarding perceived usefulness measure the extent to which students believe that SICMAR would improve their performance in learning simple interest. The easiness that students perceive when using SICMAR is measured with the five items related to perceived ease of use (prototype manipulation employing the markers). The last two items measure the degree of acceptance when students use SICMAR.</ns0:p><ns0:p>The fourth section aims to gather information about SICMAR quality. The ten items, based on the five-point Likert scale, were adapted from <ns0:ref type='bibr' target='#b7'>Barraza, Cruz & Vergara (2015)</ns0:ref>. We collect information about SICMAR design (colors, size of the objects, velocity) and usability (results obtained), which together determine quality, as shown at the bottom of Table <ns0:ref type='table' target='#tab_1'>3</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>The Practical Tests</ns0:head><ns0:p>Two financial mathematics professors helped us to design a set of practical exercises regarding simple interest computation. We divided the set of exercises to design two practical tests with five items each. The first test is applied after professor intervention, and the second after using SICMAR. Professors carefully reviewed both tests to ensure similar difficulty. In both tests, the first two items ask the students to compute simple interest. The last three questions are challenging because the terms to compute must be cleared from Eqs. ( <ns0:ref type='formula'>1</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_0'>2</ns0:ref>). The third question deals with principal computation, while the fourth and fifth deal with interest rate and time calculation, respectively. An example of two pre-test exercises is shown on the left column of Table <ns0:ref type='table'>S1</ns0:ref>, while the post-test examples are shown on the right column.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>The study was conducted in early March 2020. A classroom at a public university located in northern Mexico was used as the educational setting. One professor who participated in the planning stage organized the sessions that comprised the study. The two sessions were conducted three days apart.</ns0:p></ns0:div>
<ns0:div><ns0:p>The Mexican university where the study was conducted imposed three restrictions regarding the participation of the students and the professor: i) all students enrolled in the financial mathematics course must participate; ii) the professor could only use the time established in the curriculum to offer explanations; and iii) only one session could be used to test SICMAR. Therefore, we decided to conduct a quasi-experimental study to establish a cause-and-effect relationship between independent and dependent variables.</ns0:p><ns0:p>A quasi-experimental study is characterized by the fact that the sample studied is not selected randomly, and control groups are not required. Instead, participants are assigned to the sample based on previously established non-random criteria (all the students in the financial mathematics course must participate). This study is also called a nonrandomized or pre-post intervention study and is frequently employed in educational research <ns0:ref type='bibr' target='#b42'>(Otte et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Before the experiment, students did not have prior knowledge of the concepts related to simple interest. Students were informed about the research goal and that the data obtained would be treated with confidentiality and used only for academic purposes. Moreover, students completed a consent form regarding data use. The Institute of Engineering and Technology of Universidad Autonoma de Ciudad Juarez approved the use of the data and reviewed the consent form students filled out.</ns0:p><ns0:p>In the first session, which lasted two hours, the professor explained the simple interest lesson employing traditional materials. Students were then asked to complete a practice consisting of the five pre-test exercises and to fill out the first survey. At the end of the first session, we asked students to bring an Android-based mobile device to the second session.</ns0:p><ns0:p>The second session lasted one and a half hours and started with an explanation of the use of SICMAR. Afterward, each student received a set of markers. Fortunately, all students brought an Android mobile device. Hence, mobile devices (smartphones and tablets) with different features were used, which allowed us to observe the variety of devices on which SICMAR can be executed. The average time interacting with the prototype was 39 minutes. Next, students were asked to complete the post-test practice consisting of five exercises and to fill out the second survey. In both sessions, students answered the surveys through the Internet (Microsoft Forms) and the practical exercises on a sheet of paper.</ns0:p></ns0:div>
<ns0:div><ns0:head>Preliminary Data Analysis</ns0:head><ns0:p>Due to the restrictions imposed by the university, students were not divided into a control and an experimental group. One hundred thirty-nine students enrolled in the financial mathematics course were surveyed. Data collected from the surveys were downloaded from Microsoft Forms to create a database with IBM SPSS software. The responses obtained were carefully reviewed. Extreme values were not discarded, but registers with incomplete information were identified. The registers with incomplete information correspond to 36 students who did not attend the second session. Therefore, the final sample comprises data from 103 students.</ns0:p><ns0:p>The sample size was deemed valid because i) our sample almost doubles the mean number of participants (M=58.2) from the fourth column of Table <ns0:ref type='table'>1</ns0:ref>; and ii) the section related to ARCS is the biggest of our surveys; therefore, the rule of thumb that a sample should have at least five times as many observations as there are variables to be analyzed is fulfilled (5x12=60).</ns0:p><ns0:p>Of the 103 participants, n=59 (57.28%) were female, and n=44 (42.72%) were male. The participants' ages ranged from 18 to 30 years with a mean age (M=19.74, SD=1.93). We measured the internal reliability of the surveys with Cronbach's alpha (α). A summary of the results is shown in Table <ns0:ref type='table' target='#tab_3'>4</ns0:ref>. Values greater than 0.7 are accepted (good-excellent). The total item correlation computed does not reflect the necessity of eliminating any item. Therefore, the α value for R in the pre-test ARCS was also accepted.</ns0:p></ns0:div>
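Cronbach's alpha was computed with SPSS in the study; for readers who want to reproduce the reliability check, a minimal NumPy version of the formula α = k/(k-1) · (1 − Σσᵢ²/σₜ²) is sketched below, with toy data standing in for the survey responses.

    # Minimal Cronbach's alpha (rows = respondents, columns = items); illustrative only.
    import numpy as np

    def cronbach_alpha(items):
        items = np.asarray(items, dtype=float)
        k = items.shape[1]                         # number of items
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    toy = [[4, 5, 4], [3, 4, 3], [5, 5, 5], [2, 3, 2], [4, 4, 4]]  # 5 respondents x 3 items
    print(round(cronbach_alpha(toy), 3))           # values above 0.7 read as acceptable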
<ns0:div><ns0:head>Assessment of Students Motivation with RIMMS</ns0:head><ns0:p>This part of the study allowed us to assess whether a significant difference in motivation is obtained when comparing the professor's lesson and SICMAR. The mean and standard deviation for each item are displayed in Table <ns0:ref type='table'>2</ns0:ref>. All scores exceed the central value of the scale. Moreover, the greater mean values were always obtained with SICMAR. The minimum difference is observed for attention (4.14-3.95=0.19) and the maximum for relevance (4.38-3.87=0.51). The difference for the whole study is (4.17-3.87=0.3). The results for both motivation studies are plotted in Fig. <ns0:ref type='figure'>6</ns0:ref>.</ns0:p><ns0:p>It was also necessary to determine whether the differences obtained are statistically significant. The normality test indicated that data from the survey were normally distributed. Therefore, the paired t-test with a 5% level of significance was calculated (t=-1.761 for attention; t=-6.120 for relevance, t=-2.281 for confidence, t=-2.877 for satisfaction, and t=-3.613 for ARCS). P-values less than or equal to 0.05 are considered significant and values greater than 0.05 as nonsignificant. Considering the null hypothesis, 'there is no significant difference between pre-test and post-test scores':</ns0:p><ns0:p>• H 1 : Is rejected. We obtained p=0.081; therefore, the difference of 0.19 is not significant regarding attention (A).</ns0:p><ns0:p>• H 2 : Is accepted. There is statistical evidence (p<0.001) to support that with SICMAR, a significant difference of 0.51 on students' relevance (R) is obtained.</ns0:p><ns0:p>• H 3 : Is accepted. The difference of 0.21 (4.08-3.87) is significant regarding the confidence (C) dimension with p=0.025.</ns0:p><ns0:p>• H 4 : Is accepted. We obtained p=0.005; hence, the difference of 0.3 is significant regarding students' satisfaction (S).</ns0:p><ns0:p>The magnitude and significance of causal connections between variables can be estimated using path analysis. We performed a path analysis to compute the total effects among the four ARCS dimensions and determine students' motivation. The diagram in Fig. <ns0:ref type='figure'>7</ns0:ref> is the visual representation of the relationships between variables. The path coefficients (β) estimate the variance of the indicator that is accounted for by the latent construct; the higher the value of β, the stronger the effect. The values calculated for the pre-test are shown above the arrows and the post-test values below the arrows. We also calculated the determination coefficients (R 2 ) to measure how close the data are to the fitted regression line (values must be greater than 0.2). The pre-test values are shown in the upper right corner and the post-test values in the lower right corner.</ns0:p><ns0:p>It is noted from Fig. <ns0:ref type='figure'>7</ns0:ref> that a significant direct effect exists from A->R, from R->C, and from C->S with a significance level of 5% for both tests. Hence, the hypothesis:</ns0:p><ns0:p>• H 5 : Is accepted. Students increased their motivation using SICMAR. The value M=4.17 obtained with SICMAR is greater than M=3.87 obtained with the professor's lesson. The difference of 0.3 is statistically significant (p<0.001), representing a motivation increase of 7.75%. In summary, the mean values and the path analysis corroborate the motivation increase.</ns0:p></ns0:div>
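The paired t-tests above were run in SPSS; an equivalent SciPy call is shown below, with randomly generated arrays standing in for the 103 paired dimension scores (the real data are not reproduced here).

    # Equivalent paired t-test in SciPy; `pre` and `post` are hypothetical stand-ins.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    pre = rng.normal(3.9, 0.6, 103)      # stand-in pre-test dimension scores
    post = rng.normal(4.2, 0.6, 103)     # stand-in post-test dimension scores

    t, p = stats.ttest_rel(pre, post)    # paired (dependent) samples t-test
    print(t, p)                          # reject the null hypothesis when p <= 0.05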
<ns0:div><ns0:head>Assessment of Students' Achievement in Practice Tests</ns0:head><ns0:p>The professor reviewed the students' responses to emit the grade. An answer is correct only if both the result and the procedure used to obtain it are correct. Many students presented good results but a wrong procedure; these cases were classified as incorrect. Since the test includes five items, each correct answer adds 20 points. Therefore, the final grade ranged from 0 to 100. A summary of the correct and incorrect responses for each test is shown in Table <ns0:ref type='table'>5</ns0:ref>.</ns0:p><ns0:p>We obtained (M=39.02 and SD=28.88) for the pre-test and (M=66.60 and SD=29.02) for the post-test. Hence, an increase of 70.68% was observed in post-test grades compared with the pre-test. For the post-test, 25 students obtained the maximum grade (100), and only four in the pre-test. Thirteen students (13.59%) obtained better grades on the pre-test than the post-test. Moreover, 72 students obtained better scores for the post-test than the pre-test, while 18 students obtained the same score for both tests. In both sessions, women obtained better grades, with pre-test values (M=44.06, SD=27.41) and post-test values (M=76.61, SD=20.41). The pre-test values obtained for men were (M=32.27, SD=30.56), and for the post-test (M=53.18, SD=31.38). The plot (box and whiskers) of the scores obtained by students is illustrated in Fig. <ns0:ref type='figure'>8</ns0:ref>.</ns0:p><ns0:p>The Kolmogorov-Smirnov test was employed to select the statistical analysis tool according to the data distribution. The results obtained with a 5% level of significance using SPSS for the pre-test were: Z=0.162, p<0.001, skewness=0.285, skewness standard error=0.238, kurtosis=-0.885, and kurtosis standard error=0.472. For the post-test the results were: Z=0.192, p<0.001, skewness=-0.675, skewness standard error=0.238, kurtosis=-0.396, and kurtosis standard error=0.472. For both tests (pre-test and post-test) the results were: Z=0.109, p=0.004, skewness=-0.329, skewness standard error=0.238, kurtosis=0.242, and kurtosis standard error=0.472, meaning that normality is not satisfied. Thus, the paired Wilcoxon signed-rank test was utilized to determine whether the grade difference is significant (Z=-6.129, p<0.001, and medium effect size d=-0.427). Therefore:</ns0:p><ns0:p>• H 6 : Is accepted. Students obtained an average grade of 66.6 with SICMAR and 39.02 with the professor's material. The difference of 27.58 is statistically significant (p<0.001).</ns0:p></ns0:div>
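The decision chain above (normality screening, then a non-parametric paired test) can be sketched as follows. The grade arrays are hypothetical placeholders; note that feeding sample-estimated parameters to the Kolmogorov-Smirnov test is the common SPSS-style shortcut (strictly, the Lilliefors correction applies), and the effect size follows the r = Z/sqrt(N) convention, which with Z = -6.129 and N = 206 paired observations reproduces the reported -0.427:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    pre = np.clip(rng.normal(39.02, 28.88, size=103), 0, 100)   # hypothetical pre-test grades
    post = np.clip(rng.normal(66.60, 29.02, size=103), 0, 100)  # hypothetical post-test grades

    # Kolmogorov-Smirnov screening against a normal with the sample's own parameters
    z_ks, p_ks = stats.kstest(pre, 'norm', args=(pre.mean(), pre.std(ddof=1)))

    # Normality rejected -> paired Wilcoxon signed-rank test
    w_stat, p_w = stats.wilcoxon(pre, post)

    # Effect size from the reported normal approximation: r = Z / sqrt(N)
    print(round(-6.129 / np.sqrt(2 * 103), 3))  # -> -0.427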
<ns0:div><ns0:head>SICMAR Technology Acceptance Assessment</ns0:head><ns0:p>We used AMOS software to examine the effects between observed and latent variables and the validity of the proposed hypotheses. The variables and the relationships between them were established considering <ns0:ref type='bibr' target='#b41'>Miranda et al. (2016)</ns0:ref> and <ns0:ref type='bibr' target='#b31'>Hamidi & Chavoshi (2018)</ns0:ref>. The model in Fig. <ns0:ref type='figure'>9</ns0:ref> comprises four latent variables (spheres) and 21 observed variables (squares). The relationships are symbolized with unidirectional arrows. The latent variable of quality is independent because no arrow is connected to it, and the remainder are dependent (at least one arrow is connected to each).</ns0:p><ns0:p>In structural equation modeling, only identified (over-, just-, or under-identified) models can be estimated. Identification formally determines whether the model parameters can be uniquely estimated from the data. We conducted the identification by computing the degrees of freedom (DoF=184). When the DoF is greater than 0, the data provide more information than there are parameters to estimate; therefore, our model is over-identified. Afterward, we calculated the sample variances and covariances to obtain the values that provide a reproduced matrix that best fits the observed matrix. A model fits the data well if the differences between observed and predicted values are small. For this purpose, we employed the maximum likelihood method.</ns0:p><ns0:p>A summary of the values obtained is shown in Table <ns0:ref type='table'>6</ns0:ref>. We expected χ 2 /DoF to range from 2 to 3, a GFI value near 1, and RMR closer to 0. Our model fulfills these conditions; therefore, we have a good-fitting model. Next, we computed the coefficients of determination (R 2 ) to measure the percentage of variance explained by the independent variables. The results are shown in Fig. <ns0:ref type='figure'>9</ns0:ref>. Values higher than 0.5 are considered good.</ns0:p><ns0:p>Then, we computed the standardized factor loadings and p-values for the observed variables. A summary of the results is shown in the third and fourth columns of Table <ns0:ref type='table' target='#tab_1'>3</ns0:ref>. All the relations between observed variables and latent variables are accepted at the 1% significance level. For quality, the variables Q9 and Q10, related to markers, were the most important. For the perceived ease of use, PEU2 and PEU4, which address familiarity with the technology and manipulation of the prototype controls, obtained the greatest values. Regarding the perceived usefulness, PU3 and PU4 were the most important, referring to the usefulness of SICMAR for learning and remembering concepts. The highest value was obtained for ITU1, where students expressed their interest in continuing to use SICMAR. Finally, we computed the path coefficients (β), the p-values, and the direct, indirect, and total effects between variables (see Table <ns0:ref type='table'>7</ns0:ref>). A direct effect is a relationship between one variable and another. An indirect effect is a relationship between two variables mediated by one or more other variables. The sum of direct and indirect effects determines the total effect. Each direct effect is represented with a β in Fig. <ns0:ref type='figure'>9</ns0:ref> and helps validate the hypotheses.</ns0:p><ns0:p>• H 7 : Is accepted. The effect of quality on perceived usefulness has β=0.694 and p<0.05. When the quality increases its standard deviation by one unit, the perceived usefulness goes up by 0.694 units, establishing a significant relationship with a confidence of 95%. Also, the quality establishes an indirect effect on the intention to use when it passes through the perceived usefulness.</ns0:p><ns0:p>• H 8 : Is accepted. The effect of quality on perceived ease of use has β=0.902 and p<0.001. When the quality increases its standard deviation by one unit, the perceived ease of use goes up by 0.902 units, establishing a significant relationship with a confidence of 95%.</ns0:p><ns0:p>• H 9 : Is rejected. The effect of perceived ease of use on perceived usefulness has β=0.153 and p=0.562. Therefore, the direct effect is not significant, with a confidence of 95%.</ns0:p><ns0:p>• H 10 : Is rejected. The effect of perceived ease of use on the intention to use has β=0.054 and p=0.699. Therefore, the direct effect is not significant, with a confidence of 95%.</ns0:p><ns0:p>• H 11 : Is accepted. The effect of perceived usefulness on the intention to use has β=0.830 and p<0.001. When the perceived usefulness increases its standard deviation by one unit, the intention to use goes up by 0.830 units, establishing a significant relationship with a confidence of 95%.</ns0:p><ns0:p>According to the results, the intention to use SICMAR is significantly affected by the quality and perceived usefulness. Students expressed their intention to use SICMAR due to the total effect of 0.738 found in the path Quality->PU->ITU.</ns0:p></ns0:div>
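Two pieces of arithmetic behind this model are easy to verify by hand. With 21 observed variables there are 21*22/2 = 231 distinct variances and covariances, so DoF = 184 implies 47 free parameters (hence over-identification; the parameter count is inferred, not stated in the paper). Likewise, summing the products of the reported standardized coefficients along the three paths from quality to the intention to use recovers approximately 0.739, in line with the reported total effect of 0.738 (the small gap is rounding of the coefficients). A minimal sketch:

    # Identification count: distinct data moments vs. free parameters
    p = 21                       # observed variables
    moments = p * (p + 1) // 2   # 231 distinct variances/covariances
    dof = 184                    # reported degrees of freedom
    print(moments - dof)         # 47 free parameters -> over-identified model

    # Total indirect effect of Quality on ITU (reported standardized betas)
    q_pu, q_peu = 0.694, 0.902       # Quality -> PU, Quality -> PEU
    peu_pu, peu_itu = 0.153, 0.054   # PEU -> PU, PEU -> ITU
    pu_itu = 0.830                   # PU -> ITU

    total = (q_pu * pu_itu               # Quality -> PU -> ITU
             + q_peu * peu_pu * pu_itu   # Quality -> PEU -> PU -> ITU
             + q_peu * peu_itu)          # Quality -> PEU -> ITU
    print(round(total, 3))               # ~0.739, vs. the reported 0.738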
<ns0:div><ns0:head>SICMAR Quality Assessment</ns0:head><ns0:p>We conducted a study to determine whether students considered SICMAR a good-quality prototype. The scores obtained (M=3.93 and SD=0.62) demonstrate that students consider SICMAR a good-quality prototype, as shown in Table <ns0:ref type='table' target='#tab_1'>3</ns0:ref>. The minimum value obtained (M=3.16) was for item Q5. Students considered the buttons small, so they need to be enlarged for easier manipulation. The next minimum corresponds to Q10 (M=3.56); hence, students could not easily manipulate the device and the markers simultaneously. The best results correspond to Q1 (M=4.45) and Q6 (M=4.40), which suggests that all the simple interest terms were included and that the response speed for computations was fast.</ns0:p><ns0:p>Data obtained from quality follow a normal distribution. Therefore, a one-sample t-test with a significance of 5% and a reference value of 3.8 was performed <ns0:ref type='bibr'>(t=2.126, p=0.036, and d=0.20)</ns0:ref>:</ns0:p><ns0:p>• H 12 : Is accepted. A significant difference is obtained when comparing M=3.93 with the reference value (3.8). Also, as mentioned in the TAM study, quality influences the students' intention to use SICMAR.</ns0:p></ns0:div>
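The benchmark comparison reduces to a one-sample t-test against the reference value 3.8, and Cohen's d = (M - 3.8)/SD = (3.93 - 3.8)/0.62 ≈ 0.21, consistent with the reported d = 0.20 up to rounding. A minimal sketch with a hypothetical placeholder for the 103 per-student quality scores:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    quality = rng.normal(3.93, 0.62, size=103)  # hypothetical per-student quality means

    ref = 3.8  # reference value for a 'good quality' prototype
    t, p_val = stats.ttest_1samp(quality, popmean=ref)
    d = (quality.mean() - ref) / quality.std(ddof=1)  # one-sample Cohen's d
    print(f"t = {t:.3f}, p = {p_val:.3f}, d = {d:.2f}")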
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>From the results obtained, we observed that mobile augmented reality can be applied to financial mathematics, with the benefit of increasing perception and user interaction with the environment; a non-AR application cannot offer those features.</ns0:p><ns0:p>The findings of our motivation study are consistent with those reported in the papers of Table <ns0:ref type='table'>1</ns0:ref>. Mobile augmented reality changes the way in which students interact with the world. Moreover, the use of MAR increases students' motivation when learning about simple interest topics. Students found it attractive to use their fingers to insert the values for calculation and to interact with the markers to define the inputs and outputs. According to the professor, students became more engaged during the post-test session. This is due to the different and interactive way of presenting the information. Students expressed that MAR could turn a classic learning process into an engaging experience. Based on Table <ns0:ref type='table'>2</ns0:ref>, the elements that increased students' motivation were: i) regarding relevance, the content and style of the SICMAR explanations; ii) regarding confidence, the organization of the information; iii) regarding satisfaction, the design of the prototype (the interactive representations of time conversions, the 2D virtual objects, and how marker interaction determined the calculation to be computed). However, students did not consider the quality of the contents, the organization of the information, and the variety of 2D models and interactions of SICMAR sufficient to keep their attention. The fact of using ICTs also influences the results obtained. Moreover, the younger participants felt more motivated with SICMAR, as expected.</ns0:p><ns0:p>According to <ns0:ref type='bibr' target='#b39'>Loorbach et al. (2015)</ns0:ref>, confidence influences students' persistence and accomplishment. Hence, it is crucial for motivation. In our post-test study, confidence (β=0.902) positively affects students' motivation. The main differences between our findings and the works by <ns0:ref type='bibr' target='#b26'>Estapa & Nadolny (2015)</ns0:ref>, <ns0:ref type='bibr' target='#b18'>Cascales et al. (2017), and</ns0:ref><ns0:ref type='bibr' target='#b34'>Ibáñez et al. (2020)</ns0:ref> are that our sample size is the biggest, we used RIMMS instead of IMMS (since it has fewer items, students experience less fatigue), and we utilized path analysis. In summary, the statistical results indicate that students who used SICMAR significantly increased their motivation scores (7.75%) compared with the scores obtained in the professor's lesson.</ns0:p><ns0:p>As observed in Fig. <ns0:ref type='figure'>8</ns0:ref>, students performed better when answering the practice exercises using SICMAR than after the professor's lesson. An increase of 70.68% was observed. Regarding simple interest computation, students using SICMAR performed better for the first question but not for the second (see Table <ns0:ref type='table'>5</ns0:ref>). All the students with incorrect answers to these questions for the pre-test failed to convert the time unit. On the other hand, only ten students in the post-test failed to convert the time unit. 
Mistakes such as not including/describing the procedure to solve the problem, not copying the correct answer, and wrong marker selection were the most common.</ns0:p><ns0:p>Regarding exam questions 3-5 in Table <ns0:ref type='table'>5</ns0:ref>, it is notable that the performance of students using SICMAR increased. In these questions, the unknown term must be isolated before computing the solution. All the students with incorrect answers for the pre-test failed to convert the time unit. On the other hand, half of the incorrect responses for the post-test were due to conversions. The remaining mistakes were due to not including the procedure to solve the problem or not copying the correct answer.</ns0:p><ns0:p>The works by <ns0:ref type='bibr' target='#b26'>Estapa & Nadolny (2015)</ns0:ref>, <ns0:ref type='bibr' target='#b55'>Tobar et al. (2015)</ns0:ref>, and <ns0:ref type='bibr' target='#b34'>Ibáñez et al. (2020)</ns0:ref> reported on students' achievements; however, they measured the time needed to execute the tasks, unlike our proposal, which quantified the answers to practice exercises. <ns0:ref type='bibr' target='#b44'>Purnama et al. (2014)</ns0:ref> reported an increase of 17% in the learning process; unfortunately, the way it was measured was never explained. <ns0:ref type='bibr' target='#b20'>Coimbra et al. (2015)</ns0:ref> presented only qualitative preliminary explanations about math learning enhancement. Therefore, we cannot provide comparisons against these literature works.</ns0:p><ns0:p>None of the works in Table <ns0:ref type='table'>1</ns0:ref> utilized the TAM; hence, we cannot offer comparisons there either. However, the path Quality->PU->ITU determined the students' intention to use SICMAR. Students gave the highest quality scores to the concepts explained, the calculation speed, the results, the colors, and the legibility of the texts displayed. On the other hand, the question regarding the size of the buttons obtained the lowest score, and all the comments about the small button size came from smartphone users. This motivates future research to obtain insights into how a mobile device's screen size influences the perception of the AR experience. Students considered SICMAR useful for learning, and it helped them remember the concepts related to simple interest. Finally, students considered SICMAR's quality sufficient to keep using the prototype.</ns0:p></ns0:div>
<ns0:div><ns0:head>Lessons Learned</ns0:head><ns0:p>Our augmented reality educational prototype serves as an alternative tool to learn the simple interest topic, but it cannot replace the teacher. Professors will continue looking for tools to improve the teaching-learning process. However, teachers are often not willing to make the effort to create such tools, and frequently they do not have the computer skills to develop them, because a usable app is challenging to create. The software available to rapidly create augmented reality experiences does not offer all the resources needed to explain complex science topics.</ns0:p><ns0:p>Augmented reality causes enjoyment in students and a desire to repeat the experience. Although complex 3D models were not needed to represent the augmented reality in SICMAR, this alternative representation of real phenomena motivated the students. Even though the literature asserts that augmented reality can be exploited in any field, we recommend choosing application areas where 3D models are needed to show different views of the objects. Even though 3D models are the base of augmented reality, it is still difficult to explain how the computer-based models inserted into the real scene increase student achievement. Some students expressed that prolonged use of SICMAR slows down and warms up the device. We know that this problem is common when a mobile device is used for a considerable time. However, we are not sure whether this problem is accentuated by the execution of our prototype's complex routines. We therefore invite developers to conduct a thorough review to detect routines whose processor usage can be optimized, to consider native development, or to test another coding framework.</ns0:p><ns0:p>Remarkably, we detected that students made mistakes when handling the five markers to manipulate SICMAR. For example, they selected the principal marker when the correct one was the amount. Therefore, SICMAR cannot help students understand the stated problem nor identify the concepts involved. The issue of problem understanding is still an open challenge. We recommend that developers avoid combinations of many markers to trigger augmented reality.</ns0:p><ns0:p>Had professors and students declined to test SICMAR, they would still regard paper, the blackboard, books, slides, and computer-based content as the only resources for learning. With the findings obtained, we observed the potential of augmented reality for educational settings. School administrators must be convinced that augmented reality is an educational tool that they should provide for the students. Moreover, schools must invest effort into implementing more resources based on augmented reality across the complete curriculum. With this, it will be possible: i) to observe the real impact of augmented reality on students; ii) to track the usability of the resources; iii) to detect the moment when interest is lost; iv) to establish whether the impact was due only to novelty; and v) to know what happens with students' concentration and cognitive load.</ns0:p></ns0:div>
<ns0:div><ns0:head>Limitations</ns0:head><ns0:p>After experimentation, we note some limitations in our study. Because we conducted a quasi-experimental study, our data might be biased. Therefore, the reported findings should be replicated with an experimental study (using control and experimental groups). We cannot know whether students changed their behavior because they were aware of participating in a research study. Only one financial mathematics professor used SICMAR, so the positive comments regarding usability may change as more teachers become involved. Some students focused their attention on the application and not on the essential parts of the topic to learn. This is known as the attention tunneling effect, which can explain why some students scored lower using SICMAR. Also, not all students felt comfortable using SICMAR, which suggests that using ICTs can be challenging for some people. Moreover, the issues related to gender were not analyzed in depth, which is currently a trend in the AR field.</ns0:p></ns0:div>
<ns0:div><ns0:head>Future Research</ns0:head><ns0:p>Extensions of the proposed study include the improvement of the interaction environment, a larger sample of students, the measurement of cognitive load, the involvement of more financial mathematics teachers, and the implementation of other financial mathematics topics. It is desirable to study in depth the cases of students whose grades decreased using SICMAR. The worst case is a student who obtained a grade of 80 with the professor's lesson and zero with SICMAR. Also, in-depth analyses of the individual answers to the tests and the questionnaires will be conducted. A critical review of the interface design should be performed. Further research is necessary to determine the content that must be added to SICMAR to keep students' attention. Finally, it would be advisable to run a pilot study with Microsoft HoloLens to observe whether the possibility of not clicking on screens increases students' motivation and achievement.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In this paper, the SICMAR prototype based on augmented reality was introduced to verify its effects on undergraduate financial mathematics students' learning of the simple interest concept. To the best of our knowledge, concepts such as principal, amount, time, interest rate, and simple interest are considered fundamental to promote students' financial education. SICMAR was tested in a real university setting to assess its quality, students' motivation using ARCS, their achievement in answering practice exercises, and technology acceptance with the extended TAM. The results obtained from tests with 103 participants revealed that the undergraduate students were interested in using SICMAR frequently because of its quality, were motivated to learn the simple interest topics, and increased their achievement in answering practice exercises. All this leads to the conclusion that SICMAR is a valuable complementary tool for learning the concepts related to simple interest computation.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 1</ns0:head><ns0:p>Visual representation of the methodology to develop SICMAR.</ns0:p><ns0:p>The third and fourth sections of the second survey (post-test).</ns0:p><ns0:p>SICMAR TAM: Please select the number that best represents how do you feel about SICMAR acceptance: 1=Strongly disagree, 2=Disagree, 3=Neutral, 4=Agree, 5=Strongly agree.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_0'><ns0:head>H 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>There is a significant difference in students' motivation scores in the pre-test and the post-test.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>(a) To display information about the detected marker (input term). (b) To capture the user inputs and determine if a term is handled as input or output. (c) To display information about the time conversions. (d) To display the calculation result (output term) or show an error.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,199.12,525.00,120.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='30,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='31,42.52,178.87,525.00,394.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='32,42.52,178.87,525.00,274.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='33,42.52,178.87,525.00,175.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='34,42.52,178.87,525.00,326.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='35,42.52,178.87,525.00,284.25' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>Please think about each statement concerning the professor's lesson you have just participated and indicated how true it is. Give the answer that truly applies to you, not what you would like to be true or what you think others want to hear. Use the following values to indicate your response to each item: 1=Not true, 2=Slightly true, 3=Moderately true, 4=Mostly true, and 5=Very true. Please think about each statement concerning the SICMAR you have just used and indicated how true it is. Give the answer that truly applies to you, not what you would like to be true or what you think others want to hear. Use the following values to indicate your response to each item: 1=Not true, 2=Slightly true, 3=Moderately true, 4=Mostly true, and 5=Very true.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>General Data</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Name (s):</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>Surname:</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Age:</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Gender:</ns0:cell><ns0:cell>o (Male)</ns0:cell><ns0:cell /><ns0:cell>o (Female)</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>ARCS Professor</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>ARCS SICMAR</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Mean</ns0:cell><ns0:cell>SD</ns0:cell><ns0:cell /><ns0:cell>Mean</ns0:cell><ns0:cell>SD</ns0:cell></ns0:row><ns0:row><ns0:cell>Attention (A)</ns0:cell><ns0:cell>3.95</ns0:cell><ns0:cell>0.81</ns0:cell><ns0:cell>Attention (A)</ns0:cell><ns0:cell>4.14</ns0:cell><ns0:cell>0.81</ns0:cell></ns0:row><ns0:row><ns0:cell>A1. The quality of the materials used helped to hold my attention.</ns0:cell><ns0:cell>3.91</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>A1. The quality of the contents displayed helped to hold my attention.</ns0:cell><ns0:cell>4.19</ns0:cell><ns0:cell>0.93</ns0:cell></ns0:row><ns0:row><ns0:cell>A2. The way the information was</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>A2. The way the information was</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>organized helped keep my attention.</ns0:cell><ns0:cell>3.97</ns0:cell><ns0:cell>0.89</ns0:cell><ns0:cell>organized (buttons, menus) helped</ns0:cell><ns0:cell>4.09</ns0:cell><ns0:cell>0.90</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>keep my attention.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>A3. The variety of readings, exercises,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>A3. The variety of 2D models and</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>and illustrations helped keep my</ns0:cell><ns0:cell>3.98</ns0:cell><ns0:cell>1.04</ns0:cell><ns0:cell>interactions helped keep my attention</ns0:cell><ns0:cell>4.17</ns0:cell><ns0:cell>0.94</ns0:cell></ns0:row><ns0:row><ns0:cell>attention on the explanations.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>on the explanations.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Relevance (R)</ns0:cell><ns0:cell>3.87</ns0:cell><ns0:cell>0.74</ns0:cell><ns0:cell>Relevance (R)</ns0:cell><ns0:cell>4.38</ns0:cell><ns0:cell>0.70</ns0:cell></ns0:row><ns0:row><ns0:cell>R1. It is clear to me how the content of</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>R1. 
It is clear to me how the content of</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>this lesson is related to things I already</ns0:cell><ns0:cell>3.35</ns0:cell><ns0:cell>1.02</ns0:cell><ns0:cell>SICMAR is related to things I already</ns0:cell><ns0:cell>4.48</ns0:cell><ns0:cell>0.81</ns0:cell></ns0:row><ns0:row><ns0:cell>know.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>know.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>R2. The content and style of</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>R2. The content and style of</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>explanations convey the impression that being able to work with simple</ns0:cell><ns0:cell>4.05</ns0:cell><ns0:cell>0.92</ns0:cell><ns0:cell>explanations used by SICMAR convey the impression that being able to work</ns0:cell><ns0:cell>4.31</ns0:cell><ns0:cell>0.86</ns0:cell></ns0:row><ns0:row><ns0:cell>interest is worth it.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>with simple interest is worth it.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>R3. The content of this lesson will be useful to me.</ns0:cell><ns0:cell>4.22</ns0:cell><ns0:cell>0.89</ns0:cell><ns0:cell>R3. The content of SICMAR will be useful to me.</ns0:cell><ns0:cell>4.36</ns0:cell><ns0:cell>0.86</ns0:cell></ns0:row><ns0:row><ns0:cell>Confidence (C)</ns0:cell><ns0:cell>3.87</ns0:cell><ns0:cell>0.77</ns0:cell><ns0:cell>Confidence (C)</ns0:cell><ns0:cell>4.08</ns0:cell><ns0:cell>0.73</ns0:cell></ns0:row><ns0:row><ns0:cell>C1. As I worked with this lesson, I was</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>C1. As I worked with SICMAR, I was</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>confident that I could learn how to</ns0:cell><ns0:cell>4.12</ns0:cell><ns0:cell>0.91</ns0:cell><ns0:cell>confident that I could learn how to</ns0:cell><ns0:cell>4.07</ns0:cell><ns0:cell>0.88</ns0:cell></ns0:row><ns0:row><ns0:cell>compute simple interest well.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>compute simple interest well.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>C2. After working with this lesson for</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>C2. After working with SICMAR for a</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>a while, I was confident that I would be able to pass a test about simple</ns0:cell><ns0:cell>3.54</ns0:cell><ns0:cell>0.94</ns0:cell><ns0:cell>while, I was confident that I would be able to pass a test about simple</ns0:cell><ns0:cell>4.08</ns0:cell><ns0:cell>0.92</ns0:cell></ns0:row><ns0:row><ns0:cell>interest.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>interest.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>C3. The good organization of the</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>C3. The good organization of</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>content helped me be confident that I</ns0:cell><ns0:cell>3.96</ns0:cell><ns0:cell>0.83</ns0:cell><ns0:cell>SICMAR helped me be confident that</ns0:cell><ns0:cell>4.11</ns0:cell><ns0:cell>0.75</ns0:cell></ns0:row><ns0:row><ns0:cell>would learn about simple interest.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>I would learn about simple interest.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Satisfaction (S)</ns0:cell><ns0:cell>3.80</ns0:cell><ns0:cell>0.77</ns0:cell><ns0:cell>Satisfaction (S)</ns0:cell><ns0:cell>4.10</ns0:cell><ns0:cell>0.83</ns0:cell></ns0:row><ns0:row><ns0:cell>S1. 
I enjoyed working with this lesson</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>S1. I enjoyed working with SICMAR</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>so much that I was stimulated to keep</ns0:cell><ns0:cell>3.61</ns0:cell><ns0:cell>0.89</ns0:cell><ns0:cell>so much that I was stimulated to keep</ns0:cell><ns0:cell>3.92</ns0:cell><ns0:cell>0.93</ns0:cell></ns0:row><ns0:row><ns0:cell>on working.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>on working.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>S2. I really enjoyed working with this simple interest lesson.</ns0:cell><ns0:cell>3.85</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell>S2. I really enjoyed working with SICMAR.</ns0:cell><ns0:cell>4.07</ns0:cell><ns0:cell>0.92</ns0:cell></ns0:row><ns0:row><ns0:cell>S3. It was a pleasure to work with such a well-designed lesson.</ns0:cell><ns0:cell>3.95</ns0:cell><ns0:cell>0.82</ns0:cell><ns0:cell>S3. It was a pleasure to work with such a well-designed prototype.</ns0:cell><ns0:cell>4.31</ns0:cell><ns0:cell>0.89</ns0:cell></ns0:row><ns0:row><ns0:cell>ARCS</ns0:cell><ns0:cell>3.87</ns0:cell><ns0:cell>0.69</ns0:cell><ns0:cell>ARCS</ns0:cell><ns0:cell>4.17</ns0:cell><ns0:cell>0.66</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>Please select the number that best represents how do you feel about SICMAR quality: 1=Not at all, 2=A little, 3=Moderately, 4=Much, 5=Very much.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>SD</ns0:cell><ns0:cell>Standardized Factor Loadings</ns0:cell><ns0:cell>Hypotheses Interpretation</ns0:cell></ns0:row><ns0:row><ns0:cell>Perceived Usefulness (PU)</ns0:cell><ns0:cell>4.09</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>PU1. I could improve my learning performance by using SICMAR</ns0:cell><ns0:cell>3.97</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.762</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>PU2. I could enhance my simple interest proficiency by using SICMAR</ns0:cell><ns0:cell>3.99</ns0:cell><ns0:cell>0.97</ns0:cell><ns0:cell>0.771</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>PU3. I think SICMAR is useful for learning purposes.</ns0:cell><ns0:cell>4.25</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>0.820</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>PU4. Using SICMAR will be easy to remember the</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>concepts related to the calculation of simple</ns0:cell><ns0:cell>4.17</ns0:cell><ns0:cell>0.97</ns0:cell><ns0:cell>0.832</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>interest.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Perceived Ease of Use (PEU)</ns0:cell><ns0:cell>4.04</ns0:cell><ns0:cell>0.81</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>PEU1. I think SICMAR is attractive and easy to use</ns0:cell><ns0:cell>3.79</ns0:cell><ns0:cell>1.13</ns0:cell><ns0:cell>0.679</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>PEU2. Learning to use SICMAR was not a</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>problem for me due to my familiarity with the</ns0:cell><ns0:cell>4.32</ns0:cell><ns0:cell>0.97</ns0:cell><ns0:cell>0.805</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>technology used.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>PEU3. The marker detection was fast.</ns0:cell><ns0:cell>4.02</ns0:cell><ns0:cell>1.04</ns0:cell><ns0:cell>0.664</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>PEU4. The tasks related to the manipulation of controls were simple to execute.</ns0:cell><ns0:cell>3.92</ns0:cell><ns0:cell>1.04</ns0:cell><ns0:cell>0.817</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>PEU5. I was able to locate the areas for conversions and calculations quickly.</ns0:cell><ns0:cell>4.19</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.792</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>Intention to Use SICMAR (ITU)</ns0:cell><ns0:cell>4.38</ns0:cell><ns0:cell>0.82</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>ITU1. I want to use the app in the future if I have the opportunity.</ns0:cell><ns0:cell>4.28</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>0.925</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>ITU2. 
The main concepts of SICMAR can be used to learn other topics.</ns0:cell><ns0:cell>4.49</ns0:cell><ns0:cell>0.81</ns0:cell><ns0:cell>0.754</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>TAM</ns0:cell><ns0:cell>4.12</ns0:cell><ns0:cell>0.72</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>SICMAR Quality</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Quality questions</ns0:cell><ns0:cell>Mean</ns0:cell><ns0:cell>SD</ns0:cell><ns0:cell>Standardized Factor Loadings</ns0:cell><ns0:cell>Hypotheses Interpretation</ns0:cell></ns0:row><ns0:row><ns0:cell>Q1. SICMAR showed all the concepts explained by the teacher.</ns0:cell><ns0:cell>4.45</ns0:cell><ns0:cell>0.84</ns0:cell><ns0:cell>0.450</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>Q2. The results obtained with SICMAR were correct.</ns0:cell><ns0:cell>4.24</ns0:cell><ns0:cell>0.82</ns0:cell><ns0:cell>0.562</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>Q3. The colors used for conversions were adequate.</ns0:cell><ns0:cell>4.17</ns0:cell><ns0:cell>0.91</ns0:cell><ns0:cell>0.527</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>Q4. The texts and numbers displayed by SICMAR were legible.</ns0:cell><ns0:cell>4.13</ns0:cell><ns0:cell>0.94</ns0:cell><ns0:cell>0.627</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>Q5. The size of the buttons allowed the easy manipulation of SICMAR.</ns0:cell><ns0:cell>3.16</ns0:cell><ns0:cell>1.22</ns0:cell><ns0:cell>0.531</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>Q6. SICMAR velocity of response to carry out the calculations was fast.</ns0:cell><ns0:cell>4.40</ns0:cell><ns0:cell>0.85</ns0:cell><ns0:cell>0.528</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>Q7. The classroom illumination was adequate.</ns0:cell><ns0:cell>3.79</ns0:cell><ns0:cell>0.98</ns0:cell><ns0:cell>0.513</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>Q8. The manipulation of the electronic device I use was straightforward.</ns0:cell><ns0:cell>3.76</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>0.676</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>Q9. Markers' manipulation was easy.</ns0:cell><ns0:cell>3.65</ns0:cell><ns0:cell>1.05</ns0:cell><ns0:cell>0.747</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>Q10. The manipulation of the device in conjunction with the markers was easy.</ns0:cell><ns0:cell>3.56</ns0:cell><ns0:cell>1.06</ns0:cell><ns0:cell>0.703</ns0:cell><ns0:cell><0.01, Accepted</ns0:cell></ns0:row><ns0:row><ns0:cell>Quality</ns0:cell><ns0:cell>3.93</ns0:cell><ns0:cell>0.62</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Cronbach's alpha values for both surveys.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Measurement</ns0:cell><ns0:cell>α</ns0:cell></ns0:row><ns0:row><ns0:cell>A</ns0:cell><ns0:cell>0.867</ns0:cell></ns0:row><ns0:row><ns0:cell>R</ns0:cell><ns0:cell>0.679</ns0:cell></ns0:row><ns0:row><ns0:cell>C</ns0:cell><ns0:cell>0.821</ns0:cell></ns0:row><ns0:row><ns0:cell>S</ns0:cell><ns0:cell>0.872</ns0:cell></ns0:row><ns0:row><ns0:cell>ARCS (pre-test)</ns0:cell><ns0:cell>0.934</ns0:cell></ns0:row><ns0:row><ns0:cell>A</ns0:cell><ns0:cell>0.847</ns0:cell></ns0:row><ns0:row><ns0:cell>R</ns0:cell><ns0:cell>0.776</ns0:cell></ns0:row><ns0:row><ns0:cell>C</ns0:cell><ns0:cell>0.814</ns0:cell></ns0:row><ns0:row><ns0:cell>S</ns0:cell><ns0:cell>0.889</ns0:cell></ns0:row><ns0:row><ns0:cell>ARCS (post-test)</ns0:cell><ns0:cell>0.931</ns0:cell></ns0:row><ns0:row><ns0:cell>PU</ns0:cell><ns0:cell>0.877</ns0:cell></ns0:row><ns0:row><ns0:cell>PEU</ns0:cell><ns0:cell>0.859</ns0:cell></ns0:row><ns0:row><ns0:cell>ITU</ns0:cell><ns0:cell>0.815</ns0:cell></ns0:row><ns0:row><ns0:cell>TAM</ns0:cell><ns0:cell>0.921</ns0:cell></ns0:row><ns0:row><ns0:cell>Quality</ns0:cell><ns0:cell>0.839</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Rebuttal Letter to Editor and Reviewers of PeerJ Computer Science
(Second round of reviews)
May 11th, 2021
Paper title: 'Effects of Using Mobile Augmented Reality for Simple Interest Computation in a Financial Mathematics Course.'
Paper ID: 52967
Dear editor and reviewers:
The authors would like to thank you for the careful and thorough review of our manuscript and for providing us with comments and suggestions to improve its quality.
The reviewers agree that our paper was improved. However, our manuscript still had a few mistakes that had to be addressed before publication. The recommendations included inserting Figures 3 and 6 to 9 in vectorized form, correcting a few English mistakes, and clarifying some claims whose scope was too broad, among others.
Thanks to the reviewers' suggestions, our paper was improved. We have addressed all the suggestions and inserted our responses below. Therefore, we believe that the manuscript is now suitable for publication in PeerJ Computer Science.
The following responses were prepared in a point-by-point fashion to explain how the reviewers' suggestions were addressed. We hope the reviewers will be satisfied with our responses to the comments and the recommendations offered in the second round of reviews. Original reviewers' comments have been italicized and highlighted in black, and the authors' responses appear in blue. The changes in the reviewed manuscript are tracked using the 'compare function' in Microsoft Word.
Dr. Osslan Osiris Vergara Villegas
Universidad Autónoma de Ciudad Juarez
Head of the Computer Vision and Augmented Reality Laboratory
On behalf of all authors.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Reviewer 1
- Basic reporting (Good language, easy to read, structure is fine.); Experimental design (fine); Validity of the findings (Fine).
R. We thank the reviewer for all the comments.
- Thank you for the revision and for the good overview of changes in the rebuttal document. I think you did an excellent job in commenting on and addressing the reviews.
R. We thank the reviewer for the careful reading of the manuscript and the constructive remarks. Thanks to the suggestions offered, our paper was improved and clarified.
- The amount of changes is impressive. You obviously took the revision very seriously, which is appreciated. Although it is somewhat uncommon to accept R1 after a major revision 'as is', I recommend doing so. The only strong suggestion that I have is to insert Figures 6 to 9 in vectorized form (I am not sure about Figure 3, it might be possible as well).
R. We appreciate this suggestion. We have vectorized Figures 3 and 6 to 9 and uploaded them to the PeerJ system.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Reviewer 2
- The English has (almost) no mistake, the references are complete, and the methods are described in detail. The sections on 'Lessons learned' and 'Limitations', and the additional literature review (incl. Table 1) are very good additions.
R. We want to thank the reviewer for the work and patience in carefully reviewing our manuscript. We are happy that the reviewer appreciates our work.
- However, there are still sentences that are not precise enough (details below). It makes the discussion state claims with too large a scope, and that are not fully supported by the study in the paper.
R. We apologize for this. We have worked to correct all the mistakes detected.
- L.674-675, 'Students use their fingers to insert the values and uses the markers to define the inputs and outputs. As a result, students' motivation to learn simple interest increases.': The paper provides no evidence that a specific way to use fingers is what increased motivation.
R. We agree with the reviewer. The use of the fingers and markers is not what increases motivation. What increases motivation is the use of mobile augmented reality technology. The confusion was caused by our mistake in writing the sentence. We have amended the error by inserting a new sentence that can be observed on page 19 (lines 686-688).
- L.678, 'students perceived learning as a game': The paper does not provide evidence of this, i.e., no questions asked to participants targeted this aspect in particular. Perhaps oral feedback was collected, but this is not reported.
R. We appreciate the reviewer's observation. We have never presented evidence of this; therefore, we have erased this sentence (page 19, line 691).
- L.683-684, 'students did not consider the contents of SICMAR as sufficient to keep all the attention': The absence of significant impact on attention, as measured with the questionnaire, cannot be directly linked to the sufficiency (sic) of content.
R. We entirely agree with the reviewer's observation. In the attention attribute, we have not asked about the sufficiency of the contents. Instead, we have asked about the quality of the contents (question 1), the organization of the information (question 2), and the variety of 2D models and interactions (question 3). We have amended this error by rewriting the sentence on page 19 (lines 696-697).
- L.715-716, 'Students considered the [...list of features...] as the critical features to determine quality': The word 'consider' is ambiguous. To be precise, students gave the highest quality scores to these features. It does not mean that they identified these as 'critical features to determine quality'. The authors of the paper are the ones determining quality using these features (and others in the questionnaire), perhaps students would consider other features as critical for quality.
R. The reviewer is correct in the two comments issued. We agree that the word 'consider' is ambiguous. Moreover, we agree that students could consider other features as critical for quality. To solve this mistake, we have changed the sentence on page 20 (lines 730-731).
- L.65-66, 'principal sum/balance': the terms can be unclear for a reader with no expertise in finance.
R. We agree that the terms can be unclear for a reader with no expertise in finance. Therefore, we have changed our explanation. Please observe page 3 (lines 64-68).
- L.154-155, 'Qualitative research is the most common theory-base employed, followed by the nonparametric Wilcoxon signed-rank test': Comparing qualitative research with a statistical test is odd. What matters is what is tested, more than the type of test.
R. Thank you for the comment. We want to clarify that we are not making a comparison. In lines 154-155, we are only summarizing the information in the last column of Table 1. We believe that this confusion was caused by unclear wording. Therefore, we have changed the sentence. The change can be observed on page 5 (lines 158-160).
- L.192: the definition of 'interest rate' is imprecise. In particular these words in [...] : 'the amount charged [on top of] the principal [for the use of assets] (expressed as a [percentage])'. The definition does not mention of the time dimension, and it's a percentage of what? Also equation (4) is not a percentage, but that's a detail. So it's rather, e.g., 'a fraction/percentage of the Principal (charged) per unit of time'.
R. We apologize for this mistake. We have changed the sentence as suggested. Please observe page 6 (line 196).
- L.193: the definition of 'time' is also imprecise, e.g., 'the period of the [financial operation]' could be phrased as 'the time period over which the interest rates apply/are charged'.
R. Thank you for the suggestion. We have changed the sentence to: 'iii) Time (t) represents the time period over which the interest rates apply/are charged;' Please observe page 6 (line 197).
- L.203, 'Eqs. (1) to (6) are expressed in years': the interest rate is not expressed in years, i.e., it is not the unit. To be precise, one should rather say 'use year as the time unit'.
R. Thank you very much for pointing out this. We apologize for this mistake. We have changed the sentence by 'In Eqs. (1) to (6), it is common to use years as the time unit.' The change can be observed on page 7 (line 206).
- L.297-298: 'much quality' -> 'good/sufficient quality'
R. Thank you. We have changed the word 'much' to 'good' on page 9 (lines 301-302).
- L. 333 & 670, 'improves the user perception and interaction with the real world': This is rather vague, what is improved exactly?
R. We apologize for the confusion. We believe that the problem was due to bad writing. In order to solve this, we have changed the whole sentence. The changes can be observed on page 10 (lines 336-339) and page 18 (682-683).
- L. 336: 'gadgets' -> 'devices'
R. We apologize for this mistake. We have changed the word 'gadgets' to 'devices' on page 10 (line 342).
- L.362, 'The context and world model subsystem includes...': I'm not sure why the terms 'context' and 'world model' are used, nor why the elements listed thereafter fit together.
R. We apologize for causing confusion. As was explained on page 9 (lines 317-319), to build an AR application, the inputs, outputs, and interaction metaphors are needed. To address this, in our past work (Barraza, Cruz, & Vergara (2015)), we proposed a framework to design augmented reality applications. Therefore, to clarify the reviewer's concern, we have added the following sentence: 'The framework comprises four subsystems i) the rendering; ii) the context and world model; iii) the tracking; and iv) the interaction, which work together to create the mobile application' on page 10 (lines 359-361). The sentence was added to explain that the four subsystems interact to build all the necessary items to create an AR application.
- L.420: 'two five items tests' -> 'two five-item tests'.
R. Thank you for your suggestion. We have changed 'two five items tests' by 'two five-item tests', as can be observed on page 12 (line 430).
- L.432, 'translated': You should mention the language was Spanish, and so for the whole experiment.
R. We entirely agree with your comment. We have added the sentence 'It is essential to highlight that Spanish was the language employed for the surveys and the whole experiment. Therefore, in this paper, English translations are presented' on page 12 (lines 429-431) to clarify this.
- L.583, 584, 586: 'were (...' -> 'were: ...' and L.586 is missing '[the results] were'.
R. We apologize for the missing colons. We have added the colons and 'the results.' Also, we have erased the parentheses, as can be observed on page 16 (lines 594, 595, and 597).
- L.596: 'pre-test-post-test' -> 'pre-test and post-test'.
R. Thank you for your comment. We have erased the hyphen, and the word 'and' was added. The change can be observed on page 16 (line 597).
- L.592, '3.33 times': it is vague, and the unit is not that of the grades (total of 100 points). The reader can only understand this number by looking at the figure and dividing the mean by 20 (20 points per question). You should mention the 'average grade of 66.6'.
R. We thank the reviewer's suggestion. We have changed the sentence to 'Students obtained an average grade of 66.6 with SICMAR and 39.02 with the professor's material. The difference of 27.58 is statistically significant (p<0.001).' The change can be observed on page 16 (lines 603-605).
- L.602: 'state' -> 'stating'
R. We have changed 'state' by 'stating' on page 17 (line 614).
- L.673, 'the way on how' -> 'the way in which'
R. Sorry for this mistake. We have changed 'the way on how' by 'the way in which' on page 18 (line 685).
- L.677 & 718: 'determined' -> 'expressed'?
R. We apologize for the inappropriate use of the word determined. We have changed the word by 'expressed' on page 19 (line 690) and page 20 (line 736).
- L.682: 'bi-directional models' -> '2D virtual objects'
R. We have conducted the change on page 19 (line 695).
- L.698, 699, 704: 'failed in/due the period conversion' -> 'failed to convert the time unit' (3 occurrences)
R. Thank you for the suggestions. We have changed the three occurrences on page 19 (lines 712, 713, and 718).
- L.699-700 + 705-706: 'not include' -> 'not including/describing' and 'not copy' -> 'not copying'
R. We apologize for this error. We have corrected all the mistakes on page 19 (lines 713-714, 720).
- L.702, 'questions 3-5' add 'in Table 5' and/or 'exam question'. + 'notable [that] the performance increase[s]'.
R. Thank you for the recommendation. The change can be observed on page 19 (line 716).
- L.709: 'qualified' -> 'quantified'
R. The correction can be observed on page 19 (line 724).
- L.768: 'some persons could be challenging using ICTs' -> 'for some person it could be challenging to use ICTs' and 'in-depth-analyzed' -> 'analyzed in depth'
R. We apologize for this mistake. We have conducted the correction on page 21 (lines 786-787).
- L.775: 'students that decrease the grade' -> 'students which grade has decreased'
R. Thank you for the suggestion. The correction can be observed on page 21 (line 794).
- L.776: 'cero' -> 'zero'
R. We have corrected this mistake on page 21 (line 795).
- L.781: 'not using the hands' -> 'not clicking on screens' (hand gestures are still needed).
R. Thank you for the suggestion. We have changed the text on page 21 (line 800).
- L.793: 'conveys to conclude' -> 'leads to concluding'
R. We have corrected this mistake on page 22 (line 812).
- However, another detail concerns the type of mobile devices used with the AR app. Figure 5 shows a tablet, and I was under the impression that mobile phones were used. The type of device (e.g., screen size) can influence the results of the experiment. This should be clarified: e.g., did all students use a tablet? a phone? or a mix of both?
R. Thank you very much. The reviewer has pointed out an important issue that had not been explained. Indeed, in Figure 5, the students are using a tablet. However, as was explained on page 14 (lines 510-512), all the students employed their own mobile devices. Therefore, SICMAR was tested on a variety of mobile devices, including smartphones and tablets.
Regarding the question of whether the type of device (screen size) can influence the experiment results: we have not performed such a study, so we cannot answer it definitively. However, we observed that smartphone users reported that the SICMAR buttons are small, which may be related to the screen size of the device. We consider the reviewer's comment an opportunity for future research. To address this concern, we have added a sentence on page 20 (lines 731-735).
- The section added on 'Limitations' and the author's rebuttal seem to address most of my concerns. However, my concern with exaggerated claims holds, but I'm willing to consider that this is due to issues with how claims are written down (see section 1 above).
R. We thank the reviewer for highlighting this point. We apologize that you still have the impression that we offer exaggerated claims. We have worked hard to avoid this. Moreover, we have addressed all the suggestions offered in section 1 above. We hope that addressing these suggestions changes your point of view.
- Furthermore, regarding the 36 students that were removed from the study (for not completing the questionnaires): this is a considerable number (25% of the participants). This could impact the validity of the study. For instance, these students may have had issues with the prototype, which could explain their incomplete answers to the questionnaires. If so, their negative feedback would not be reported (but the authors could check if their partial answers indicate such an effect).
R. We understand the reviewer's concern. However, as was explained on page 13 (lines 484-485), the study was conducted in two sessions three days apart. Unfortunately, the 36 students removed from the study did not attend the second session. We have added a sentence to explain this on page 14 (lines 522-523). Therefore, we can confirm that the absence of information is not due to issues with the prototype. The issues related to the prototype were commented on in the 'SICMAR quality assessment' subsection on page 18 (lines 664-678).
" | Here is a paper. Please give your review comments after reading it. |
175 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>In dentistry, practitioners interpret various dental X-ray imaging modalities to identify tooth-related problems, abnormalities, or teeth structure changes. Another aspect of dental imaging is that it can be helpful in the field of biometrics. Human dental image analysis is a challenging and time-consuming process due to the unspecified and uneven structures of various teeth, and hence the manual investigation of dental abnormalities is at par excellence. However, automation in the domain of dental image segmentation and examination is essentially the need of the hour in order to ensure error-free diagnosis and better treatment planning. In this article, we have provided a comprehensive survey of dental image segmentation and analysis by investigating more than 130 research works conducted through various dental imaging modalities, such as various modes of X-ray, CT (Computed Tomography), CBCT (Cone Beam Computed Tomography), etc. Overall state-ofthe-art research works have been classified into three major categories, i.e., image processing, machine learning, and deep learning approaches, and their respective advantages and limitations are identified and discussed. The survey presents extensive details of the state-of-the-art methods, including image modalities, pre-processing applied for image enhancement, performance measures, and datasets utilized.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1.'>Introduction</ns0:head><ns0:p>Dental X-ray imaging (DXRI) has developed into a foundation for dental professionals across the world because of the assistance it provides in detecting abnormalities present in teeth structures <ns0:ref type='bibr' target='#b91'>(Oprea et al., 2008)</ns0:ref>. For dentists, radiography plays a significant role in imaging assessment, providing a thorough clinical diagnosis and preventive examination of dental structures <ns0:ref type='bibr' target='#b78'>(Molteni, 1993)</ns0:ref>. To analyze a dental X-ray image, researchers primarily use image processing methods to extract the relevant information. Image segmentation is the most widely used image-processing technique to analyze medical images and helps improve computer-aided medical diagnosis systems <ns0:ref type='bibr' target='#b69'>(Li et al., 2006;</ns0:ref><ns0:ref type='bibr' target='#b122'>Shah et al., 2006)</ns0:ref>. Furthermore, manual examination of a large collection of X-ray images is time-consuming, and because visual inspection and tooth structure analysis have a poor sensitivity rate, human screening may fail to identify a high proportion of caries <ns0:ref type='bibr' target='#b90'>(Olsen et al., 2009)</ns0:ref>. In most cases, an automatic computerized tool that can assist the investigation process would be highly beneficial <ns0:ref type='bibr' target='#b0'>(Abdi, Kasaei & Mehdizadeh, 2015;</ns0:ref><ns0:ref type='bibr' target='#b54'>Jain & Chauhan, 2017)</ns0:ref>. Dental image examination involves various stages consisting of image enhancement, segmentation, feature extraction, and identification of regions, which are subsequently valuable for detecting cavities, tooth fractures, cysts or tumors, root canal length, and tooth growth in children <ns0:ref type='bibr' target='#b62'>(Kutsch, 2011;</ns0:ref><ns0:ref type='bibr' target='#b99'>Purnama et al., 2015)</ns0:ref>. Recently, numerous machine learning approaches have been proposed by researchers to improve dental image segmentation and analysis performance. Deep learning and artificial intelligence techniques have been remarkably successful in addressing the challenging segmentation problems presented in various studies <ns0:ref type='bibr' target='#b42'>(Hatvani et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b66'>Lee et al., 2018a;</ns0:ref><ns0:ref type='bibr' target='#b144'>Yang et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b50'>Hwang et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b113'>Sai Ambati et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b57'>Khanagar et al., 2021)</ns0:ref>, so we can foresee a surge of inventiveness and new lines of findings in the coming years, built on the achievements of machine learning models for semantic segmentation in DXRI.</ns0:p><ns0:p>In the existing surveys <ns0:ref type='bibr' target='#b104'>(Rad et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b120'>Schwendicke et al., 2019)</ns0:ref>, various techniques and methods have been discussed for DXRI. In <ns0:ref type='bibr' target='#b104'>(Rad et al. 2013)</ns0:ref>, segmentation techniques are divided into three classes (pixel-based, edge-based, and region-based) and further classified into thresholding, clustering, boundary-based, region-based, or watershed approaches. However, there is no discussion of enhancement techniques, image databases, or modalities used for DXRI.
Furthermore, since the <ns0:ref type='bibr' target='#b104'>(Rad et al., 2013)</ns0:ref> survey, a large number of approaches have been introduced by researchers. Subsequently, a review of dental image diagnosis using convolutional neural networks was presented by <ns0:ref type='bibr' target='#b120'>(Schwendicke et al., 2019)</ns0:ref>, focusing on diagnostic accuracy studies that pitted a CNN against a reference test, primarily on routine imagery data. It has been observed that the previous surveys lack a thorough investigation of traditional image processing, machine learning, and deep learning approaches.</ns0:p><ns0:p>Being an emerging and promising research domain, dental X-ray imaging requires a comprehensive and detailed survey of dental image segmentation and analysis to diagnose and treat various dental diseases. In this study, we make the following contributions that are missing from the previous surveys. First, we cover studies from 2004 to 2020, spanning more than 130 articles, which is almost double the number covered by the previous surveys of <ns0:ref type='bibr' target='#b104'>Rad et al. (2013)</ns0:ref> and <ns0:ref type='bibr' target='#b120'>Schwendicke et al. (2019)</ns0:ref>. Second, we present X-ray pre-processing techniques, traditional image analysis approaches, and machine learning and deep learning advancements in DXRI. Third, methods are categorized by specific image modality (such as periapical, panoramic, bitewing, and CBCT). Finally, performance metrics and dataset descriptions are investigated in depth. Specific benchmarks in the advancement of DXRI methods are represented in Figure <ns0:ref type='figure'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.1.'>A brief about dental imaging modalities</ns0:head><ns0:p>Dental imaging modalities give insights into teeth growth, bone structures, soft tissues, tooth loss, and decay, and also help in root canal treatment (RCT), none of which is fully visible during a dentist's clinical inspection.</ns0:p><ns0:p> Panoramic X-rays. These X-rays are full-sized and capture the overall tooth structure, also providing information about the skull and jaw. They are mainly used to examine fractures, trauma, jaw diseases, and pathological lesions, and to evaluate impacted teeth.</ns0:p><ns0:p> Cephalometric X-rays. Also called a ceph X-ray, this modality depicts the whole jaw, including the entire side of the head. It is employed in both dentistry and medicine for diagnosis and clinical preparation purposes.</ns0:p><ns0:p> Sialogram. This uses a contrast substance that is infused into the salivary glands to make them visible on X-ray film. Doctors may recommend this examination to investigate problems with the salivary glands, such as infections or signs of Sjogren's syndrome (a condition identified by dry mouth and eyes; this condition may cause tooth decay).</ns0:p></ns0:div>
<ns0:div><ns0:head> Computed tomography (CT).</ns0:head><ns0:p>This is an imaging technique that gives insights into 3D internal structures. This kind of visualization is used to identify maladies such as cysts, cancers, and fractures in the bones of the face.</ns0:p><ns0:p> Cone-beam computed tomography (CBCT) generates precise, high-quality pictures. Cone-beam CT is an X-ray type that generates 3D views of dental formations, soft tissues, nerves, and bones. It helps in guiding tooth implants and finding cysts and tumors in the mouth. It can also find</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1.1'>Pre-processing techniques</ns0:head><ns0:p>Dental imaging consists of different image modalities, among which X-rays are the most common medical imaging method used to examine bone and hard tissues. In dentistry, imaging modalities help identify fractures, teeth structures, jaw alignment, cysts, and bone loss, and have become tremendously popular in dental imaging <ns0:ref type='bibr' target='#b39'>(Goyal, Agrawal & Sohi, 2018)</ns0:ref>. Noise level, artifacts, and image contrast are vital factors that control an image's overall quality. The image quality obtained depends on varying factors such as the dynamic range of the sensors, the lighting conditions, distortion, and the artifact examined <ns0:ref type='bibr' target='#b116'>(Sarage & Jambhorkar, 2012)</ns0:ref>. Interpretation of a low-resolution image is often a complex and time-consuming process. Pre-processing techniques enhance the quality of low-resolution images by correcting the spatial resolution and applying local adjustments to improve the input image's overall quality <ns0:ref type='bibr' target='#b46'>(Hossain, Alsharif & Yamashita, 2010)</ns0:ref>. Moreover, enhancement and filtering methods improve the overall image quality parameters before further processing. In Table <ns0:ref type='table'>2</ns0:ref>, pre-processing techniques are addressed to recuperate the quality of dental images.</ns0:p><ns0:p>Contrast stretching, grayscale stretching, log transformation, gamma correction, image negation, and histogram equalization are standard enhancement methods for improving the quality of medical images. X-rays are typically grayscale pictures with high noise rates and low resolution; thus, the image contrast and boundary representation are relatively weak <ns0:ref type='bibr' target='#b105'>(Ramani, Vanitha & Valarmathy, 2013)</ns0:ref>. Extracting features from such X-rays, which offer very minimal detail and low quality, is quite a difficult task. Applying suitable contrast enhancement techniques significantly improves image quality, so that segmentation and feature extraction can be performed more accurately and conveniently <ns0:ref type='bibr' target='#b61'>(Kushol et al., 2019)</ns0:ref>. Therefore, the contrast stretching approach has been widely used to enhance digital X-ray quality <ns0:ref type='bibr' target='#b63'>(Lai & Lin, 2008;</ns0:ref><ns0:ref type='bibr' target='#b139'>Vijayakumari et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b11'>Berdouses et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b99'>Purnama et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b7'>Avuçlu & Bacsçiftçi, 2020)</ns0:ref>. Adaptive local contrast stretching makes use of local homogeneity to solve the problem of over- and under-enhancement. One of the prominent methods to refine the contrast of an image is Histogram Equalization (HE) <ns0:ref type='bibr' target='#b41'>(Harandi & Pourghassem, 2011;</ns0:ref><ns0:ref type='bibr' target='#b75'>Menon & Rajeshwari, 2016;</ns0:ref><ns0:ref type='bibr' target='#b88'>Obuchowicz Rafałand Nurzynska et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b9'>Banday & Mir, 2019)</ns0:ref>. HE extends the dynamic range of an image histogram; it can introduce unrealistic effects, but it is very effective for scientific pictures, e.g., satellite images, computed tomography, or X-rays. A downside of the approach is its indiscriminate nature.
This can increase the contrast of background noise while reducing the useful features of an image.</ns0:p><ns0:p>On the other hand, filtering methods applied to medical images help to eradicate noise to some extent. Gaussian, Poisson, and quantum noise are different types of noise artifacts usually found in X-rays and CTs, introduced particularly when the image is captured <ns0:ref type='bibr' target='#b108'>(Razifar et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b39'>Goyal, Agrawal & Sohi, 2018)</ns0:ref>. Noise-free images make it possible to obtain the best results and improve a test's precision; however, if we try to minimize one class of noise, it may disrupt another. Various filters have been used to achieve the best potential outcome for the irregularities present in dental images, such as the average, bilateral, Laplacian, homomorphic, Butterworth, median, Gaussian, and Wiener filters. In recent studies, various filtering techniques have been used by researchers, but the most widely used are the Gaussian and median filters, which show the best results <ns0:ref type='bibr' target='#b10'>(Benyó et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b96'>Prajapati, Desai & Modi, 2012;</ns0:ref><ns0:ref type='bibr' target='#b87'>Nuansanong, Kiattisin & Leelasantitham, 2014;</ns0:ref><ns0:ref type='bibr' target='#b107'>Razali et al., 2014;</ns0:ref><ns0:ref type='bibr'>Datta & Chaki, 2015a,b;</ns0:ref><ns0:ref type='bibr' target='#b103'>Rad et al., 2015;</ns0:ref><ns0:ref type='bibr'>Tuan, Ngan & others, 2016;</ns0:ref><ns0:ref type='bibr' target='#b54'>Jain & Chauhan, 2017;</ns0:ref><ns0:ref type='bibr' target='#b4'>Alsmadi, 2018)</ns0:ref>. However, the drawback of the median filter is that it degrades boundary details, whereas the Gaussian filter, which performs best in peak detection, has the limitation of reducing the picture's information.</ns0:p></ns0:div>
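To make the pre-processing stage concrete, a minimal Python sketch combining the filtering and contrast-enhancement steps discussed above is given below. It is an illustration only: the file name, kernel size, percentile limits, and CLAHE parameters are assumptions on our part, not values taken from any cited study.

```python
# Minimal dental X-ray pre-processing sketch: median filtering, linear contrast
# stretching, and contrast-limited adaptive histogram equalization (CLAHE).
import cv2
import numpy as np

xray = cv2.imread("periapical.png", cv2.IMREAD_GRAYSCALE)  # illustrative file name

# 1) Median filter: suppresses impulse noise, at the cost of some boundary detail.
denoised = cv2.medianBlur(xray, 3)

# 2) Contrast stretching: map the 1st-99th percentile range onto [0, 255].
lo, hi = np.percentile(denoised, (1, 99))
stretched = np.clip((denoised - lo) * 255.0 / (hi - lo), 0, 255).astype(np.uint8)

# 3) CLAHE: local histogram equalization with a clip limit that bounds the
#    noise amplification of plain HE noted above.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(stretched)

cv2.imwrite("enhanced.png", enhanced)
```

Plain global HE would instead be `cv2.equalizeHist(denoised)`; CLAHE is shown here precisely because it limits the indiscriminate contrast amplification described above.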
<ns0:div><ns0:head n='2.1.2'>Dental image segmentation approaches used for different imaging modalities</ns0:head><ns0:p>DXRI segmentation is an essential step in extracting valuable information from various imaging modalities. In dentistry, segmentation faces more difficulties than in other medical imaging fields, which makes the process particularly challenging. The problems faced by researchers in analyzing dental X-ray images, and the purposes of segmentation, are given in Figure <ns0:ref type='figure'>6</ns0:ref>. The segmentation process refers to the localization of artifacts, the tracing of boundaries, the analysis of structure, etc. Human eyes distinguish objects of interest quickly and separate them from the background tissues, but this remains a great challenge when developing algorithms.</ns0:p><ns0:p>Furthermore, image segmentation has applications beyond computer vision; it is often used to extract or exclude different portions of an image. General dental image segmentation methods are categorized as thresholding-based, contour or snake models, level set methods, clustering, and region growing <ns0:ref type='bibr' target='#b104'>(Rad et al., 2013)</ns0:ref>. A significant number of surveys have been presented by various authors <ns0:ref type='bibr' target='#b104'>(Rad et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b123'>Sharma, Rana & Kundra, 2015)</ns0:ref>; however, none of them categorized the methods based on dental imaging modalities. Various segmentation and classification techniques are discussed and reviewed in this article, considering multiple dental imaging modalities. In the field of dental imaging, the choice of the correct algorithm for a particular image dataset is most important. This study explores image processing techniques explicitly applied to dental imaging modalities, as given in Table <ns0:ref type='table'>3</ns0:ref>.</ns0:p><ns0:p>Bitewing X-rays are widely used by researchers for human identification and biometrics. Human identification is achieved by applying adaptive thresholding, iterative thresholding, and region growing approaches; afterwards, image features are extracted to archive and retrieve the dental images used for identification <ns0:ref type='bibr' target='#b72'>(Mahoor & Abdel-Mottaleb, 2004</ns0:ref><ns0:ref type='bibr'>, 2005;</ns0:ref><ns0:ref type='bibr' target='#b83'>Nomir & Abdel-Mottaleb, 2005</ns0:ref><ns0:ref type='bibr'>, 2007</ns0:ref><ns0:ref type='bibr' target='#b27'>, 2008;</ns0:ref><ns0:ref type='bibr' target='#b148'>Zhou & Abdel-Mottaleb, 2005)</ns0:ref>. In <ns0:ref type='bibr' target='#b49'>(Huang et al., 2012)</ns0:ref>, missing tooth locations were detected with an adaptive windowing scheme combined with the isolation curve method, which shows a higher accuracy rate than <ns0:ref type='bibr' target='#b83'>(Nomir & Abdel-Mottaleb, 2005)</ns0:ref>. The work in <ns0:ref type='bibr'>(Pushparaj, Gurunathan & Arumugam, 2013)</ns0:ref> primarily aimed at estimating the shape of the entire tooth; here, segmentation is performed by applying horizontal and vertical integral projection.
In addition, the teeth boundary was estimated using a fast connected component labeling algorithm, and lastly the Mahalanobis distance is measured for matching.</ns0:p><ns0:p>Periapical X-rays help in the clinical diagnosis of dental caries and root canal regions through various image processing techniques <ns0:ref type='bibr' target='#b91'>(Oprea et al., 2008)</ns0:ref>. Dentists often use periapical X-ray images to spot caries lesions; even for human vision, it is often hard to correctly identify caries by manually examining the X-ray image. Caries detection methods for periapical X-rays have been applied iteratively to isolate the initially suspected areas, which are subsequently analyzed. In <ns0:ref type='bibr' target='#b103'>(Rad et al., 2015)</ns0:ref>, caries are automatically identified by applying segmentation using k-means clustering and feature detection using GLCM. However, it shows image quality issues in some cases, and because of these issues, tooth detection may give a false result. On the other hand, <ns0:ref type='bibr' target='#b125'>(Singh & Agarwal, 2018)</ns0:ref> applied color masking techniques to mark the carious lesions and find the percentage of the affected area.</ns0:p><ns0:p>Another approach, given by <ns0:ref type='bibr' target='#b92'>(Osterloh & Viriri, 2019)</ns0:ref>, mainly focuses on upper and lower jaw separation with the help of thresholding and integral projection, and a learning model is employed to extract caries. This model shows better accuracy than <ns0:ref type='bibr' target='#b27'>(Dykstra, 2008;</ns0:ref><ns0:ref type='bibr' target='#b129'>Tracy et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b135'>Valizadeh et al., 2015)</ns0:ref>. In <ns0:ref type='bibr' target='#b88'>(Obuchowicz Rafałand Nurzynska et al., 2018)</ns0:ref>, k-means clustering (CLU) and first-order features (FOF) were used and showed the best performance for the identification of caries; however, this approach was applied to a dataset of only 10 patients with confirmed caries. A geodesic contour technique <ns0:ref type='bibr' target='#b21'>(Datta, Chaki & Modak, 2019)</ns0:ref> shows better computational time results than multilevel thresholding, watershed, and level set methods. The limitation of this approach is that it does not work well for poor-quality pictures, which leads to inappropriate feature extraction. In <ns0:ref type='bibr' target='#b23'>(Datta, Chaki & Modak, 2020)</ns0:ref>, a method reduced the computational effort and identified the caries region in optimal time. The X-ray image is processed in the neutrosophic domain to identify the suspicious part, and an active contour method is employed to detect the outline of the carious part. The benefit of this method is that it prevents recursive iterations by using neutrosophication during suspicious area detection.</ns0:p><ns0:p>Semi-automatic methods for root canal length detection are proposed by <ns0:ref type='bibr' target='#b41'>(Harandi & Pourghassem, 2011;</ns0:ref><ns0:ref type='bibr' target='#b99'>Purnama et al., 2015)</ns0:ref> to help dental practitioners properly perform root canal treatment (RCT). In some studies, periapical X-rays are also used for the automatic segmentation of cysts or abscesses. <ns0:ref type='bibr' target='#b24'>(Devi, Banumathi & Ulaganathan, 2019)</ns0:ref> proposed a fully automated hybrid method that combined feature-based isophote curvature and model-based fast marching (FMM).
It shows good accuracy and optimal results compared to <ns0:ref type='bibr' target='#b54'>(Jain & Chauhan, 2017)</ns0:ref>. Furthermore, various approaches have been used to automatically detect teeth structures <ns0:ref type='bibr' target='#b48'>(Huang & Hsu, 2008;</ns0:ref><ns0:ref type='bibr' target='#b117'>Sattar & Karray, 2012;</ns0:ref><ns0:ref type='bibr' target='#b82'>Niroshika, Meegama & Fernando, 2013;</ns0:ref><ns0:ref type='bibr' target='#b87'>Nuansanong, Kiattisin & Leelasantitham, 2014;</ns0:ref><ns0:ref type='bibr' target='#b60'>Kumar, Bhadauria & Singh, 2020)</ns0:ref>.</ns0:p><ns0:p>Panoramic X-rays help identify jaw fractures, the structure of the jaws, and deciduous teeth; these X-rays are less detailed than periapical and bitewing images. It has been observed that the segmentation of panoramic X-rays using wavelet transformation shows better results than adaptive and iterative thresholding <ns0:ref type='bibr' target='#b94'>(Patanachai, Covavisaruch & Sinthanayothin, 2010)</ns0:ref>. Another fully automatic tooth segmentation method using template matching, introduced by <ns0:ref type='bibr' target='#b95'>(Poonsri et al., 2016)</ns0:ref>, shows 50% matching accuracy. <ns0:ref type='bibr' target='#b107'>(Razali et al., 2014)</ns0:ref> analyzed X-rays for age estimation by comparing edge detection approaches. <ns0:ref type='bibr' target='#b5'>(Amer & Aqel, 2015)</ns0:ref> suggested a method to extract wisdom teeth using Otsu's threshold combined with morphological dilation; the jaw and teeth regions are then extracted using connected component labeling.</ns0:p><ns0:p><ns0:ref type='bibr' target='#b71'>(Mahdi & Kobashi, 2018)</ns0:ref> set multiple thresholds by applying quantum particle swarm optimization to improve accuracy. <ns0:ref type='bibr' target='#b29'>(Fariza et al., 2019)</ns0:ref> employed a method to extract dentin, enamel, pulp, and other surrounding dental structures using conditional spatial fuzzy C-means clustering; the performance subsequently improved compared to the inherently used FCM approaches. <ns0:ref type='bibr' target='#b25'>(Dibeh, Hilal & Charara, 2018)</ns0:ref> separates the maxillary and mandibular jaws using N-degree polynomial regression. In <ns0:ref type='bibr' target='#b0'>(Abdi, Kasaei & Mehdizadeh, 2015)</ns0:ref>, a four-step method is proposed: gap valley extraction, a modified Canny edge detector, guided iterative contour tracing, and template matching. However, comparing the overall performance of automated segmentation with individual results, all of which were estimated to be above 98%, clearly demonstrates that the computerized process can still be improved to match the gold standard more precisely.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b136'>(Veena Divya, Jatti & Revan Joshi, 2016)</ns0:ref>, active contour-based segmentation is proposed for cystic lesion segmentation and extraction to analyze cyst development behavior. The segmentation method yields positive results for nonlinear backgrounds, poor contrast, and noisy images. The authors of <ns0:ref type='bibr' target='#b26'>(Divya et al., 2019</ns0:ref>) compared the level set method and watershed segmentation to detect cysts and lesions; the study reveals that level set segmentation produces more accurate results for cysts/lesions. An approach used</ns0:p></ns0:div>
<ns0:div><ns0:p>to identify age and gender by analyzing dental images is very useful in biometrics <ns0:ref type='bibr' target='#b7'>(Avuçlu & Bacsçiftçi, 2020)</ns0:ref>. Several other image processing techniques are used on dental images to achieve the best biometric results.</ns0:p><ns0:p>A hybrid dataset is an image dataset combining different dental imaging modalities used for the analysis. <ns0:ref type='bibr' target='#b115'>(Said et al., 2006)</ns0:ref> used periapical and bitewing X-rays for teeth segmentation. In this approach, the background area is discarded using an appropriate threshold, and then mathematical morphology and connected component labeling are applied for tooth extraction. This approach finds difficulty with images having low contrast between teeth and bones, blurred images, etc. Another approach, introduced by <ns0:ref type='bibr'>(Tuan, Ngan & others, 2016;</ns0:ref><ns0:ref type='bibr' target='#b133'>Tuan & others, 2017;</ns0:ref><ns0:ref type='bibr' target='#b130'>Tuan et al., 2018)</ns0:ref>, applies a semi-supervised fuzzy clustering method with some modifications to find the various teeth and bone structures. Photographic color images are RGB images of occlusal surfaces that are mainly useful for detecting caries and for human identification <ns0:ref type='bibr'>(Datta & Chaki, 2015a,b)</ns0:ref>. Teeth segmentation is performed by integrating watershed and snake-based techniques on dental RGB images; subsequently, incisor tooth features are extracted for person recognition. This method can segment individual teeth and caries lesions, and track the development of lesion size. The primary objective of this research is to identify caries lesions on tooth surfaces, which helps improve diagnosis. In <ns0:ref type='bibr' target='#b38'>(Ghaedi et al., 2014)</ns0:ref>, caries segmentation was performed using the region-widening method and the circular Hough transform (CHT); morphological operations were then applied to locate the unstable regions around the tooth boundaries. Another fully automatic approach for caries classification is given by <ns0:ref type='bibr' target='#b11'>(Berdouses et al., 2015)</ns0:ref>, where segmentation separates the caries lesion, after which area features are extracted to assign the region to a particular class. It can be a valuable method to support the dentist in more reliable and accurate detection and analysis of occlusal caries.</ns0:p><ns0:p>CT and CBCT images provide 3D visualization of teeth and assist dental practitioners in orthodontic surgery, dental implants, and cosmetic surgeries. The study <ns0:ref type='bibr' target='#b45'>(Hosntalab et al., 2010)</ns0:ref> recommended a multi-step procedure for labeling and classification in CT images, where teeth segmentation is performed by employing global thresholding, morphological operations, region growing, and variational level sets. Another multi-step procedure was introduced by (Mortaheb, Rezaeian & Soltanian-Zadeh, 2013) based on the mean shift algorithm for CT image segmentation of the tooth area, which yields the best results compared with watershed, thresholding, and active contour methods. Another technique, which does not depend on mean shift, is suggested by <ns0:ref type='bibr' target='#b36'>(Gao & Li, 2013)</ns0:ref>; it uses an iterative labeling scheme for the segmentation.
Furthermore, segmentation methods have been improved by applying active contour tracking algorithms and level set methods <ns0:ref type='bibr' target='#b35'>(Gao & Chae, 2010)</ns0:ref>. These show higher accuracy and better visualization of tooth regions compared to other methods.
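As an illustration of the recurring pattern in the thresholding-based methods reviewed above, the following minimal Python sketch chains Otsu binarization, horizontal integral projection for upper/lower jaw separation, and connected-component labeling. The file name, valley search band, and minimum component area are assumptions for illustration, not values from the cited studies.

```python
# Sketch of a classic thresholding pipeline for dental X-rays.
import cv2
import numpy as np

xray = cv2.imread("panoramic.png", cv2.IMREAD_GRAYSCALE)

# 1) Global Otsu threshold: bright teeth/bone versus darker soft tissue.
_, binary = cv2.threshold(xray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 2) Horizontal integral projection: the row-wise intensity sum dips at the dark
#    gap between the jaws, giving a maxilla/mandible separation line.
rows = xray.astype(np.float64).sum(axis=1)
h = xray.shape[0]
band = slice(h // 4, 3 * h // 4)                 # search the central band only
valley = band.start + int(np.argmin(rows[band]))
upper_jaw, lower_jaw = binary[:valley], binary[valley:]

# 3) Connected-component labeling: keep components large enough to be teeth.
n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
teeth = np.zeros_like(binary)
for i in range(1, n):                            # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] > 500:         # assumed minimum tooth area (px)
        teeth[labels == i] = 255
```

Active contour and level set behaviour can be prototyped along similar lines (e.g., with scikit-image's `morphological_chan_vese`), although the cited studies implement their own formulations.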
<ns0:div><ns0:head n='2.2'>Conventional machine learning algorithms for dental image analysis</ns0:head><ns0:p>Conventional machine learning methods operate on handcrafted features; identifying dental caries, for instance, involves extracting texture features. An overview of various machine learning algorithms is given in Figure <ns0:ref type='figure'>7</ns0:ref>.</ns0:p><ns0:p>ML datasets are generally composed of exclusive training, validation, and test sets. The training set is used to determine system characteristics, the validation set to tune the features acquired from the input data, and the test set to finally verify the model's precision and confirm that a powerful training model has been formulated. Table <ns0:ref type='table' target='#tab_5'>4</ns0:ref> lists the conventional machine-learning algorithms used for dental X-ray imaging.</ns0:p></ns0:div>
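For illustration, a hedged sketch of such a conventional pipeline is given below: GLCM texture features are computed per region of interest, and an SVM is evaluated on an exclusive train/test split. The random patches and labels are placeholders standing in for real annotated ROIs, not data from any cited study.

```python
# GLCM texture features + SVM classification on a held-out test split.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def glcm_features(patch):
    """Contrast/homogeneity/energy/correlation from a grey-level co-occurrence matrix."""
    g = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                     levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(g, p).ravel() for p in props])

rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(60, 32, 32), dtype=np.uint8)  # placeholder ROIs
labels = rng.integers(0, 2, size=60)                   # 1 = carious (placeholder)

X = np.array([glcm_features(p) for p in patches])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```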
<ns0:div><ns0:head n='2.3'>Deep learning techniques for dental image analysis</ns0:head><ns0:p>Artificial intelligence, machine learning, and deep learning approaches assist medical imaging technicians in spotting abnormalities and diagnosing disorders in a fraction of the time required earlier (and generally with more accurate tests). Deep learning (DL) is an advancement of artificial neural networks (ANN) that uses more layers and allows for more accurate data predictions <ns0:ref type='bibr' target='#b64'>(LeCun, Bengio & Hinton, 2015;</ns0:ref><ns0:ref type='bibr' target='#b118'>Schmidhuber, 2015)</ns0:ref>. Deep learning is associated with self-learning back-propagation techniques that incrementally optimize outcomes as data and computing power increase. It is a rapidly developing field with numerous applications in the healthcare sector. The number of available high-quality datasets plays a significant role in the accuracy achievable by ML and DL applications, and information fusion assists in integrating multiple datasets for use with DL models to enhance accuracy. The predictive performance of deep learning algorithms in the medical imaging field can exceed human skill levels, transforming the role of computer-assisted diagnosis into a more interactive one <ns0:ref type='bibr' target='#b15'>(Burt et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b93'>Park & Park, 2018)</ns0:ref>.</ns0:p><ns0:p>Computer-aided diagnostic software is used in the medical field as a secondary tool, but developing traditional CAD systems tends to be very strenuous. Recently, deep learning approaches have been introduced into CAD, with accurate outcomes for different clinical applications <ns0:ref type='bibr' target='#b17'>(Cheng et al., 2016)</ns0:ref>. Most research studies use convolutional neural network models to analyze the various dental imaging modalities. CNNs are a typical form of feed-forward deep neural network architecture, usually used for computer vision and image object identification tasks. CNNs were introduced about two decades earlier; however, in 2012, the AlexNet architecture outpaced the competition in the ImageNet large-scale recognition challenge <ns0:ref type='bibr' target='#b59'>(Krizhevsky, Sutskever & Hinton, 2012)</ns0:ref>. This ushered in the deep learning revolution in machine vision, and since then CNNs have been rapidly evolving; feature learning methods have taken a massive turn since the CNN model came into the picture. The fully convolutional AlexNet architecture has been used to categorize teeth, including molars, premolars, canines, and incisors, by training on cone-beam CT images <ns0:ref type='bibr' target='#b76'>(Miki et al., 2017a;</ns0:ref><ns0:ref type='bibr' target='#b89'>Oktay, 2017)</ns0:ref>. <ns0:ref type='bibr' target='#b134'>(Tuzoff et al., 2019)</ns0:ref> applied the Faster R-CNN model, which streamlines the detection pipeline and optimizes computation, to detect teeth <ns0:ref type='bibr' target='#b110'>(Ren et al., 2017)</ns0:ref>, and the VGG-16 convolutional architecture for classification <ns0:ref type='bibr' target='#b124'>(Simonyan & Zisserman, 2014)</ns0:ref>.
These methods are beneficial in practical applications and in further investigation of computerized dental X-ray image analysis.</ns0:p><ns0:p>In DXRI, CNNs have been extensively used to detect tooth fractures, bone loss, caries, and periapical lesions, and to analyze different dental structures <ns0:ref type='bibr' target='#b67'>(Lee et al., 2018b;</ns0:ref><ns0:ref type='bibr' target='#b120'>Schwendicke et al., 2019)</ns0:ref>. Neural networks need to be trained and refined, so X-ray dataset repositories are necessary <ns0:ref type='bibr' target='#b66'>(Lee et al., 2018a)</ns0:ref>. In <ns0:ref type='bibr' target='#b65'>(Lee et al., 2019)</ns0:ref>, the Mask R-CNN model is applied: a CNN-based model that can identify, classify, and mask objects in an image. Mask R-CNN operates in two steps: first, the region-of-interest (ROI) selection procedure is performed; next, a binary mask is predicted for each ROI in parallel with its classification and bounding-box prediction (Romera-Paredes & Torr, 2016; <ns0:ref type='bibr' target='#b43'>He et al., 2017)</ns0:ref>.</ns0:p><ns0:p>Dental structures (enamel, dentin, and pulp) identified using the U-Net architecture show the best outcomes <ns0:ref type='bibr' target='#b112'>(Ronneberger, Fischer & Brox, 2015)</ns0:ref>. CNNs are a standard technique for multi-class identification and characterization, but they require extensive training data to achieve a successful result if used directly. In the medical sphere, the lack of public data is a general problem because of privacy. To address this issue, <ns0:ref type='bibr' target='#b147'>(Zhang et al., 2018)</ns0:ref> suggested a technique that uses a label tree to assign multiple labels to each tooth and decomposes the task so that it can manage data shortages. Table <ns0:ref type='table'>5</ns0:ref> presents various studies considering deep learning-based techniques in the field of dentistry.</ns0:p></ns0:div>
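To illustrate the encoder-decoder idea behind such segmentation networks, a compact U-Net-style sketch in PyTorch is given below. The depth, channel counts, and number of classes are assumptions made for illustration; they do not reproduce the architecture of any specific cited study.

```python
# A miniature U-Net: two-level encoder, bottleneck, decoder with skip connections.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class MiniUNet(nn.Module):
    def __init__(self, n_classes=4):              # e.g., background + enamel/dentin/pulp
        super().__init__()
        self.enc1, self.enc2 = block(1, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottom = block(64, 128)
        self.up2, self.dec2 = nn.ConvTranspose2d(128, 64, 2, stride=2), block(128, 64)
        self.up1, self.dec1 = nn.ConvTranspose2d(64, 32, 2, stride=2), block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)   # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

logits = MiniUNet()(torch.randn(1, 1, 256, 256))   # -> shape (1, 4, 256, 256)
```

Trained with a per-pixel cross-entropy (or Dice) loss against annotated masks, such a network outputs a label for every pixel, which is what distinguishes semantic segmentation from the classification networks discussed earlier.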
<ns0:div><ns0:head n='2.4'>Challenges and future directions</ns0:head><ns0:p>After reviewing the various works focusing on traditional image processing techniques, it is apparent that researchers face multiple challenges in DXRI segmentation and analysis, such as intensity variation in the X-ray images, poor image quality due to noise, irregular object shapes, limitations of capturing devices, proper selection of methodology, and lack of available datasets. Researchers also experience severe challenges in automatically detecting abnormalities, root canal infection, and sudden changes in the oral cavity. Since there are many varieties of dental X-ray images, it is hard to single out one segmentation approach; the choice depends on the precise condition of the X-rays. Some articles have used pre-processed digital X-rays that were manually cropped to include the area of interest. Because of inconsistencies in the manual method, it is hard to accurately interpret and compare outcomes <ns0:ref type='bibr' target='#b68'>(Lee, Park & Kim, 2017)</ns0:ref>.</ns0:p><ns0:p>Moreover, convolutional neural networks (and their derivatives) are performing outstandingly in dental X-ray image analysis. One notable observation is that many researchers use almost the same architectures, the same kind of network, yet report very different outcomes. Deep neural networks are most successful when dealing with a large training dataset, but large datasets in DXRI are neither publicly available nor annotated. If vast publicly accessible dental X-ray image datasets were constructed, our research community would undoubtedly benefit greatly.</ns0:p><ns0:p>From a future perspective, a public dental X-ray image repository needs to be developed, and data uniformity is required for deep learning applications in dentistry. DXRI also aims to create classifiers that can recognize multiple anomalies, caries classes, types of jaw lesions, cysts, root canal infections, etc., in dental images using features derived from the segmentation results. There is also a need to build machine learning-based investigative methods and rigorously validate them with a large number of dental professionals; the participation of specialists in this process will increase the likelihood of growth and development. Currently, there exists no universally accepted software or tool for dental image analysis. However, such a tool is essentially needed to improve the performance of CAD systems and enable better treatment planning.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.'>Performance measures</ns0:head><ns0:p>In general, one algorithm is prioritized over another if its performance is significantly better. Evaluating the effectiveness of a methodology requires the use of a universally accessible and valid measure. Various performance metrics have been used to compare algorithms or machine learning approaches depending on the domain or study area; these comprise accuracy, Jaccard index, sensitivity, precision, recall, DSC, F-measure, AUC, MSE, error rate, etc. Here, we include a thorough analysis of the performance metrics employed in dental image analysis.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Performance metrics used for dental image processing</ns0:head><ns0:p>Performance metrics for dental segmentation are calculated by validating the segmentation results pixel by pixel against the gold standard. Manual annotation of X-ray images by a radiologist is considered the gold standard. Pixel-based metrics such as precision, Dice coefficient, accuracy, specificity, and F-score are widely used in segmentation analysis. Some of the problems in evaluating image segmentation are metric selection, the use of multiple meanings for some metrics in the literature, and inefficient metric implementations that lead to difficulties on large-volume datasets. Poorly described metrics can result in imprecise conclusions about state-of-the-art algorithms, which hampers the field's overall growth. Table <ns0:ref type='table'>6</ns0:ref> presents an overview of performance metrics widely used by researchers for dental image segmentation and analysis.</ns0:p><ns0:p>Accuracy and confidence are essential in the medical imaging field, and validating segmentation greatly increases the precision, accuracy, confidence, and computational efficiency of the process. Segmentation methods are especially helpful in computer-aided medical diagnostic applications, where the interpretation of objects that are hard to differentiate by human vision is a significant component.</ns0:p></ns0:div>
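A minimal sketch of these pixel-based metrics, computed by comparing a predicted binary mask against the gold-standard annotation, is given below; the random masks and boundary points are placeholders for real data.

```python
# Pixel-overlap metrics and the symmetric Hausdorff boundary distance.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def overlap_metrics(pred, gold):
    pred, gold = pred.astype(bool), gold.astype(bool)
    tp = (pred & gold).sum()        # true positives
    fp = (pred & ~gold).sum()       # false positives
    fn = (~pred & gold).sum()       # false negatives
    tn = (~pred & ~gold).sum()      # true negatives
    return {"accuracy":    (tp + tn) / pred.size,
            "precision":   tp / (tp + fp),
            "sensitivity": tp / (tp + fn),       # recall
            "specificity": tn / (tn + fp),
            "dice":        2 * tp / (2 * tp + fp + fn),
            "jaccard":     tp / (tp + fp + fn)}

rng = np.random.default_rng(0)
pred = rng.random((128, 128)) > 0.5              # placeholder predicted mask
gold = rng.random((128, 128)) > 0.5              # placeholder gold-standard mask
print(overlap_metrics(pred, gold))

# Symmetric Hausdorff distance between two boundary point sets of (x, y) pairs.
a, b = rng.random((200, 2)) * 100, rng.random((180, 2)) * 100
print("Hausdorff:", max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0]))
```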
<ns0:div><ns0:head n='3.2'>Confusion Matrix</ns0:head><ns0:p>The confusion matrix is used to estimate the performance of medical image segmentation and classification; it identifies the relationship between the outcomes of the predictive algorithm and the actual ones. The terms used for the confusion matrix are given in Table <ns0:ref type='table'>7</ns0:ref>: true positive (TP), correctly identified or detected; false positive (FP), incorrectly detected; false negative (FN), wrongly rejected; true negative (TN), correctly rejected. In the approach of <ns0:ref type='bibr' target='#b73'>(Mahoor & Abdel-Mottaleb, 2005)</ns0:ref>, experimental outcomes proved that molar classification is relatively easy compared to premolars, and that for teeth classification the centroid distance is less effective than a coordinate signature. Various metrics such as the signature vector, force field (FF), and Fourier descriptor (FD) were used to test the efficiency of the approach given by <ns0:ref type='bibr' target='#b84'>(Nomir & Abdel-Mottaleb, 2007)</ns0:ref>; for matching, FF and FD give small Euclidean and absolute distance values, indicating that they performed better than the other methods. In another approach <ns0:ref type='bibr' target='#b96'>(Prajapati, Desai & Modi, 2012)</ns0:ref>, feature vectors are evaluated and used to find the image distance vector using the formula</ns0:p><ns0:formula xml:id='formula_0'>D_n = \sum |T_nFV - FVQ|</ns0:formula><ns0:p>where T_nFV is the feature vector of the n-th database image and FVQ is the feature vector of the query image. The minimum value of the distance vector indicates the best match with a database image.</ns0:p><ns0:p>The study <ns0:ref type='bibr' target='#b49'>(Huang et al., 2012)</ns0:ref> shows better isolation precision for the segmentation of jaws compared with Nomir and Abdel-Mottaleb. Another method evaluated the complete length of the tooth and compared it with the dentist's manual estimation <ns0:ref type='bibr' target='#b41'>(Harandi & Pourghassem, 2011)</ns0:ref>. Here, the measurement error (ME) for root canals is evaluated using the formula</ns0:p><ns0:formula xml:id='formula_1'>ME = \frac{\text{Measured length}}{\text{Actual length}}</ns0:formula><ns0:p>and the evaluated ME is lowest for one canal compared to two and three canals. <ns0:ref type='bibr' target='#b82'>(Niroshika, Meegama & Fernando, 2013)</ns0:ref> traced the tooth boundaries using active contours, and the distance parameters were compared with the Kass algorithm. The value of the standard distance parameter was found to be lower than that of the Kass algorithm, implying that the proposed method is more efficient for tracing the tooth boundary. Another approach, used for counting molar and premolar teeth, considers precision and sensitivity <ns0:ref type='bibr'>(Pushparaj et al., 2013)</ns0:ref>. Here performance is measured using the metric</ns0:p><ns0:formula xml:id='formula_2'>\eta = \frac{m - n}{n} \times 100</ns0:formula><ns0:p>where m represents the total number of teeth counted and n represents the incorrectly numbered teeth.
The counting of molar and premolar teeth is more than 90% accurate using this method.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b0'>(Abdi, Kasaei & Mehdizadeh, 2015)</ns0:ref>, mandible segmentation and Hausdorff distance parameters were compared to the manually annotated gold standard. The algorithm's results appear to be very close to the manually segmented gold standard in terms of sensitivity, accuracy, and Dice similarity coefficient (DSC). In the study <ns0:ref type='bibr' target='#b5'>(Amer & Aqel, 2015)</ns0:ref>, a wisdom tooth is extracted, and the mean absolute error (MAE) is used to compare the procedure with two other methods; the lower MAE value indicated better segmentation.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b95'>(Poonsri et al., 2016)</ns0:ref>, precision is calculated for single-rooted and double-rooted teeth using template matching. According to their study, segmentation accuracy is greater than 40%. <ns0:ref type='bibr' target='#b132'>(Tuan & others, 2016</ns0:ref><ns0:ref type='bibr' target='#b133'>, 2017)</ns0:ref> used the following cluster validity metrics: PBM, Simplified Silhouette Width Criterion (SSWC), Davies-Bouldin (DB), BH, VCR, BR, and TRA; the measures of these parameters indicate the best performance compared with the results of current algorithms, and the indices are defined below.
<ns0:div><ns0:head>PBM:</ns0:head><ns0:p>The partition with the maximum value of this index across the hierarchy provides the best partitioning.</ns0:p><ns0:p>Simplified Silhouette Width Criterion (SSWC): Silhouette analysis tests how well an observation is clustered and calculates the average distance between clusters. The silhouette plot shows how similar each point in a cluster is to the points of neighboring clusters.</ns0:p><ns0:p>Davies-Bouldin index (DB): This index determines the average 'similarity' amongst clusters, where the similarity is a metric that relates the distance between clusters to the size of the clusters themselves. A lower Davies-Bouldin index indicates a model with better separation between clusters.</ns0:p></ns0:div>
<ns0:div><ns0:head>Ball and Hall index (BH):</ns0:head><ns0:p>It is used to determine the within-group distance, with a higher value showing better results.</ns0:p><ns0:p>Calinski-Harabasz index, also called the Variance Ratio Criterion (VCR): It can be applied to evaluate a data partition by its variance ratio, and a higher value indicates good results.</ns0:p></ns0:div>
<ns0:div><ns0:head>Banfield-Raftery index (BR):</ns0:head><ns0:p>It is evaluated using a variance-covariance matrix for each cluster.</ns0:p><ns0:p>Difference-like index (TRA): It calculates the cluster difference, and a higher value gives the best results.</ns0:p><ns0:p>A comparison of the various performance metrics used in dental X-ray imaging with deep learning methods is given in Figure <ns0:ref type='figure'>8</ns0:ref>.</ns0:p></ns0:div>
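For illustration, the following hedged sketch computes three of these cluster validity indices with scikit-learn on synthetic placeholder features; a real study would instead use intensity or texture features extracted from the X-ray pixels.

```python
# Cluster validity indices for a k-means partition of feature vectors.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (silhouette_score, davies_bouldin_score,
                             calinski_harabasz_score)

X = np.random.default_rng(0).random((500, 3))    # placeholder feature vectors
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

print("silhouette (higher is better):       ", silhouette_score(X, labels))
print("Davies-Bouldin (lower is better):    ", davies_bouldin_score(X, labels))
print("Calinski-Harabasz (higher is better):", calinski_harabasz_score(X, labels))
```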
<ns0:div><ns0:head n='4.'>Dataset Description</ns0:head><ns0:p>Researchers in the dental imaging field have used various types of databases; some of these databases are available online, while others are not. The most prominent dilemma is determining which investigation gives valid results, because every study has shown promising results on its own dataset. All the dental imaging databases that have been used so far are given in Table <ns0:ref type='table'>8</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.'>Conclusion</ns0:head><ns0:p>Dental X-ray image analysis is a challenging area that has received comparatively little attention from the research community, and no previous systematic review addresses the state-of-the-art approaches of DXRI. This paper has performed a thorough analysis of more than 130 techniques suggested by different researchers over the last few decades, presenting a survey of segmentation and classification techniques widely used for dental X-ray imaging. Methods are characterized as image processing, conventional machine learning, and deep learning. Human identification <ns0:ref type='bibr' target='#b83'>(Nomir & Abdel-Mottaleb, 2005)</ns0:ref> Results are improved by using a signature vector in conjunction with adaptive and iterative thresholding. 117 Human identification <ns0:ref type='bibr' target='#b84'>(Nomir & Abdel-Mottaleb, 2007)</ns0:ref> Iterative followed by adaptive thresholding is used for the segmentation; features are extracted using Fourier descriptors after force-field transformation, and matching is done using Euclidean distance. 162 Human identification <ns0:ref type='bibr' target='#b63'>(Lai & Lin, 2008)</ns0:ref> The B-spline curve is used to extract intensity and texture characteristics for K-means clustering to locate the bone and teeth contours.</ns0:p><ns0:p>N.A Teeth detection <ns0:ref type='bibr' target='#b85'>(Nomir & Abdel-Mottaleb, 2008)</ns0:ref> The procedure starts with an iterative process guided by adaptive thresholding. Finally, the Bayesian framework is employed for tooth matching.</ns0:p><ns0:p>187 Human identification <ns0:ref type='bibr'>(Harandi, Pourghassem & Mahmoodian, 2011)</ns0:ref> An active geodesic contour is employed for upper and lower jaw segmentation. 14 Jaw identification <ns0:ref type='bibr' target='#b49'>(Huang et al., 2012)</ns0:ref> An adaptive windowing scheme with isolation-curve verification is used to detect missing tooth regions. 60 Missing teeth detection <ns0:ref type='bibr' target='#b96'>(Prajapati, Desai & Modi, 2012)</ns0:ref> A region growing technique is applied to the X-rays to extract the tooth; then, the content-based image retrieval (CBIR) technique is used for matching purposes.</ns0:p></ns0:div>
<ns0:div><ns0:head>30</ns0:head><ns0:p>Human identification <ns0:ref type='bibr'>(Pushparaj, Gurunathan & Arumugam, 2013)</ns0:ref> The tooth area's shape is extracted using contour-based connected component labeling, and the Mahalanobis distance (MD) is measured for matching.</ns0:p></ns0:div>
<ns0:div><ns0:head n='50'>Person identification</ns0:head><ns0:p>Imaging modality: Periapical X-rays <ns0:ref type='bibr' target='#b48'>(Huang & Hsu, 2008)</ns0:ref> Binary image transformations, thresholding, quartering, characterization, and labeling were all used as part of the process.</ns0:p></ns0:div>
<ns0:div><ns0:head>420</ns0:head><ns0:p>Teeth detection <ns0:ref type='bibr' target='#b91'>(Oprea et al., 2008)</ns0:ref> A simple thresholding technique is applied for segmentation of caries. N.A Caries detection <ns0:ref type='bibr' target='#b41'>(Harandi & Pourghassem, 2011)</ns0:ref> Otsu's thresholding method with Canny edge detection is used to segment the root canal area. 43 Root canal detection <ns0:ref type='bibr' target='#b49'>(Lin, Huang & Huang, 2012)</ns0:ref> The lesion is detected using a variational level set method after applying Otsu's method. 6 Lesion detection <ns0:ref type='bibr' target='#b117'>(Sattar & Karray, 2012)</ns0:ref> A phase-congruency-based approach is used to provide a framework for local image structure and edge detection. N.A Teeth detection <ns0:ref type='bibr' target='#b82'>(Niroshika, Meegama & Fernando, 2013)</ns0:ref> Deformation and re-parameterization are applied to the contour to detect the tooth corner points. N.A Teeth detection <ns0:ref type='bibr'>(Ayuningtiyas et al., 2013)</ns0:ref> Dentin and pulp are separated using active contours; qualitative analysis is conducted via the dentist's visual inspection, while quantitative testing is done by measuring different statistical parameters.</ns0:p><ns0:p>N.A Tooth detection <ns0:ref type='bibr' target='#b87'>(Nuansanong, Kiattisin & Leelasantitham, 2014)</ns0:ref> Canny edge detection was initially used, followed by an active contour model with data mining (J48 tree) and integration with the competence path.</ns0:p><ns0:p>Approx. 50 Tooth detection <ns0:ref type='bibr'>(Lin et al., 2014)</ns0:ref> Otsu's threshold and connected component analysis are used to precisely segment the teeth from alveolar bone and remove false teeth areas.</ns0:p></ns0:div>
<ns0:div><ns0:head>28</ns0:head><ns0:p>Teeth detection <ns0:ref type='bibr' target='#b99'>(Purnama et al., 2015)</ns0:ref> For root canal segmentation, an active shape model and thinning (using a hit-and-miss transform) were used. 7 Root canal detection <ns0:ref type='bibr' target='#b103'>(Rad et al., 2015)</ns0:ref> The Segmentation is initially done using K-means clustering.</ns0:p><ns0:p>Then, using a gray-level co-occurrence matrix, characteristics were extracted from the X-rays.</ns0:p></ns0:div>
<ns0:div><ns0:head>32</ns0:head><ns0:p>Caries detection <ns0:ref type='bibr' target='#b54'>(Jain & Chauhan, 2017)</ns0:ref> First, all parameter values are defined in the snake model; then the initial contour points are initialized, and finally Canny edge detection extracts the affected part.</ns0:p><ns0:p>N.A Cyst detection <ns0:ref type='bibr' target='#b125'>(Singh & Agarwal, 2018)</ns0:ref> The color to mark the carious lesion is provided by the contrast-limited adaptive histogram equalization (CLAHE) technique combined with masking.</ns0:p></ns0:div>
<ns0:div><ns0:head>23</ns0:head><ns0:p>Caries detection <ns0:ref type='bibr'>(Rad et al., 2018)</ns0:ref> The level set segmentation process (LS) is used in two stages. The hybrid algorithm is applied using isophote curvature and the fast marching method (FMM) to extract the cyst. 3 Cyst detection <ns0:ref type='bibr' target='#b21'>(Datta, Chaki & Modak, 2019)</ns0:ref> The geodesic active contour method is applied to identify the dental caries lesion. 120 Caries detection <ns0:ref type='bibr' target='#b92'>(Osterloh & Viriri, 2019)</ns0:ref> An unsupervised model is proposed to extract the caries region. Jaw partition is done using thresholding and an integral projection algorithm. Top- and bottom-hat transforms, as well as active contours, were used to detect caries borders.</ns0:p><ns0:p>N.A Caries detection <ns0:ref type='bibr' target='#b60'>(Kumar, Bhadauria & Singh, 2020)</ns0:ref> The various dental structures were separated using the fuzzy C-means algorithm and the hyperbolic tangent Gaussian kernel function.</ns0:p><ns0:p>152 Dental structures <ns0:ref type='bibr' target='#b23'>(Datta, Chaki & Modak, 2020)</ns0:ref> This method converts the X-ray image data into its neutrosophic analog domain. A custom feature called 'weight' is used for neutrosophication; notably, this feature is determined by merging other features.</ns0:p></ns0:div>
<ns0:div><ns0:head n='120'>Caries detection</ns0:head><ns0:p>Imaging Modality: Panoramic X-rays <ns0:ref type='bibr' target='#b94'>(Patanachai, Covavisaruch & Sinthanayothin, 2010)</ns0:ref> The wavelet transform, thresholding segmentation, and adaptive thresholding segmentation are all compared; the wavelet transform shows better accuracy than the others.</ns0:p></ns0:div>
<ns0:div><ns0:head>N.A</ns0:head><ns0:p>Teeth detection <ns0:ref type='bibr'>(Frejlichowski & Wanat, 2011)</ns0:ref> An automatic human identification system applies a horizontal integral projection to segment the individual tooth in this approach.</ns0:p></ns0:div>
<ns0:div><ns0:head>218</ns0:head><ns0:p>Human identification <ns0:ref type='bibr' target='#b139'>(Vijayakumari et al., 2012)</ns0:ref> A gray-level co-occurrence matrix (GLCM) is used to detect the cyst. 3 Cyst detection <ns0:ref type='bibr'>(Pushparaj et al., 2013)</ns0:ref> Horizontal integral projection with a B-spline curve is employed to separate the maxilla and mandible. N.A Teeth numbering <ns0:ref type='bibr'>(Lira et al., 2014)</ns0:ref> Supervised learning is used for segmentation, and feature extraction is carried out by computing moments and statistical characteristics; finally, a Bayesian classifier is used to identify the different classes.</ns0:p><ns0:p>1 Teeth detection <ns0:ref type='bibr'>(Banu et al., 2014)</ns0:ref> The gray-level co-occurrence matrix (GLCM) is used to compute texture characteristics, and classification results are obtained in the feature space using the centroid and a K-means classifier.</ns0:p></ns0:div>
<ns0:div><ns0:head>23</ns0:head><ns0:p>Cyst detection <ns0:ref type='bibr' target='#b107'>(Razali et al., 2014)</ns0:ref> This study compares the edge segmentation methods Canny and Sobel on X-ray images. N.A Teeth detection <ns0:ref type='bibr' target='#b5'>(Amer & Aqel, 2015)</ns0:ref> The segmentation process uses the global Otsu's thresholding technique with connected component labeling; ROI extraction and post-processing are completed at the end. 1 Wisdom teeth detection <ns0:ref type='bibr' target='#b0'>(Abdi, Kasaei & Mehdizadeh, 2015)</ns0:ref> Four stages are used for segmentation: gap valley extraction, Canny edge detection with morphological operators, contour tracing, and template matching.</ns0:p></ns0:div>
<ns0:div><ns0:head>95</ns0:head><ns0:p>Mandible detection <ns0:ref type='bibr' target='#b136'>(Veena Divya, Jatti & Revan Joshi, 2016)</ns0:ref> Active contour or snake model used to detect the cyst boundary. 10 Cyst detection <ns0:ref type='bibr' target='#b95'>(Poonsri et al., 2016)</ns0:ref> Teeth identification, template matching using correlation, and area segmentation using K-means clustering are used. 25 Teeth detection <ns0:ref type='bibr' target='#b146'>(Zak et al., 2017)</ns0:ref> Individual arc teeth segmentation (IATS) with adaptive thresholding is applied to find the palatal bone. 94 Teeth detection <ns0:ref type='bibr' target='#b4'>(Alsmadi, 2018)</ns0:ref> In panoramic X-ray images that can help in diagnosing jaw lesions, the fuzzy C-means concept and the neutrosophic technique are combinedly used to segment jaw pictures and locate the jaw lesion region.</ns0:p></ns0:div>
<ns0:div><ns0:head>60</ns0:head><ns0:p>Lesion detection <ns0:ref type='bibr' target='#b25'>(Dibeh, Hilal & Charara, 2018)</ns0:ref> The methods use a shape-free layout fitted into a 9-degree polynomial curve to segment the area between the maxillary and mandibular jaws.</ns0:p></ns0:div>
<ns0:div><ns0:head>62</ns0:head><ns0:p>Jaw separation+teeth detection <ns0:ref type='bibr' target='#b71'>(Mahdi & Kobashi, 2018)</ns0:ref> Quantum Particle Swarm Optimization (QPSO) is employed for multilevel thresholding. 12 Teeth detection <ns0:ref type='bibr' target='#b3'>(Ali et al., 2018)</ns0:ref> A new clustering method based on the neutrosophic orthogonal matrix is presented to help in the extraction of teeth and jaw areas from panoramic X-rays.</ns0:p></ns0:div>
<ns0:div><ns0:p>An autoregression (AR) model is adopted, and AR coefficients are derived as the feature vector; matching is then performed using Euclidean distance. <ns0:ref type='bibr' target='#b29'>(Fariza et al., 2019)</ns0:ref> For tooth segmentation, the Gaussian kernel-based conditional spatial fuzzy c-means (GK-csFCM) clustering algorithm is used. 10 Teeth detection <ns0:ref type='bibr'>(Aliaga et al., 2020)</ns0:ref> The region of interest is extracted from the entire X-ray image, and segmentation is performed using k-means clustering. 370 Osteoporosis detection, mandible detection <ns0:ref type='bibr' target='#b7'>(Avuçlu & Bacsçiftçi, 2020)</ns0:ref> The image is converted to binary using Otsu's thresholding, and then a Canny edge detector is used to find the object of interest.</ns0:p><ns0:p>1315 Determination of age and gender Imaging modality: Hybrid dataset images <ns0:ref type='bibr' target='#b115'>(Said et al., 2006)</ns0:ref> Thresholding with mathematical morphology is performed for the segmentation.</ns0:p><ns0:p>500 Bitewing & 130 Periapical images.</ns0:p><ns0:p>Teeth detection <ns0:ref type='bibr' target='#b69'>(Li et al., 2006)</ns0:ref> The fast and accurate segmentation approach is strongly focused on mathematical morphology and shape analysis.</ns0:p></ns0:div>
<ns0:div><ns0:head n='500'>(Bitewing and Periapical images)</ns0:head><ns0:p>Person identification (Al-sherif, Guo & Ammar, 2012)</ns0:p><ns0:p>A two-phase threshold processing is used, starting with an iterative threshold followed by an adaptive threshold to binarize teeth images after separating the individual tooth using the seam carving method. Teeth structures <ns0:ref type='bibr' target='#b130'>(Tuan et al., 2018)</ns0:ref> Graph-based clustering algorithm called enhanced affinity propagation clustering (APC) used for classification process and fuzzy aggregation operators used for disease detection.</ns0:p></ns0:div>
<ns0:div><ns0:head n='87'>(Periapical & Panoramic) Disease detection</ns0:head><ns0:p>Imaging modality: Photographic color images <ns0:ref type='bibr' target='#b38'>(Ghaedi et al., 2014)</ns0:ref> Segmentation functions in two ways. In the first step, the tooth surface is partitioned using a region-widening approach and the Circular Hough Transform (CHT). The second stage uses morphology operators to quantify texture to define the abnormal areas of the tooth's boundaries. Finally, a random forest classifies the various classes.</ns0:p></ns0:div>
<ns0:div><ns0:head>88</ns0:head><ns0:p>Caries detection <ns0:ref type='bibr' target='#b19'>(Datta & Chaki, 2015a)</ns0:ref> They have proposed a biometrics dental technique using RGB images. Segment individual teeth with water Shed and Snake's help, then afterward incisors teeth features are obtained to identify the human.</ns0:p><ns0:p>270 Images dataset Person identification <ns0:ref type='bibr' target='#b20'>(Datta & Chaki, 2015b)</ns0:ref> The proposed method introduces a method for filtering optical teeth images and extracting caries lesions followed by clusterbased Segmentation.</ns0:p></ns0:div>
<ns0:div><ns0:head>45</ns0:head><ns0:p>Caries detection <ns0:ref type='bibr' target='#b11'>(Berdouses et al., 2015)</ns0:ref> The proposed scheme included two processes: (a) identification, in which regions of interest (pre-cavitated and cavitated occlusal lesions) were partitioned, and (b) classification, in which the identified zones were categorized into one of the seven ICDAS classes.</ns0:p></ns0:div>
<ns0:div><ns0:head n='103'>Caries detection</ns0:head><ns0:p>Imaging modality: CT & CBCT <ns0:ref type='bibr' target='#b34'>(Gao & Chae, 2008)</ns0:ref> The multi-step procedure using thresholding, dilation, connected component labeling, upper-lower jaw separation, and last arch curve fitting was used to find the tooth region. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science the variational level was increased in several ways. <ns0:ref type='bibr' target='#b79'>(Mortaheb, Rezaeian & Soltanian-Zadeh, 2013)</ns0:ref> Mean shift algorithm is used for CBCT segmentation with new feature space and is compared to thresholding, watershed, level set, and active contour techniques.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>CBCT images</ns0:head><ns0:p>Teeth detection <ns0:ref type='bibr' target='#b36'>(Gao & Li, 2013)</ns0:ref> The volume data are initially divided into homogeneous blocks and then iteratively merged to produce the initial labeled and unlabeled instances for semi-supervised study.</ns0:p><ns0:p>N.A Teeth detection <ns0:ref type='bibr'>(Ji, Ong & Foong, 2014)</ns0:ref> The study adds a new level set procedure for extracting the contour of the anterior teeth. Additionally, the proposed method integrates the objective functions of existing level set methods with a twofold intensity model. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Imaging Modality: Photographic Color Images <ns0:ref type='bibr' target='#b30'>(Fernandez & Chang, 2012)</ns0:ref> Teeth segmentation and Classification of teeth palate using ANN gives better results as compared to SVM. It shows that ANN is 7-times faster than SVM in terms of time N.A ANN + Multilayer perceptrons trained with the error back-propagation algorithm.</ns0:p><ns0:p>Oral infectocontagious diseases, <ns0:ref type='bibr' target='#b98'>(Prakash, Gowsika & Sathiyapriya, 2015)</ns0:ref> The prognosticating faults method includes the following stages: pre-processing, Segmentation, features extraction, SVM classification, and prediction of diseases.</ns0:p></ns0:div>
<ns0:div><ns0:head>N.A</ns0:head><ns0:p>Adaptive threshold + Unsupervised SVM classifier</ns0:p></ns0:div>
<ns0:div><ns0:head>Dental defect prediction</ns0:head><ns0:p>Imaging Modality: CBCT or CT <ns0:ref type='bibr' target='#b145'>(Yilmaz, Kayikcioglu & Kayipmaz, 2017)</ns0:ref> Classifier efficiency improved by using the forward feature selection algorithm to reduce the size of the feature vector. The SVM classifier performs best in classifying periapical cyst and keratocystic odontogenic tumor (KCOT) lesions. Imaging modality: Hybrid dataset <ns0:ref type='bibr' target='#b141'>(Wang et al., 2016)</ns0:ref> U-net architecture <ns0:ref type='bibr' target='#b112'>(Ronneberger, Fischer & Brox, 2015)</ns0:ref> Landmark detection in cephalometric radiographs and Dental structure in bitewing radiographs.</ns0:p></ns0:div>
<ns0:div><ns0:head n='50'>CBCT 3D scans</ns0:head><ns0:p>F-score = > 0.7 <ns0:ref type='bibr' target='#b68'>(Lee, Park & Kim, 2017)</ns0:ref> LightNet and MatConvNet Landmark detection N.A <ns0:ref type='bibr' target='#b56'>(Karimian et al., 2018)</ns0:ref> Conventional CNN Caries detection Sensitivity:-97.93~99.85% Specificity:-100% Imaging modality: Color images/ Oral images <ns0:ref type='bibr' target='#b106'>(Rana et al., 2017)</ns0:ref> Conventional CNN Detection of inflamed and healthy gingiva precision:-0.347, Recall: 0.621, AUC:-0.746 Image type not defined <ns0:ref type='bibr' target='#b52'>(Imangaliyev et al., 2016)</ns0:ref> Conventional Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b2'>(Ali, Ejbali & Zaied, 2015)</ns0:ref> compared CPU & GPU results after applying the Chan-Vese model with active contour without edge. It shows that GPU model implementation is several times faster than the CPU version.</ns0:figDesc></ns0:figure>
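Several of the pipelines listed above share the same classical pre-processing core, for instance the Otsu-thresholding-plus-Canny step reported for (Avuçlu & Bacsçiftçi, 2020). The following is a minimal illustrative sketch of that generic step with OpenCV; the input file name and the Canny thresholds are placeholder assumptions, not values taken from any of the cited studies.

```python
# Minimal sketch of the binarization + edge-detection step used by several
# studies above: Otsu's thresholding followed by a Canny edge detector.
# "panoramic.png" is a hypothetical local file path, not from the cited works.
import cv2

image = cv2.imread("panoramic.png", cv2.IMREAD_GRAYSCALE)

# Otsu's method automatically picks the threshold that minimizes
# the intra-class intensity variance of the binarized image.
_, binary = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Canny edge detection then outlines the object of interest;
# the 100/200 hysteresis thresholds are arbitrary placeholders.
edges = cv2.Canny(binary, 100, 200)

cv2.imwrite("edges.png", edges)
```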
<ns0:figure xml:id='fig_1'><ns0:figDesc>Furthermore, we present a novel taxonomy mainly focused on imaging-modality-based categorization (bitewing, periapical, panoramic, CBCT/CT, hybrid datasets, and color pictures). Various studies have found that opting for one type of segmentation technique is very difficult in conventional image-processing methods because of image dataset variability. The primary barrier to the growth of a high-performance classification model is the requirement for annotated datasets, as pointed out by various researchers mentioned in this study. Dental imaging data are not the same as other medical images because of their different image characteristics; this difference has an impact on a deep learning model's adaptability during image classification. It is also challenging to validate and verify an algorithm's correctness because of the inadequate datasets available for testing the hypothesis. We would now like to bring the researchers' attention to future directions in DXRI. Since most dental X-ray image analysis methods suffer from decreased efficiency, more sophisticated segmentation techniques should be designed to improve clinical treatment. Furthermore, limited work in recent studies addresses the detection of caries classes such as class I, class II, class III, class IV, class V, class VI, and root canal infection; researchers should therefore focus on implementing new methodologies for caries classification and detection. Recently, deep learning has improved DXRI segmentation and classification performance, but it requires large annotated image datasets for training, and large annotated X-ray datasets are not publicly accessible; a public repository for dental X-ray images therefore needs to be developed. These are still open problems, so new findings and research outcomes can be expected in the coming years.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:figDesc>(Divya et al., 2019) | Textural details extracted using GLCM to classify cysts and caries. | 10 | Dental caries & cyst extraction
(Banday & Mir, 2019) | Edge detection method for the segmentation, then (entry truncated). | 210 | Human identification</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:figDesc>Development in the fields of machine learning (ML) and artificial intelligence (AI) has been growing over the last few years. ML and AI methods have made meaningful contributions to dental imaging, such as computer-aided diagnosis and treatment, X-ray image interpretation, image-guided treatment, infected-area detection, and adequate and efficient information representation. ML and AI make it easier for doctors to diagnose disease and to estimate disease risk accurately and more quickly. Conventional machine learning algorithms for image perception rely exclusively on expertly designed features.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:figDesc>Study | Method summary | Dataset size | Task
(study not shown) | Two-stage approach: the first stage creates the most appropriate initial contour (IC); the second stage applies an artificial neural network-based smart level approach. | |
(Obuchowicz Rafał and Nurzynska et al., 2018) | K-means clustering applied to intensity values and first-order features (FOF) to detect the caries spots. | 10 | Caries detection
(Devi, Banumathi & Ulaganathan, 2019) | (method truncated) | 120 | Caries detection</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 (on next page)</ns0:head><ns0:figDesc>The table shows relevant review findings of conventional machine learning algorithms for different imaging modalities.
Study | Method summary | Dataset | Classifier | Task
(study not shown) | Segmentation of mandibular teeth via Random Forest Regression-Voting Constrained Local Model (RFRV-CLM) in two steps: the first step gives an estimate of the individual teeth and mandible regions, used to initialize the search for each tooth; the second step carries out the investigation separately for each tooth. | | |
(study not shown) | | Dataset of 40 images | HOG (histogram of oriented gradients) + SVM | Osteoporosis detection</ns0:figDesc></ns0:figure>
</ns0:body>
" | "Authors' Response to the Comments
We are thankful to the Editor and the Reviewers for their valuable feedback to improve the quality and presentation of the manuscript entitled 'Descriptive analysis of dental X-ray images using various practical methods: A review'. The reviewers' comments are valuable and helpful for revising and improving the manuscript. Following the Editor's and the Reviewers' comments, extensive modifications have been made to the manuscript. The detailed responses to the comments are appended below.
We also reduced the overall length of the submission by decreasing the manuscript word count and removing some figures (as per PeerJ Computer Science policy).
Reviewer 1:
Comment 1: Is it the first of its kind survey on dental X-ray imaging? If not, compare the current survey with existing surveys and highlight how it is different from or enhances the existing surveys.
Response. Thank you for your suggestion. We have added two paragraphs to the revised manuscript for this comparison (as given in Section 1, Introduction, on the second page of the manuscript).
“In the existing surveys (Rad et al., 2013; Schwendicke et al., 2019), various techniques and methods have been discussed for DXRI. In (Rad et al., 2013), segmentation techniques are divided into three classes, pixel-based, edge-based, and region-based, and further classified into thresholding, clustering, boundary-based, region-based, or watershed approaches. However, there is no discussion of enhancement techniques, image databases, or the modalities used for DXRI. Furthermore, after the (Rad et al., 2013) survey, a large number of approaches have been introduced by researchers. Next, a review of dental image diagnosis using convolutional neural networks was presented by (Schwendicke et al., 2019), focusing on diagnostic accuracy studies that pitted a CNN against a reference test, primarily on routine imagery data. It has been observed that a thorough investigation of traditional image processing, machine learning, and deep learning approaches is missing from the previous surveys.”
“Being an emerging and promising research domain, dental X-ray imaging requires a comprehensive and detailed survey of dental image segmentation and analysis to diagnose and treat various dental diseases. In this study, we make the following contributions that are missing from the previous surveys. First, we cover studies from 2004 to 2020, spanning more than 130 articles, which is almost double the number covered by the previous surveys of Rad et al. (2013) and Schwendicke et al. (2019). Second, we present X-ray pre-processing techniques, traditional image analysis approaches, and machine learning and deep learning advancements in DXRI. Third, methods are categorized by specific image modality (such as periapical, panoramic, bitewing, and CBCT). Finally, performance metrics and dataset descriptions are investigated to a great extent.”
Comment 2: Some of the recent works on the application of DN/CNN in several domains, such as the following, can be discussed when discussing ML algorithms: 'Deep learning and medical image processing for coronavirus (COVID-19) pandemic: A survey', 'A novel PCA–whale optimization-based deep neural network model for classification of tomato plant diseases using GPU', 'Hand gesture classification using a novel CNN-crow search algorithm'.
Response. Thank you for your suggestion. As per the suggestion, recent works on the application of DN/CNN have been discussed in the revised manuscript on page 2, paragraph 1.
Comment 3: A good survey should present a detailed section on the challenges faced by the existing methodologies and also give directions to researchers who want to carry out research in that domain. I suggest that the authors add a section, 'Challenges and future directions'.
Response. Thank you for the constructive comment. As suggested by the reviewer, a new section titled 'Challenges and future directions' has been added to the revised manuscript as Section 2.4.
Comment 4: A good survey paper should have eye-catching images. Check this paper for reference: 'Deep learning and medical image processing for coronavirus (COVID-19) pandemic: A survey'.
Response. As suggested by the reviewer, Figure 2, Figure 4, Figure 5, Figure 6, Figure 7, and Figure 8 have been updated and redrawn in the revised manuscript.
Reviewer 2 (Kadiyala Ramana):
Comments for the author
Please check the grammar before submitting the final paper.
Response. Thank you for appreciating our work. As suggested by the reviewer, the manuscript has been thoroughly checked and corrected for English grammar.
Reviewer 3:
Comments for the author
The authors have chosen a very important topic. The paper is very interesting and timely. I suggest the authors incorporate these changes to further improve the quality of the paper.
Response: Thank you for appreciating our work. As per the suggestion, we have incorporated all the changes to improve the quality of the paper.
Comment 1: There are many typos and grammatical errors throughout the manuscript.
Response. Thank you for your suggestion. The revised manuscript has been thoroughly checked, and typos and grammatical errors have been corrected.
Comment 2: The figures' clarity should be thoroughly enhanced.
Response: Thank you for your suggestion. As per the suggestion, all the figures have been updated and enhanced.
Comment 3: The introduction lacks many important references in the field.
Response. Thank you for the suggestion. We have updated the introduction section by adding two paragraphs, and a few important references have been added to the manuscript (the two added paragraphs are given on page 4, at the end of Section 1.2).
Comment 4: I wish you to reorganize your paper with better and more clarity by rectifying the above comments, and to submit the revised version.
Response: As suggested by the reviewer, the revised manuscript has been reorganized after incorporating the comments.
Comment 5: The introduction and related work are not synchronized.
Response: Thank you for the constructive comment. We have synchronized the introduction and related work in the revised manuscript, as suggested by the reviewer.
Comment 6: The conclusion and future work part can be extended, to give a better understanding of the approach and of the related issues that can be taken into consideration for future work.
Response. Thank you for your comment. As per the suggestion, the conclusion and future work have been modified, improved, and extended in Section 5.
Please cite the following relevant literature (but not limited to):
1) Rehman, Z. U., Zia, M. S., Bojja, G. R., Yaqub, M., Jinchao, F., & Arshid, K. (2020). Texture based localization of a brain tumor from MR images by using a machine learning approach. Medical hypotheses, 141, 109705.
2) Bhattacharya, S., Maddikunta, P. K. R., Pham, Q. V., Gadekallu, T. R., Chowdhary, C. L., Alazab, M., & Piran, M. J. (2021). Deep learning and medical image processing for coronavirus (COVID-19) pandemic: A survey. Sustainable cities and society, 65, 102589.
3) Sai Ambati, L., Narukonda, K., Bojja, G. R., & Bishop, D. (2020). Factors Influencing the Adoption of Artificial Intelligence in Organizations-From an Employee's Perspective.
4) Shabaan, M., Arshad, K., Yaqub, M., Jinchao, F., Zia, M. S., Boja, G. R., ... & Munir, R. (2020). Survey: smartphone-based assessment of cardiovascular diseases using ECG and PPG analysis. BMC medical informatics and decision making, 20(1), 1-16.
Response. As suggested by the reviewer, the above-mentioned relevant literature has been discussed and cited in the revised manuscript.
" | Here is a paper. Please give your review comments after reading it. |
176 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Regression analysis makes up a large part of supervised machine learning, and consists of the prediction of a continuous dependent target from a set of other predictor variables. The difference between binary classification and regression lies in the target range: in binary classification, the target can take only two values (usually encoded as 0 and 1), while in regression the target can take multiple values. Even though regression analysis has been employed in a huge number of machine learning studies, no consensus has been reached on a single, unified, standard metric to assess the results of the regression itself. Many studies employ the mean square error (MSE) and its rooted variant (RMSE), or the mean absolute error (MAE) and its percentage variant (MAPE). Although useful, these rates share a common drawback: since their values can range between zero and +infinity, a single value of these measures does not say much about the performance of the regression with respect to the distribution of the ground truth elements. In this study, we focus on two rates that actually generate a high score only if the majority of the elements of a ground truth group has been correctly predicted: the coefficient of determination (R-squared) and the symmetric mean absolute percentage error (SMAPE). After showing their mathematical properties, we report a comparison between R² and SMAPE in several use cases and in two real medical scenarios. Our results demonstrate that the coefficient of determination (R-squared) is more informative and truthful than SMAPE, and does not have the interpretability limitations of MSE, RMSE, MAE, and MAPE. We therefore suggest the usage of R-squared as the standard metric to evaluate regression analyses in any scientific domain.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head></ns0:div>
<ns0:div><ns0:p>The role played by regression analysis in data science cannot be overemphasised: predicting a continuous target is a pervasive task not only in practical terms, but also at a conceptual level. Regression is deeply investigated even nowadays, to the point of still being worth of consideration in top journals (Jaqaman and Danuser, 2006; Altman and Krzywinski, 2015; Krzywinski and Altman, 2015), and is widely used also in the current scientific war against COVID-19 (Chan et al., 2021; Raji and Lakshmi, 2020; Senapati et al., 2020; Gambhir et al., 2020). The theoretical basis of regression encompasses several aspects revealing hidden connections in the data and alternative perspectives, even up to a broadly speculative view: for instance, interpreting the whole of statistical learning as a particular kind of regression (Berk, 2004).</ns0:p>
Introduced by Sewell <ns0:ref type='bibr' target='#b104'>Wright (1921)</ns0:ref> and generally indicated by R 2 , its original formulation quantifies how much the dependent variable is determined by the independent variables, in terms of proportion of variance. Again, given the age and diffusion of R 2 , a wealth of studies about it has populated the scientific literature of the last century, from general references detailing definition and characteristics <ns0:ref type='bibr' target='#b37'>(Di Bucchianico, 2008;</ns0:ref><ns0:ref type='bibr' target='#b7'>Barrett, 2000;</ns0:ref><ns0:ref type='bibr' target='#b17'>Brown, 2009;</ns0:ref><ns0:ref type='bibr' target='#b8'>Barrett, 1974)</ns0:ref>, to more refined interpretative works <ns0:ref type='bibr' target='#b90'>(Saunders et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b48'>Hahn, 1973;</ns0:ref><ns0:ref type='bibr' target='#b72'>Nagelkerke, 1991;</ns0:ref><ns0:ref type='bibr' target='#b76'>Ozer, 1985;</ns0:ref><ns0:ref type='bibr' target='#b29'>Cornell and Berger, 1987;</ns0:ref><ns0:ref type='bibr' target='#b79'>Quinino et al., 2013)</ns0:ref>; efforts have been dedicated to the treatment of particular cases <ns0:ref type='bibr' target='#b1'>(Allen, 1997;</ns0:ref><ns0:ref type='bibr' target='#b12'>Blomquist, 1980;</ns0:ref><ns0:ref type='bibr' target='#b78'>Piepho, 2019;</ns0:ref><ns0:ref type='bibr' target='#b95'>Srivastava et al., 1995;</ns0:ref><ns0:ref type='bibr' target='#b38'>Dougherty et al., 2000;</ns0:ref><ns0:ref type='bibr' target='#b30'>Cox and Wermuth, 1992;</ns0:ref><ns0:ref type='bibr' target='#b108'>Zhang, 2017;</ns0:ref><ns0:ref type='bibr' target='#b74'>Nakagawa et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b69'>Menard, 2000)</ns0:ref> and to the proposal of ad-hoc variations <ns0:ref type='bibr' target='#b106'>(Young, 2000;</ns0:ref><ns0:ref type='bibr' target='#b85'>Renaud and Victoria-Feser, 2010;</ns0:ref><ns0:ref type='bibr' target='#b63'>Lee et al., 2012)</ns0:ref>.</ns0:p><ns0:p>Parallel to the model explanation expressed as the variance, another widely adopted family of measures evaluate the quality of fit in terms of distance of the regressor to the actual training points. The two basic members of such family are the mean average error (MAE) <ns0:ref type='bibr' target='#b87'>(Sammut and Webb, 2010a)</ns0:ref> and the mean squared error (MSE) <ns0:ref type='bibr' target='#b88'>(Sammut and Webb, 2010b)</ns0:ref>, whose difference lies in the evaluating metric, respectively linear L 1 or quadratic L 2 . 
<ns0:p>Once more, the available references are numerous, related to both theoretical (David and Sukhatme, 1974; Rao, 1980; So et al., 2013) and applicative aspects (Allen, 1971; Farebrother, 1976; Gilroy et al., 1990; Imbens et al., 2005; Köksoy, 2006; Sarbishei and Radecka, 2011). As a natural derivation, the square root of the mean square error (RMSE) has been widely adopted (Nevitt and Hancock, 2000; Hancock and Freeman, 2001; Applegate et al., 2003; Kelley and Lai, 2011) to bring the units of measure of MSE back to those of the target. The different types of regularization imposed by the intrinsic metrics reflect on the relative effectiveness of the measure according to the data structure. In particular, as a rule of thumb, MSE is more sensitive to outliers than MAE; in addition to this general note, several further considerations can be drawn to help researchers choose the more suitable metric for evaluating a regression model, given the available data and the target task (Chai and Draxler, 2014; Willmott and Matsuura, 2005; Wang and Lu, 2018). Within the same family of measures, the mean absolute percentage error (MAPE) (de Myttenaere et al., 2016) focuses on the percentage error, being thus the elective metric for tasks where relative variations have a higher impact on the regression than absolute values. However, MAPE is heavily biased towards low forecasts, making it unsuitable for evaluating tasks where large errors are expected (Armstrong and Collopy, 1992; Ren and Glasure, 2009; De Myttenaere et al., 2015). Last but not least, the symmetric mean absolute percentage error (SMAPE) (Armstrong, 1985; Flores, 1986; Makridakis, 1993) is a more recent metric originally proposed to solve some of the issues related to MAPE. Although no agreement has yet been reached on its optimal mathematical expression (Makridakis and Hibon, 2000; Hyndman and Koehler, 2006; Hyndman, 2014; Chen et al., 2017), SMAPE is progressively gaining momentum in the machine learning community due to its interesting properties (Maiseli, 2019; Kreinovich et al., 2014; Goodwin and Lawton, 1999).</ns0:p><ns0:p>An interesting discrimination among the aforementioned metrics can be formulated in terms of their output range.</ns0:p>
<ns0:p>The coefficient of determination is upper bounded by the value 1, attained for a perfect fit; while R² is not lower bounded, the value 0 corresponds to (small perturbations of) the trivial fit provided by the horizontal line y = K, for K the mean of the target values of all the training points. Since all negative values of R² indicate a worse fit than the average line, nothing is lost by considering the unit interval as the meaningful range for R². As a consequence, the coefficient of determination is invariant for linear transformations of the independent variables' distribution, and an output value close to one yields a good prediction regardless of the scale on which such variables are measured (Reeves, 2021). Similarly, SMAPE values are also bounded, with the lower bound 0% implying a perfect fit, and the upper bound 200% reached when all the predictions and the actual target values are of opposite sign. Conversely, the output of MAE, MSE, RMSE and MAPE spans the whole positive branch of the real line, with the lower limit zero implying a perfect fit, and values progressively and infinitely growing for worse-performing models. By definition, these values are heavily dependent on the describing variables' ranges, making them incomparable both mutually and within the same metric: a given output value for a metric has no interpretable relation to a similar value for a different measure, and even the same value for the same metric can reflect deeply different model performance on two distinct tasks (Reeves, 2021). Such a property cannot be changed even by projecting the output into a bounded range through a suitable transformation (for example, an arctangent or a rational function). Given these interpretability issues, here we concentrate our comparative analysis on R² and SMAPE, both of which provide a high score only if the majority of the ground truth training points has been correctly predicted by the regressor. Showing the behaviour of these two metrics in several use cases and in two biomedical scenarios on two datasets made of electronic health records, we demonstrate that the coefficient of determination is superior to SMAPE in terms of effectiveness and informativeness, and is thus the recommended general performance measure for evaluating regression analyses.</ns0:p><ns0:p>The manuscript organization proceeds as follows. After this Introduction, in the Methods section we introduce the cited metrics, with their mathematical definitions and main properties, and we provide a more detailed description of R² and SMAPE and their extreme values (section 2). In the following section, Results and Discussion, we present the experimental part (section 3). First, we describe five synthetic use cases; then we introduce and detail the Lichtinghagen dataset and the Palechor dataset of electronic health records, together with the different applied regression models and the corresponding results. We complete that section with a discussion of the implications of all the obtained outcomes. In the Conclusions section, we draw some final considerations and future developments (section 4).</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>METHODS</ns0:head><ns0:p>In this section, we first introduce the mathematical background of the analyzed rates (subsection 2.1), then report some relevant information about the coefficient of determination and SMAPE (subsection 2.2).</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1'>Mathematical background</ns0:head><ns0:p>In the following formulas, X_i is the predicted i-th value, and Y_i is the actual i-th value. The regression method predicts the X_i element for the corresponding Y_i element of the ground truth dataset. Define two constants: the mean of the true values

$$\bar{Y} = \frac{1}{m} \sum_{i=1}^{m} Y_i \quad (1)$$

and the mean total sum of squares

$$\mathrm{MST} = \frac{1}{m} \sum_{i=1}^{m} (Y_i - \bar{Y})^2 \quad (2)$$

Coefficient of determination (R² or R-squared):

$$R^2 = 1 - \frac{\sum_{i=1}^{m} (X_i - Y_i)^2}{\sum_{i=1}^{m} (\bar{Y} - Y_i)^2} \quad (3)$$

(worst value = $-\infty$; best value = +1)

The coefficient of determination (Wright, 1921) can be interpreted as the proportion of the variance in the dependent variable that is predictable from the independent variables.</ns0:p></ns0:div>
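As a concrete reference, the sketch below implements Equation 3 on plain Python lists; the function name r_squared is our own choice for this illustration, not a library function.

```python
# A minimal sketch of Equation 3, assuming plain Python lists of numbers.
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot (Equation 3)."""
    m = len(y_true)
    y_bar = sum(y_true) / m
    ss_res = sum((x - y) ** 2 for x, y in zip(y_pred, y_true))
    ss_tot = sum((y - y_bar) ** 2 for y in y_true)
    return 1.0 - ss_res / ss_tot

# A perfect prediction yields 1; predicting the mean everywhere yields 0.
y = [1.0, 2.0, 3.0, 4.0]
print(r_squared(y, y))                     # 1.0
print(r_squared(y, [2.5, 2.5, 2.5, 2.5]))  # 0.0
```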
<ns0:p>Mean square error (MSE):

$$\mathrm{MSE} = \frac{1}{m} \sum_{i=1}^{m} (X_i - Y_i)^2 \quad (4)$$

(best value = 0; worst value = $+\infty$)

MSE can be used if there are outliers that need to be detected. In fact, MSE is great for attributing larger weights to such points, thanks to the L2 norm: clearly, if the model eventually outputs a single very bad prediction, the squaring part of the function magnifies the error.

Since $R^2 = 1 - \mathrm{MSE}/\mathrm{MST}$, and since MST is fixed for the data at hand, R² is monotonically related to MSE (a negative monotonic relationship), which implies that an ordering of regression models based on R² will be identical (although in reverse order) to an ordering of models based on MSE or RMSE.

Root mean square error (RMSE):

$$\mathrm{RMSE} = \sqrt{\frac{1}{m} \sum_{i=1}^{m} (X_i - Y_i)^2} \quad (5)$$

(best value = 0; worst value = $+\infty$)

The two quantities MSE and RMSE are monotonically related (through the square root): an ordering of regression models based on MSE will be identical to an ordering of models based on RMSE.</ns0:p>
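The monotonic relation stated above can be checked numerically. The following sketch (our own illustrative code, on randomly generated data) implements Equations 4 and 5 and verifies the identity R² = 1 - MSE/MST.

```python
# Sketch of Equations 4-5 and a numeric check of R^2 = 1 - MSE/MST.
import random

def mse(y_true, y_pred):
    return sum((x - y) ** 2 for x, y in zip(y_pred, y_true)) / len(y_true)

def rmse(y_true, y_pred):
    return mse(y_true, y_pred) ** 0.5

random.seed(42)
y = [random.uniform(0, 100) for _ in range(50)]         # synthetic ground truth
x = [v + random.gauss(0, 10) for v in y]                # noisy predictions

y_bar = sum(y) / len(y)
mst = sum((v - y_bar) ** 2 for v in y) / len(y)

# Equals the coefficient of determination: ranking models by decreasing
# MSE (or RMSE) is the reverse of ranking them by increasing R^2.
print(1 - mse(y, x) / mst)
print(rmse(y, x))
```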
<ns0:p>Mean absolute error (MAE):

$$\mathrm{MAE} = \frac{1}{m} \sum_{i=1}^{m} |X_i - Y_i| \quad (6)$$

(best value = 0; worst value = $+\infty$)

MAE can be used if outliers represent corrupted parts of the data. In fact, MAE does not penalize the training outliers too heavily (the L1 norm somehow smooths out all the errors of possible outliers), thus providing a generic and bounded performance measure for the model. On the other hand, if the test set also has many outliers, the model performance will be mediocre.</ns0:p>
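The contrast between the L1 and L2 norms described above can be made concrete with a toy example; the sketch below (illustrative code, not from the paper) implements Equation 6 and shows how a single large error inflates MSE far more than MAE.

```python
# Sketch of Equation 6, plus a toy illustration of the L1-vs-L2 point:
# a single outlier inflates MSE far more than MAE.
def mae(y_true, y_pred):
    return sum(abs(x - y) for x, y in zip(y_pred, y_true)) / len(y_true)

def mse(y_true, y_pred):
    return sum((x - y) ** 2 for x, y in zip(y_pred, y_true)) / len(y_true)

y_true  = [10.0] * 10
clean   = [11.0] * 10          # every prediction off by 1
outlier = [10.0] * 9 + [20.0]  # one prediction off by 10

print(mae(y_true, clean), mse(y_true, clean))      # 1.0, 1.0
print(mae(y_true, outlier), mse(y_true, outlier))  # 1.0, 10.0
```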
<ns0:p>Mean absolute percentage error (MAPE):

$$\mathrm{MAPE} = \frac{1}{m} \sum_{i=1}^{m} \left| \frac{Y_i - X_i}{Y_i} \right| \quad (7)$$

(best value = 0; worst value = $+\infty$)

MAPE is another performance metric for regression models, with a very intuitive interpretation in terms of relative error: due to its definition, its use is recommended in tasks where it is more important to be sensitive to relative variations than to absolute variations (de Myttenaere et al., 2016). However, it has a number of drawbacks too, the most critical ones being that it is restricted by definition to strictly positive data, and that it is biased towards low forecasts, which makes it unsuitable for predictive models where large errors are expected (Armstrong and Collopy, 1992).</ns0:p>
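The bias towards low forecasts can be illustrated directly: for a positive target, an under-prediction can never contribute more than 100% to a MAPE term, while an over-prediction is unbounded. A minimal sketch of Equation 7, with hypothetical values:

```python
# Sketch of Equation 7 and of its low-forecast bias: under-predictions of a
# positive target are capped below 100% per term, over-predictions are not.
def mape(y_true, y_pred):
    return sum(abs((y - x) / y) for x, y in zip(y_pred, y_true)) / len(y_true)

y = [100.0]
print(mape(y, [1.0]))     # 0.99: severe under-forecast, term capped below 1
print(mape(y, [1000.0]))  # 9.0 : over-forecast, term unbounded
```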
<ns0:p>Symmetric mean absolute percentage error (SMAPE):

$$\mathrm{SMAPE} = \frac{100\%}{m} \sum_{i=1}^{m} \frac{|X_i - Y_i|}{(|X_i| + |Y_i|)/2} \quad (8)$$

(best value = 0; worst value = 2)

Initially defined by Armstrong (1985), and then refined into its current version by Flores (1986) and Makridakis (1993), SMAPE was proposed to amend the drawbacks of the MAPE metric.</ns0:p>
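A minimal sketch of Equation 8, using the value 2 as the computational maximum (as this manuscript does for numeric calculations); the terms assume |X_i| + |Y_i| > 0.

```python
# Sketch of Equation 8, returning a value in [0, 2] (i.e., 0%-200%).
def smape(y_true, y_pred):
    return sum(abs(x - y) / ((abs(x) + abs(y)) / 2)
               for x, y in zip(y_pred, y_true)) / len(y_true)

print(smape([1, 2, 3], [1, 2, 3]))  # 0.0: perfect regression
```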
<ns0:p>However, there is little consensus on a definitive formula for SMAPE, and different authors keep using slightly different versions (Hyndman, 2014). The original SMAPE formula defines the maximum value as 200%, which is computationally equivalent to 2. In this manuscript, we are going to use the first value for formal passages, and the second value for numeric calculations.</ns0:p><ns0:p>Informativeness. The rates RMSE, MAE, MSE and SMAPE have value 0 if the linear regression model fits the data perfectly, and a positive value if the fit is less than perfect. Furthermore, the coefficient of determination has value 1 if the linear regression model fits the data perfectly (that is, if MSE = 0), value 0 if MSE = MST, and a negative value if the mean squared error MSE is greater than the mean total sum of squares MST. Even without digging into the mathematical properties of the aforementioned statistical rates, given the better robustness of R-squared and SMAPE over the other four rates, we focus the rest of this article on the comparison between these two statistics.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>R-squared and SMAPE</ns0:head><ns0:p>R-squared. The coefficient of determination can take values in the range $(-\infty, 1]$, according to the mutual relation between the ground truth and the prediction model. Hereafter we report a brief overview of the principal cases.

R² ≥ 0: with linear regression with no constraints, R² is non-negative and corresponds to the square of the multiple correlation coefficient.

R² = 0: the fitted line (or hyperplane) is horizontal. With two numerical variables this is the case if the variables are independent, that is, uncorrelated. Since $R^2 = 1 - \mathrm{MSE}/\mathrm{MST}$, the relation R² = 0 is equivalent to MSE = MST, or, equivalently, to:

$$\sum_{i=1}^{m} (Y_i - \bar{Y})^2 = \sum_{i=1}^{m} (Y_i - X_i)^2 \quad (9)$$

Now, Equation 9 has the obvious solution $X_i = \bar{Y}$ for $1 \le i \le m$ but, being just one quadratic equation with m unknowns $X_i$, it has infinite solutions, where $X_i = \bar{Y} \pm \varepsilon_i$ for small $\varepsilon_i$, as shown in the following example:

- $\{Y_i : 1 \le i \le 10\}$ = {90.317571, 40.336481, 5.619065, 44.529437, 71.192687, 32.036909, 6.977097, 66.425010, 95.971166, 5.756337}
- $\bar{Y}$ = 45.91618
- $\{X_i : 1 \le i \le 10\}$ = {45.02545, 43.75556, 41.18064, 42.09511, 44.85773, 44.09390, 41.58419, 43.25487, 44.27568, 49.75250}
- MSE = MST = 1051.511
- $R^2 \approx 10^{-8}$

R² < 0: this case is only possible with linear regression when either the intercept or the slope is constrained, so that the 'best-fit' line (given the constraint) fits worse than a horizontal line, for instance if the regression line (hyperplane) does not follow the data (CrossValidated, 2011b). With nonlinear regression, R-squared can be negative whenever the best-fit model (given the chosen equation, and its constraints, if any) fits the data worse than a horizontal line. Finally, a negative R² might also occur when omitting a constant from the equation, that is, forcing the regression line to go through the point (0,0).

A final note: the behavior of the coefficient of determination is rather independent of the linearity of the regression fitting model: R² can be very low even for a completely linear model and, vice versa, a high R² can occur even when the model is noticeably non-linear. In particular, a good global R² can be split into several local models with low R² (CrossValidated, 2011a).</ns0:p><ns0:p>SMAPE. By definition, SMAPE values range between 0% and 200%, and the following holds in the two extreme cases.

SMAPE = 0: the best case occurs when SMAPE vanishes, that is, when

$$\frac{100\%}{m} \sum_{i=1}^{m} \frac{|X_i - Y_i|}{(|X_i| + |Y_i|)/2} = 0$$

equivalent to $\sum_{i=1}^{m} \frac{|X_i - Y_i|}{(|X_i| + |Y_i|)/2} = 0$ and, since the m components are all non-negative, equivalent to

$$\frac{|X_i - Y_i|}{|X_i| + |Y_i|} = 0 \quad \forall\, 1 \le i \le m$$

and thus $X_i = Y_i$, that is, perfect regression.

SMAPE = 2: the worst case SMAPE = 200% occurs instead when

$$\frac{100\%}{m} \sum_{i=1}^{m} \frac{|X_i - Y_i|}{(|X_i| + |Y_i|)/2} = 2$$

equivalent to $\sum_{i=1}^{m} \frac{|X_i - Y_i|}{|X_i| + |Y_i|} = m$. Summarising, SMAPE reaches its worst value 200% if, for all $i = 1, \dots, m$, one of the following holds:

- $X_i = 0$ and $Y_i \ne 0$
- $X_i \ne 0$ and $Y_i = 0$
- $X_i \cdot Y_i < 0$, that is, ground truth and prediction have opposite signs, regardless of their values.

For instance, if the ground truth points are (1, -2, 3, -4, 5, -6, 7, -8, 9, -10), any prediction vector with all opposite signs (for example, (-307.18, 636.16, -469.99, 671.53, -180.55, 838.23, -979.18, 455.16, -8.32, 366.80)) will result in a SMAPE metric reaching 200%.

Having explained the extreme cases of R-squared and SMAPE, in the next section we illustrate some significant, informative use cases where these two rates generate discordant outcomes.</ns0:p></ns0:div>
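Before moving to the use cases, both extreme situations can be verified numerically. The sketch below restates Equations 3 and 8 compactly with NumPy (our own illustrative code) and uses the numbers from the two examples above.

```python
# Numeric check of the extreme cases discussed above.
import numpy as np

def r_squared(y, x):
    return 1 - np.sum((x - y) ** 2) / np.sum((y.mean() - y) ** 2)

def smape(y, x):
    return np.mean(np.abs(x - y) / ((np.abs(x) + np.abs(y)) / 2))

# Worst case: prediction and ground truth always have opposite signs.
y = np.array([1, -2, 3, -4, 5, -6, 7, -8, 9, -10], dtype=float)
x = np.array([-307.18, 636.16, -469.99, 671.53, -180.55,
              838.23, -979.18, 455.16, -8.32, 366.80])
print(smape(y, x))  # 2.0, i.e. 200%

# R^2 = 0 without predicting the mean everywhere: the example above,
# where MSE coincides with MST although the X_i scatter around Y-bar.
y = np.array([90.317571, 40.336481, 5.619065, 44.529437, 71.192687,
              32.036909, 6.977097, 66.425010, 95.971166, 5.756337])
x = np.array([45.02545, 43.75556, 41.18064, 42.09511, 44.85773,
              44.09390, 41.58419, 43.25487, 44.27568, 49.75250])
print(r_squared(y, x))  # ~0 (the text reports ~1e-8)
```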
<ns0:div><ns0:head n='3'>RESULTS AND DISCUSSION</ns0:head><ns0:p>In this section, we first report some particular use cases where we compare the results of R-squared and SMAPE (subsection 3.1), and then we describe two real biomedical scenarios where the analyzed regression rates generate different rankings for the methods involved (subsection 3.2).</ns0:p><ns0:p>As mentioned earlier, we exclude MAE, MSE, RMSE, and MAPE from the selection of the best performing regression rate. These statistics range in the $[0, +\infty)$ interval, with 0 meaning perfect regression, and their values alone therefore fail to communicate the quality of the regression performance, both in good cases and in bad cases. We know, for example, that a negative coefficient of determination and a SMAPE equal to 1.9 clearly correspond to a regression which performed poorly, but there is no specific value of MAE, MSE, RMSE, or MAPE that indicates this outcome. Moreover, as mentioned earlier, each value of MAE, MSE, RMSE, and MAPE communicates the quality of the regression only relative to other regression performances, and not in an absolute manner, as R-squared and SMAPE do. For these reasons, we focus on the coefficient of determination and SMAPE for the rest of our study.</ns0:p></ns0:div><ns0:div><ns0:head n='3.1'>Use cases</ns0:head><ns0:p>We list hereafter a number of example use cases where the coefficient of determination and SMAPE produce divergent outcomes, showing that R² is more robust and reliable than SMAPE, especially on bad regressions. To simplify the comparison between the two measures, define the complementary normalized SMAPE as:

$$\mathrm{cnSMAPE} = 1 - \frac{\mathrm{SMAPE}}{200\%} \quad (10)$$

(worst value = 0; best value = 1)</ns0:p><ns0:p>UC1 Use case. Consider the ground truth set $\mathrm{REAL} = \{r_i = (i, i) \in \mathbb{R}^2, i \in \mathbb{N}, 1 \le i \le 100\}$ collecting 100 points with positive integer coordinates on the straight line y = x. Define then the set $\mathrm{PRED}_j = \{p_i\}$ as

$$p_i = \begin{cases} r_i & \text{if } i \not\equiv 1 \pmod{5} \\ r_i & \text{if } i = 5k+1 \text{ and } k \ge j \\ 0 & \text{if } i = 5k+1 \text{ and } 0 \le k < j \end{cases} \quad (11)$$

so that REAL and PRED_j coincide apart from the first j points 1, 6, 11, ... (congruent to 1 modulo 5), which are set to 0. Then, for each $5 \le j \le 20$, compute R² and cnSMAPE (Table 1; a reproduction sketch is given after the UC3 definition below). Both measures decrease with the increasing number of non-matching points $p_{5k+1} = 0$, but cnSMAPE decreases linearly, while R² goes down much faster, better showing the growing unreliability of the predicted regression. At the end of the process, j = 20 points out of 100 are wrong, but cnSMAPE is still as high as 0.80, while R² is 0.236, correctly declaring PRED_20 a very weak prediction set.</ns0:p><ns0:p>Table 1. UC1 Use case. We define REAL and PRED_j in Equation 11. R²: coefficient of determination (Equation 3). cnSMAPE: complementary normalized SMAPE (Equation 10).</ns0:p><ns0:p>Table 2. UC3 Use case. We define N, the correct model, and the wrong model in the UC3 Use case paragraph. R²: coefficient of determination (Equation 3). cnSMAPE: complementary normalized SMAPE (Equation 10).</ns0:p><ns0:p>UC2 Use case. In a second example, consider again the same REAL dataset and define the three predicting sets

$$\mathrm{PRED}_{\mathrm{start}} = \{p^s_i : 1 \le i \le 100\}, \qquad p^s_i = \begin{cases} r_i & \text{for } i \ge 11 \\ 0 & \text{for } i \le 10 \end{cases}$$

$$\mathrm{PRED}_{\mathrm{middle}} = \{p^m_i : 1 \le i \le 100\}, \qquad p^m_i = \begin{cases} r_i & \text{for } i \le 50 \text{ and } i \ge 61 \\ 0 & \text{for } 51 \le i \le 60 \end{cases}$$

$$\mathrm{PRED}_{\mathrm{end}} = \{p^e_i : 1 \le i \le 100\}, \qquad p^e_i = \begin{cases} r_i & \text{for } i \le 90 \\ 0 & \text{for } i \ge 91 \end{cases}$$

In all three cases (start, middle, end) the predicting set coincides with REAL up to 10 points that are set to zero, at the beginning, in the middle and at the end of the prediction, respectively. Interestingly, cnSMAPE is 0.9 in all three cases, showing that SMAPE is sensitive only to the number of non-matching points, and not to the magnitude of the prediction error. R², instead, correctly decreases when the zeroed sequence of points is further along in the prediction and thus farthest away from the actual values: R² is 0.995 for PRED_start, 0.6293 for PRED_middle and -0.0955 for PRED_end.</ns0:p><ns0:p>UC3 Use case. Consider now as the ground truth the line y = x, and sample the set $T = \{t_i = (x_i, y^T_i) = (i, i), 1 \le i \le 20\}$ of twenty positive integer points on the line. Define $\mathrm{REAL} = \{r_i = (x_i, y^R_i) = (i, i + N(i)), 1 \le i \le 20\}$ as the same points of T with a small amount of noise N(i) on the y axis, so that the r_i are close to, but not lying on, the y = x straight line. Consider now two predicting regression models:

- the set PRED_c = T, representing the correct model;
- the set PRED_w, representing the (wrong) model with points defined as $p^w_i = f(x_i)$, for f the 10th-degree polynomial passing exactly through the points r_i for $1 \le i \le 10$.

Clearly, $p^w_i$ coincides with $r_i$ for $1 \le i \le 10$, but $p^w_i - r_i$ becomes very large for $i \ge 11$. On the other hand, $t_i \ne r_i$ for all i's, but $t_i - r_i$ is always very small. Compute now the two measures R² and cnSMAPE on the first N points $i = 1, \dots, N$, for $2 \le N \le 20$, of the two different regression models c and w, with respect to the ground truth set REAL (Table 2).</ns0:p>
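As anticipated, the following sketch (our own illustrative code) reproduces UC1, computing R² and cnSMAPE for a growing number j of zeroed points.

```python
# Reproduction sketch of UC1: REAL is y = x on 1..100; PRED_j zeroes the
# first j points congruent to 1 mod 5. cnSMAPE falls linearly, R^2 much faster.
import numpy as np

real = np.arange(1, 101, dtype=float)

def r_squared(y, x):
    return 1 - np.sum((x - y) ** 2) / np.sum((y - y.mean()) ** 2)

def cn_smape(y, x):
    smape = np.mean(np.abs(x - y) / ((np.abs(x) + np.abs(y)) / 2))
    return 1 - smape / 2   # Equation 10, with SMAPE on the [0, 2] scale

for j in (5, 10, 15, 20):
    pred = real.copy()
    pred[[5 * k for k in range(j)]] = 0.0   # zero the points 1, 6, 11, ...
    print(j, round(r_squared(real, pred), 3), round(cn_smape(real, pred), 3))
# j = 20 prints R^2 = 0.236 and cnSMAPE = 0.8, matching the values above.
```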
<ns0:p>For the correct regression model, both measures correctly show good results. For the wrong model, both measures are optimal for the first 10 points, where the prediction exactly matches the actual values; after that, R² rapidly decreases, exposing the inconsistency of the model, while cnSMAPE is not affected that much, reaching for N = 20 a value of 1/2 as a minimum, even if the model is clearly very bad in prediction.</ns0:p><ns0:p>UC4 Use case. Consider the following example: the seven actual values are (1, 1, 1, 1, 1, 2, 3), and the predicted values are (1, 1, 1, 1, 1, 1, 1). From the predicted values, it is clear that the regression method worked very poorly: it predicted 1 for all seven values. If we compute the coefficient of determination and SMAPE here, we obtain R-squared = -0.346 and SMAPE = 0.238. The coefficient of determination shows that something is completely off, by having a negative value. On the contrary, SMAPE has a very good score, corresponding to 88.1% correctness on the cnSMAPE scale. In this use case, if an inexperienced practitioner decided to check only the value of SMAPE to evaluate her/his regression, she/he would be misled and would wrongly believe that the regression went 88.1% correct. If, instead, the practitioner decided to verify the value of R-squared, she/he would be alerted to the poor quality of the regression. As we saw earlier, the regression method predicted 1 for all seven ground truth elements, so it clearly performed poorly.</ns0:p><ns0:p>UC5 Use case. Let us now consider a vector of 5 integer elements having values (1, 2, 3, 4, 5), and a regression prediction made by the variables (a, b, c, d, e). Each of these variables can assume all the integer values between 1 and 5, included. We compute the coefficient of determination and cnSMAPE for each of the predictions with respect to the actual values. To compare the values of the coefficient of determination and cnSMAPE on the same range, we consider only the cases when R-squared is greater than or equal to zero, and we call it non-negative R-squared. We report the results in Figure 1.</ns0:p><ns0:p>Figure 1. cnSMAPE (Equation 10) on the y axis and non-negative R-squared (Equation 3) on the x axis, obtained in the UC5 Use case. Blue line: regression line generated with the loess smooth method.</ns0:p><ns0:p>As clearly observable in the plot of Figure 1, there are a number of points where cnSMAPE has a high value (between 0.6 and 1) while R-squared has value 0: in these cases, the coefficient of determination and cnSMAPE give discordant outcomes. One of these cases, for example, is the regression where the predicted values are (1, 2, 3, 5, 2), R² = 0, and cnSMAPE = 0.89. In this example, cnSMAPE has a very high value, meaning that the prediction is 89% correct, while R² is equal to zero. The regression correctly predicts the first three points (1, 2, 3), but fails to classify the fourth element (4 is wrongly predicted as 5) and the fifth element (5 is mistakenly labeled as 2).</ns0:p>
<ns0:p>The coefficient of determination assigns a bad outcome to this regression because it fails to correctly classify the only members of the 4 and 5 classes. By contrast, SMAPE assigns a good outcome to this prediction because the variance between the actual values and the predicted values is low, in proportion to the overall mean of the values. Faced with this situation, we consider the outcome of the coefficient of determination more reliable and trustworthy: similarly to the Matthews correlation coefficient (MCC) (Matthews, 1975) in binary classification (Chicco and Jurman, 2020; Chicco et al., 2021; Tötsch and Hoffmann, 2021), R-squared generates a high score only if the regression is able to correctly classify most of the elements of each class. In this example, the regression fails to classify all the elements of the 4 class and of the 5 class, so we believe a good metric should communicate this key message.</ns0:p></ns0:div>
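Closing the use-case analysis, the UC4 figures reported above can be reproduced in a few lines (our own sketch):

```python
# Numeric check of UC4: the regressor predicts 1 for all seven cases.
import numpy as np

y = np.array([1, 1, 1, 1, 1, 2, 3], dtype=float)
x = np.ones(7)

r2 = 1 - np.sum((x - y) ** 2) / np.sum((y - y.mean()) ** 2)
smape = np.mean(np.abs(x - y) / ((np.abs(x) + np.abs(y)) / 2))

print(round(r2, 3))     # -0.346: R^2 flags the failure
print(round(smape, 3))  #  0.238: SMAPE still reads as 88.1% on the cnSMAPE scale
```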
<ns0:div><ns0:head n='3.2'>Medical scenarios</ns0:head><ns0:p>To further investigate the behavior of R-squared, MAE, MAPE, MSE, RMSE, and SMAPE, we applied these rates to a regression analysis in two real biomedical applications.</ns0:p></ns0:div>
<ns0:p>Hepatitis dataset. We trained and applied several machine learning regression methods on the Lichtinghagen dataset (Lichtinghagen et al., 2013; Hoffmann et al., 2018), which consists of electronic health records of 615 individuals, including healthy controls and patients diagnosed with cirrhosis, fibrosis, and hepatitis. This dataset has 13 features, including a numerical variable stating the diagnosis of the patient, and is publicly available in the University of California Irvine Machine Learning Repository (2020). There are 540 healthy controls (87.8%) and 75 patients diagnosed with hepatitis C (12.2%). Among the 75 patients diagnosed with hepatitis C, there are: 24 with only hepatitis C (3.9%); 21 with hepatitis C and liver fibrosis (3.41%); and 30 with hepatitis C, liver fibrosis, and cirrhosis (4.88%).</ns0:p><ns0:p>Obesity dataset. To further verify the effect of the regression rates, we applied the data mining methods to another medical dataset, made of electronic health records of young patients with obesity (Palechor and De-La-Hoz-Manotas, 2019; De-La-Hoz-Correa et al., 2019).</ns0:p><ns0:p>Methods. For the regression analysis, we employed the same machine learning methods two of us authors used in a previous analysis (Chicco and Jurman, 2021): Linear Regression (Montgomery et al., 2021), Decision Trees (Rokach and Maimon, 2005), and Random Forests (Breiman, 2001), all implemented and executed in the R programming language (Ihaka and Gentleman, 1996). For each method execution, we first shuffled the patients' data, then randomly selected 80% of the data elements for the training set, and used the remaining 20% for the test set. We trained each method's model on the training set, applied the trained model to the test set, and saved the regression results measured through R-squared, MAE, MAPE, MSE, RMSE, and SMAPE (a minimal sketch of this evaluation protocol is given after the ranking discussion below). For the hepatitis dataset, we imputed the missing data with the Predictive Mean Matching (PMM) approach through the Multiple Imputation by Chained Equations (MICE) method (Buuren and Groothuis-Oudshoorn, 2010). We ran 100 executions and reported the mean results and the rankings based on the different rates in Table 3 (hepatitis dataset) and in Table 4 (obesity dataset).</ns0:p><ns0:p>Hepatitis dataset results: different rate, different ranking. We measured the results obtained by these regression models on the Lichtinghagen hepatitis dataset with all the rates analyzed in our study: R², MAE, MAPE, RMSE, MSE, and SMAPE (lower part of Table 3). These rates generate 3 different rankings. R², MSE, and RMSE share the same ranking (Random Forests, Linear Regression, and Decision Tree). SMAPE and MAPE share the same ranking (Decision Tree, Random Forests, and Linear Regression). MAE has its own ranking (Random Forests, Decision Tree, and Linear Regression). It is also interesting to notice that these six rates select different methods as the top performing method.</ns0:p>
By comparing all these different standings, a machine learning practitioner could wonder which is the most suitable rate to choose to understand how the regression experiments actually went, and which method outperformed the others. As explained earlier, we suggest that readers focus on the ranking generated by the coefficient of determination, because it is the only metric that considers the distribution of all the ground truth values, and generates a high score only if the regression correctly predicts most of the values of each ground truth category. Additionally, the fact that the ranking indicated by R-squared (Random Forests, Linear Regression, and Decision Tree) was the same standing generated by 3 rates out of 6 suggests that it is the most informative one (Table <ns0:ref type='table'>3</ns0:ref>).</ns0:p></ns0:div>
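<ns0:p>As a concrete companion to the evaluation protocol described in the Methods paragraph above, we sketch one execution below in R. This is our own minimal illustration, not the original analysis code: the data frame name (dataset) and target column name (target) are hypothetical placeholders, only a single lm() model is shown instead of all three methods, and missing values are assumed to have already been imputed (e.g., with the mice package).</ns0:p>
# Minimal illustrative sketch of one execution of the evaluation protocol.
# `dataset` and `target` are placeholder names, not the original code.
set.seed(1)
n     <- nrow(dataset)
idx   <- sample(n)                        # shuffle the patients' data
cut   <- round(0.8 * n)
train <- dataset[idx[1:cut], ]            # 80% for the training set
test  <- dataset[idx[(cut + 1):n], ]      # remaining 20% for the test set
model <- lm(target ~ ., data = train)     # e.g., Linear Regression
pred  <- predict(model, newdata = test)
truth <- test$target
r2    <- 1 - sum((pred - truth)^2) / sum((truth - mean(truth))^2)
mae   <- mean(abs(pred - truth))
mape  <- mean(abs((truth - pred) / truth))
mse   <- mean((pred - truth)^2)
rmse  <- sqrt(mse)
smape <- mean(abs(pred - truth) / ((abs(pred) + abs(truth)) / 2))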
<ns0:div><ns0:p>Table <ns0:ref type='table'>3</ns0:ref>. Regression results on the prediction of hepatitis, cirrhosis, and fibrosis from electronic health records, and corresponding rankings based on rates. We performed the analysis on the Lichtinghagen dataset <ns0:ref type='bibr' target='#b64'>(Lichtinghagen et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b51'>Hoffmann et al., 2018)</ns0:ref> with the methods employed by <ns0:ref type='bibr'>Chicco and Jurman (2021)</ns0:ref>. We report here the average values achieved by each method in 100 executions with 80% randomly chosen data elements used for the training set and the remaining 20% used for the test set. R 2 : worst value −∞ and best value +1. SMAPE: worst value 2 and best value 0. MAE, MAPE, MSE, and RMSE: worst value +∞ and best value 0. We reported the complete regression results including the standard deviations in Table <ns0:ref type='table' target='#tab_7'>S1</ns0:ref>. R 2 formula: Equation 3. MAE formula: Equation <ns0:ref type='formula' target='#formula_5'>6</ns0:ref>. MAPE formula: Equation <ns0:ref type='formula'>7</ns0:ref>. MSE formula: Equation <ns0:ref type='formula' target='#formula_3'>4</ns0:ref>. RMSE formula: Equation <ns0:ref type='formula' target='#formula_4'>5</ns0:ref>. SMAPE formula: Equation <ns0:ref type='formula' target='#formula_7'>8</ns0:ref>. Tree turned out to be the worst model for 3 rates out of 6. This information confirms that the ranking of R-squared is more reliable than that of SMAPE (Table <ns0:ref type='table'>3</ns0:ref>).</ns0:p><ns0:p>Obesity dataset results: agreement between rankings, except for SMAPE Unlike the rankings generated on the hepatitis dataset, the rankings produced on the obesity dataset are more concordant (Table 4).</ns0:p></ns0:div>
<ns0:div><ns0:p>SMAPE says Linear Regression outperformed Decision Tree, while the other rates say that Decision Tree outperformed Linear Regression.</ns0:p><ns0:p>Since five out of six rankings confirm that Decision Tree generated better results than Linear Regression, and only one of six says the opposite, we believe it is clear that the ranking indicated by the coefficient of determination is more informative and trustworthy than the ranking generated by SMAPE.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>CONCLUSIONS</ns0:head><ns0:p>Even though regression analysis makes up a large part of the machine learning and computational statistics domains, no consensus has yet been reached on a unified, preferred rate to evaluate regression analyses.</ns0:p><ns0:p>In this study, we compared several statistical rates commonly employed in the scientific literature for regression task evaluation, and described the advantages of R-squared over SMAPE, MAPE, MAE, MSE, and RMSE.</ns0:p><ns0:p>Despite the fact that MAPE, MAE, MSE, and RMSE are commonly used in machine learning studies, we showed that it is impossible to detect the quality of the performance of a regression method by looking at their values alone. An MAPE of 0.7 alone, for example, fails to communicate whether the regression algorithm performed mostly correctly or poorly. This flaw leaves room only for R 2 and SMAPE. The former has negative values if the regression performed poorly, and values between 0 and 1 (inclusive) if the regression was good. A positive value of R-squared can be considered similar to a percentage of correctness obtained by the regression. SMAPE, instead, has 0 as its best value, for perfect regressions, and 2 as its worst value, for disastrous ones.</ns0:p><ns0:p>In our study, we showed with several use cases and examples that R 2 is more truthful and informative than SMAPE: R-squared, in fact, generates a high score only if the regression correctly predicted most of the ground truth elements for each ground truth group, considering their distribution. SMAPE, instead, focuses on the relative distance between each predicted value and its corresponding ground truth element, without considering their distribution. In the present study, SMAPE turned out to perform poorly in identifying bad regression models.</ns0:p><ns0:p>A limitation of R 2 arises in the negative space. When R-squared has negative values, it indicates that the model performed poorly, but it is impossible to know how badly the model performed. For example, an R-squared equal to -0.5 alone does not say much about the quality of the model, because the lower bound is −∞. Unlike SMAPE, which has values between 0 and 2, the coefficient of determination has no finite lower bound; its minus sign would nevertheless clearly inform the practitioner about the poor performance of the regression.</ns0:p><ns0:p>Although regression analysis can be applied to an infinite number of different datasets, with infinite values, we had to limit the present study to a selection of cases, for feasibility purposes. The selection of use cases presented here is to some extent limited, since one could consider infinitely many other use cases that we could not analyze here. Nevertheless, we did not find any use cases in which SMAPE turned out to be more informative than R-squared. Based on the results of this study and our own experience, R-squared seems to be the most informative rate in many cases, compared to SMAPE, MAPE, MAE, MSE, and RMSE. We therefore suggest the employment of R-squared as the standard statistical measure to evaluate regression analyses, in any scientific area.</ns0:p><ns0:p>In the future, we plan to compare R 2 with other regression rates such as the Huber metric H δ <ns0:ref type='bibr' target='#b52'>(Huber, 1992)</ns0:ref>, the LogCosh loss <ns0:ref type='bibr' target='#b101'>(Wang et al., 2020)</ns0:ref>, and the Quantile Q γ <ns0:ref type='bibr' target='#b107'>(Yue and Rue, 2011)</ns0:ref>.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>clear that it is difficult to interpret sole values of MSE, RMSE, MAE, and MAPE, since they have +∞ as upper bound. An MSE = 0.7, for example, does not say much about the overall quality of a regression model: the value could correspond to an excellent regression model or to a poor one. We cannot know unless the maximum MSE value for the regression task is provided or the distribution of all the ground truth values is known. The same concept is valid for the other rates having +∞ as upper bound, such as RMSE, MAE, and MAPE. The only two regression scores that have strict value ranges are the non-negative R-squared and SMAPE. R-squared can have negative values, which mean that the regression performed poorly. R-squared has value 0 when the regression model explains none of the variability of the response data around its mean (Minitab Blog Editor, 2013). The positive values of the coefficient of determination range in the [0, 1] interval, with 1 meaning perfect prediction. On the other side, the values of SMAPE range in the [0, 2] interval, with 0 meaning perfect prediction and 2 meaning the worst prediction possible. This is the main advantage of the coefficient of determination and SMAPE over RMSE, MSE, MAE, and MAPE: values like R 2 = 0.8 and SMAPE = 0.1, for example, clearly indicate a very good regression model performance, regardless of the ranges of the ground truth values and their distributions. A value of RMSE, MSE, MAE, or MAPE equal to 0.7, instead, fails to inform us about the quality of the regression performed. This property of R-squared and SMAPE can be useful in particular when one needs to compare the predictive performance of a regression on two different datasets having different value scales. For example, suppose we have a mental health study describing a predictive model where the outcome is a depression scale ranging from 0 to 100, and another study using a different depression scale, ranging from 0 to 10 (Reeves, 2021). Using R-squared or SMAPE we could compare the predictive performance of the two studies without making additional transformations. The same comparison would be impossible with RMSE, MSE, MAE, or MAPE.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>By the triangle inequality |a + c| ≤ |a| + |c|, computed for b = −c, we have that |a − b| ≤ |a| + |b|, and thus |a − b| / (|a| + |b|) ≤ 1. This yields that SMAPE = 2 if |X_i − Y_i| / (|X_i| + |Y_i|) = 1 for all i = 1, . . . , m. Thus we reduced to computing when ξ(a, b) = |a − b| / (|a| + |b|) = 1: we now analyse all possible cases, also considering the symmetry of the relation with respect to a and b, ξ(a, b) = ξ(b, a). If a = 0, then ξ(0, b) = |0 − b| / (|0| + |b|) = 1 if b ≠ 0. Now suppose that a, b > 0: ξ(a, a) = 0, so we can suppose a > b, thus a = b + ε, with a, b, ε > 0. Then ξ(a, b) = ξ(b + ε, b) = ε / (2b + ε) < 1. The same happens when a, b < 0: thus, if the ground truth points and the prediction points have the same sign, SMAPE will never reach its maximum value. Finally, suppose that a and b have opposite sign, for instance a > 0 and b < 0. Then b = −c, for c > 0, and thus ξ(a, b) = ξ(a, −c) = |a + c| / (|a| + |c|) = (a + c)/(a + c) = 1.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. UC5 Use case: R-squared versus cnSMAPE. Representation plot of the values of cnSMAPE (Equation 10) on the y axis and non-negative R-squared (Equation 3) on the x axis, obtained in the UC5 Use case. Blue line: regression line generated with the loess smooth method.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>R 2 , MAE, MSE, and RMSE indicate Random Forests as the top performing regression model, while SMAPE and MAPE select Decision Tree for the first position in their rankings. The position of Linear Regression changes, too: second for R 2 , MSE, and RMSE, but last for MAE, SMAPE, and MAPE.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Hepatitis dataset results: R 2 provides the most informative outcome Another interesting aspect of these results on the hepatitis dataset regards the comparison between coefficient of determination and</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>This dataset is publicly available in the University of California Irvine Machine Learning Repository (2019) too, and contains data for 2,111 individuals, with 17 variables for each of them. A variable called NObeyesdad indicates the obesity level of each subject, and can be employed as a regression target. In this dataset, there are 272 children with</ns0:figDesc><ns0:table><ns0:row><ns0:cell>insufficient weight (12.88%), 287 children with normal weight (13.6%), 351 children with obesity type</ns0:cell></ns0:row><ns0:row><ns0:cell>I (16.63%), 297 children with obesity type II (14.07%), 324 children with obesity type III (15.35%), 290</ns0:cell></ns0:row><ns0:row><ns0:cell>children with overweight level I (13.74%), and 290 children with overweight level II (13.74%). The</ns0:cell></ns0:row><ns0:row><ns0:cell>original curators synthetically generated part of this dataset (Palechor and De-La-Hoz-Manotas, 2019;</ns0:cell></ns0:row><ns0:row><ns0:cell>De-La-Hoz-Correa et al., 2019).</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Regression results on the prediction of obesity level from electronic health records, and corresponding rankings based on rates. We performed the analysis on the Palechor dataset <ns0:ref type='bibr' target='#b77'>(Palechor and De-La-Hoz-Manotas, 2019;</ns0:ref><ns0:ref type='bibr' target='#b34'>De-La-Hoz-Correa et al., 2019)</ns0:ref> with the methods Linear Regression, Decision Tree, and Random Forests. We report here the average values achieved by each method in 100 executions with 80% randomly chosen data elements used for the training set and the remaining 20% used for the test set. R 2 : worst value −∞ and best value +1. SMAPE: worst value 2 and best value 0. MAE, MAPE, MSE, and RMSE: worst value +∞ and best value 0. We reported the complete regression results including the standard deviations in Table S2. R 2 formula: Equation 3. These values mean that the coefficient of determination and SMAPE generate discordant outcomes for these two methods: for R-squared, Random Forests made a very good regression and Decision Tree made a good one; for SMAPE, instead, Random Forests made a catastrophic regression and Decision Tree made an almost perfect one. At this point, a practitioner could wonder which algorithm between Random Forests and Decision Tree made the better regression. Checking the standings of the other rates, we clearly see that Random Forests turned out to be the top model for 4 rates out of 6, while Decision</ns0:figDesc><ns0:table><ns0:row><ns0:cell>MAE formula: Equation 6. MAPE formula: Equation 7. MSE formula: Equation 4.</ns0:cell></ns0:row><ns0:row><ns0:cell>RMSE formula: Equation 5. SMAPE formula: Equation 8.</ns0:cell></ns0:row><ns0:row><ns0:cell>SMAPE (Table 3). We do not compare the standing of R-squared with MAE, MSE, RMSE, and MAPE</ns0:cell></ns0:row><ns0:row><ns0:cell>because these four rates can have infinite positive values and, as mentioned earlier, this aspect makes it</ns0:cell></ns0:row><ns0:row><ns0:cell>impossible to detect the quality of a regression from a single score of these rates.</ns0:cell></ns0:row><ns0:row><ns0:cell>R-squared indicates a very good result for Random Forests (R 2 = 0.756), and good results for Linear</ns0:cell></ns0:row><ns0:row><ns0:cell>Regression (R 2 = 0.535) and Decision Tree (R 2 = 0.423). On the contrary, SMAPE generates an excellent</ns0:cell></ns0:row><ns0:row><ns0:cell>result for Decision Tree (SMAPE = 0.073), meaning almost perfect prediction, and poor results for</ns0:cell></ns0:row><ns0:row><ns0:cell>Random Forests (SMAPE = 1.808) and Linear Regression (SMAPE = 1.840), very close to the upper</ns0:cell></ns0:row><ns0:row><ns0:cell>bound (SMAPE = 2) representing the worst possible regression.</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head /><ns0:label /><ns0:figDesc>Actually, the rankings of the coefficient of determination, MSE, RMSE, MAE, and MAPE are identical: Random Forests on the first position, Decision Tree on the second position, and</ns0:figDesc><ns0:table /><ns0:note>Linear Regression on the third and last position. All the rates' rankings indicate Random Forests as the top performing method. The only significant difference can be found in the SMAPE standing: unlike the other rankings, which all put Decision Tree as the second best regressor and Linear Regression as the worst regressor, the SMAPE standing indicates Linear Regression as the runner-up and Decision Tree on the last position. SMAPE, in fact, swaps the positions of these two methods, compared to R-squared and the other rates:</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>UC1 Use case. Values generated through Equation 11. R 2 : coefficient of determination (Equation 3). cnSMAPE: complementary normalized SMAPE (Equation 10).</ns0:figDesc><ns0:table><ns0:row><ns0:cell>j</ns0:cell><ns0:cell>R 2</ns0:cell><ns0:cell>cnSMAPE</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>0.9897</ns0:cell><ns0:cell>0.9500</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>0.9816</ns0:cell><ns0:cell>0.9400</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>0.9701</ns0:cell><ns0:cell>0.9300</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>0.9545</ns0:cell><ns0:cell>0.9200</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>0.9344</ns0:cell><ns0:cell>0.9100</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>0.9090</ns0:cell><ns0:cell>0.9000</ns0:cell></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell>0.8778</ns0:cell><ns0:cell>0.8900</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell>0.8401</ns0:cell><ns0:cell>0.8800</ns0:cell></ns0:row><ns0:row><ns0:cell>13</ns0:cell><ns0:cell>0.7955</ns0:cell><ns0:cell>0.8700</ns0:cell></ns0:row><ns0:row><ns0:cell>14</ns0:cell><ns0:cell>0.7432</ns0:cell><ns0:cell>0.8600</ns0:cell></ns0:row><ns0:row><ns0:cell>15</ns0:cell><ns0:cell>0.6827</ns0:cell><ns0:cell>0.8500</ns0:cell></ns0:row><ns0:row><ns0:cell>16</ns0:cell><ns0:cell>0.6134</ns0:cell><ns0:cell>0.8400</ns0:cell></ns0:row><ns0:row><ns0:cell>17</ns0:cell><ns0:cell>0.5346</ns0:cell><ns0:cell>0.8300</ns0:cell></ns0:row><ns0:row><ns0:cell>18</ns0:cell><ns0:cell>0.4459</ns0:cell><ns0:cell>0.8200</ns0:cell></ns0:row><ns0:row><ns0:cell>19</ns0:cell><ns0:cell>0.3465</ns0:cell><ns0:cell>0.8100</ns0:cell></ns0:row><ns0:row><ns0:cell>20</ns0:cell><ns0:cell>0.2359</ns0:cell><ns0:cell>0.8000</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Article title: “The coefficient of determination R-squared is more informative than SMAPE,
MAE, MAPE, MSE, and RMSE in regression evaluation”
Authors: Davide Chicco, Matthijs J. Warrens, Giuseppe Jurman
Email: davidechicco@davidechicco.it
Journal: PeerJ Computer Science
Article ID: #CS-2021:03:59328:0:1:REVIEW
10th May 2021
Dear editor Yilun Shang,
Thanks for having considered our article and having taken care of its review. Your comments
and the reviewers’ comments helped us prepare a new version of the manuscript.
You can find a point-by-point response to every comment raised by you and the reviewers in blue
in this letter below.
Together with the new version of the article, we also submit a tracked-changes version of it with
the edits in blue.
Best regards
-- Davide Chicco
Editor comments (Yilun Shang)
MAJOR REVISIONS
0) We have received a mixed review report. Two of the reviewers are positive and one is
negative. Please revise the paper according to all three reviewers' comments and provide point
to point responses.
Authors: We thank the editor Yilun Shang for having handled the review of our article; we
addressed all the points reported here and the points annotated in the manuscript,
and prepared a new version of our article.
Reviewer 1 (Tiago Gonçalves)
Comments for the Author
1.1) The reviewer attaches a PDF document with some additional comments, suggestions and
highlighted typos.
Authors: We thank Tiago Gonçalves for his comments and annotations, which we
studied and addressed.
1.2) The authors split their data into train and test sets. It would be interesting to see results with
train, validation and test sets, for instance.
Authors: We thank Tiago Gonçalves for this interesting comment. In an article presenting
a standard scientific study based upon machine learning, we would agree and would
include the results on the training sets and validation sets. In the present study, however,
our focus is on the statistical rates, and not on the performance of the machine learning
methods. Therefore, we prefer not to include the results of the machine learning
methods on the training set and validation set.
Reviewer 2 (Anonymous)
Basic reporting
2.1) The manuscript follows the defined standards. Includes a thorough review of the literature,
and contextualisation of the work within the state-of-the-art. Methods and experiments are
described thoroughly. Results are clearly and transparently presented. Data is either provided or
freely available online (Lichtinghagen dataset).
Authors: We thank the reviewer for these nice words.
2.2) Language is very good, with only some typos/mistakes to correct (e.g. 'deeper description
of for R2' on line 122, 'despite of the ranges' on line 191).
Authors: We thank the reviewer for these comments; in the new version of the article, we fixed
those issues as suggested.
Experimental design
2.3) The choice of R2 and SMAPE is well-justified based on informativeness, among all the
other surveyed metrics. The experiments cover most aspects where R2 and SMAPE differ,
especially the use-cases 1-5. The real medical scenario is interesting, but the manuscript would
be much more complete if other real scenarios were included: other medical scenarios (e.g.,
COVID-19 cases prediction) or other real contexts (e.g., financial predictions).
Authors: We thank the reviewer for this comment, with which we agree. To address it, we
added a new scenario where we apply several regression methods to a dataset by
Palechor et al. to predict the obesity level among children. We added this new part on
pages 10-12.
Validity of the findings
2.4) The findings are valid. The use-cases and the medical experiment cover most differences
between R2 and SMAPE and truly show the benefit of using R2. However, beyond the issue of
the negative space discussed by the authors in the conclusion, no other drawbacks or possible
limitations of R2 have been discussed. Perhaps additional experiments in other real scenarios
or the future comparison with Huber metric Hδ, LogCosh loss, and Quantile Qγ would help
unveil such downsides of R2.
Authors: We thank the reviewer for these interesting considerations. We are aware that
there might be other drawbacks of the coefficient of determination; we decided not to
explore them in the present study because we considered the article already long and
detailed enough. We will surely consider discussing these drawbacks in future studies
about Huber metric Hδ, LogCosh loss, and Quantile Qγ. Regarding additional
experiments in other real scenarios, we added a new part describing an application to
data on obesity among children on pages 10-12, as mentioned earlier.
2.5) Nevertheless, the lack of deeper discussion on these makes the paper appear quite
one-sided (in favour of R2), and also leads to the absence of proposals to improve: what could
be improved in R2, what desirable behaviours should the new R2 verify, how could we change
R2 to achieve it? Since the paper is merely a comparison between R2 and SMAPE, that does
not propose an improvement on either measure, at least it should include this deeper
speculative discussion on what could be done to achieve an improved R2 metric.
Authors: We thank the reviewer for these considerations. Although it would surely be
interesting to design an enhanced variant of R2, that subject goes beyond the scope of
the present manuscript. In the present study, in fact, our goal is to explain why the
coefficient of determination is more informative than SMAPE, MAE, MAPE, RMSE, and
MSE, and not to propose an enhanced R-squared variant. We will consider this
suggestion for a future study though.
Reviewer 3 (Anonymous)
3.1) Basic reporting. Professional English: the writing of the article is mostly good professional
English, with some minor errors.
3.2) Literature references: the article cited a good number of original sources of information.
3.3) Article structure: the article appears to follow common structured used for medical
publications.
Authors: We thank the reviewer for these nice comments.
3.4) Self-contained: The authors did not make a clear statement about the central hypotheses of
this article.
Formal results: No clear criteria was given in how to judge the different metrics.
Experimental design. The core of the work is to compare different quality metrics, however,
since the authors did not establish a concrete standard, it is very hard to say which one is better.
Originality: poor.
Research question: not well defined
Rigor: anecdotal evaluation
Methods: lack of the key evaluation criteria
Validity of the findings This is an anecdotal evaluation of five different quality metrics for
regression, however, due to a lack of clear standard of comparison, it is hard to judge whether
the conclusions are valid or not.
Comments for the Author. This article needs a clearly defined standard for comparing different
quality metrics.
Authors: We thank the reviewer for these comments, which touch a core aspect of our
study: which metrics to use to evaluate the reliability of statistical metrics? We thought
long about it and we eventually decided not to introduce a new metric to evaluate these
rates, but rather to study the behavior of these rates in specific cases evaluated through
the comparison between expected outcomes and actual outcomes. To this end, we
designed five simulated use cases and two real medical scenarios where the regression
analysis generated a clear outcome, and then verified which statistical rate clearly
communicated the expected outcome. Instead of introducing a new rate, we employed
the approach based upon “expected result versus actual result”: we designed
experiments, derived the expected outcomes, performed the experiments, and observed
the actual outcomes. The only statistical rate which always communicated and
confirmed our expected outcomes was the coefficient of determination.
In particular, we selected our five use cases with a specific aspect in mind: they all
feature poor regressions, and we observed whether and how R-squared and SMAPE
caught and communicated the poor regression. As the article explains, R-squared did,
and SMAPE failed to do so.
Therefore, talking about “standard for comparison”, we can say that we did not define a
new metric to measure these rates, but rather employed regressions with expected
outcomes, and then compared these expected outcomes with the actual outcomes
produced by R-squared and SMAPE. We believe this standard for comparison allowed
us to make a clear and robust comparison of the described regression rates.
" | Here is a paper. Please give your review comments after reading it. |
177 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Regression analysis makes up a large part of supervised machine learning, and consists of the prediction of a continuous dependent target from a set of other predictor variables.</ns0:p><ns0:p>The difference between binary classification and regression is in the target range: in binary classification, the target can have only two values (usually encoded as 0 and 1), while in regression the target can have multiple values. Even if regression analysis has been employed in a huge number of machine learning studies, no consensus has been reached on a single, unified, standard metric to assess the results of the regression itself. Many studies employ the mean square error (MSE) and its rooted variant (RMSE), or the mean absolute error (MAE) and its percentage variant (MAPE). Although useful, these rates share a common drawback: since their values can range between zero and +infinity, a single value of them does not say much about the performance of the regression with respect to the distribution of the ground truth elements. In this study, we focus on two rates that actually generate a high score only if the majority of the elements of a ground truth group has been correctly predicted: the coefficient of determination (also known as R-squared or R 2 ) and the symmetric mean absolute percentage error (SMAPE). After showing their mathematical properties, we report a comparison between R 2 and SMAPE in several use cases and in two real medical scenarios. Our results demonstrate that the coefficient of determination (R-squared) is more informative and truthful than SMAPE, and does not have the interpretability limitations of MSE, RMSE, MAE, and MAPE. We therefore suggest the usage of R-squared as the standard metric to evaluate regression analyses in any scientific domain.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>technical studies <ns0:ref type='bibr' target='#b98'>(Sykes, 1993;</ns0:ref><ns0:ref type='bibr' target='#b63'>Lane, 2002)</ns0:ref> or articles outlining practical applications <ns0:ref type='bibr' target='#b40'>(Draper and Smith, 1998;</ns0:ref><ns0:ref type='bibr' target='#b84'>Rawlings et al., 2001;</ns0:ref><ns0:ref type='bibr' target='#b22'>Chatterjee and Hadi, 2015)</ns0:ref>, including handbooks <ns0:ref type='bibr' target='#b23'>(Chatterjee and Simonoff, 2013)</ns0:ref> or works covering specific key subtopics <ns0:ref type='bibr' target='#b93'>(Seber and Lee, 2012)</ns0:ref>. However, the reference landscape is far wider: the aforementioned considerations stimulated a steady flow of studies investigating more philosophically oriented arguments <ns0:ref type='bibr' target='#b2'>(Allen, 2004;</ns0:ref><ns0:ref type='bibr' target='#b10'>Berk, 2004)</ns0:ref>, or deeper analysis of implications related to learning <ns0:ref type='bibr' target='#b9'>(Bartlett et al., 2020)</ns0:ref>. Given the aforementioned overall considerations, it comes as no surprise that, similarly to what happened for binary classification, a plethora of performance metrics have been defined and are currently in use for evaluating the quality of a regression model <ns0:ref type='bibr' target='#b95'>(Shcherbakov et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b55'>Hyndman and Koehler, 2006;</ns0:ref><ns0:ref type='bibr' target='#b14'>Botchkarev, 2018a,b;</ns0:ref><ns0:ref type='bibr' target='#b15'>Botchkarev, 2019)</ns0:ref>. The parallel with classification goes even further: in the scientific community, a shared consensus on a preferential metric is indeed far from being reached, making the comparison of methods and results a daunting task.</ns0:p><ns0:p>The present study provides a contribution towards the detection of critical factors in the choice of a suitable performance metric in regression analysis, through a comparative overview of two measures of current widespread use, namely the coefficient of determination and the symmetric mean absolute percentage error.</ns0:p><ns0:p>Indeed, despite the lack of a concerted standard, a set of well established and preferred metrics does exist and we believe that, as primus inter pares, the coefficient of determination deserves a major role. The coefficient of determination is also known as R-squared or R 2 in the scientific literature. For consistency, we will use these three names interchangeably in this study.</ns0:p><ns0:p>Introduced by Sewell <ns0:ref type='bibr' target='#b106'>Wright (1921)</ns0:ref> and generally indicated by R 2 , its original formulation quantifies how much the dependent variable is determined by the independent variables, in terms of proportion of variance. 
Again, given the age and diffusion of R 2 , a wealth of studies about it has populated the scientific literature of the last century, from general references detailing definition and characteristics <ns0:ref type='bibr' target='#b38'>(Di Bucchianico, 2008;</ns0:ref><ns0:ref type='bibr' target='#b7'>Barrett, 2000;</ns0:ref><ns0:ref type='bibr' target='#b17'>Brown, 2009;</ns0:ref><ns0:ref type='bibr' target='#b8'>Barrett, 1974)</ns0:ref>, to more refined interpretative works <ns0:ref type='bibr' target='#b92'>(Saunders et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b49'>Hahn, 1973;</ns0:ref><ns0:ref type='bibr' target='#b75'>Nagelkerke, 1991;</ns0:ref><ns0:ref type='bibr' target='#b78'>Ozer, 1985;</ns0:ref><ns0:ref type='bibr' target='#b30'>Cornell and Berger, 1987;</ns0:ref><ns0:ref type='bibr' target='#b81'>Quinino et al., 2013)</ns0:ref>; efforts have been dedicated to the treatment of particular cases <ns0:ref type='bibr' target='#b1'>(Allen, 1997;</ns0:ref><ns0:ref type='bibr' target='#b12'>Blomquist, 1980;</ns0:ref><ns0:ref type='bibr' target='#b80'>Piepho, 2019;</ns0:ref><ns0:ref type='bibr' target='#b97'>Srivastava et al., 1995;</ns0:ref><ns0:ref type='bibr' target='#b39'>Dougherty et al., 2000;</ns0:ref><ns0:ref type='bibr' target='#b31'>Cox and Wermuth, 1992;</ns0:ref><ns0:ref type='bibr' target='#b110'>Zhang, 2017;</ns0:ref><ns0:ref type='bibr' target='#b76'>Nakagawa et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b70'>Menard, 2000)</ns0:ref> and to the proposal of ad-hoc variations <ns0:ref type='bibr' target='#b108'>(Young, 2000;</ns0:ref><ns0:ref type='bibr' target='#b87'>Renaud and Victoria-Feser, 2010;</ns0:ref><ns0:ref type='bibr' target='#b64'>Lee et al., 2012)</ns0:ref>.</ns0:p><ns0:p>Parallel to the model explanation expressed as the variance, another widely adopted family of measures evaluates the quality of fit in terms of distance of the regressor to the actual training points. The two basic members of such a family are the mean absolute error (MAE) <ns0:ref type='bibr' target='#b89'>(Sammut and Webb, 2010a)</ns0:ref> and the mean squared error (MSE) <ns0:ref type='bibr' target='#b90'>(Sammut and Webb, 2010b)</ns0:ref>, whose difference lies in the evaluating metric, respectively linear L 1 or quadratic L 2 . Once more, the available references are numerous, related to both theoretical <ns0:ref type='bibr' target='#b34'>(David and Sukhatme, 1974;</ns0:ref><ns0:ref type='bibr' target='#b83'>Rao, 1980;</ns0:ref><ns0:ref type='bibr' target='#b96'>So et al., 2013)</ns0:ref> and applicative aspects <ns0:ref type='bibr' target='#b0'>(Allen, 1971;</ns0:ref><ns0:ref type='bibr' target='#b41'>Farebrother, 1976;</ns0:ref><ns0:ref type='bibr' target='#b45'>Gilroy et al., 1990;</ns0:ref><ns0:ref type='bibr' target='#b57'>Imbens et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b60'>Köksoy, 2006;</ns0:ref><ns0:ref type='bibr' target='#b91'>Sarbishei and Radecka, 2011)</ns0:ref>.</ns0:p><ns0:p>As a natural derivation, the square root of mean square error (RMSE) has been widely adopted <ns0:ref type='bibr' target='#b77'>(Nevitt and Hancock, 2000;</ns0:ref><ns0:ref type='bibr' target='#b50'>Hancock and Freeman, 2001;</ns0:ref><ns0:ref type='bibr' target='#b4'>Applegate et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b59'>Kelley and Lai, 2011)</ns0:ref> to standardize the units of measure of MSE. The different type of regularization imposed by the intrinsic metrics reflects on the relative effectiveness of the measure according to the data structure. 
In particular, as a rule of thumb, MSE is more sensitive to outliers than MAE; in addition to this general note, several further considerations can be drawn to help researchers choose the more suitable metric for evaluating a regression model, given the available data and the target task <ns0:ref type='bibr' target='#b19'>(Chai and Draxler, 2014;</ns0:ref><ns0:ref type='bibr' target='#b105'>Willmott and Matsuura, 2005;</ns0:ref><ns0:ref type='bibr' target='#b104'>Wang and Lu, 2018)</ns0:ref>. Within the same family of measures, the mean absolute percentage error (MAPE) <ns0:ref type='bibr' target='#b37'>(de Myttenaere et al., 2016)</ns0:ref> focuses on the percentage error, being thus the elective metric when relative variations have a higher impact on the regression task than the absolute values. However, MAPE is heavily biased towards low forecasts, making it unsuitable for evaluating tasks where large errors are expected <ns0:ref type='bibr' target='#b6'>(Armstrong and Collopy, 1992;</ns0:ref><ns0:ref type='bibr' target='#b86'>Ren and Glasure, 2009;</ns0:ref><ns0:ref type='bibr' target='#b36'>De Myttenaere et al., 2015)</ns0:ref>. Last but not least, the symmetric mean absolute percentage error (SMAPE) <ns0:ref type='bibr' target='#b5'>(Armstrong, 1985;</ns0:ref><ns0:ref type='bibr' target='#b42'>Flores, 1986;</ns0:ref><ns0:ref type='bibr' target='#b67'>Makridakis, 1993)</ns0:ref> is a recent metric originally proposed to solve some of the issues related to MAPE. Although an agreement on its optimal mathematical expression has not yet been reached <ns0:ref type='bibr' target='#b68'>(Makridakis and Hibon, 2000;</ns0:ref><ns0:ref type='bibr' target='#b55'>Hyndman and Koehler, 2006;</ns0:ref><ns0:ref type='bibr' target='#b54'>Hyndman, 2014;</ns0:ref><ns0:ref type='bibr' target='#b24'>Chen et al., 2017)</ns0:ref>, SMAPE is progressively gaining momentum in the machine learning community due to its interesting properties <ns0:ref type='bibr' target='#b66'>(Maiseli, 2019;</ns0:ref><ns0:ref type='bibr' target='#b61'>Kreinovich et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b48'>Goodwin and Lawton, 1999)</ns0:ref>. An interesting discrimination among the aforementioned metrics can be formulated in terms of their</ns0:p></ns0:div>
<ns0:div><ns0:p>output range. The coefficient of determination is upper bounded by the value 1, attained for perfect fit; while R 2 is not lower bounded, the value 0 corresponds to (small perturbations of) the trivial fit provided by the horizontal line y = K, for K the mean of the target value of all the training points. Since all negative values for R 2 indicate a worse fit than the average line, nothing is lost by considering the unit interval as the meaningful range for R 2 . As a consequence, the coefficient of determination is invariant for linear transformations of the independent variables' distribution, and an output value close to one yields a good prediction regardless of the scale on which such variables are measured <ns0:ref type='bibr' target='#b85'>(Reeves, 2021)</ns0:ref>. Similarly, SMAPE values are also bounded, with the lower bound 0% implying a perfect fit, and the upper bound 200% reached when all the predictions and the actual target values are of opposite sign. Conversely, the outputs of MAE, MSE, RMSE and MAPE span the whole positive branch of the real line, with lower limit zero implying a perfect fit, and values progressively and infinitely growing for worse performing models. By definition, these values are heavily dependent on the describing variables' ranges, making them incomparable both mutually and within the same metric: a given output value for a metric has no interpretable relation with a similar value for a different measure, and even the same value for the same metric can reflect deeply different model performance for two distinct tasks <ns0:ref type='bibr' target='#b85'>(Reeves, 2021)</ns0:ref>. Such property cannot be changed even by projecting the output into a bounded range through a suitable transformation (for example, arctangent or rational function). Given these interpretability issues, here we concentrate our comparative analysis on R 2 and SMAPE, both providing a high score only if the majority of the ground truth training points has been correctly predicted by the regressor. By showing the behaviour of these two metrics in several use cases and in two biomedical scenarios on two datasets made of electronic health records, we demonstrate that the coefficient of determination is superior to SMAPE in terms of effectiveness and informativeness, thus being the recommended general performance measure to be used in evaluating regression analyses.</ns0:p><ns0:p>The manuscript organization proceeds as follows. After this Introduction, in the Methods section we introduce the cited metrics, with their mathematical definition and their main properties, and we provide a more detailed description of R 2 and SMAPE and their extreme values (section 2). In the following section Results and Discussion, we present the experimental part (section 3). First, we describe five synthetic use cases, then we introduce and detail the Lichtinghagen dataset and the Palechor dataset of electronic health records, together with the different applied regression models and the corresponding results. We complete that section with a discussion of the implications of all the obtained outcomes. In the Conclusions section, we draw some final considerations and future developments (section 4).</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>METHODS</ns0:head><ns0:p>In this section, we first introduce the mathematical background of the analyzed rates (subsection 2.1), then report some relevant information about the coefficient of determination and SMAPE (subsection 2.2).</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1'>Mathematical background</ns0:head><ns0:p>In the following formulas, X_i is the predicted i-th value, and Y_i is the actual i-th value. The regression method predicts the X_i element for the corresponding Y_i element of the ground truth dataset. Define two constants: the mean of the true values</ns0:p><ns0:formula xml:id='formula_0'>\bar{Y} = \frac{1}{m} \sum_{i=1}^{m} Y_i<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>and the mean total sum of squares</ns0:p><ns0:formula xml:id='formula_1'>\mathrm{MST} = \frac{1}{m} \sum_{i=1}^{m} (Y_i - \bar{Y})^2<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>Coefficient of determination (R 2 or R-squared)</ns0:p><ns0:formula>R^2 = 1 - \frac{\sum_{i=1}^{m} (X_i - Y_i)^2}{\sum_{i=1}^{m} (\bar{Y} - Y_i)^2}<ns0:label>(3)</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:p>(best value = +1; worst value = −∞)</ns0:p><ns0:p>The coefficient of determination <ns0:ref type='bibr' target='#b106'>(Wright, 1921)</ns0:ref> can be interpreted as the proportion of the variance in the dependent variable that is predictable from the independent variables.</ns0:p></ns0:div>
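<ns0:p>As a small self-contained illustration of Equations 1-3 (our own sketch; the toy vectors below are arbitrary), the coefficient of determination can be computed in R as follows.</ns0:p>
# Our illustrative sketch of Equations 1-3 on arbitrary toy vectors.
Y <- c(2, 4, 5, 4, 5)                    # ground truth values Y_i
X <- c(2.8, 3.4, 4.0, 4.6, 5.2)          # predicted values X_i
Y_bar <- mean(Y)                         # Equation 1
MST   <- mean((Y - Y_bar)^2)             # Equation 2
R2    <- 1 - sum((X - Y)^2) / sum((Y_bar - Y)^2)   # Equation 3
R2                                       # 0.6: 60% of the variance explained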
<ns0:div><ns0:head>Mean square error (MSE)</ns0:head><ns0:formula xml:id='formula_2'>\mathrm{MSE} = \frac{1}{m} \sum_{i=1}^{m} (X_i - Y_i)^2<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>(best value = 0; worst value = +∞)</ns0:p><ns0:p>MSE can be used if there are outliers that need to be detected. In fact, MSE is great for attributing larger weights to such points, thanks to the L 2 norm: clearly, if the model outputs a single very bad prediction, the squaring part of the function magnifies the error.</ns0:p><ns0:p>Since R 2 = 1 − MSE / MST and since MST is fixed for the data at hand, R 2 is monotonically related to MSE (a negative monotonic relationship), which implies that an ordering of regression models based on R 2 will be identical (although in reverse order) to an ordering of models based on MSE or RMSE.</ns0:p><ns0:p>Root mean square error (RMSE)</ns0:p><ns0:formula xml:id='formula_3'>\mathrm{RMSE} = \sqrt{\frac{1}{m} \sum_{i=1}^{m} (X_i - Y_i)^2}<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>(best value = 0; worst value = +∞)</ns0:p><ns0:p>The two quantities MSE and RMSE are monotonically related (through the square root). An ordering of regression models based on MSE will be identical to an ordering of models based on RMSE.</ns0:p></ns0:div>
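<ns0:p>The monotonic relation above can be checked numerically; the following is our own sketch on the same toy vectors as before.</ns0:p>
# Our sketch verifying R^2 = 1 - MSE/MST on toy vectors.
Y <- c(2, 4, 5, 4, 5); X <- c(2.8, 3.4, 4.0, 4.6, 5.2)
MSE  <- mean((X - Y)^2)           # 0.48
RMSE <- sqrt(MSE)                 # Equation 5, monotonic in MSE
MST  <- mean((Y - mean(Y))^2)     # 1.2
1 - MSE / MST                     # 0.6, identical to Equation 3
# Hence sorting models by increasing MSE (or RMSE) is the same
# as sorting them by decreasing R^2.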
<ns0:div><ns0:head>Mean absolute error (MAE)</ns0:head><ns0:formula xml:id='formula_4'>\mathrm{MAE} = \frac{1}{m} \sum_{i=1}^{m} |X_i - Y_i|<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>(best value = 0; worst value = +∞)</ns0:p><ns0:p>MAE can be used if outliers represent corrupted parts of the data. In fact, MAE does not penalize the training outliers too heavily (the L 1 norm somehow smooths out the errors of possible outliers), thus providing a generic and bounded performance measure for the model. On the other hand, if the test set also has many outliers, the model performance will be mediocre.</ns0:p></ns0:div>
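<ns0:p>The different outlier sensitivity of MAE and MSE can be seen with a toy experiment (our own sketch): a single corrupted prediction inflates MSE far more than MAE.</ns0:p>
# Our sketch: one outlier prediction inflates MSE far more than MAE.
Y     <- rep(10, 10)               # ten identical ground truth values
X_ok  <- Y + 0.5                   # uniformly small errors
X_out <- c(Y[1:9] + 0.5, 60)       # same errors plus one outlier
mean(abs(X_ok - Y));  mean((X_ok - Y)^2)    # MAE = 0.5,  MSE = 0.25
mean(abs(X_out - Y)); mean((X_out - Y)^2)   # MAE = 5.45, MSE = 250.225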
<ns0:div><ns0:head>Mean absolute percentage error (MAPE)</ns0:head><ns0:formula xml:id='formula_5'>\mathrm{MAPE} = \frac{1}{m} \sum_{i=1}^{m} \left| \frac{Y_i - X_i}{Y_i} \right|<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>(best value = 0; worst value = +∞)</ns0:p><ns0:p>MAPE is another performance metric for regression models, having a very intuitive interpretation in terms of relative error: due to its definition, its use is recommended in tasks where it is more important to be sensitive to relative variations than to absolute variations <ns0:ref type='bibr' target='#b37'>(de Myttenaere et al., 2016)</ns0:ref>. However, it has a number of drawbacks, too, the most critical ones being the restriction of its use to strictly positive data by definition and its bias towards low forecasts, which makes it unsuitable for predictive models where large errors are expected <ns0:ref type='bibr' target='#b6'>(Armstrong and Collopy, 1992)</ns0:ref>.</ns0:p></ns0:div>
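<ns0:p>The bias towards low forecasts can be illustrated with a one-line computation (our own sketch): for a non-negative forecast, an under-forecast can never cost more than 100%, while an over-forecast is unbounded, so minimizing MAPE rewards systematically low predictions.</ns0:p>
# Our sketch of the low-forecast bias of MAPE (Equation 7).
mape <- function(X, Y) mean(abs((Y - X) / Y))
Y <- 100                  # actual value
mape(1,   Y)              # forecast far too low:  0.99 (capped below 1)
mape(300, Y)              # forecast far too high: 2.00 (no upper bound)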
<ns0:div><ns0:head>Symmetric mean absolute percentage error (SMAPE)</ns0:head><ns0:formula xml:id='formula_6'>\mathrm{SMAPE} = \frac{100\%}{m} \sum_{i=1}^{m} \frac{|X_i - Y_i|}{(|X_i| + |Y_i|)/2}<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>(best value = 0%; worst value = 200%)</ns0:p></ns0:div>
<ns0:div><ns0:p>Initially defined by <ns0:ref type='bibr' target='#b5'>Armstrong (1985)</ns0:ref>, and then refined in its current version by <ns0:ref type='bibr' target='#b42'>Flores (1986)</ns0:ref> and <ns0:ref type='bibr' target='#b67'>Makridakis (1993)</ns0:ref>, SMAPE was proposed to amend the drawbacks of the MAPE metric. However, there is little consensus on a definitive formula for SMAPE, and different authors keep using slightly different versions <ns0:ref type='bibr' target='#b54'>(Hyndman, 2014)</ns0:ref>. The original SMAPE formula defines the maximum value as 200%, which is computationally equivalent to 2. In this manuscript, we are going to use the first value for formal passages, and the second value for numeric calculations.</ns0:p><ns0:p>Informativeness The rates RMSE, MAE, MSE and SMAPE have value 0 if the linear regression model fits the data perfectly, and positive value if the fit is less than perfect. Furthermore, the coefficient of determination has value 1 if the linear regression model fits the data perfectly (that means if MSE = 0), value 0 if MSE = MST, and negative value if the mean squared error, MSE, is greater than the mean total sum of squares, MST.</ns0:p><ns0:p>Even without digging into the mathematical properties of the aforementioned statistical rates, it is clear that it is difficult to interpret sole values of MSE, RMSE, MAE, and MAPE, since they have +∞ as upper bound. An MSE = 0.7, for example, does not say much about the overall quality of a regression model: the value could correspond to an excellent regression model or to a poor one. We cannot know unless the maximum MSE value for the regression task is provided or the distribution of all the ground truth values is known. The same concept is valid for the other rates having +∞ as upper bound, such as RMSE, MAE, and MAPE. The only two regression scores that have strict value ranges are the non-negative R-squared and SMAPE. R-squared can have negative values, which mean that the regression performed poorly. R-squared has value 0 when the regression model explains none of the variability of the response data around its mean (Minitab Blog Editor, 2013). The positive values of the coefficient of determination range in the [0, 1] interval, with 1 meaning perfect prediction. On the other side, the values of SMAPE range in the [0, 2] interval, with 0 meaning perfect prediction and 2 meaning the worst prediction possible. This is the main advantage of the coefficient of determination and SMAPE over RMSE, MSE, MAE, and MAPE: values like R 2 = 0.8 and SMAPE = 0.1, for example, clearly indicate a very good regression model performance, regardless of the ranges of the ground truth values and their distributions. A value of RMSE, MSE, MAE, or MAPE equal to 0.7, instead, fails to inform us about the quality of the regression performed. This property of R-squared and SMAPE can be useful in particular when one needs to compare the predictive performance of a regression on two different datasets having different value scales. For example, suppose we have a mental health study describing a predictive model where the outcome is a depression scale ranging from 0 to 100, and another study using a different depression scale, ranging from 0 to 10 (Reeves, 2021). Using R-squared or SMAPE we could compare the predictive performance of the two studies without making additional transformations. The same comparison would be impossible with RMSE, MSE, MAE, or MAPE.</ns0:p><ns0:p>Given the better robustness of R-squared and SMAPE over the other four rates, we focus the rest of this article on the comparison between these two statistics.</ns0:p></ns0:div>
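<ns0:p>The cross-scale comparison just described can be reproduced with a short computation (our own sketch; the depression-like scores below are arbitrary toy values): rescaling both ground truth and predictions leaves R-squared and SMAPE unchanged, while MSE changes with the scale.</ns0:p>
# Our sketch: R-squared and SMAPE are invariant under a common rescaling,
# while MSE is not (the same holds for RMSE and MAE).
r2    <- function(X, Y) 1 - sum((X - Y)^2) / sum((Y - mean(Y))^2)
smape <- function(X, Y) mean(abs(X - Y) / ((abs(X) + abs(Y)) / 2))
Y1 <- c(2, 5, 7, 3, 9);  X1 <- c(3, 4, 6, 3, 8)    # scale 0-10
Y2 <- 10 * Y1;           X2 <- 10 * X1             # scale 0-100
c(r2(X1, Y1),    r2(X2, Y2))                       # identical
c(smape(X1, Y1), smape(X2, Y2))                    # identical
c(mean((X1 - Y1)^2), mean((X2 - Y2)^2))            # differ by a factor of 100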
<ns0:div><ns0:head n='2.2'>R-squared and SMAPE</ns0:head><ns0:p>R-squared The coefficient of determination can take values in the range (−∞, 1] according to the mutual relation between the ground truth and the prediction model. Hereafter we report a brief overview of the principal cases.</ns0:p><ns0:p>R 2 ≥ 0: With unconstrained linear regression, R 2 is non-negative and corresponds to the square of the multiple correlation coefficient.</ns0:p><ns0:p>R 2 = 0: The fitted line (or hyperplane) is horizontal. With two numerical variables this is the case if the variables are independent, that is, uncorrelated. Since R 2 = 1 − MSE / MST, the relation R 2 = 0 is equivalent to MSE = MST, or, equivalently, to:</ns0:p><ns0:formula xml:id='formula_7'>\sum_{i=1}^{m} (Y_i - \bar{Y})^2 = \sum_{i=1}^{m} (Y_i - X_i)^2<ns0:label>(9)</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:p>Now, Equation 9 has the obvious solution X_i = Ȳ for 1 ≤ i ≤ m, but, being just one quadratic equation with m unknowns X_i, it has infinitely many solutions, where X_i = Ȳ ± ε_i for a small ε_i, as shown in the following example:</ns0:p><ns0:p>• {Y_i : 1 ≤ i ≤ 10} = {90.317571, 40.336481, 5.619065, 44.529437, 71.192687, 32.036909, 6.977097, 66.425010, 95.971166, 5.756337}</ns0:p><ns0:p>• Ȳ = 45.91618</ns0:p><ns0:p>• {X_i : 1 ≤ i ≤ 10} = {45.02545, 43.75556, 41.18064, 42.09511, 44.85773, 44.09390, 41.58419, 43.25487, 44.27568, 49.75250}</ns0:p><ns0:p>• MSE = MST = 1051.511</ns0:p><ns0:p>• R 2 ≈ 10^{−8}.</ns0:p><ns0:p>R 2 < 0: This case is only possible with linear regression when either the intercept or the slope is constrained so that the 'best-fit' line (given the constraint) fits worse than a horizontal line, for instance if the regression line (hyperplane) does not follow the data (CrossValidated, 2011b). With nonlinear regression, R-squared can be negative whenever the best-fit model (given the chosen equation, and its constraints, if any) fits the data worse than a horizontal line. Finally, a negative R 2 might also occur when omitting a constant from the equation, that is, forcing the regression line to go through the point (0, 0).</ns0:p><ns0:p>A final note. The behavior of the coefficient of determination is rather independent of the linearity of the regression fitting model: R 2 can be very low even for a completely linear model and, vice versa, a high R 2 can occur even when the model is noticeably non-linear. In particular, a good global R 2 can be split into several local models with low R 2 (CrossValidated, 2011a).</ns0:p><ns0:p>SMAPE By definition, SMAPE values range between 0% and 200%, where the following holds in the two extreme cases:</ns0:p><ns0:p>SMAPE = 0: The best case occurs when SMAPE vanishes, that is, when</ns0:p><ns0:formula xml:id='formula_11'>\frac{100\%}{m} \sum_{i=1}^{m} \frac{|X_i - Y_i|}{(|X_i| + |Y_i|)/2} = 0, \quad \text{equivalent to} \quad \sum_{i=1}^{m} \frac{|X_i - Y_i|}{(|X_i| + |Y_i|)/2} = 0</ns0:formula><ns0:p>and, since the m components are all positive, equivalent to</ns0:p><ns0:formula xml:id='formula_12'>\frac{|X_i - Y_i|}{|X_i| + |Y_i|} = 0 \quad \forall\, 1 \leq i \leq m</ns0:formula><ns0:p>and thus X_i = Y_i, that is, perfect regression.</ns0:p><ns0:p>SMAPE = 2: The worst case SMAPE = 200% occurs instead when</ns0:p><ns0:formula xml:id='formula_13'>\frac{100\%}{m} \sum_{i=1}^{m} \frac{|X_i - Y_i|}{(|X_i| + |Y_i|)/2} = 2, \quad \text{equivalent to} \quad \sum_{i=1}^{m} \frac{|X_i - Y_i|}{|X_i| + |Y_i|} = m</ns0:formula><ns0:p>By the triangle inequality |a + c| ≤ |a| + |c|, computed for b = −c, we have that |a − b| ≤ |a| + |b|, and thus |a − b| / (|a| + |b|) ≤ 1. This yields that SMAPE = 2 if |X_i − Y_i| / (|X_i| + |Y_i|) = 1 for all i = 1, . . . , m.</ns0:p><ns0:p>Summarising, SMAPE reaches its worst value 200% if</ns0:p><ns0:p>• X_i = 0 and Y_i ≠ 0 for all i = 1, . . . , m</ns0:p><ns0:p>• X_i ≠ 0 and Y_i = 0 for all i = 1, . . . , m</ns0:p><ns0:p>• X_i · Y_i < 0 for all i = 1, . . . , m, that is, ground truth and prediction always have opposite sign, regardless of their values.</ns0:p>
<ns0:p>For instance, if the ground truth points are (1, -2, 3, -4, 5, -6, 7, -8, 9, -10), any prediction vector with all opposite signs (for example, (-307.18, 636.16, -469.99, 671.53, -180.55, 838.23, -979.18, 455.16, -8.32, 366.80)) will result in a SMAPE metric reaching 200%.</ns0:p><ns0:p>Having explained the extreme cases of R-squared and SMAPE, in the next section we illustrate some significant, informative use cases where these two rates generate discordant outcomes.</ns0:p></ns0:div>
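<ns0:p>Both extreme cases can be verified numerically; the following is our own sketch using the opposite-sign example above.</ns0:p>
# Our sketch verifying the two SMAPE extremes discussed above.
smape <- function(X, Y) mean(abs(X - Y) / ((abs(X) + abs(Y)) / 2))
Y <- c(1, -2, 3, -4, 5, -6, 7, -8, 9, -10)
X <- c(-307.18, 636.16, -469.99, 671.53, -180.55,
       838.23, -979.18, 455.16, -8.32, 366.80)
smape(Y, Y)   # 0: perfect regression
smape(X, Y)   # 2 (i.e., 200%): every prediction has the opposite sign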
<ns0:div><ns0:head n='3'>RESULTS AND DISCUSSION</ns0:head><ns0:p>In this section, we first report some particular use cases where we compare the results of R-squared and SMAPE (subsection 3.1), and then we describe two real biomedical scenarios where the analyzed regression rates generate different rankings for the methods involved (subsection 3.2).</ns0:p><ns0:p>As mentioned earlier, we exclude MAE, MSE, RMSE, and MAPE from the selection of the best performing regression rate. These statistics range in the [0, +∞) interval, with 0 meaning perfect regression, and their values alone therefore fail to communicate the quality of the regression performance, both in good cases and in bad cases. We know, for example, that a negative coefficient of determination and a SMAPE equal to 1.9 clearly correspond to a regression which performed poorly, but we do not have a specific value for MAE, MSE, RMSE, and MAPE that indicates this outcome. Moreover, as mentioned earlier, each value of MAE, MSE, RMSE, and MAPE communicates the quality of the regression only relative to other regression performances, and not in an absolute manner, like R-squared and SMAPE do. For these reasons, we focus on the coefficient of determination and SMAPE for the rest of our study.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Use cases</ns0:head><ns0:p>We list hereafter a number of example use cases where the coefficient of determination and SMAPE produce divergent outcomes, showing that R 2 is more robust and reliable than SMAPE, especially on poor quality regressions. To simplify the comparison between the two measures, define the complementary normalized SMAPE as:</ns0:p><ns0:formula xml:id='formula_15'>\mathrm{cnSMAPE} = 1 - \frac{\mathrm{SMAPE}}{200\%}<ns0:label>(10)</ns0:label></ns0:formula><ns0:formula xml:id='formula_16'>p_i = \begin{cases} r_i &amp; \text{if } i \not\equiv 1 \pmod{5} \\ r_{5k+1} &amp; \text{for } i = 5k+1,\ k \geq j \\ 0 &amp; \text{for } i = 5k+1,\ 0 \leq k &lt; j \end{cases}<ns0:label>(11)</ns0:label></ns0:formula><ns0:p>so that REAL and PRED j coincide apart from the first j points 1, 6, 11, . . . congruent to 1 modulo 5, which are set to 0. Then, for each 5 ≤ j ≤ 20, compute R 2 and cnSMAPE (Table 1).</ns0:p><ns0:p>Both measures decrease with the increasing number of non-matching points p 5k+1 = 0, but cnSMAPE decreases linearly, while R 2 goes down much faster, better showing the growing unreliability of the predicted regression. At the end of the process, j = 20 points out of 100 are wrong, but still cnSMAPE is as high as 0.80, while R 2 is 0.236, correctly declaring PRED 20 a very weak prediction set.</ns0:p><ns0:p>UC2 Use case In a second example, consider again the same REAL dataset and define the three predicting sets</ns0:p><ns0:formula xml:id='formula_17'>PRED_start = {p^s_i : 1 ≤ i ≤ 100}, with p^s_i = r_i for i ≥ 11 and p^s_i = 0 for i ≤ 10; PRED_middle = {p^m_i : 1 ≤ i ≤ 100}, with p^m_i = r_i for i ≤ 50 and i ≥ 61, and p^m_i = 0 for 51 ≤ i ≤ 60</ns0:formula><ns0:formula xml:id='formula_18'>PRED_end = {p^e_i : 1 ≤ i ≤ 100}, with p^e_i = r_i for i ≤ 90 and p^e_i = 0 for i ≥ 91</ns0:formula><ns0:p>In all the three cases start, middle, end, the predicting set coincides with REAL up to 10 points that are set to zero, at the beginning, in the middle, and at the end of the prediction, respectively. Interestingly, cnSMAPE is 0.9 in all the three cases, showing that SMAPE is sensitive only to the number of non-matching points, and not to the magnitude of the predicting error. R 2 instead correctly decreases when the zeroed sequence of points is further along in the prediction and thus farther away from the actual values: R 2 is 0.995 for PRED start , 0.6293 for PRED middle and -0.0955 for PRED end .</ns0:p><ns0:p>UC3 Use case Consider now as the ground truth the line y = x, and sample the set T including twenty positive integer points</ns0:p><ns0:formula xml:id='formula_19'>T = {t_i = (x_i, y^T_i) = (i, i), 1 ≤ i ≤ 20} on the line. Define REAL = {r_i = (x_i, y^R_i) = (i, i + N(i)), 1 ≤ i ≤ 20}</ns0:formula><ns0:p>as the same points of T with a small amount of noise N(i) on the y axis, so that the r i are close to but not lying on the y = x straight line. Consider now two predicting regression models:</ns0:p><ns0:p>• The set PRED c = T representing the correct model;</ns0:p><ns0:p>• The set PRED w representing the (wrong) model with points defined as p w i = f (x i ), for f the 10-th degree polynomial exactly passing through the points r i for 1 ≤ i ≤ 10.</ns0:p><ns0:p>Clearly, p w i coincides with r i for 1 ≤ i ≤ 10, but p w i − r i becomes very large for i ≥ 11. On the other hand, t i ≠ r i for all i's, but t i − r i is always very small. Compute now the two measures R 2 and cnSMAPE</ns0:p></ns0:div>
<ns0:div><ns0:head>8/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59328:2:0:NEW 10 Jun 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_8'>2</ns0:ref>. UC3 Use case. We define N, correct model, and wrong model in the UC3 Use case paragraph. R 2 : coefficient of determination (Equation <ns0:ref type='formula' target='#formula_1'>3</ns0:ref>). cnSMAPE: complementary normalized SMAPE (Equation <ns0:ref type='formula'>10</ns0:ref>).</ns0:p><ns0:p>on the first N points i = 1, . . . , N for 2 ≤ N ≤ 20 of the two different regression models c and w with respect to the ground truth set REAL (Table <ns0:ref type='table' target='#tab_8'>2</ns0:ref>).</ns0:p><ns0:p>For the correct regression model, both measures are correctly showing good results. For the wrong model, both measures are optimal for the first 10 points, where the prediction exactly matches the actual values; after that, R 2 rapidly decreases supporting the inconsistency of the model, while cnSMAPE is not affected that much, arriving for N = 20 to a value 1/2 as a minimum, even if the model is clearly very bad in prediction.</ns0:p><ns0:p>UC4 Use case Consider the following example: the seven actual values are (1, 1, 1, 1, 1, 2,</ns0:p><ns0:p>3), and the predicted values are (1, 1, 1, 1, 1, 1, 1). From the predicted values, it is clear that the regression method worked very poorly: it predicted 1 for all the seven values.</ns0:p><ns0:p>If we compute the coefficient of determination and SMAPE here, we obtain R-squared = -0.346 and SMAPE = 0.238. The coefficient of determination illustrates that something is completely off, by having a negative value. On the contrary, SMAPE has a very good score, that corresponds to 88.1% correctness in the cnSMAPE scale.</ns0:p><ns0:p>In this use case, if a inexperienced practitioner decided to check only the value of SMAPE to evaluate her/his regression, she/he would be misled and would wrongly believe that the regression went 88.1% correct. If, instead, the practitioner decided to verify the value of R-squared, she/he would be alerted about the poor quality of the regression. As we saw earlier, the regression method predicted 1 for all the seven ground truth elements, so it clearly performed poorly.</ns0:p><ns0:p>UC5 Use case Let us consider now a vector of 5 integer elements having values (1, 2, 3, 4, 5), and a regression prediction made by the variables (a, b, c, d, e). Each of these variables can assume all the integer values between 1 and 5, included. We compute the coefficient of determination and cnSMAPE for each of the predictions with respect to the actual values. To compare the values of the coefficient of determination and cnSMAPE in the same range, we consider only the cases when R-squared is greater or equal to zero, and we call it non-negative R-squared. We reported the results in Figure <ns0:ref type='figure' target='#fig_5'>1</ns0:ref>. <ns0:ref type='formula'>10</ns0:ref>) on the y axis and non-negative R-squared (Equation <ns0:ref type='formula' target='#formula_1'>3</ns0:ref>) on the x axis, obtained in the UC5 Use case. Blue line: regression line generated with the loess smooth method.</ns0:p><ns0:p>As clearly observable in the plot Figure <ns0:ref type='figure' target='#fig_5'>1</ns0:ref>, there are a number of points where cnSMAPE has a high value (between 0.6 and 1) but R-squared had value 0: in these cases, the coefficient of determination and cnSMAPE give discordant outcomes. 
One of these cases, for example, is the regression where the predicted values have values (1, 2, 3, 5, 2), R 2 = 0, and cnSMAPE = 0.89.</ns0:p><ns0:p>In this example, cnSMAPE has a very high value, meaning that the prediction is 89% correct, while R 2 is equal to zero. The regression correctly predicts the first three points (1, 2, 3), but fails to classify the forth element (4 is wrongly predicted as 5), and the fifth element (5 is mistakenly labeled as Faced with this situation, we consider the outcome of the coefficient of determination more reliable and trustworthy: similarly to the Matthews correlation coefficient (MCC) <ns0:ref type='bibr' target='#b69'>(Matthews, 1975)</ns0:ref> in binary classification <ns0:ref type='bibr' target='#b25'>(Chicco and Jurman, 2020;</ns0:ref><ns0:ref type='bibr' target='#b28'>Chicco et al., 2021a;</ns0:ref><ns0:ref type='bibr' target='#b99'>Tötsch and Hoffmann, 2021;</ns0:ref><ns0:ref type='bibr'>Chicco et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b29'>Chicco et al., 2021b)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science classify most of the elements of each class. In this example, the regression fails to classify all the elements of the 4 class and of the 5 class, so we believe a good metric would communicate this key-message.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Medical scenarios</ns0:head><ns0:p>To further investigate the behavior of R-squared, MAE, MAPE, MSE, RMSE, and SMAPE, we employed these rates to a regression analysis applied to two real biomedical applications.</ns0:p></ns0:div>
<ns0:div><ns0:head>Hepatitis dataset</ns0:head><ns0:p>We trained and applied several machine learning regression methods on the Lichtinghagen dataset <ns0:ref type='bibr' target='#b65'>(Lichtinghagen et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b52'>Hoffmann et al., 2018)</ns0:ref> Methods For the regression analysis, we employed the same machine learning methods two of us authors used in a previous analysis <ns0:ref type='bibr'>(Chicco and Jurman, 2021)</ns0:ref>: Linear Regression <ns0:ref type='bibr' target='#b74'>(Montgomery et al., 2021)</ns0:ref>, Decision Trees <ns0:ref type='bibr' target='#b88'>(Rokach and Maimon, 2005)</ns0:ref>, and Random Forests <ns0:ref type='bibr' target='#b16'>(Breiman, 2001)</ns0:ref>, all implemented and executed in the R programming language <ns0:ref type='bibr' target='#b56'>(Ihaka and Gentleman, 1996)</ns0:ref>. For each method execution, we first shuffled the patients data, and then we randomly selected 80% of the data elements for the training set and used the remaining 20% for the test set. We trained each method model on the training set, applied the trained model to the test set, and saved the regression results measured through Rsquared, MAE, MAPE, MSE, RMSE, and SMAPE. For the hepatitis dataset, we imputed the missing data with the Predictive Mean Matching (PMM) approach through the Multiple Imputation by Chained Equations (MICE) method <ns0:ref type='bibr' target='#b18'>(Buuren and Groothuis-Oudshoorn, 2010)</ns0:ref>. We ran 100 executions and reported the results means and the rankings based on the different rates in Table <ns0:ref type='table' target='#tab_9'>3</ns0:ref> (hepatitis dataset) and in Table <ns0:ref type='table' target='#tab_5'>4</ns0:ref> (obesity dataset) .</ns0:p><ns0:p>Hepatitis dataset results: different rate, different ranking We measured the results obtained by these regression models on the Lichtinghagen hepatitis dataset with all the rates analyzed in our study: R 2 , MAE, MAPE, RMSE, MSE, and SMAPE (lower part of Table <ns0:ref type='table' target='#tab_9'>3</ns0:ref>).</ns0:p><ns0:p>These rates generate 3 different rankings. R 2 , MSE, and RMSE share the same ranking (Random Forests, Linear Regression, and Decision Tree). SMAPE and MAPE share the same ranking (Decision Tree, Random Forests, and Linear Regression). MAE has its own ranking (Random Forests, Decision Tree, and Linear Regression).</ns0:p><ns0:p>It is also interesting to notice that these six rates select different methods as top performing method. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_9'>3</ns0:ref>. Regression results on the prediction of hepatitis, cirrhosis, and fibrosis from electronic health records, and corresponding rankings based on rates. We performed the analysis on the Lichtinghagen dataset <ns0:ref type='bibr' target='#b65'>(Lichtinghagen et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b52'>Hoffmann et al., 2018)</ns0:ref> with the methods employed by <ns0:ref type='bibr'>Chicco and Jurman (2021)</ns0:ref>. We report here the average values achieved by each method in 100 executions with 80% randomly chosen data elements used for the training set and the remaining 20% used for the test set. R 2 : worst value −∞ and best value +1. SMAPE: worst value 2 and best value 0. MAE, MAPE, MSE, and RMSE: worst value +∞ and best value 0. We reported the complete regression results including the standard deviations in Table <ns0:ref type='table' target='#tab_7'>S1</ns0:ref>. R 2 formula: Equation 3. MAE formula: Equation <ns0:ref type='formula' target='#formula_4'>6</ns0:ref>. MAPE formula: Equation <ns0:ref type='formula'>7</ns0:ref>. MSE formula: Equation <ns0:ref type='formula' target='#formula_2'>4</ns0:ref>. RMSE formula: Equation <ns0:ref type='formula' target='#formula_3'>5</ns0:ref>. SMAPE formula: Equation <ns0:ref type='formula' target='#formula_6'>8</ns0:ref>. of the values of each ground truth category. Additionally, the fact that the ranking indicated by Rsquared (Random Forests, Linear Regression, and Decision Tree) was the same standing generated by 3 rates out of 6 suggests that it is the most informative one (Table <ns0:ref type='table' target='#tab_9'>3</ns0:ref>).</ns0:p><ns0:p>Hepatitis dataset results: R 2 provides the most informative outcome Another interesting aspect of these results on the hepatitis dataset regards the comparison between coefficient of determination and SMAPE (Table <ns0:ref type='table' target='#tab_9'>3</ns0:ref>). We do not compare the standing of R-squared with MAE, MSE, RMSE, and MAPE because these four rates can have infinite positive values and, as mentioned earlier, this aspect makes it impossible to detect the quality of a regression from a single score of these rates. Tree resulted being the worst model for 3 rates out of 6. This information confirms that the ranking of R-squared is more reliable than the one of SMAPE (Table <ns0:ref type='table' target='#tab_9'>3</ns0:ref>).</ns0:p><ns0:p>Obesity dataset results: agreement between rankings, except for SMAPE Differently from the rankings generated on the hepatitis dataset, the rankings produced on the obesity dataset are more concordant ( Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Linear Regression on the third and last position. All the rates' rankings indicate Random Forests as the top performing method.</ns0:p><ns0:p>The only significant difference can be found in the SMAPE standing: differently from the other rankings that all put Decision Tree as second best regressor and Linear Regression as worst regressor, the SMAPE standing indicates Linear Regression as runner-up and Decision Tree on the last position.</ns0:p><ns0:p>SMAPE, in fact, swaps the positions of these two methods, compared to R-squared and the other rates:</ns0:p><ns0:p>SMAPE says Linear Regression outperformed Decision Tree, while the other rates say that Decision Tree outperformed Linear Regression.</ns0:p><ns0:p>Since five out of six rankings confirm that Decision Tree generated better results than Linear Regression, and only one of six say vice versa, we believe that is clear that the ranking indicated by the coefficient of determination is more informative and trustworthy than the ranking generated by SMAPE.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>CONCLUSIONS</ns0:head><ns0:p>Even if regression analysis makes a big chunk of the whole machine learning and computational statistics domains, no consensus has been reached on a unified prefered rate to evaluate regression analyses yet.</ns0:p><ns0:p>In this study, we compared several statistical rates commonly employed in the scientific literature for regression task evaluation, and described the advantages of R-squared over SMAPE, MAPE, MAE, MSE, and RMSE.</ns0:p><ns0:p>Despite the fact that MAPE, MAE, MSE, and RMSE are commonly used in machine learning studies, we showed that it is impossible to detect the quality of the performance of a regression method by just looking at their singular values. An MAPE of 0.7 alone, for example, fails to communicate if the regression algorithm performed mainly correctly or poorly. This flaw left room only for R 2 and SMAPE. The first one has negative values if the regression performed poorly, and values between 0 and 1 (included) if the regression was good. A positive value of R-squared can be considered similar to percentage of correctness obtained by the regression. SMAPE, instead, has the value 0 as best value for perfect regressions and has the value 2 as worst value for disastrous ones.</ns0:p><ns0:p>In our study, we showed with several use cases and examples that R 2 is more truthful and informative than SMAPE: R-squared, in fact, generates a high score only if the regression correctly predicted most of the ground truth elements for each ground truth group, considering their distribution. SMAPE, instead, focuses on the relative distance between each predicted value and its corresponding ground truth element, without considering their distribution. In the present study SMAPE turned out to perform bad in identifying bad regression models.</ns0:p><ns0:p>A limitation of R 2 arises in the negative space. When R-squared has negative values, it indicates that the model performed poorly but it is impossible to know how bad a model performed. For example, an R-squared equal to -0.5 alone does not say much about the quality of the model, because the lower bound is −∞. Differently from SMAPE that has values between 0 and 2, the minus sign of the coefficient of determination would however clearly inform the practitioner about the poor performance of the regression.</ns0:p><ns0:p>Although regression analysis can be applied to an infinite number of different datasets, with infinite values, we had to limit the present to a selection of cases, for feasibility purposes. The selection of use cases presented here are to some extent limited, since one could consider infinite many other use cases that we could not analyze here. Nevertheless, we did not find any use cases in which SMAPE turned out to be more informative than R-squared. Based on the results of this study and our own experience, R-squared seems to be the most informative rate in many cases, if compared to SMAPE, MAPE, MAE, MSE, and RMSE. We therefore suggest the employment of R-squared as the standard statistical measure to evaluate regression analyses, in any scientific area.</ns0:p><ns0:p>In the future, we plan to compare R 2 with other regression rates such as Huber metric H δ <ns0:ref type='bibr' target='#b53'>(Huber, 1992)</ns0:ref>, LogCosh loss <ns0:ref type='bibr' target='#b103'>(Wang et al., 2020)</ns0:ref>, and Quantile Q γ <ns0:ref type='bibr' target='#b109'>(Yue and Rue, 2011)</ns0:ref>. 
We will also study some variants of the coefficient of determination, such as the adjusted R-squared <ns0:ref type='bibr' target='#b71'>(Miles, 2014)</ns0:ref> and the coefficient of partial determination <ns0:ref type='bibr' target='#b110'>(Zhang, 2017)</ns0:ref>. Moreovere, we will consider the possibility to design a brand new metric for regression analysis evaluation, that could be even more informative than R-squared.</ns0:p></ns0:div>
<ns0:div><ns0:head>12/18</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59328:2:0:NEW 10 Jun 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 1</ns0:note><ns0:p>UC5 Use case: R-squared versus cnSMAPE</ns0:p><ns0:p>Representation plot of the values of cnSMAPE (Equation <ns0:ref type='formula'>10</ns0:ref>) on the y axis and non-negative R-squared (Equation <ns0:ref type='formula' target='#formula_1'>3</ns0:ref>) on the x axis, obtained in the UC5 Use case. Blue line: regression line generated with the loess smooth method. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Table <ns0:ref type='table' target='#tab_9'>3</ns0:ref>. Regression results on the prediction of hepatitis, cirrhosis, and fibrosis from electronic health records, and corresponding rankings based on rates.</ns0:p><ns0:p>We performed the analysis on the Lichtinghagen dataset <ns0:ref type='bibr' target='#b65'>(Lichtinghagen et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b52'>Hoffmann et al., 2018)</ns0:ref> with the methods employed by <ns0:ref type='bibr'>Chicco and Jurman (2021)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc>worst value = −∞; best value = +1) 3/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59328:2:0:NEW 10 Jun 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc>best value = 0; worst value = 2) 4/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59328:2:0:NEW 10 Jun 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>clear that it is difficult to interpret sole values of MSE, RMSE, MAE, and MAPE, since they have +∞ as upper bound. An MSE = 0.7, for example, does not say much about the overall quality of a regression model: the value could mean both an excellent regression model and a poor regression model. We cannot know it unless the maximum MSE value for the regression task is provided or unless the distribution of all the ground truth values is known. The same concept is valid for the other rates having +∞ as upper bound, such as RMSE, MAE, and MAPE. The only two regression scores that have strict real values are the non-negative R-squared and SMAPE. R-squared can have negative values, which mean that the regression performed poorly. R-squared can have value 0 when the regression model explains none of the variability of the response data around its mean (Minitab Blog Editor, 2013). The positive values of the coefficient of determination range in the [0, 1] interval, with 1 meaning perfect prediction. On the other side, the values of SMAPE range in the [0, 2], with 0 meaning perfect prediction and 2 meaning worst prediction possible. This is the main advantage of the coefficient of determination and SMAPE over RMSE, MSE, MAE, and MAPE: values like R 2 = 0.8 and SMAPE = 0.1, for example, clearly indicate a very good regression model performance, regardless of the ranges of the ground truth values and their distributions. A value of RMSE, MSE, MAE, or MAPE equal to 0.7, instead, fails to inform us about the quality of the regression performed.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>1 for all i = 1, . . . , m. Thus we reduced 6/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59328:2:0:NEW 10 Jun 2021) Manuscript to be reviewed Computer Science to compute when ξ (a, b) = |a−b| |a|+|b| = 1: we analyse now all possible cases, also considering the symmetry of the relation with respect to a and b, ξ (a, b) = ξ (b, a). If a = 0, ξ (0, b) = |0−b| |0|+|b| = 1 if b = 0. Now suppose that a, b > 0: ξ (a, a) = 0, so we can suppose a > b, thus a = b + ε, with a, b, ε > 0. Then ξ (a, b) = ξ (b + ε, ε) = ε 2b+ε < 1. Same happens when a, b < 0: thus, if ground truth points and the prediction points have the same sign, SMAPE will never reach its maximum value. Finally, suppose that a and b have opposite sign, for instance a > 0 and b < 0. Then b = −c, for c > 0 and thus ξ (a, b) = ξ (a, −c) = |a+c| |a|+|c| = a+c a+c = 1.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>10) (worst value = 0; best value = 1) Consider the ground truth set REAL = {r i = (i, i) ∈ R 2 , i ∈ N, 1 ≤ i ≤ 100} collecting 100 points with positive integer coordinates on the straight line y = x. Define then the set PRED j = {p i } as</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. UC5 Use case: R-squared versus cnSMAPE. Representation plot of the values of cnSMAPE (Equation10) on the y axis and non-negative R-squared (Equation3) on the x axis, obtained in the UC5 Use case. Blue line: regression line generated with the loess smooth method.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>2).</ns0:head><ns0:label /><ns0:figDesc>The coefficient of determination assigns a bad outcome to this regression because it fails to correctly classify the only members of the 4 and 5 classes. Diversely, SMAPE assigns a good outcome to this prediction because the variance between the actual values and the predicted values is low, in proportion to the overall mean of the values.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>R 2 ,</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>MAE, MSE, and RMSE indicate Random Forests as top performing regression model, while SMAPE and MAPE select Decision Tree for the first position in their rankings. The position of Linear Regression changes, too: on the second rank for R 2 , MSE, and RMSE, while on the last rank for MAE, SMAPE, and MAPE. By comparing all these different standings, a machine learning practitioner could wonder what is the most suitable rate to choose, to understand how the regression experiments actually went and which method outperformed the others. As explained earlier, we suggest the readers to focus on the ranking generated by the coefficient of determination, because it is the only metric that considers the distribution 10/18 PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59328:2:0:NEW 10 Jun 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>R</ns0:head><ns0:label /><ns0:figDesc>-squared indicates a very good result for Random Forests (R 2 = 0.756), and good results for Linear Regression (R 2 = 0.535) and Decision Tree (R 2 = 0.423). On the contrary, SMAPE generates an excellent result for Decision Tree (SMAPE = 0.073), meaning almost perfect prediction, and poor results for Random Forests (SMAPE = 1.808) and Linear Regression (SMAPE = 1.840), very close to the upper bound (SMAPE = 2) representing the worst possible regression.These values mean that the coefficient of determination and SMAPE generate discordant outcomes for these two methods: for R-squared, Random Forests made a very good regression and Decision Tree made a good one; for SMAPE, instead, Random Forests made a catastrophic regression and Decision Tree made an almost perfect one. At this point, a practitioner could wonder which algorithm between Random Forests and Decision Trees made the better regression. Checking the standings of the other rates, we clearly see that Random Forests resulted being the top model for 4 rates out of 6, while Decision</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>UC1 Use case. Values generated through Equation 11. R 2 : coefficient of determination (Equation</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>, which consists of electronic health records of 615 individuals including healthy controls and patients diagnosed with cirrhosis, fibrosis, and hepatitis. This dataset has 13 features, including a numerical variable stating the diagnosis of the patient, and is publicly available in the University of California Irvine Machine Learning Repository (2020).Obesity dataset To further verify the effect of the regression rates, we applied the data mining methods to another medical dataset made of electronic health records of young patients with obesity<ns0:ref type='bibr' target='#b79'>(Palechor and De-La-Hoz-Manotas, 2019;</ns0:ref><ns0:ref type='bibr' target='#b35'>De-La-Hoz-Correa et al., 2019)</ns0:ref>. This dataset is publicly available in the University of California Irvine Machine Learning Repository (2019) too, and contains data of 2,111 individuals, with 17 variables for each of them. A variable called NObeyesdad indicates the obesity level of each subject, and can be employed as a regression target. In this dataset, there are 272 children with insufficient weight (12.88%), 287 children with normal weight (13.6%), 351 children with obesity type</ns0:figDesc><ns0:table><ns0:row><ns0:cell>There are 540 healthy controls (87.8%) and 75 patients diagnosed with hepatitis C (12.2%). Among the</ns0:cell></ns0:row><ns0:row><ns0:cell>75 patients diagnosed with hepatitis C, there are: 24 with only hepatitis C (3.9%); 21 with hepatitis C and</ns0:cell></ns0:row><ns0:row><ns0:cell>liver fibrosis (3.41%); and 30 with hepatitis C, liver fibrosis, and cirrhosis (4.88%)</ns0:cell></ns0:row></ns0:table><ns0:note>I (16.63%), 297 children with obesity type II (14.07%), 324 children with obesity type III (15.35%), 290 children with overweight level I (13.74%), and 290 children with overweight level II (13.74%). The original curators synthetically generated part of this dataset<ns0:ref type='bibr' target='#b79'>(Palechor and De-La-Hoz-Manotas, 2019;</ns0:ref><ns0:ref type='bibr' target='#b35'>De-La-Hoz-Correa et al., 2019)</ns0:ref>.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Regression results on the prediction of obesity level from electronic health records, including standard deviations. Mean values and standard deviations out of 100 executions with 80% randomly chosen data elements used for the training set and the remaining 20% used for the test set. We performed the analysis on the Palechor dataset<ns0:ref type='bibr' target='#b79'>(Palechor and De-La-Hoz-Manotas, 2019;</ns0:ref><ns0:ref type='bibr' target='#b35'>De-La-Hoz-Correa et al., 2019)</ns0:ref> with the methods Linear Regression, Decision Tree, and Random Forests. We report here the average values achieved by each method in 100 executions with 80% randomly chosen data elements used for the training set and the remaining 20% used for the test set. R 2 : worst value −∞ and best value +1. SMAPE: worst value 2 and best value 0. MAE, MAPE, MSE, and RMSE: worst value +∞ and best value 0. We reported the complete regression results including the standard deviations in TableS2. R 2 formula: Equation 3. MAE formula: Equation6. MAPE formula: Equation7. MSE formula: Equation4. RMSE formula: Equation5. SMAPE formula: Equation8. of all the ground truth values, and generates a high score only if the regression correctly predict most</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head /><ns0:label /><ns0:figDesc>Table 4). Actually, the ranking of the coefficient of determination, MSE, RMSE, MAE, and MAPE are identical: Random Forests on the first position, Decision Tree on the second position, and</ns0:figDesc><ns0:table /><ns0:note>11/18PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59328:2:0:NEW 10 Jun 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>UC1 Use case. Values generated through Equation 11. R 2 : coefficient of determination (Equation 3). cnSMAPE: complementary normalized SMAPE (Equation 10). .8230926 − 2. 149735 × 10 02 0.9090909 12 0.9379797 0.8362582 − 1. 309188 × 10 04 0.8333333 13 0.9439415 0.8447007 − 2. 493881 × 10 05 0.7692308 14 0.9475888 0.8518829 − 2. 752456 × 10 06 0.7142857 15 0.9551004 0.8613108 − 2. 276742 × 10 07 0.6666667 16 0.9600758 0.8679611 − 1. 391877 × 10 08 0.6250000 17 0.9622725 0.8740207 − 7. 457966 × 10 08 0.5882353 18 0.9607997 0.8784127 − 3. 425546 × 10 09 0.5555556 19 0.9659541 0.8837482 − 1. 275171 × 10 10 0.5263158 20 0.9635534 0.8870441 − 4. 583919 × 10 10 0.5000000</ns0:figDesc><ns0:table><ns0:row><ns0:cell>j</ns0:cell><ns0:cell>R 2</ns0:cell><ns0:cell>cnSMAPE</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>5 0.9897</ns0:cell><ns0:cell>0.9500</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>6 0.9816</ns0:cell><ns0:cell>0.9400</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>7 0.9701</ns0:cell><ns0:cell>0.9300</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>8 0.9545</ns0:cell><ns0:cell>0.9200</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>9 0.9344</ns0:cell><ns0:cell>0.9100</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>10 0.9090</ns0:cell><ns0:cell>0.9000</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>11 0.8778</ns0:cell><ns0:cell>0.8900</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>12 0.8401</ns0:cell><ns0:cell>0.8800</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>13 0.7955</ns0:cell><ns0:cell>0.8700</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>14 0.7432</ns0:cell><ns0:cell>0.8600</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>15 0.6827</ns0:cell><ns0:cell>0.8500</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>16 0.6134</ns0:cell><ns0:cell>0.8400</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>17 0.5346</ns0:cell><ns0:cell>0.8300</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>18 0.4459</ns0:cell><ns0:cell>0.8200</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>19 0.3465</ns0:cell><ns0:cell>0.8100</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>20 0.2359</ns0:cell><ns0:cell>0.8000</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>UC3 Use case. We define N, correct model, and wrong model in the UC3 Use case paragraph. R 2 : coefficient of determination (Equation3). cnSMAPE: complementary normalized SMAPE (Equation10).</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59328:2:0:NEW 10 Jun 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 3 (on next page)</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head /><ns0:label /><ns0:figDesc>. We report here the average values achieved by each method in 100 executions with 80% randomly chosen data elements used for the training set and the remaining 20% used for the test set. R 2 : worst value −∞ and best value +1. SMAPE: worst value 2 and best value 0. MAE, MAPE, MSE, and RMSE: worst value +∞ and best value 0. We reported the complete regression results including the standard deviations in TableS1. R</ns0:figDesc><ns0:table /><ns0:note>2 formula: Equation 3. MAE formula: Equation 6. MAPE formula: Equation 7. MSE formula: Equation 4. RMSE formula: Equation 5. SMAPE formula: Equation 8 PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59328:2:0:NEW 10 Jun 2021)</ns0:note></ns0:figure>
</ns0:body>
" | "Article title: “The coefficient of determination R-squared is more informative than SMAPE,
MAE, MAPE, MSE, and RMSE in regression analysis evaluation”
Authors: Davide Chicco, Matthijs J. Warrens, Giuseppe Jurman
Email: davidechicco@davidechicco.it
Journal: PeerJ Computer Science
Article ID: #CS-2021:03:59328:1:2:REVIEW
10th June 2021
Dear editor Yilun Shang,
Thanks for having considered our article and having taken care of its review. Your comments
and the reviewers’ comments helped us prepare a new version of the manuscript.
You can find a point-by-point response to every comment raised by you and the editors in blue
in this letter below.
Together with the new version of the article, we also submit a tracked-changes version of it with
the edits in blue as well.
Best regards
-- Davide Chicco
Editor comments (Yilun Shang)
MAJOR REVISIONS
0) The reviewers are generally positive about the manuscript. Please make the suggested
changes and provide a point to point response.
Authors: We thank the editor Yilun Shang for having curated the review of our article; we
addressed all the points reported in here and the points annotated to the manuscript,
and prepared a new version of our article.
Reviewer 1 (Tiago Gonçalves)
Comments for the Authors
1.1) The authors use the terms 'R-squared' and 'coefficient of determination' interchangeably.
Although they are the same it would benefit the reader if only one term is used throughout the
text.
R2: Another different approach to the term 'R-squared'. Please consider homogenising the term
throughout the text.
Authors: We thank the reviewer Tiago Gonçalves for this interesting feedback. After
discussing it between us authors, we decided to keep the usage of the three different
terms for the coefficient of determination throughout the manuscript. We made this
decision to make the article consistent with the scientific literature: the statistics and
computer science literature, in fact, the three terms can be found interchangeably. To
explain this concept, we added a new sentence at lines #62-63.
1.2) “Bad regressions” → 'poor quality' regressions?
Authors: We thank the reviewer Tiago Gonçalves for this comment, with which we agree.
To address it, we change that word at line #269.
Reviewer 2 (anonymous)
Comments for the Authors
2.1) Overall, the article is good and has been improved since the last review round.
The new scenario of children obesity prediction is exactly what I was looking for, and further
illustrates the conclusions drawn by the authors regarding the superiority of R-squared.
Authors: We thank the reviewer for these kind words.
2.2) Once again, it would be interesting to see a deeper and more insightful discussion on
possible steps towards better measures, and I look forward to seeing it in future studies.
Authors: We thank the reviewer for this comment, with which we agree. To address it, we
added a few sentences at the end of the Conclusions section, at lines #457-460, where
we reveal some additional details about the future directions of this study.
" | Here is a paper. Please give your review comments after reading it. |
178 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Research into techniques for effective fake news detection has become both necessary and attractive. These techniques have a background in many research disciplines, including morphological analysis. Several researchers have stated that simple content-related n-grams and POS tagging are insufficient for fake news classification. However, no empirical research results confirming these statements experimentally have been published in the last decade. Considering this contradiction, the main aim of the paper is to experimentally evaluate the potential of the combined use of n-grams and POS tags for the correct classification of fake and true news. A dataset of published fake and real news about the current Covid-19 pandemic was pre-processed using morphological analysis. As a result, n-grams of POS tags were prepared and further analysed. Three techniques based on POS tags were proposed and applied to different groups of n-grams in the pre-processing phase of fake news detection. The n-gram size was examined first. Subsequently, the most suitable depth of the decision trees for sufficient generalisation was determined. Finally, the performance measures of models based on the proposed techniques were compared with the standardised reference TF-IDF technique.</ns0:p><ns0:p>The performance measures of the models, namely accuracy, precision, recall and F1-score, are considered together with the 10-fold cross-validation technique. Simultaneously, the question of whether the TF-IDF technique can be improved using POS tags was researched in detail. The results showed that the newly proposed techniques are comparable with the traditional TF-IDF technique. At the same time, it can be stated that morphological analysis can improve the baseline TF-IDF technique. As a result, the performance measures of the model, precision for fake news and recall for real news, were statistically significantly improved.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Fake News is currently the biggest bugbear of the developed world <ns0:ref type='bibr' target='#b13'>(Jang et al., 2018)</ns0:ref>. Although the spreading of false information or false messages for personal or political benefit is certainly nothing new, current trends such as social media enable every individual to create false information more easily than ever before <ns0:ref type='bibr' target='#b2'>(Allcott & Gentzkow, 2017)</ns0:ref>. The article deals with evaluating four proposed techniques for fake and true news classification using morphological analysis. Morphological analysis is one of the basic means of natural language processing research. It deals with part-of-speech tags (POS tags) as morphological characteristics of a word in context, which can be considered a style-based fake news detection method <ns0:ref type='bibr' target='#b38'>(Zafarani et al., 2019)</ns0:ref>. Linguistic-based features are extracted from the text content at different levels of document organisation, such as characters, words, sentences and documents. Sentence-level features refer to all the important attributes that are based on the sentence scale. They include part-of-speech (POS) tagging, the average sentence length, the average length of a tweet/post, the frequency of punctuation, function words and phrases in a sentence, the average polarity of the sentence (positive, neutral or negative), as well as the sentence complexity <ns0:ref type='bibr' target='#b16'>(Khan et al., 2019)</ns0:ref>. Existing research articles mainly investigate standard linguistic features, including lexical, syntactic, semantic and discourse features, to capture the intrinsic properties of misinformation. Syntactic features can be divided into shallow features, to which the frequency of POS tags and punctuation belongs, and deep syntactic features <ns0:ref type='bibr' target='#b8'>(Feng, Banerjee & Choi, 2012)</ns0:ref>. Morphological analysis of POS tags based on n-grams is used in this paper to evaluate its suitability for successful fake news classification. An n-gram is a sequence of N tokens (words). N-grams are also called multi-word expressions or lexical bundles. N-grams can be generated on any attribute, with word and lemma being the most frequently used ones. For example, 'New York' is a 2-gram and 'The Three Musketeers' is a 3-gram. The analysis of n-grams is considered more meaningful than the analysis of the individual words (tokens) that constitute them. Several research articles stated that simple content-related n-grams and POS tagging had been proven insufficient for the classification task <ns0:ref type='bibr' target='#b33'>(Shu et al., 2017)</ns0:ref> <ns0:ref type='bibr' target='#b4'>(Conroy, Rubin & Chen, 2015)</ns0:ref> <ns0:ref type='bibr' target='#b35'>(Su et al., 2020)</ns0:ref>. However, these findings mainly represent the authors' opinions because they did not carry out or publish any empirical research results confirming these statements in the last decade. Considering this contradiction, the main aim of the paper is to experimentally evaluate the potential of the combined use of n-grams and POS tags for the correct classification of fake and true news. Therefore, continuous sequences of n items from a given sample of POS tags (n-grams) were analysed. The techniques based on POS tags were proposed and used in order to meet this aim.
Subsequently, these techniques were compared with the standardised reference TF-IDF technique to evaluate their main performance characteristics. Simultaneously, the question of whether the TF-IDF technique can be improved using POS tags was researched in detail. All techniques have been applied in the pre-processing phase to different groups of n-grams. The resulting datasets have been analysed using decision tree classifiers. The article aims to present and evaluate the proposed techniques for pre-processing the input vectors of a selected classifier. These techniques are based on creating n-grams from POS tags. The research question is whether the proposed techniques are more suitable than the traditional baseline TF-IDF technique or whether these techniques are able to improve the results of the TF-IDF technique. All proposed techniques have been applied to different levels of n-grams. Subsequently, the outcomes of these techniques were used as the input vectors of the decision tree classifier. The following methodology was used to evaluate the suitability of the proposed approach based on n-grams of POS tags:</ns0:p><ns0:p>• Identification of POS tags in the analysed dataset.</ns0:p><ns0:p>• Definition of n-grams (1-grams, 2-grams, 3-grams, 4-grams) from POS tags, where an n-gram represents a sequence of POS tags.</ns0:p><ns0:p>• Calculation of the frequency of occurrence of each n-gram in the documents; in other words, the relative frequency of each n-gram in the examined fake and true news is calculated.</ns0:p><ns0:p>• Definition of the input vectors of the classifiers using the three proposed techniques for POS tags and the reference (control) TF-IDF technique.</ns0:p><ns0:p>• Application of decision tree classifiers and parameter tuning with respect to different tree depths and n-gram lengths.</ns0:p><ns0:p>• Identification and comparison of the decision trees' characteristics, mainly the accuracy, the depth of the trees and the time performance (a minimal illustrative sketch of this pipeline is given below).</ns0:p><ns0:p>The structure of the article is as follows. The current state of the research in the field of fake news identification is summarised in the second section. The datasets of Covid-19 news used in the research are described in the third section. This section also describes the process of n-gram extraction from POS tags. Simultaneously, three POS tag-based techniques are proposed for preparing the input vectors for the decision tree classifiers. Subsequently, the same section discusses the process of decision tree modelling and the importance of finding the most suitable n-gram length and maximum depth. Finally, the statistical evaluation of the performance of the modified techniques based on POS tags for fake news classification is explained in the same section. The most important results, together with an evaluation of the model performance and the time efficiency of the proposed techniques, are summarised in the fourth section. The detailed discussion of the obtained results and the conclusions form the content of the last section of the article.</ns0:p></ns0:div>
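<ns0:p>The methodology above can be illustrated with a short, self-contained Python sketch. It is a minimal sketch only: it assumes that each document has already been reduced to its space-joined sequence of POS tags (tag_docs) with binary labels, the feature construction is simplified to relative n-gram frequencies, and all names and parameter values are illustrative rather than the exact implementation used in this study; scikit-learn is assumed because it is the library used for the reference TF-IDF technique.</ns0:p>
# Minimal sketch of the evaluation pipeline described above (illustrative).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_validate

def evaluate_pos_ngram_model(tag_docs, labels, n=2, max_depth=10):
    # steps 2-4: n-grams over POS tags, relative frequencies, input vectors
    vec = CountVectorizer(lowercase=False, token_pattern=r'\S+',
                          ngram_range=(n, n))
    X = vec.fit_transform(tag_docs).toarray().astype(float)
    X = X / X.sum(axis=1, keepdims=True)  # relative frequency per document
    # step 5: decision tree with a bounded depth for sufficient generalisation
    clf = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
    # step 6: 10-fold cross-validation of several performance measures
    scores = cross_validate(clf, X, labels, cv=10,
                            scoring=('accuracy', 'precision', 'recall', 'f1'))
    return {name: values.mean() for name, values in scores.items()}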
<ns0:div><ns0:head n='2.'>Related Work</ns0:head><ns0:p>There has been no universal definition of fake news. However, Zhou and Zafarani <ns0:ref type='bibr' target='#b40'>(Zhou & Zafarani, 2020)</ns0:ref> define fake news as intentionally false news published by a news outlet. Simultaneously, they explained related terms in detail and discussed the differences between them based on a large set of related publications. The same authors categorised the automatic detection of fake news from four perspectives: knowledge, style, propagation and source. Considering this, the research described in this paper belongs to the style-based fake news detection category, whose methods try to assess news intention <ns0:ref type='bibr' target='#b40'>(Zhou & Zafarani, 2020)</ns0:ref>. According to their definition, fake news style can be defined as a set of quantifiable characteristics (features) that can well represent fake news content and differentiate it from true news content. <ns0:ref type='bibr' target='#b17'>Kumar and Shah (Kumar & Shah, 2018)</ns0:ref> provided a comprehensive review of many facets of fake news distributed over the Internet. They quantified the impact of fake news and characterised the algorithms used to detect and predict it. Moreover, they summarised the current state of the research and the approaches applied in the field of fake news content analysis from the linguistic, semantic and knowledge discovery points of view. However, despite the broad scope of the review, they did not draw conclusions about the overall performance of style-based methods using ML algorithms.</ns0:p><ns0:p>Other contemporary surveys <ns0:ref type='bibr' target='#b40'>(Zhou & Zafarani, 2020;</ns0:ref><ns0:ref type='bibr' target='#b39'>Zhang & Ghorbani, 2020;</ns0:ref><ns0:ref type='bibr' target='#b33'>Shu et al., 2017)</ns0:ref> provide further evidence that the research related to the field of fake news is very intense now, mainly due to its negative consequences for society. The authors analysed various aspects of fake news research and discussed the reasons, creators, resources and methods of its dissemination, as well as the impact and the machine learning algorithms created to detect it effectively. Sharma et al. <ns0:ref type='bibr' target='#b32'>(Sharma et al., 2019)</ns0:ref> also published a comprehensive survey highlighting the technical challenges of fake news. They summarised the characteristic features of news datasets and outlined directions for future research. They discussed existing methods and ML techniques applicable to identifying and mitigating fake news, focusing on the significant advances in each method and their advantages and limitations. They discussed the results of the application of different classification algorithms, including decision trees. They concluded that using n-grams alone cannot entirely capture the finer-grained linguistic information present in the fake news writing style. However, they did not mention the application of these algorithms to a dataset whose items were pre-processed using POS tagging.</ns0:p><ns0:p>Zhang and Ghorbani <ns0:ref type='bibr' target='#b39'>(Zhang & Ghorbani, 2020)</ns0:ref> stated that because online fake reviews and rumours are always compact and information-intensive, their content lengths are often shorter than those of online fake news. As a result, traditional linguistic processing and embedding techniques such as bag-of-words or n-grams are suitable for processing reviews or rumours.
However, they are not powerful enough for extracting the underlying relationships of fake news. For online fake news detection, sophisticated embedding approaches are necessary to capture the key opinion and the sequential semantic order in news content. De Oliveira et al. <ns0:ref type='bibr' target='#b6'>(de Oliveira et al. 2021)</ns0:ref> carried out a literature survey focused on the data pre-processing techniques used in natural language processing, vectorisation, dimensionality reduction, machine learning, and quality assessment of information retrieval. They discuss the role of n-grams and POS tags only partially.</ns0:p><ns0:p>On the other hand, Li et al. <ns0:ref type='bibr' target='#b20'>(Li et al., 2020)</ns0:ref> consider the n-gram approach the most effective linguistic analysis method applied to fake news detection. Apart from word-based features such as n-grams, syntactic features such as POS tags are also exploited to capture the linguistic characteristics of texts.</ns0:p><ns0:p>Stoick <ns0:ref type='bibr' target='#b34'>(Stoick, 2019)</ns0:ref> stated that previous linguistic work suggests that part-of-speech and n-gram frequencies are often different between fake and real articles. He created two models and concluded that some aspects of the fake articles remained readily identifiable, even when the classifier was trained on a limited number of examples. The second model used n-gram frequencies and neural networks, which were trained on n-grams of different lengths. He stated that the accuracy was nearly the same for each n-gram size, which means that some of the same information may be ascertainable across n-grams of different sizes. Ahmed et al. <ns0:ref type='bibr' target='#b1'>(Ahmed, Traore & Saad, 2017)</ns0:ref> further argued that the latest advances in natural language processing (NLP) and deception detection could help to detect deceptive news. They proposed a fake news detection model that analyses n-grams using different feature extraction and ML classification techniques.</ns0:p><ns0:p>The combination of TF-IDF for feature extraction together with an LSVM classifier achieved the highest accuracy. Similarly, Jain (Jain, 2020) extracted linguistic/stylometric features, bag-of-words TF and BOW TF-IDF vectors, and applied various machine learning models, including bagging and boosting methods, to achieve the best accuracy. However, they stated that the lack of available corpora for predictive modelling is an essential limiting factor in designing effective models to detect fake news. Wynne et al. <ns0:ref type='bibr' target='#b37'>(Wynne, 2019)</ns0:ref> investigated two machine learning algorithms using word n-gram and character n-gram analysis. They obtained better results using character n-grams with TF-IDF and a Gradient Boosting Classifier. They did not discuss the pre-processing phase of n-grams, as will be described in this article.</ns0:p><ns0:p>Thorne and Vlachos <ns0:ref type='bibr' target='#b36'>(Thorne & Vlachos, 2018)</ns0:ref> surveyed automated fact-checking research stemming from natural language processing and related disciplines, unifying the task formulations and methodologies across papers and authors. They identified subject-predicate-object triples from small knowledge graphs to fact-check numerical claims. Once the relevant triple had been found, a truth label was computed through a rule-based approach that considered the error between the claimed values and the values retrieved from the graph.
Shu et al. <ns0:ref type='bibr' target='#b33'>(Shu et al., 2017)</ns0:ref> proposed to use linguistic-based features such as total words, characters per word, frequencies of large words, and frequencies of phrases (i.e., n-grams and bag-of-words). They stated that fake content is generated intentionally by malicious online users, so it is challenging to distinguish between fake and true information only by content and linguistic analysis. POS tags were also exploited to capture the linguistic characteristics of the texts. However, several works have found the frequency distribution of POS tags to be closely linked to the genre of the text being considered <ns0:ref type='bibr' target='#b32'>(Sharma et al., 2019)</ns0:ref>. <ns0:ref type='bibr' target='#b27'>Ott et al. (Ott et al., 2011)</ns0:ref> examined this variation in POS tag distribution in spam, intending to find out whether this variation also exists with respect to text veracity. They obtained better classification performance with the n-gram approach but found that the POS tag approach is a strong baseline outperforming the best human judge. Later work has considered more in-depth syntactic features derived from probabilistic context-free grammar (PCFG) trees. They assumed that the approach based only on n-grams is simple and cannot model more complex contextual dependencies in the text. Moreover, syntactic features used alone are less powerful than word-based n-grams, and a naive combination of the two cannot capture their complex interdependence. They concluded that the weights learned by the classifier are mainly in agreement with the findings of existing theories on deceptive writing <ns0:ref type='bibr' target='#b25'>(Ott, Cardie & Hancock, 2013)</ns0:ref>. Some authors, for example, Conroy, Rubin, and Chen <ns0:ref type='bibr' target='#b4'>(Conroy, Rubin & Chen, 2015)</ns0:ref>, have noted that simple content-related n-grams and POS tagging have been proven insufficient for the classification task. However, they did not research n-grams built from POS tags. Instead, they suggested using deep syntax analysis based on Probabilistic Context-Free Grammars (PCFG) to distinguish rule categories (lexicalised, non-lexicalised, parent nodes, etc.), achieving deception detection with 85-91% accuracy. Su et al. <ns0:ref type='bibr' target='#b35'>(Su et al., 2020)</ns0:ref> also stated that simple content-related n-grams and shallow part-of-speech (POS) tagging have proven insufficient for the detection task, often failing to account for important context information. On the other hand, these methods have been proven useful only when combined with more complex analysis methods.</ns0:p><ns0:p>Khan et al. <ns0:ref type='bibr' target='#b16'>(Khan et al., 2019)</ns0:ref> stated that linguistic-based features extracted from the news content alone are meanwhile not sufficient for revealing the in-depth underlying distribution patterns of fake news <ns0:ref type='bibr' target='#b33'>(Shu et al., 2017)</ns0:ref>. Auxiliary features, such as the news author's credibility and the spreading patterns of the news, play more important roles in online fake news prediction.</ns0:p><ns0:p>On the other hand, Qian et al. <ns0:ref type='bibr' target='#b28'>(Qian et al., 2018)</ns0:ref> proposed a similar approach, which is researched further in this paper, based on a convolutional neural network (TCNN) with a user response generator (URG). TCNN captures semantic information from text by representing it at the sentence and word level.
URG learns a generative model of user responses to a text from historical user responses and generates responses to new articles to assist fake news detection. They used POS tags in combination with n-grams as a baseline for comparing the accuracy of the proposed NN-based classification technique. Goldani et al. <ns0:ref type='bibr' target='#b10'>(Goldani, 2021)</ns0:ref> used capsule neural networks in the fake news detection task. They applied different levels of n-grams for feature extraction and subsequently used different embedding models for news items of different lengths. Static word embeddings were used for short news items, whereas non-static word embeddings, which allow incremental uptraining and updating in the training phase, were used for medium-length or long news statements. They did not consider POS tags in the pre-processing phase. Finally, Kapusta et al. <ns0:ref type='bibr'>(Kapusta et al., 2020)</ns0:ref> carried out a morphological analysis of several news datasets. They analysed the morphological tags and compared the differences in their use in fake news and real news articles. They used morphological analysis to classify words into grammatical classes. Each word was assigned a morphological tag, and these tags were thoroughly analysed. The first step consisted of creating groups of related morphological tags. The groups reflected the basic word classes. The authors identified statistically significant differences in the use of word classes. Significant differences were identified for the groups of foreign words, adjectives and nouns in favour of fake news, and for the groups of wh-words, determiners, prepositions and verbs in favour of real news. The third dataset was evaluated separately and was used for verification. As a result, significant differences were identified for the groups of adverbs, verbs and nouns. They concluded that the existence of differences between groups of words is important, and it is evident that morphological tags can be used as input to fake news classifiers.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Dataset</ns0:head><ns0:p>The dataset analysed by Li <ns0:ref type='bibr' target='#b19'>(Li, 2020)</ns0:ref> was used for the evaluation of the proposed techniques. This dataset collects more than 1100 articles (news) and posts from social networks related to Covid-19. It was created in cooperation with the projects Lead Stories, Poynter, FactCheck.org, Snopes and EuVsDisinfo, which monitor, identify and control misleading information. These projects define true news as articles or posts whose truthfulness can be proven and which come from trusted sources. Conversely, all articles and posts that have been evaluated as false and come from known fake news sources intentionally trying to spread misleading information are considered fake news.</ns0:p></ns0:div>
<ns0:div><ns0:head>POS Tags</ns0:head><ns0:p>Morphological tags were assigned to all words of the news in the dataset using the TreeTagger annotation tool. This tool, developed by Schmid <ns0:ref type='bibr' target='#b31'>(Schmid, 1994)</ns0:ref>, annotates words with the tagset called English Penn Treebank. The final English Penn Treebank tagset contains 36 morphological tags. However, considering the aim of the research, the following tags were not included in the further analysis due to their low frequency of occurrence or their inconsistency:</ns0:p><ns0:p>• SYM (symbol),</ns0:p><ns0:p>• LS (list marker).</ns0:p><ns0:p>Therefore, the final number of morphological tags used in the analysis was 33. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> shows the morphological tags divided into groups.</ns0:p></ns0:div>
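<ns0:p>As an illustration, the sketch below annotates a text with Penn Treebank POS tags and filters out the excluded tags. NLTK is used here only as a freely available stand-in that produces the same Penn Treebank tagset; this is an assumption for illustration, since the study itself used TreeTagger.</ns0:p>
# Illustrative sketch: Penn Treebank POS tagging with NLTK as a stand-in
# for TreeTagger (an assumption; the study itself used TreeTagger).
import nltk

nltk.download('punkt', quiet=True)
nltk.download('averaged_perceptron_tagger', quiet=True)

EXCLUDED_TAGS = {'SYM', 'LS'}  # tags excluded from the analysis

def pos_tag_document(text):
    tokens = nltk.word_tokenize(text)
    tags = [tag for _, tag in nltk.pos_tag(tokens)]  # Penn Treebank tags
    return [tag for tag in tags if tag not in EXCLUDED_TAGS]

# prints a list of Penn Treebank tags, e.g. ['DT', 'NNP', ...]
print(pos_tag_document('The Three Musketeers arrived in New York.'))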
<ns0:div><ns0:head>N-grams Extraction from POS Tags</ns0:head><ns0:p>N-grams were extracted from the POS tags in this data pre-processing step. As a result, sequences of n-grams from a given sample of POS tags were created. Since 1-grams and the identified POS tags are identical, the input file with 1-grams used in further research is identical to the file with the identified POS tags. The n-grams for the TF-IDF technique were created in the same way. However, it is important to emphasise that this technique used so-called terms, which represent the lemmas or stems of words.</ns0:p></ns0:div>
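<ns0:p>A possible sketch of this step is shown below: the POS-tag sequence of a document is turned into overlapping n-grams. The function name and the convention of joining tags with an underscore are illustrative assumptions.</ns0:p>
# Illustrative sketch: building n-grams over a document's POS-tag sequence.
def pos_tag_ngrams(pos_tags, n):
    # for n = 1 the result is identical to the list of POS tags itself
    return ['_'.join(pos_tags[i:i + n]) for i in range(len(pos_tags) - n + 1)]

tags = ['DT', 'JJ', 'NN', 'VBZ', 'RB']
print(pos_tag_ngrams(tags, 2))  # ['DT_JJ', 'JJ_NN', 'NN_VBZ', 'VBZ_RB']
print(pos_tag_ngrams(tags, 3))  # ['DT_JJ_NN', 'JJ_NN_VBZ', 'NN_VBZ_RB']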
<ns0:div><ns0:head>The Techniques Used to Pre-process the Input Vectors</ns0:head><ns0:p>The following four techniques have been applied for pre-processing of the input vectors for a selected classifier.</ns0:p></ns0:div>
<ns0:div><ns0:head>Term Frequency - Inverse Document Frequency (TF-IDF) Technique</ns0:head><ns0:p>TF-IDF is a traditional technique used to assess the importance of tokens for one of the documents in a corpus <ns0:ref type='bibr' target='#b30'>(Qin, Xu &amp; Guo, 2016)</ns0:ref>. Because the traditional TF-IDF technique is not specifically designed for large news corpora, it creates a bias: frequent terms highly related to a specific domain are typically identified as noise and therefore receive lower term weights. The TF-IDF weight is composed of two terms: the first computes the normalised term frequency (TF), the second the inverse document frequency (IDF).</ns0:p><ns0:p>Let $t$ be a term/word, $d$ a document, and $w$ any term in the document. Then the frequency of the term/word $t$ in document $d$ is calculated as

$tf(t,d) = \frac{f(t,d)}{f(w,d)}$,

where $f(t,d)$ is the number of occurrences of term $t$ in document $d$ and $f(w,d)$ is the number of all terms in the document. Simultaneously, the number of documents in which a particular term/word occurs is also taken into account in the TF-IDF calculation. This quantity, denoted $idf(t,D)$, represents the inverse document frequency expressed as

$idf(t,D) = \ln\frac{N}{|\{d \in D : t \in d\}| + 1}$,

where $D$ is the corpus of all documents and $N$ is the number of documents in the corpus. The TF-IDF formula can then be written as

$tfidf(t,d,D) = tf(t,d) \times idf(t,D)$.

The $tf$ formula has various variants, such as $\log(tf(t,d))$ or $\log(tf(t,d)+1)$. Similarly, there are several variants of how $idf$ can be calculated <ns0:ref type='bibr' target='#b3'>(Chen, 2017)</ns0:ref>. Considering this fact, the calculation of TF-IDF was realised using the scikit-learn library in Python (https://scikit-learn.org). The TF-IDF technique applied in the following experiment is used as a reference technique for comparing selected characteristics of the new techniques described below. The same dataset was used as input; however, the stop words were removed beforehand in this case.</ns0:p></ns0:div>
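Since the paper reports that the TF-IDF calculation was realised with scikit-learn, a minimal sketch of that step is given below. The toy documents and the use of the built-in English stop-word list are assumptions; note also that scikit-learn implements one of the smoothed idf variants mentioned above rather than the exact formula.

```python
# A sketch of the reference TF-IDF pre-processing with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "officials confirm the new vaccine is safe",    # illustrative real news
    "miracle cure hidden by officials goes viral",  # illustrative fake news
]
vectorizer = TfidfVectorizer(stop_words='english')  # stop words removed
X = vectorizer.fit_transform(docs)                  # sparse doc-term matrix
print(vectorizer.get_feature_names_out())
```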
<ns0:div><ns0:head>POS Frequency (PosF) Technique</ns0:head></ns0:div>
<ns0:div><ns0:p>This technique is an analogy of the Term Frequency technique; however, it works with the frequency of POS tags. Let $pos$ be an identified POS tag, $d$ a document, and $w$ any POS tag identified in the document. Then the frequency of POS tag $pos$ in document $d$ is calculated as

$PosF(pos,d) = \frac{f(pos,d)}{f(w,d)}$,

where $f(pos,d)$ is the number of occurrences of POS tag $pos$ in document $d$ and $f(w,d)$ is the number of all identified POS tags in the document. As a result, PosF expresses the relative frequency of each POS tag within the list of POS tags identified in the document.</ns0:p></ns0:div>
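A minimal sketch of the PosF calculation (our illustration, not the authors' code) follows; it returns the relative frequency of each POS tag within one document's tag sequence.

```python
from collections import Counter

def posf(tag_sequence):
    """Relative frequency of each POS tag in one document."""
    total = len(tag_sequence)
    return {pos: count / total for pos, count in Counter(tag_sequence).items()}

print(posf(['NN', 'VB', 'NN', 'JJ']))  # {'NN': 0.5, 'VB': 0.25, 'JJ': 0.25}
```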
<ns0:div><ns0:head>PosF-IDF Technique</ns0:head><ns0:p>This technique is an analogy of the TF-IDF technique. Similarly to the already introduced PosF technique, it considers the POS tags identified in each document of the analysed dataset based on individual words and sentences. The documents containing only the identified POS tags represented the inputs for the calculation of PosF-IDF. Besides the relative frequency of POS tags in the document, the number of all documents in which a particular POS tag has been identified is also considered.</ns0:p></ns0:div>
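The PosF-IDF weighting is described only verbally above. By direct analogy with the TF-IDF and PosF formulas, a plausible formalisation (our reading, not an equation stated by the authors) is

$PosfIdf(pos,d,D) = PosF(pos,d) \times \ln\frac{N}{|\{d \in D : pos \in d\}| + 1}$,

where $N$ is the number of documents in the corpus $D$.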
<ns0:div><ns0:head>Merged TF-IDF and PosF Technique</ns0:head><ns0:p>This technique was proposed to confirm whether it is possible to improve the traditional TF-IDF technique by using POS tags. Therefore, the following vectors were created for each document:</ns0:p><ns0:p> a TfIdf vector,</ns0:p><ns0:p> a PosF vector, which represents the relative frequencies of POS tags in the document.</ns0:p><ns0:p>The result of applying the merged technique is again a vector, which originates from concatenating the previous two. Both vectors $TfIdf(d)$ and $PosF(d)$, calculated for document $d$ using the TfIdf and PosF techniques described above, are considered:

$TfIdf(d) = (t_1, t_2, \ldots, t_n)$, $PosF(d) = (p_1, p_2, \ldots, p_m)$.

Then the final vector $merge(d)$ for document $d$ calculated by the merge technique is

$merge(d) = (t_1, t_2, \ldots, t_n, p_1, p_2, \ldots, p_m)$.

A set of techniques for pre-processing the input vectors for the selected knowledge discovery classification task was thus created. These techniques can be considered variations of the previous TF-IDF technique, in which the POS tags are taken into account in addition to the original terms. As a result, the four techniques described above represent typical variations, which allow comparing and analysing the basic features of the techniques based on terms and POS tags.</ns0:p></ns0:div>
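A minimal sketch of the merge step is shown below; the toy vectors are assumptions, and scipy's sparse hstack is used because the real TF-IDF matrices are sparse.

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack

tfidf_d = csr_matrix(np.array([[0.12, 0.0, 0.31]]))  # TfIdf(d), n = 3 elements
posf_d = csr_matrix(np.array([[0.50, 0.25, 0.25]]))  # PosF(d), m = 3 elements
merge_d = hstack([tfidf_d, posf_d])                  # merge(d), n + m elements
print(merge_d.toarray())
```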
<ns0:div><ns0:head>Decision Trees Modelling</ns0:head><ns0:p>Several classifiers, such as decision tree classifiers, Bayesian classifiers, k-nearest-neighbour classifiers, case-based reasoning, genetic algorithms, rough sets, and fuzzy logic techniques, were considered. Finally, decision trees were selected to evaluate the suitability of the proposed techniques for calculating the input vectors and to analyse their features. Decision trees not only allow a simple classification of cases, but they also create easily interpretable and understandable classification rules. In other words, they simultaneously represent functional classifiers and a tool for knowledge discovery and understanding. The same approach was partially used in other similar research papers <ns0:ref type='bibr'>(Kapusta et al., 2020;</ns0:ref><ns0:ref type='bibr'>Kapusta, Benko &amp; Munk, 2020)</ns0:ref>.</ns0:p><ns0:p>The attribute selection measures, such as Information Gain, Gain Ratio, and Gini Index <ns0:ref type='bibr' target='#b21'>(Lubinsky, 1995)</ns0:ref>, used while a decision tree is created, are a further important reason why decision trees were finally selected. The best feature is always selected in each step of decision tree development. Moreover, this selection is virtually independent of the number of input attributes. It means that even if a larger number of attributes (elements of the input vector) is supplied to the selected classifier, the accuracy remains unchanged.</ns0:p></ns0:div>
<ns0:div><ns0:head>K-fold validation</ns0:head><ns0:p>Comparing the decision trees created in the realised experiment is based on the essential characteristics of decision trees, such as the number of nodes or leaves. These characteristics define the size of the tree, which should be suitably minimised. Simultaneously, the performance measures of the model, such as accuracy, precision, recall and f1-score, are considered, together with the 10-fold cross-validation technique. K-fold validation was used for the evaluation of the models. It generally results in a less biased model compared to other methods because it ensures that every observation from the original dataset has the chance of appearing in both the training and the test set.</ns0:p></ns0:div>
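A minimal sketch of this evaluation setup is given below; the synthetic data generated by make_classification stands in for the real pre-processed news vectors (an assumption of this example).

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=33, random_state=0)

clf = DecisionTreeClassifier(criterion='gini', splitter='best', random_state=0)
scores = cross_val_score(clf, X, y, cv=10, scoring='accuracy')  # 10-fold CV
print(scores.mean(), scores.std())
```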
<ns0:div><ns0:head>Setting the Most Suitable N-gram Length</ns0:head><ns0:p>All compared techniques for input vector pre-processing required identical conditions. Therefore, the highest value of n in the n-grams was determined as the first step. Most NLP tasks usually work with n = {1,2,3}. Higher values of n (4-grams, 5-grams, etc.) place significant demands on hardware and software, calculation time, and overall performance. On the other hand, the potential contribution of higher n-grams to increasing the accuracy of the created models is limited. Several decision tree models were created to evaluate this consideration. N-grams (1-gram, 2-gram, …, 5-gram) for tokens/words and for POS tags were prepared. Subsequently, the TF-IDF technique was applied to the n-grams of tokens/words, while the PosF and PosfIdf techniques were applied to the n-grams of POS tags. As a result, 15 files with input vectors were created (1-5-grams x 3 techniques). Figure <ns0:ref type='figure'>2</ns0:ref> visualises the individual steps of this process for better clarity. Ten-fold cross-validation led to the creation of ten decision tree models for each pre-processed file (15 files in total). In all cases, accuracy was considered the measure of model performance. Figure <ns0:ref type='figure'>3</ns0:ref> shows a visualisation of all models with different n-gram lengths. The values on the x-axis represent a range of used n-grams. For instance, the n-gram (1,1) means that only unigrams were used. Other ranges of n-grams will be used in the next experiments; for example, the designation (1,4) represents the 1-grams, 2-grams, 3-grams, and 4-grams included together in one input file.</ns0:p><ns0:p>The results show that the accuracy declines with the length of the n-grams, mostly in the case of the TF-IDF technique. Although it was not possible to process longer n-grams (6-grams, 7-grams, etc.) due to the limited time and computational complexity, it can be assumed that their accuracy would decline similarly to the behaviour observed for 5-grams with all applied techniques. Considering the process of decision tree model creation, it is not surprising that joining the n-grams into one input file achieved the highest accuracy, since the most suitable feature is selected during the creation of the decision tree. As a result, all following experiments will work with the file consisting of the joined 1-grams, 2-grams, 3-grams and 4-grams (1,4), as sketched below.</ns0:p></ns0:div>
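The sweep over n-gram ranges can be sketched as follows; the tiny synthetic corpus is an assumption standing in for the pre-processed files, and only the TF-IDF branch is shown (the POS-based techniques were handled analogously).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

docs = ["officials confirm the vaccine was tested and approved for use",
        "secret miracle cure suppressed by the government goes viral online"] * 60
labels = [1, 0] * 60  # 1 = real, 0 = fake (illustrative)

for ngram_range in [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (1, 4)]:
    X = TfidfVectorizer(ngram_range=ngram_range).fit_transform(docs)
    acc = cross_val_score(DecisionTreeClassifier(random_state=0),
                          X, labels, cv=10).mean()
    print(f"n-grams {ngram_range}: mean accuracy = {acc:.3f}")
```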
<ns0:div><ns0:head>Setting the Maximum Depth of the Decision Tree</ns0:head><ns0:p>Overfitting represents a frequent issue. Although the training error decreases by default with the increasing size of the created tree, the test error often increases with the increasing size. As a result, the classification of new cases can be inaccurate. Techniques like pruning or hyperparameter tuning can overcome overfitting. The maximal depth of the decision tree will therefore be analysed to minimise the overfitting issue and to find understandable rules for fake news identification.</ns0:p><ns0:p>As was mentioned earlier, the main aim of the article is to evaluate the most suitable techniques for the preparation of input vectors. Simultaneously, a suitable setting of the parameter max_depth will be evaluated. Complete decision trees for n-grams from tokens/words and POS tags were created to find suitable values of selected characteristics of decision trees (Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>).</ns0:p><ns0:p>The results show that the techniques working with the POS tags produce much smaller input vectors compared to the reference TF-IDF technique. These findings were expected because, while TF-IDF takes all tokens/words, in the case of the PosfIdf and PosF techniques each token/word is assigned to one of the 33 POS tags (Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>). This simplification is also visible in the size of the generated decision tree (depth, node count, number of leaves). The application of the PosfIdf and PosF techniques led to simpler decision trees. However, the maximal depth of the decision tree is the essential characteristic for further considerations. While it equals 30 for TF-IDF, the maximal depth is lower in the case of both remaining techniques. Therefore, decision trees with different depths will be further considered in the main experiment to ensure the same conditions for all compared techniques. The maximal depth will be set to 30.</ns0:p></ns0:div>
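A minimal sketch of the depth sweep follows; the synthetic data is again an assumption, and the point is only the mechanics of tracking cross-validated accuracy per max_depth value.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=33, random_state=0)

# Shallow trees tend to underfit, overly deep trees tend to overfit.
for depth in (2, 6, 10, 20, 30):
    clf = DecisionTreeClassifier(max_depth=depth, random_state=0)
    print(depth, round(cross_val_score(clf, X, y, cv=10).mean(), 3))
```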
<ns0:div><ns0:head>The Methodology of the Main Experiment</ns0:head><ns0:p>The main aim of the following experiment is to evaluate whether it is possible to classify fake news messages using POS tags and to compare the performance of the proposed techniques (PosfIdf, PosF, merge) with the reference TF-IDF technique, which uses tokens/words. The comparison of these four techniques is joined with the following questions:</ns0:p><ns0:p>Q1: What is the most suitable length of the n-grams for these techniques? Q2: How to create models using these techniques to prevent possible overfitting? Q3: How to compare the models with different hyperparameters, which tune the performance of the models?</ns0:p><ns0:p>The first question (Q1) was answered in the section Setting the Most Suitable N-gram Length. As its result, the use of the joined 1-grams, 2-grams, 3-grams and 4-grams (1,4) is the most suitable. The second question (Q2) can be answered by experimenting with the maximum depth hyperparameter of the decision tree classifier. The highest acceptable value of this hyperparameter was found in the section Setting the Maximum Depth of the Decision Tree. The main experiment described later was realised with regard to the last, third question (Q3). The following steps of the methodology will be applied:</ns0:p><ns0:p>1. Identification of POS tags in the dataset.</ns0:p><ns0:p>2. Application of the PosF and PosfIdf input vector preparation techniques on the identified POS tags.</ns0:p><ns0:p>3. Application of the reference TF-IDF technique to create input vectors. This technique uses tokens representing the words modified by the stemming algorithm; simultaneously, the stop words are removed.</ns0:p><ns0:p>4. Joining the PosF and TF-IDF vectors into the merge vector.</ns0:p><ns0:p>5. Iteration over different values of maximal depth (1, …, 30):</ns0:p><ns0:p> Randomised distribution of the input vectors of the PosF, PosfIdf, TfIdf, and merge techniques into training and testing subsets in accordance with the requirements of 10-fold cross-validation.</ns0:p><ns0:p> Calculation of a decision tree for each training subset with the given maximal depth.</ns0:p><ns0:p> Testing the quality of the model's predictions on the testing subset. The following characteristics were established:</ns0:p><ns0:p> prec_fake (precision for group fake),</ns0:p><ns0:p> prec_real (precision for group real),</ns0:p><ns0:p> rec_fake (recall for group fake),</ns0:p><ns0:p> rec_real (recall for group real),</ns0:p><ns0:p> f1-score,</ns0:p><ns0:p> time spent on one iteration.</ns0:p><ns0:p>6. Analysis of the results (evaluation of the models).</ns0:p><ns0:p>The results of steps 1-4 are four input vectors prepared using the four techniques proposed above. The fifth step of the methodology is focused on the evaluation of these four examined techniques. The application of the proposed methodology with 10-fold cross-validation resulted in the creation of 1200 different decision trees (30 max_depth values x 4 techniques x 10-fold validation). In other words, 40 decision trees with 10-fold cross-validation were created for each maximal depth; a condensed sketch of this loop is given after this section. Figure <ns0:ref type='figure'>4</ns0:ref> depicts the individual steps of the methodology of the experiment. The last step of the methodology, the analysis of the results, is described in the section Results. All steps of the methodology were implemented in Python and its libraries. Text processing was realised using the NLTK library (https://www.nltk.org/). The tool TreeTagger <ns0:ref type='bibr' target='#b31'>(Schmid, 1994)</ns0:ref> was used for the identification of POS tags. Finally, the scikit-learn library (https://scikit-learn.org) was used for creating the decision tree models. The Gini impurity function was applied to measure the quality of a split of the decision trees. The strategy used to choose the split at each node was the 'best' split (an alternative is the 'best random split'). Subsequently, the maximum depth of the decision trees was examined to prevent overfitting. Other hyperparameters, besides the minimum number of samples required to split an internal node and the minimum number of samples required to be at a leaf node, were not applied.</ns0:p></ns0:div>
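A condensed sketch of the whole loop (4 techniques x 30 depths x 10-fold validation, i.e. 1200 trees) is given below; the random matrices in `inputs` are placeholders for the four real pre-processed vector sets, which is an assumption of this sketch.

```python
import numpy as np
from sklearn.model_selection import cross_validate
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=220)                  # fake = 0, real = 1
inputs = {name: rng.random((220, 40))             # placeholder vector sets
          for name in ('TfIdf', 'PosfIdf', 'PosF', 'merge')}

scoring = ('accuracy', 'precision', 'recall', 'f1')
results = {}
for name, X in inputs.items():
    for depth in range(1, 31):
        clf = DecisionTreeClassifier(criterion='gini', splitter='best',
                                     max_depth=depth)
        cv = cross_validate(clf, X, y, cv=10, scoring=scoring)
        results[(name, depth)] = {m: cv[f'test_{m}'].mean() for m in scoring}
```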
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Experiment</ns0:head><ns0:p>The quality of the proposed models (TfIdf(1,4), PosfIdf(1,4), PosF(1,4), merge(1,4)) was evaluated using the evaluation measures (prec, rec, f1-sc, prec_fake, rec_fake, prec_real, rec_real), as well as from the time effectivity point of view (time). A comparison of the depths of the complete decision trees (Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>) showed that there is no point in considering a depth greater than 29. Therefore, decision trees with a maximal depth of less than 30 were created in line with the methodology referred to in section 3.8. The evaluation measures (<ns0:ref type='bibr'>Fig 6a</ns0:ref>) increase up to the depth of five, reaching average values smaller than 0.73 (rec < 0.727, f1-sc < 0.725, prec < 0.727). Subsequently, from the depth of six they reach stable values greater than 0.73 (rec > 0.732, f1-sc > 0.731, prec > 0.732) and smaller than 0.75 (rec < 0.742, f1-sc < 0.740, prec < 0.741). As a result, the PosF technique reaches better performance at small values of depth (up to 4) compared to the others. Since the merge technique originates from joining the PosF and TF-IDF techniques, its better results are natural.</ns0:p><ns0:p>The model performance (prec) for the given depths (< 30) reached above-average values from the depth of six (Fig <ns0:ref type='figure'>2b</ns0:ref>). The differences in the model performance measure prec were not statistically significant from the depth of six onwards (p > 0.05). Similar results were also obtained for the measures rec and f1-sc. As a result, the models' performance will be further examined for depths 6-10.</ns0:p><ns0:p>The Kolmogorov-Smirnov test was applied to verify the normality assumption. The examined variables (model x evaluation measure, model x time) have a normal distribution for all levels of the between-groups factor deep (6: max D < 0.326, p > 0.05; 7: max D < 0.247, p > 0.05; 8: max D < 0.230, p > 0.05; 9: max D < 0.298, p > 0.05; 10: max D < 0.265, p > 0.05). The Mauchley sphericity test was consequently applied to verify the covariance matrix sphericity assumption for repeated measures with four levels (TfIdf(1,4), PosfIdf(1,4), PosF(1,4), merge(1,4)), with the following results: prec: W = 0.372, Chi-Square = 43.217, p < 0.001; rec: W = 0.374, Chi-Square = 43.035, p < 0.001; f1-sc: W = 0.375, Chi-Square = 42.885, p < 0.001; prec_fake: W = 0.594, Chi-Square = 22.809, p < 0.001; rec_fake: W = 0.643, Chi-Square = 19.329, p < 0.01; prec_real: W = 0.377, Chi-Square = 42.641, p < 0.001; rec_real: W = 0.716, Chi-Square = 14.599, p < 0.05; time: W = 0.0001, Chi-Square = 413.502, p < 0.001. This test was statistically significant for all examined evaluation measures and time (p < 0.05), which means that the sphericity assumption was violated. If the assumption of covariance matrix sphericity is not met, the Type I error increases <ns0:ref type='bibr' target='#b0'>(Ahmad, 2013)</ns0:ref>, <ns0:ref type='bibr' target='#b11'>(Haverkamp &amp; Beauducel, 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b24'>(Munkova et al., 2020)</ns0:ref>. Therefore, the degrees of freedom of the F-test, $df_1 = (J-1)(I-1)$ and $df_2 = (N-l)(I-1)$, were adjusted using the Greenhouse-Geisser and Huynh-Feldt adjustments (Epsilon):

$adj.df_1 = Epsilon \cdot (J-1)(I-1)$, $adj.df_2 = Epsilon \cdot (N-l)(I-1)$,

where I is the number of levels of the factor model (dependent samples), J is the number of levels of the factor deep (independent samples), and N is the number of cases. As a result, the declared level of significance was maintained. The Bonferroni adjustment was used for the multiple comparisons. This adjustment is usually applied when several dependent and independent samples are compared simultaneously <ns0:ref type='bibr' target='#b18'>(Lee &amp; Lee, 2018)</ns0:ref>, <ns0:ref type='bibr' target='#b9'>(Genç &amp; Soysal, 2018)</ns0:ref>. The Bonferroni adjustment represents the most conservative approach, in which the level of significance (alpha) for a whole set of N cases is set so that the level of significance for each case is equal to $alpha/N$.</ns0:p></ns0:div>
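For clarity, the adjustment mechanics can be sketched numerically; all values below are illustrative assumptions (not the study's numbers), and the symbol l in the document's df2 formula is read here as J, the number of between-group levels.

```python
from scipy.stats import f as f_dist

I, J, N = 4, 5, 50        # models, depth levels, cases (illustrative)
epsilon = 0.60            # e.g. a Greenhouse-Geisser epsilon
F_value = 8.2             # an observed F statistic (illustrative)

adj_df1 = epsilon * (J - 1) * (I - 1)  # adjusted numerator df
adj_df2 = epsilon * (N - J) * (I - 1)  # adjusted denominator df
adj_p = f_dist.sf(F_value, adj_df1, adj_df2)

alpha, n_tests = 0.05, 6
print(adj_p, alpha / n_tests)          # epsilon-adjusted p, Bonferroni alpha
```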
<ns0:div><ns0:head>Model Performance</ns0:head><ns0:p>The first phase of the analysis focused on the performance of the models. The performance was analysed through the selected evaluation measures (prec, rec, f1-sc, prec_fake, rec_fake, prec_real, rec_real) according to the within-group factor, the between-groups factor and their interaction. The models (TfIdf(1,4), PosfIdf(1,4), PosF(1,4), merge(1,4)) represented the levels of the within-group factor. The depths of the decision tree (6-10) represented the levels of the between-groups factor. Considering the violated assumption of covariance matrix sphericity, the modified tests for repeated measures were applied to assess the effectivity of the examined models <ns0:ref type='bibr' target='#b7'>(Dien, 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b23'>(Montoya, 2019)</ns0:ref>. Epsilon represented the degree of violation of this assumption; if Epsilon equals one, the assumption is fulfilled. The values of Epsilon were significantly lower than one in both cases (Epsilon < 0.69). The zero hypotheses claim that there is no statistically significant difference in the quality of the examined models. The zero hypotheses, which claimed that there is no statistically significant difference in the values of the evaluation measures prec, rec and f1-sc between the examined models, were rejected at the 0.001 significance level (prec: G-G Epsilon = 0.597, H-F Epsilon = 0.675, adj.p < 0.001; rec: G-G Epsilon = 0.604, H-F Epsilon = 0.684, adj.p < 0.001; f1-sc: G-G Epsilon = 0.599, H-F Epsilon = 0.678, adj.p < 0.001). On the contrary, the zero hypotheses, which claimed that the performance of the models (prec/rec/f1-sc) does not depend on the combination of the within-group factor and the between-groups factor (model x deep), were not rejected (p > 0.05). The factor deep does not have any impact on the performance of the examined models.</ns0:p><ns0:p>After rejecting the global zero hypotheses, the statistically significant differences between the models in the quality of the models' predictions were researched. Three homogeneous groups were identified based on prec, rec and f1-sc using the multiple comparisons. The PosF(1,4) and TfIdf(1,4) techniques reached the same quality of the model's predictions (p > 0.05). Similar results were obtained for the pair PosfIdf(1,4) and PosF(1,4), as well as for the pair TfIdf(1,4) and merge(1,4). Statistically significant differences in the quality of the model's predictions (Table <ns0:ref type='table'>3</ns0:ref>) were identified between the models merge(1,4) and PosF(1,4) (p < 0.05), as well as between the models TfIdf(1,4) and PosfIdf(1,4) (p < 0.05). The merge(1,4) model reached the highest quality considering the evaluation measures. The values of Epsilon were smaller than one in the case of the partial evaluation measures prec_fake and rec_fake for the fake news; this finding was more notable in the case of the Greenhouse-Geisser correction (Epsilon < 0.78). The zero hypotheses, which claimed that there is no significant difference in the values of the evaluation measures prec_fake and rec_fake between the examined models, were rejected (prec_fake: G-G Epsilon = 0.779, H-F Epsilon = 0.897, adj.p < 0.001; rec_fake: G-G Epsilon = 0.756, H-F Epsilon = 0.869, adj.p < 0.001). The impact of the between-groups factor deep has not been proven (p > 0.05). The performance of the models (prec_fake/rec_fake) does not depend on the interaction of the factors model and deep.
Two homogeneous groups were identified for prec_fake (Table <ns0:ref type='table'>4A</ns0:ref>). PosF(1,4) and TfIdf(1,4), as well as PosF(1,4) and PosfIdf(1,4), reached the same quality of the model's predictions (p > 0.05). The statistically significant differences in the quality of the model's predictions (Table <ns0:ref type='table'>4A</ns0:ref>) were identified between merge(1,4) and the other models (p < 0.05) and between TfIdf(1,4) and PosfIdf(1,4) (p < 0.05). The merge(1,4) model reached the best quality from the prec_fake point of view. On the other hand, the PosF(1,4) model reached the highest quality considering the results of the multiple comparison for rec_fake (Table <ns0:ref type='table'>4B</ns0:ref>). Two homogeneous groups (PosfIdf(1,4), merge(1,4), TfIdf(1,4)) and (merge(1,4), TfIdf(1,4), PosF(1,4)) were identified based on the evaluation measure rec_fake (Table <ns0:ref type='table'>4B</ns0:ref>). The statistically significant differences (Table <ns0:ref type='table'>4B</ns0:ref>) were identified only between the PosF(1,4) and PosfIdf(1,4) models (p < 0.05).</ns0:p><ns0:p>Similarly, the values of Epsilon were smaller than one (G-G Epsilon < 0.85) in the case of the evaluation measures prec_real and rec_real, which evaluate the quality of the prediction for the partial class of real news. The zero hypotheses, which claimed that there is no statistically significant difference in the values of the evaluation measures prec_real and rec_real between the examined models, were rejected at the 0.001 significance level (prec_real: G-G Epsilon = 0.596, H-F Epsilon = 0.675, adj.p < 0.001; rec_real: G-G Epsilon = 0.842, H-F Epsilon = 0.975, adj.p < 0.001). The impact of the between-groups factor deep has not been proven in this case either (p > 0.05). It means that the performance of the models (prec_real/rec_real) does not depend on the interaction of the factors (model x deep). The merge(1,4) model reached the highest quality from the point of view of the evaluation measures prec_real and rec_real (Table <ns0:ref type='table'>5</ns0:ref>). Only one homogeneous group was identified in the multiple comparisons for prec_real (Table <ns0:ref type='table'>5A</ns0:ref>). PosF(1,4), TfIdf(1,4) and merge(1,4) reached the same quality of the model's predictions (p > 0.05). The statistically significant differences in the quality of the model's predictions (Table <ns0:ref type='table'>5A</ns0:ref>) were identified between the PosfIdf(1,4) model and the other models (p < 0.05). Two homogeneous groups (PosfIdf(1,4), PosF(1,4)) and (PosF(1,4), TfIdf(1,4)) were identified from the point of view of the evaluation measure rec_real (Table <ns0:ref type='table'>5B</ns0:ref>). The statistically significant differences (Table <ns0:ref type='table'>5B</ns0:ref>) were identified between the merge(1,4) model and the other models (p < 0.05).</ns0:p></ns0:div>
<ns0:div><ns0:head>Time Efficiency</ns0:head><ns0:p>The time effectivity of the proposed techniques was evaluated in the second phase of the analysis. Time effectivity (time) was analysed in dependence on the within-group factor, the between-groups factor and their interaction. The models represented the examined levels of the within-group factor, and the decision tree depths represented the between-groups factor. The modified tests for repeated measures were again applied to verify the time effectivity of the proposed models. The values of Epsilon were identical and significantly smaller than one for both corrections (Epsilon < 0.34). The zero hypothesis, which claimed that there is no statistically significant difference in time between the examined models, was rejected at the 0.001 significance level (time: G-G Epsilon = 0.336, H-F Epsilon = 0.366, adj.p < 0.001). Similarly, the zero hypothesis, which claimed that the time effectivity (time) does not depend on the interaction between the within-group factor and the between-groups factor (model x deep), was also rejected at the 0.001 significance level. The factor deep has a significant impact on the time effectivity of the examined models. Only one homogeneous group based on time was identified in the multiple comparisons (Table <ns0:ref type='table'>6</ns0:ref>). PosF(1,4) and TfIdf(1,4) reached the same time effectivity (p > 0.05). Statistically significant differences in time (Table <ns0:ref type='table'>6</ns0:ref>) were identified between the merge(1,4) model and the other models (p < 0.05), as well as between the PosfIdf(1,4) model and the other models (p < 0.05). As a result, the PosfIdf(1,4) model can be considered the most time-effective model, while the merge(1,4) model was the least time-effective one. Four homogeneous groups were identified after including the between-groups factor deep (Table <ns0:ref type='table'>7</ns0:ref>). The PosfIdf(1,4) models with depths 6-10 and the TfIdf(1,4) model with depth six have the same time effectivity (p > 0.05). The TfIdf(1,4) and PosF(1,4) models have the same time efficiency for all depths (p > 0.05). The merge(1,4) models with depths 6-8 (p > 0.05) and the merge(1,4) models with depths 7-10 (p > 0.05) were less time-effective.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The paper analysed a unique dataset of freely available fake and true news written in English to evaluate whether the n-grams created from POS tags could be used for reliable fake news classification. Two techniques based on POS tags were proposed and compared with the performance of the reference TF-IDF technique on a given classification task from the natural language processing research field. The results show statistically insignificant differences between the PosF and TF-IDF techniques. These differences were comparable in all observed performance metrics, including accuracy, precision, recall and f1-score. Therefore, it can be concluded that morphological analysis can be applied to fake news classification. Moreover, the descriptive statistics charts show that the TF-IDF technique reaches better results, though the difference is statistically insignificant. It is necessary to note for completeness that statistically significant differences in the observed performance metrics were identified between the morphological technique PosfIdf and TF-IDF. The reason is that the PosfIdf technique combines the relative frequency of POS tags with the inverse document frequency. This dependence on the number of documents in which a POS tag was observed caused the weak results of this technique. This is not surprising, since the selected 33 POS tags were present in almost all documents of the dataset; hence, the number of documents containing each POS tag was very high, which led to very low values of the inverse document frequency. However, the failure of this technique does not diminish the importance of the finding that the applied morphological techniques are comparable with the traditional reference technique TF-IDF. The aim to find a morphological technique better than TF-IDF was fulfilled in the case of the PosF technique. The merged TF-IDF and PosF technique was included in the experiment to determine whether it is possible to improve the reference TF-IDF technique using POS tags. Considering the final performance measures, mainly precision, it can be concluded that they are higher; it means that the applied techniques of morphological analysis could improve the precision of the TF-IDF technique. However, it has not been proven that this improvement is statistically significant. The fact that the reference TF-IDF technique had been favoured in the presented experiment should also be considered: removing the stop words from the input vector of the TF-IDF technique increased the classification accuracy. On the other hand, removing stop words is not suitable for the techniques based on POS tags, because their removal can cause the loss of important information about the n-gram structure. This statement is substantiated by comparing the values of accuracy for the individual n-grams (Fig. <ns0:ref type='figure'>3</ns0:ref>): the PosF technique achieved better results for 2-grams, 3-grams and 4-grams than for unigrams. For the TF-IDF technique, in contrast, the stop words could be removed without such a loss. However, the experiment aimed to compare the performance of the proposed improvements with the best-prepared TF-IDF technique. The time efficiency of the examined techniques was evaluated simultaneously with their performance. The negligible differences between the time efficiency of the TF-IDF and PosF techniques can be considered most surprising.
Although the PosF technique uses only 33 POS tags compared to the large vectors of tokens/words in TF-IDF, the time efficiency is similar. The reason is that the identification of POS tags in the text is more time-consuming than the identification of tokens. On the other hand, the merged technique, which achieved the best performance results, was the most time-consuming. This finding was expected because the calculation of the merged vector requires calculating and joining the TF-IDF and PosF vectors. The compared classification models for fake and true news classification are based on the relative frequencies of the occurrences of the morphological tags. It is not important which morphological tags were identified in the rules (nodes of the decision tree) using the given selection measures. At the same time, the exact border values for the occurrence of morphological tags can also be considered unimportant; the more important fact is that such differences exist and that it is possible to find values of occurrences of morphological tags which allow classifying fake and true news correctly. The realised set of experiments is unique in terms of the proposed pre-processing techniques used to prepare the input vectors for the classifiers. The decision was to use the simplest possible classifiers, namely decision trees, because of their ability to easily interpret the obtained knowledge. In other words, decision trees provide additional information about which POS tags and consequent n-grams are important and characteristic for fake news and which for real news. On the other hand, it should be emphasised that their classification precision is worse than that of other types of classifiers, such as neural networks.</ns0:p><ns0:p>Deepak and Chitturi (2020) used news content for fake news classification. The authors applied a similar approach based on the conversion of content to a vector (Bag of Words, Word2Vec, GloVe). The classification models reached a higher accuracy (0.8335 - 0.9132). However, the authors reached these results using additional secondary features, such as the domain names that published the article under consideration, the authors, etc. These features were mined by search engines using the content of the news as keywords for querying. The data collected in this way were added to the body of the article. The application of neural networks is the second important difference, which led to the higher classification performance. However, as mentioned earlier, the experiment described in this paper was focused on assessing the suitability of n-grams and POS tags for the pre-processing of input vectors. The priority was not to reach the best performance measures. It can be accepted that adding additional secondary features and using neural networks similarly to the described work of <ns0:ref type='bibr' target='#b5'>(Deepak and Chitturi, 2020)</ns0:ref> would lead to similar results. Similarly to the previous experiment, <ns0:ref type='bibr'>Meel and Vishwakarma (2021)</ns0:ref> used visual image features obtained by image captioning and forensic analysis to improve fake news identification. However, the results obtained in that study are comparable with the results presented in this article if these additional features are not taken into account. Moreover, these authors exploited hidden pattern extraction capabilities from text, unlike the morphological analysis used in the research presented in this article.
The selection of a simple machine learning technique, such as decision trees, can be considered a limitation of the research presented in this article. However, the reason why this technique was selected relates to the parallel research, which was focused on finding the most frequent n-grams in fake and real news and their consequent linguistic analysis. This research extends the previous one and tries to determine whether the POS tags and n-grams can be further used for fake news classification. It is possible to assume that the morphological tags can be used as input to fake news classifiers. Moreover, the pre-processed datasets are suitable for other classification techniques, improving the accuracy of fake news classification. It means that whether the relative frequencies of occurrences of the morphological tags are further used as the input layer of a neural network or added to the training dataset of another classifier, the found information can improve the accuracy of that fake news classifier.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>Despite several authors' statements that the morphological characteristics of the text do not allow fake news classification with sufficient accuracy, the realised experiment proved that the selected morphological technique is comparable with the traditional reference technique TF-IDF, widely used in the natural language processing domain. The suitability of the techniques based on morphological analysis has been proven on a contemporary dataset including 1100 labelled real and fake news items about Covid-19.</ns0:p></ns0:div>
<ns0:div><ns0:p>The experiment confirmed the validity of the newly proposed techniques based on the POS tags and n-grams against the traditional TF-IDF technique. The article describes an experiment with a set of pre-processing techniques used to prepare input vectors for a data mining classification task. The overall contribution of the proposed improvements was expressed by the characteristic performance measures of the classification task (accuracy, precision, recall and f1-score). Besides the variables defined by the input vectors, the hyperparameters max_depth and n-gram length were observed. K-fold validation was applied to account for random errors. The global null hypotheses were evaluated using adjusted tests for repeated measures. Subsequently, multiple comparisons with the Bonferroni adjustment were used to compare the models. Various performance measures ensured the robustness of the obtained results. The decision trees were chosen to classify fake news because they create easily understandable and interpretable results compared to other classifiers. Moreover, they allow the generalisation of the inputs. An insufficient generalisation can cause overfitting, which leads to the wrong classification of individual observations of the testing dataset. Different values of the maximal depth parameter were researched to obtain the maximal value of precision. The most suitable value of this parameter was different for each of the proposed techniques. Therefore, the statistical evaluation was realised considering the maximal depth. Although a statistically significant difference has not been found, the proposed techniques based on morphological analysis in combination with the created n-grams are comparable with traditional ones, such as the TF-IDF technique used in this experiment. Moreover, the advantages of the PosF technique can be listed as follows:</ns0:p><ns0:p> A smaller size of the input vectors. The average number of vector elements was 69 775, while in the case of TF-IDF, it was 817 213.3 (Table 2).</ns0:p><ns0:p> A faster creation of the input vector.</ns0:p><ns0:p> A shorter training phase of the model.</ns0:p><ns0:p> A more straightforward and more understandable model. The model based on the PosF technique achieved the best results at smaller values of the maximal decision tree depth.</ns0:p><ns0:p>The possibility of using the proposed techniques based on POS tags for the classification of new, yet untrained fake news datasets is considered the last advantage of the proposed techniques. The reason is that TF-IDF works with the words and counts their frequencies in fake news. However, the traditional classifiers can fail to correctly classify fake news about a new topic because they have not yet been trained on the frequencies of the new words. On the other hand, the PosF technique is more general and focuses on the primary relationships between POS tags, which are probably similar also in the case of new topics of fake news. This assumption will be evaluated in future research. The current most effective fake news classification is based predominantly on neural networks and a weighted combination of techniques, which deal with the news content, social context, credit of a creator/spreader and analysis of target victims. It is clear that decision tree classifiers are not so frequently used. However, this article focused only on a very narrow part of the researched issues, and these classifiers have been used mainly for easier understanding of the problem. Therefore, future work will evaluate the proposed techniques in conjunction with other contemporary fake news classifiers.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Figure 1 demonstrates this process using the sentence from the tenth most viewed fake news story shared on Facebook in 2019. The following POS tags were identified from the sentence 'Democrats Vote To Enhance Med Care for Illegals Now':  NNPS (proper noun, plural),  VBP (verb, sing.
present, non-3d),  TO (to),  VB (verb, base form),  JJ (adjective),  NNP (proper noun, singular),  IN (preposition/subordinating conjunction),  NNS (noun plural),  RB (adverb).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>*** -homogeneous groups (p > 0.05)</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,178.87,525.00,360.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,199.12,525.00,380.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,199.12,525.00,360.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,199.12,525.00,196.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,199.12,525.00,196.50' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 : Morphological tags used for news classification (Schmid, 1994).</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>GTAG</ns0:cell><ns0:cell>POS Tags</ns0:cell></ns0:row><ns0:row><ns0:cell>group C</ns0:cell><ns0:cell>CC (coordinating conjunction), CD (cardinal number)</ns0:cell></ns0:row><ns0:row><ns0:cell>group D</ns0:cell><ns0:cell>DT (determiner)</ns0:cell></ns0:row><ns0:row><ns0:cell>group E</ns0:cell><ns0:cell>EX (existential there)</ns0:cell></ns0:row><ns0:row><ns0:cell>group F</ns0:cell><ns0:cell>FW (foreign word)</ns0:cell></ns0:row><ns0:row><ns0:cell>group I</ns0:cell><ns0:cell>IN (preposition, subordinating conjunction)</ns0:cell></ns0:row><ns0:row><ns0:cell>group J</ns0:cell><ns0:cell>JJ (adjective), JJR (adjective, comparative), JJS (adjective, superlative)</ns0:cell></ns0:row><ns0:row><ns0:cell>group M</ns0:cell><ns0:cell>MD (modal)</ns0:cell></ns0:row><ns0:row><ns0:cell>group N</ns0:cell><ns0:cell>NN (noun, singular or mass), NNS (noun plural), NNP (proper noun, singular), NNPS (proper noun, plural)</ns0:cell></ns0:row><ns0:row><ns0:cell>group P</ns0:cell><ns0:cell>PDT (predeterminer), POS (possessive ending), PP (personal pronoun)</ns0:cell></ns0:row><ns0:row><ns0:cell>group R</ns0:cell><ns0:cell>RB (adverb), RBR (adverb, comparative), RBS (adverb, superlative), RP (particle)</ns0:cell></ns0:row><ns0:row><ns0:cell>group T</ns0:cell><ns0:cell>TO (infinitive 'to')</ns0:cell></ns0:row><ns0:row><ns0:cell>group U</ns0:cell><ns0:cell>UH (interjection)</ns0:cell></ns0:row><ns0:row><ns0:cell>group V</ns0:cell><ns0:cell>VB (verb be, base form), VBD (verb be, past tense), VBG (verb be, gerund/present participle), VBN (verb be, past participle), VBP (verb be, sing. present, non-3d), VBZ (verb be, 3rd person sing. present)</ns0:cell></ns0:row><ns0:row><ns0:cell>group W</ns0:cell><ns0:cell>WDT (wh-determiner), WP (wh-pronoun), WP$ (possessive wh-pronoun), WRB (wh-adverb)</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 : Selected characteristics of the complete decision trees.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell /><ns0:cell>TfIdf(1,4)</ns0:cell><ns0:cell>PosfIdf(1,4)</ns0:cell><ns0:cell>PosF(1,4)</ns0:cell></ns0:row><ns0:row><ns0:cell>average (deep)</ns0:cell><ns0:cell>25.1</ns0:cell><ns0:cell>15.6</ns0:cell><ns0:cell>17.2</ns0:cell></ns0:row><ns0:row><ns0:cell>min (deep)</ns0:cell><ns0:cell>22</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>11</ns0:cell></ns0:row><ns0:row><ns0:cell>max (deep)</ns0:cell><ns0:cell>30</ns0:cell><ns0:cell>19</ns0:cell><ns0:cell>21</ns0:cell></ns0:row><ns0:row><ns0:cell>average (node count)</ns0:cell><ns0:cell>171.4</ns0:cell><ns0:cell>157.4</ns0:cell><ns0:cell>156</ns0:cell></ns0:row><ns0:row><ns0:cell>average (leaf count)</ns0:cell><ns0:cell>86.2</ns0:cell><ns0:cell>79.2</ns0:cell><ns0:cell>78.5</ns0:cell></ns0:row><ns0:row><ns0:cell>average (number of vector elements)</ns0:cell><ns0:cell>817213.3</ns0:cell><ns0:cell>69512.4</ns0:cell><ns0:cell>69775</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "The list of revisions based on reviewers' comments
Dear Editors, Dear Reviewers,
Thank you for your valuable feedback and contribution to improving the article 'Using of N-Grams from Morphological Tags for Fake News Classification'. We have improved our previous research question, research design and discussion based on all reviewers’ recommendations. Please, find below our response to the reviewer’s comments.
Review 1:
The paper is well written and presented well.
Thank you for your positive evaluation of the paper, your comments, and your suggestions. We appreciate your contribution, how the results of the realised research could be improved.
In the abstract, the author did not mention about the rationale behind the proposed system. Also, the abstract has vague information, which need to be replaced with the important information. Also, it is necessary to add simulation parameters in the same section.
The abstract has been improved. Essential information about the research background, applied techniques, results, and performance measures of the model has been reconsidered and added.
Introduction is too generic and is lacking important information about the N-Grams from Morphological Tags and proposed scheme. Why the author proposed such scheme is missing in the introduction. Also, it is necessary to add contribution and advantages of the proposed scheme at the end of the proposed scheme.
Some sentences from the Introduction section were removed and replaced by the required information about the n-grams. Simultaneously, the main aim of the paper was introduced in more detail to help the reader to understand the main contribution of the paper better. Finally, the proposed schema was modified in line with the recommendations of the reviewers.
Background and related work is missing drawbacks of the existing scheme. Also, author of the paper has done some old survey. It is highly recommended to add latest references from reputable journals.
The related section of the paper was extended with several articles indexed in the Web of Science database and published in high-quality journals or conferences. The results of similar experiments have been analysed. The used approach was compared with the approach used in our article.
Proposed scheme section is missing an overview of the proposed scheme and also it is necessary to add overflow diagram in the same section.
The overflow diagram has been reworked so that the individual steps of the proposed methodology can be understood easily. Simultaneously, the list of the individual steps of the applied methodology has been revised to make the order of the partial experiments easier to follow.
In addition, figures are not very clear. Please re-draw figures and add important information because you have mentioned about various systems but you didn’t include it in the explanation of the proposed figure diagram.
All figures have been created with higher resolution. The section Methodology of the Main Experiment was reworked to improve its clarity and comprehensibility. The added paragraphs should improve the overall readability of the article. Simultaneously, they could contribute to a better understanding of the partial experiments.
Results and evaluation is missing various terminologies. It is highly recommended to add the working of the implementation in algorithm. In addition, which parameters are considered and why they are considered is necessary to be added in this section.
We appreciate your recommendations. We have added descriptions of the essential parameters and hyperparameters of the models to the article. The algorithm itself was created as a set of Jupyter Python Notebooks (.ipynb) and included in supplemental files in the submission management system. We also added the list of Python libraries, which have been used in the experiments. The most important parts of the algorithm were included in the section The Methodology of the Main Experiment. Moreover, we added for better clarity Figure 4 – The individual steps of the experiment for comparing four proposed techniques, 30 values of maximal depth and 10-fold cross-validation. Finally, we extended the last section of the article and added several sentences to better illustrate the various terminologies for evaluation and the results.
Author should compare their scheme with existing schemes. Also, results are not enough to defend the proposed scheme. Please add more results and discussion.
We also modified the discussion section of the article. We compare our results with the outcomes of several other current research papers published on the same topic. However, we would like to emphasise that this comparison in terms of the best-achieved performance metrics is not straightforward. As we have already mentioned in the article, we deliberately used the simplest classification model to understand the impact of individual features. At the same time, we intended to assess the suitability of the proposed pre-processing techniques using n-grams from POS tags for creating input vectors of this classifier. In other words, our priority was to find out if it is possible to improve the classification with proposed techniques, not to achieve the best performance metrics. Although using the most promising classification models based on neural networks would improve the performance metrics, comparing their features and understanding them would be more problematic in the case of neural networks. We also mention this reasoning and constraints in the discussion section.
Conclusion need to revise as it looks like the summary of the paper.
Thank you for your proposal on how to concise the conclusion section of the article. We tried to exclude some paragraphs that looked like the summary and focused on emphasising the overall contribution of the realised experiments, proposed improvements of the pre-processing techniques, and future research directions.
Also, please add latest references.
Considering your recommendations, we have looked for the high-quality articles newly added and indexed in the Web of Knowledge, IEEE Explore and ACM library databases. Short summaries of their outcomes were added to the related work, discussion, and list of references.
Review 2:
The author has present work with quality and the suggested point can improve the quality of the manuscript.
Thank you for your positive evaluation of the paper, your comments, and your suggestions. We appreciate your contribution, how the results of the realised research could be improved.
The flow of the experiment is not clear author should write concisely but make it clear and easy to understand.
Thank you for your feedback and suggestions. We agree with you that the sequence of the partial experiments realised in the paper could be harder to read at first glance. Therefore, we added some visualisations, which explain graphically individual steps of the applied methodology and the reason for the partial experiments. Moreover, the section The Methodology of the Main Experiment has been revised for better clarity and understanding. Several paragraphs have been added, which we believe will improve the readability of this article.
Training and testing records shown be mention in a table for each target class.
As was mentioned in the article, we used k-fold validation. The entire dataset is repeatedly divided into training and testing subsets, in our case, for 10-fold validation in a ratio of 9: 1. As a result, the composition of the training and test set is always different for ten measurements. Providing a table for each target class for training and testing records could be confusing because of the random nature of their creation, mainly because the training and testing set is always different. However, we have given a more detailed description of the k-fold validation process by adding a separate subsection to the article for a better understanding.
Validation should be done by authors
Thank you for your recommendation. The discussion about the validation of the results has been added to the conclusion section. All proposed techniques have been applied to the real dataset of true and fake news. Simultaneously, we realized a validation focused on comparing the obtained results with the recent experiments realized in the examined research field.
Typos and grammar needed to be check throughout the paper.
The native speaker has thoroughly checked the typos and grammar. Some paragraphs have been rewritten for better readability.
Is there any results validation technique used by the author?
A summary of the validation of the results was also added to the conclusion section of the article. Simultaneously, a note about the k-fold cross-validation was added where required.
The quality of the figures should be improved.
The quality of the figures was improved.
Thank you again for the reviewers' effort and all suggestions.
We believe that we have correctly understood the reviewers' comments and recommendations and incorporated them into this improved version of the manuscript. Simultaneously, we believe we have explained the possible contribution of the paper and improved its readability and comprehensibility.
Regardless of the final decision, we appreciate the reviewers' valuable feedback, which motivated us to further research fake news identification using morphological analysis.
Best regards
Jozef Kapusta
Corresponding author
" | Here is a paper. Please give your review comments after reading it. |
179 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Research on techniques for effective fake news detection has become highly needed and attractive. These techniques have a background in many research disciplines, including morphological analysis. Several researchers stated that simple content-related n-grams and POS tagging had been proven insufficient for fake news classification. However, they have not published any empirical research results in the last decade that could confirm these statements experimentally. Considering this contradiction, the main aim of the paper is to experimentally evaluate the potential of the combined use of n-grams and POS tags for the correct classification of fake and true news. A dataset of published fake and real news about the current Covid-19 pandemic was pre-processed using morphological analysis. As a result, n-grams of POS tags were prepared and further analysed. Three techniques based on POS tags were proposed and applied to different groups of n-grams in the pre-processing phase of fake news detection. The n-gram size was examined first. Subsequently, the most suitable depth of the decision trees for sufficient generalisation was determined. Finally, the performance measures of models based on the proposed techniques were compared with the standardised reference TF-IDF technique.</ns0:p><ns0:p>The performance measures of the models, such as accuracy, precision, recall and f1-score, were considered, together with the 10-fold cross-validation technique. Simultaneously, the question of whether the TF-IDF technique can be improved using POS tags was researched in detail. The results showed that the newly proposed techniques are comparable with the traditional TF-IDF technique. At the same time, it can be stated that morphological analysis can improve the baseline TF-IDF technique. As a result, the performance measures of the model, precision for fake news and recall for real news, were statistically significantly improved.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Fake news is currently the biggest bugbear of the developed world <ns0:ref type='bibr' target='#b13'>(Jang et al., 2018)</ns0:ref>. Although the spreading of false information or false messages for personal or political benefit is certainly nothing new, current trends such as social media enable every individual to create false information more easily than ever before <ns0:ref type='bibr' target='#b2'>(Allcott & Gentzkow, 2017)</ns0:ref>. The article deals with evaluating four techniques for fake and true news classification using morphological analysis. Morphological analysis belongs to the basic means of natural language processing research. It deals with part-of-speech tags (POS tags) as morphological characteristics of a word in context, which can be considered a style-based fake news detection method <ns0:ref type='bibr' target='#b38'>(Zafarani et al., 2019)</ns0:ref>. Linguistic-based features are extracted from the text content in terms of document organisation at different levels, such as characters, words, sentences and documents. Sentence-level features refer to all the important attributes that are based on the sentence scale. They include part-of-speech tagging (POS), the average sentence length, the average length of a tweet/post, the frequency of punctuation, function words and phrases in a sentence, the average polarity of the sentence (positive, neutral or negative), as well as the sentence complexity <ns0:ref type='bibr' target='#b16'>(Khan et al., 2019)</ns0:ref>. Existing research articles mainly investigate standard linguistic features, including lexical, syntactic, semantic and discourse features, to capture the intrinsic properties of misinformation. Syntactic features can be divided into shallow syntactic features, to which the frequency of POS tags and punctuation belongs, and deep syntactic features <ns0:ref type='bibr' target='#b8'>(Feng, Banerjee & Choi, 2012)</ns0:ref>. Morphological analysis of POS tags based on n-grams is used in this paper to evaluate its suitability for successful fake news classification. An n-gram is a sequence of N tokens (words). N-grams are also called multi-word expressions or lexical bundles. N-grams can be generated on any attribute, with word and lemma being the most frequently used ones. The following word expressions represent a 2-gram: 'New York', and a 3-gram: 'The Three Musketeers'. The analysis of n-grams is considered more meaningful than the analysis of the individual words (tokens) which constitute them. Several research articles stated that simple content-related n-grams and POS tagging had been proven insufficient for the classification task <ns0:ref type='bibr' target='#b33'>(Shu et al., 2017)</ns0:ref> <ns0:ref type='bibr' target='#b4'>(Conroy, Rubin & Chen, 2015)</ns0:ref> <ns0:ref type='bibr' target='#b35'>(Su et al., 2020)</ns0:ref>. However, these findings mainly represent the authors' opinion, because they did not realise or publish any empirical research results confirming these statements in the last decade. Considering this contradiction, the main aim of the paper is to experimentally evaluate the potential of the combined use of n-grams and POS tags for the correct classification of fake and true news. Therefore, continuous sequences of n items from a given sample of POS tags (n-grams) were analysed. The techniques based on POS tags were proposed and used in order to meet this aim.
Subsequently, these techniques were compared with the standardised reference TF-IDF technique to evaluate their main performance characteristics. Simultaneously, the question of whether the TF-IDF technique can be improved using POS tags was researched in detail. All techniques have been applied in the pre-processing phase to different groups of n-grams. The resulting datasets have been analysed using decision tree classifiers. The article aims to present and evaluate the proposed techniques for pre-processing the input vectors of a selected classifier. These techniques are based on creating n-grams from POS tags. The research question is whether the proposed techniques are more suitable than the traditional baseline technique TF-IDF, or whether these techniques are able to improve the results of the TF-IDF technique. All proposed techniques have been applied to different levels of n-grams. Subsequently, the outcomes of these techniques were used as the input vectors of the decision tree classifier. The following methodology was used for the evaluation of the suitability of the proposed approach based on n-grams of POS tags:</ns0:p><ns0:p>- Identification of POS tags in the analysed dataset.</ns0:p><ns0:p>- Definition of n-grams (1-grams, 2-grams, 3-grams, 4-grams) from POS tags. An n-gram represents a sequence of POS tags.</ns0:p><ns0:p>- Calculation of the frequency of occurrence of an n-gram in documents. In other words, the relative frequency of an n-gram in the examined fake and true news is calculated.</ns0:p><ns0:p>- Definition of the input vectors of classifiers using the three proposed techniques for POS tags and the reference TF-IDF technique.</ns0:p><ns0:p>- Application of decision tree classifiers and parameter tuning concerning the different depths and lengths of n-grams.</ns0:p><ns0:p>- Identification and comparison of the decision trees' characteristics, mainly the accuracy, the depth of the trees and the time performance.</ns0:p><ns0:p>The structure of the article is as follows. The current state of the research in the field of fake news identification is summarised in the second section. The datasets of Covid-19 news used in the research are described in the third section. This section also describes the process of n-gram extraction from POS tags. Simultaneously, three POS tags-based techniques are proposed for preparing input vectors for decision tree classifiers. Subsequently, the same section discusses the process of decision tree modelling and the importance of finding the most suitable n-gram length and maximum depth. Finally, the statistical evaluation of the performance of the modified techniques based on POS tags for fake news classification is explained in the same section. The most important results, together with an evaluation of the model performance and the time efficiency of the proposed techniques, are summarised in the fourth section. The detailed discussion about the obtained results and the conclusions form the content of the last section of the article.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>Related Work</ns0:head><ns0:p>There has been no universal definition of fake news. However, Zhou and Zafarani <ns0:ref type='bibr' target='#b40'>(Zhou & Zafarani, 2020)</ns0:ref> define fake news as intentionally false news published by a news outlet. Simultaneously, they explained related terms in detail and tried to define them with a discussion about the differences, based on a huge set of related publications. The same authors categorised automatic detection of fake news from four perspectives: knowledge, style, propagation and source. Considering this, the research described in this paper belongs to the style-based fake news detection category, whose methods try to assess news intention <ns0:ref type='bibr' target='#b40'>(Zhou & Zafarani, 2020)</ns0:ref>. According to their definition, fake news style can be defined as a set of quantifiable characteristics (features) that can well represent fake news content and differentiate it from true news content. <ns0:ref type='bibr' target='#b17'>Kumar and Shah (Kumar & Shah, 2018)</ns0:ref> provided a comprehensive review of many facets of fake news distributed over the Internet. They quantified the impact of fake news and characterised the algorithms used to detect and predict them. Moreover, they summarised the current state of the research and the approaches applied in the field of fake news content analysis from the linguistic, semantic and knowledge discovery point of view. Despite the overall scope of the review, they did not draw conclusions about the overall performance of the style-based methods using ML algorithms.</ns0:p><ns0:p>Other contemporary surveys <ns0:ref type='bibr' target='#b40'>(Zhou & Zafarani, 2020;</ns0:ref><ns0:ref type='bibr' target='#b39'>Zhang & Ghorbani, 2020;</ns0:ref><ns0:ref type='bibr' target='#b33'>Shu et al., 2017)</ns0:ref> provide further evidence that the research related to the field of fake news is very intense now, mainly due to the negative consequences of fake news for society. The authors analysed various aspects of fake news research and discussed the reasons, creators, resources and methods of fake news dissemination, as well as its impact and the machine learning algorithms created to detect it effectively. Sharma et al. <ns0:ref type='bibr' target='#b32'>(Sharma et al., 2019)</ns0:ref> also published a comprehensive survey highlighting the technical challenges of fake news. They summarised the characteristic features of the news datasets and outlined the directions for future research. They discussed existing methods and ML techniques applicable to identifying and mitigating fake news, focusing on the significant advances in each method and their advantages and limitations. They discussed the results of the application of different classification algorithms, including decision trees. They concluded that using n-grams alone cannot entirely capture the finer-grained linguistic information present in the fake news writing style. However, their application to a dataset containing items pre-processed using POS tagging is not mentioned.</ns0:p><ns0:p>Zhang and Ghorbani <ns0:ref type='bibr' target='#b39'>(Zhang & Ghorbani, 2020)</ns0:ref> stated that because online fake reviews and rumours are always compact and information-intensive, their content lengths are often shorter than those of online fake news. As a result, traditional linguistic processing and embedding techniques such as bag-of-words or n-grams are suitable for processing reviews or rumours.
However, they are not powerful enough for extracting the underlying relationships of fake news. For online fake news detection, sophisticated embedding approaches are necessary to capture the key opinion and the sequential semantic order in news content. De Oliveira et al. <ns0:ref type='bibr' target='#b6'>(de Oliveira et al. 2021)</ns0:ref> realised a literature survey focused on the data pre-processing techniques used in natural language processing, vectorisation, dimensionality reduction, machine learning, and quality assessment of information retrieval. They discussed the role of n-grams and POS tags only partially.</ns0:p><ns0:p>On the other hand, Li et al. <ns0:ref type='bibr' target='#b20'>(Li et al., 2020)</ns0:ref> consider the n-gram approach the most effective linguistic analysis method applied to fake news detection. Apart from word-based features such as n-grams, syntactic features such as POS tags are also exploited to capture the linguistic characteristics of texts.</ns0:p><ns0:p>Stoick <ns0:ref type='bibr' target='#b34'>(Stoick, 2019)</ns0:ref> stated that previous linguistic work suggests that part-of-speech and n-gram frequencies are often different between fake and real articles. He created two models and concluded that some aspects of the fake articles remained readily identifiable, even when the classifier was trained on a limited number of examples. The second model used n-gram frequencies and neural networks, which were trained on n-grams of different lengths. He stated that the accuracy was nearly the same for each n-gram size, which means that some of the same information may be ascertainable across n-grams of different sizes. Ahmed et al. <ns0:ref type='bibr' target='#b1'>(Ahmed, Traore & Saad, 2017)</ns0:ref> further argued that the latest advances in natural language processing (NLP) and deception detection could help to detect deceptive news. They proposed a fake news detection model that analyses n-grams using different feature extraction and ML classification techniques.</ns0:p><ns0:p>The combination of TF-IDF for feature extraction together with an LSVM classifier achieved the highest accuracy. Similarly, Jain (Jain, 2020) extracted linguistic/stylometric features, bag-of-words TF and BOW TF-IDF vectors, and applied various machine learning models, including bagging and boosting methods, to achieve the best accuracy. However, they stated that the lack of available corpora for predictive modelling is an essential limiting factor in designing effective models to detect fake news. Wynne et al. <ns0:ref type='bibr' target='#b37'>(Wynne, 2019)</ns0:ref> investigated two machine learning algorithms using word n-gram and character n-gram analysis. They obtained better results using character n-grams with TF-IDF and a Gradient Boosting Classifier. They did not discuss the pre-processing phase of n-grams, as will be described in this article.</ns0:p><ns0:p>Thorne and Vlachos <ns0:ref type='bibr' target='#b36'>(Thorne & Vlachos, 2018)</ns0:ref> surveyed automated fact-checking research stemming from natural language processing and related disciplines, unifying the task formulations and methodologies across papers and authors. They identified subject-predicate-object triples from small knowledge graphs to fact-check numerical claims. Once the relevant triple had been found, a truth label was computed through a rule-based approach that considered the error between the claimed values and the retrieved values from the graph.
Shu, Silva, Wang, Jiliang and Liu <ns0:ref type='bibr' target='#b33'>(Shu et al., 2017)</ns0:ref> proposed to use linguistic-based features such as total words, characters per word, frequencies of large words, and frequencies of phrases (i.e., n-grams and bag-of-words). They stated that fake content is generated intentionally by malicious online users, so it is challenging to distinguish between fake and true information only by content and linguistic analysis. POS tags were also exploited to capture the linguistic characteristics of the texts. However, several works have found the frequency distribution of POS tags to be closely linked to the genre of the text being considered <ns0:ref type='bibr' target='#b32'>(Sharma et al., 2019)</ns0:ref>. <ns0:ref type='bibr' target='#b27'>Ott et al. (Ott et al., 2011)</ns0:ref> examined this variation in POS tag distribution in spam, intending to find out whether this distribution also exists with respect to text veracity. They obtained better classification performance with the n-grams approach, but found that the POS tags approach is a strong baseline outperforming the best human judge. Later work has considered more in-depth syntactic features derived from probabilistic context-free grammar (PCFG) trees. They assumed that the approach based only on n-grams is simple and cannot model more complex contextual dependencies in the text. Moreover, syntactic features used alone are less powerful than word-based n-grams, and a naive combination of the two cannot capture their complex interdependence. They concluded that the weights learned by the classifier are mainly in agreement with the findings of existing theories on deceptive writing <ns0:ref type='bibr' target='#b25'>(Ott, Cardie & Hancock, 2013)</ns0:ref>. Some authors, for example, Conroy, Rubin, and Chen <ns0:ref type='bibr' target='#b4'>(Conroy, Rubin & Chen, 2015)</ns0:ref>, have noted that simple content-related n-grams and POS tagging have been proven insufficient for the classification task. However, they did not research n-grams created from POS tags. They suggested using deep syntax analysis based on Probabilistic Context-Free Grammars (PCFG) to distinguish rule categories (lexicalised, non-lexicalised, parent nodes, etc.) instead, achieving deception detection with 85-91% accuracy. Su et al. <ns0:ref type='bibr' target='#b35'>(Su et al., 2020)</ns0:ref> also stated that simple content-related n-grams and shallow part-of-speech (POS) tagging have proven insufficient for the detection task, often failing to account for important context information. On the other hand, these methods have been proven useful only when combined with more complex analysis methods.</ns0:p><ns0:p>Khan et al. <ns0:ref type='bibr' target='#b16'>(Khan et al., 2019)</ns0:ref> stated that the linguistic-based features extracted from the news content are not sufficient for revealing the in-depth underlying distribution patterns of fake news <ns0:ref type='bibr' target='#b33'>(Shu et al., 2017)</ns0:ref>. Auxiliary features, such as the news author's credibility and the spreading patterns of the news, play more important roles in online fake news prediction.</ns0:p><ns0:p>On the other hand, Qian et al. <ns0:ref type='bibr' target='#b28'>(Qian et al., 2018)</ns0:ref> proposed a similar approach, which is researched further in this paper, based on a convolutional neural network (TCNN) with a user response generator (URG). TCNN captures semantic information from text by representing it at the sentence and word level.
URG learns a generative model of user responses to a text from historical user responses and generates responses to new articles to assist fake news detection. They used POS tags in combination with n-grams as a baseline for comparing the accuracy of the proposed NN-based classification technique. <ns0:ref type='bibr' target='#b10'>Goldani et al. (Goldani, 2021)</ns0:ref> used capsule neural networks in the fake news detection task. They applied different levels of n-grams for feature extraction and subsequently used different embedding models for news items of different lengths. Static word embeddings were used for short news items, whereas non-static word embeddings, which allow incremental uptraining and updating in the training phase, were used for medium-length or long news statements. They did not consider POS tags in the pre-processing phase. Finally, Kapusta et al. <ns0:ref type='bibr'>(Kapusta et al., 2020)</ns0:ref> realised a morphological analysis of several news datasets. They analysed the morphological tags and compared the differences in their use in fake news and real news articles. They used morphological analysis for the classification of words into grammatical classes. Each word was assigned a morphological tag, and these tags were thoughtfully analysed. The first step consisted of creating groups of related morphological tags. The groups reflected the basic word classes. The authors identified statistically significant differences in the use of word classes. Significant differences were identified for the groups of foreign words, adjectives and nouns favouring fake news, and for the groups of wh-words, determiners, prepositions and verbs favouring real news. The third dataset was evaluated separately and was used for verification. As a result, significant differences were identified for the groups of adverbs, verbs and nouns. They concluded that it is important that the differences between groups of words exist. It is evident that morphological tags can be used as input to fake news classifiers.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Dataset</ns0:head><ns0:p>The dataset analysed by Li <ns0:ref type='bibr' target='#b19'>(Li, 2020)</ns0:ref> was used for the evaluation of the proposed techniques. This dataset collects more than 1100 articles (news) and posts from social networks related to Covid-19. It was created in cooperation with the projects Lead Stories, Poynter, FactCheck.org, Snopes and EuVsDisinfo, which monitor, identify and control misleading information. These projects define true news as an article or post whose truthfulness can be proven and which comes from trusted sources. Vice versa, fake news is considered to be all articles and posts which have been evaluated as false and come from known fake news sources trying to broadcast misleading information intentionally.</ns0:p></ns0:div>
<ns0:div><ns0:head>POS Tags</ns0:head><ns0:p>Morphological tags were assigned to all words of the news in the dataset using the tool called TreeTagger, developed by Schmid <ns0:ref type='bibr' target='#b31'>(Schmid, 1994)</ns0:ref>. For English, this annotating tool uses the set of tags of the English Penn Treebank. The final English Penn Treebank tagset contains 36 morphological tags. However, considering the aim of the research, the following tags were not included in the further analysis due to their low frequency of appearance or discrepancy:</ns0:p><ns0:p>- SYM (symbol),</ns0:p><ns0:p>- LS (list marker).</ns0:p><ns0:p>Therefore, the final number of morphological tags used in the analysis was 33. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> shows the morphological tags divided into groups.</ns0:p></ns0:div>
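To illustrate this pre-processing step, the following minimal sketch assigns Penn Treebank POS tags to a sentence in Python. It uses NLTK's built-in Penn Treebank tagger as a freely available stand-in for TreeTagger (an assumption for illustration only; the study itself used TreeTagger), and the example sentence is the one shown in Figure 1.

import nltk

# One-time downloads of the tokeniser and tagger models (assumption:
# NLTK's default tagger approximates the TreeTagger Penn Treebank output).
nltk.download('punkt', quiet=True)
nltk.download('averaged_perceptron_tagger', quiet=True)

sentence = "Democrats Vote To Enhance Med Care for Illegals Now"
tokens = nltk.word_tokenize(sentence)
# Each word is mapped to its Penn Treebank POS tag (e.g., NNPS, VBP).
pos_tags = [tag for _, tag in nltk.pos_tag(tokens)]
print(pos_tags)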
<ns0:div><ns0:head>N-grams Extraction from POS Tags</ns0:head><ns0:p>N-grams were extracted from POS tags in this data pre-processing step. As a result, sequences of n-grams from a given sample of POS tags were created. Since 1-grams and the identified POS tags are identical, the input file with 1-grams used in the further research is identical to the file with the identified POS tags. The n-grams for the TF-IDF technique were created in the same way. However, it is important to emphasise that this technique used so-called terms, which represent the lemmas or stems of words.</ns0:p></ns0:div>
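A minimal sketch of this step, under the assumption that each document is already represented as a list of POS tags (see the previous subsection), is shown below; the helper name pos_ngrams is illustrative, not taken from the original implementation.

from nltk.util import ngrams

def pos_ngrams(pos_tags, n):
    # Joins every sequence of n consecutive POS tags into one feature,
    # e.g., ['NNPS', 'VBP'] -> 'NNPS_VBP' for n = 2.
    return ['_'.join(gram) for gram in ngrams(pos_tags, n)]

tags = ['NNPS', 'VBP', 'TO', 'VB']
print(pos_ngrams(tags, 1))  # 1-grams are identical to the POS tags themselves
print(pos_ngrams(tags, 2))  # ['NNPS_VBP', 'VBP_TO', 'TO_VB']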
<ns0:div><ns0:head>The Techniques Used to Pre-process the Input Vectors</ns0:head><ns0:p>The following four techniques have been applied for pre-processing of the input vectors for a selected classifier.</ns0:p></ns0:div>
<ns0:div><ns0:head>Term Frequency -Inverse Document Frequency (TF-IDF) Technique</ns0:head><ns0:p>TF-IDF is a traditional technique that is leveraged to assess the importance of tokens to one of the documents in a corpus <ns0:ref type='bibr' target='#b30'>(Qin, Xu & Guo, 2016)</ns0:ref>. The TF-IDF approach creates a bias in that frequent terms highly related to a specific domain are typically identified as noise, thus leading to lower term weights, because the traditional TF-IDF technique is not specifically designed to address large news corpora. Typically, the TF-IDF weight is composed of two terms: the first computes the normalised Term Frequency (TF), and the second term is the Inverse Document Frequency (IDF).</ns0:p><ns0:p>Let t be a term/word, d a document, and w any term in the document. Then the frequency of the term/word t in document d is calculated as follows:</ns0:p><ns0:formula xml:id='formula_0'>tf(t,d) = f(t,d) / f(w,d),</ns0:formula><ns0:p>where f(t,d) is the number of occurrences of the term/word t in document d and f(w,d) is the number of all terms in the document. Simultaneously, the number of all documents in which a particular term/word occurs is also taken into account in the TF-IDF calculation. This number is denoted as idf(t,D). It represents the inverse document frequency, expressed as follows:</ns0:p><ns0:formula xml:id='formula_1'>idf(t,D) = ln(N / (|{d ∈ D : t ∈ d}| + 1)),</ns0:formula><ns0:p>where D is the corpus of all documents and N is the number of documents in the corpus. The formula of TfIdf can then be written as:</ns0:p><ns0:formula xml:id='formula_1a'>tfidf(t,d,D) = tf(t,d) × idf(t,D).</ns0:formula><ns0:p>The tf formula has various variants, such as log(tf(t,d)) or log(tf(t,d) + 1). Similarly, there are several variants of how idf can be calculated <ns0:ref type='bibr' target='#b3'>(Chen, 2017)</ns0:ref>. Considering this fact, the calculation of the TfIdf was realised using the scikit-learn library in Python (https://scikit-learn.org). The TF-IDF technique applied in the following experiment is used as a reference technique for comparing selected characteristics of the new techniques described below. The same dataset was used as input; however, the stop words were removed beforehand in this case.</ns0:p></ns0:div>
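As noted above, the reference technique was computed with scikit-learn. A minimal sketch of this computation could look as follows; the placeholder corpus, the removal of English stop words and the (1,4) n-gram range mirror the settings described in this article, while all variable names are illustrative.

from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "Democrats vote to enhance med care for illegals now",  # placeholder fake item
    "The ministry of health confirmed new testing sites",   # placeholder real item
]

# Stop words are removed for the reference TF-IDF technique only;
# word n-grams of lengths 1-4 are joined into a single feature matrix.
vectorizer = TfidfVectorizer(stop_words='english', ngram_range=(1, 4))
X = vectorizer.fit_transform(corpus)
print(X.shape)  # (number of documents, number of word n-gram features)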
<ns0:div><ns0:head>POS Frequency (PosF) Technique</ns0:head></ns0:div>
<ns0:div><ns0:p>This technique is an analogy of the Term Frequency technique; however, it calculates the frequency of POS tags. Let pos be an identified POS tag, d a document, and w any POS tag identified in the document. Then the frequency of POS tag pos in document d can be calculated as follows:</ns0:p><ns0:formula xml:id='formula_2'>PosF(pos,d) = f(pos,d) / f(w,d),</ns0:formula><ns0:p>where f(pos,d) is the number of occurrences of POS tag pos in document d and f(w,d) is the number of all identified POS tags in the document. As a result, PosF expresses the relative frequency of each POS tag within the analysed list of POS tags identified in the document.</ns0:p></ns0:div>
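A minimal sketch of the PosF calculation is given below, assuming each document is available as a list of POS tags (or POS-tag n-grams) from the previous steps; the function name posf_vector and the small vocabulary are illustrative assumptions.

from collections import Counter

def posf_vector(doc_tags, vocabulary):
    # Relative frequency of each POS tag (or POS-tag n-gram) among
    # all tags identified in the document: f(pos, d) / f(w, d).
    counts = Counter(doc_tags)
    total = len(doc_tags)
    return [counts[tag] / total for tag in vocabulary]

vocabulary = ['NNPS', 'VBP', 'TO', 'VB']
doc = ['NNPS', 'VBP', 'TO', 'VB', 'VBP']
print(posf_vector(doc, vocabulary))  # [0.2, 0.4, 0.2, 0.2]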
<ns0:div><ns0:head>PosF-IDF Technique</ns0:head><ns0:p>This technique is an analogy of the TF-IDF technique. Similarly to the already introduced PosF technique, it considers the POS tags which have been identified in each document of the analysed dataset based on individual words and sentences. The documents containing only the identified POS tags represented the inputs for the calculation of PosF-IDF. Besides the relative frequency of POS tags in the document, the number of all documents in which a particular POS tag has been identified is also considered.</ns0:p></ns0:div>
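Because PosF-IDF mirrors TF-IDF over documents reduced to their POS tags, one workable approximation, shown in the hedged sketch below, is to apply scikit-learn's TfidfVectorizer directly to space-joined tag sequences; this is an assumption about one possible implementation, not the authors' exact code.

from sklearn.feature_extraction.text import TfidfVectorizer

# Documents reduced to their identified POS tags, joined by spaces.
pos_docs = ['NNPS VBP TO VB NN', 'DT NN VBD IN DT NN']

# No stop-word removal here: every POS tag carries structural information.
vectorizer = TfidfVectorizer(ngram_range=(1, 4), lowercase=False,
                             token_pattern=r'\S+')
X = vectorizer.fit_transform(pos_docs)
print(X.shape)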
<ns0:div><ns0:head>Merged TF-IDF and PosF Technique</ns0:head><ns0:p>This technique was proposed to confirm whether it is possible to improve the traditional TF-IDF technique by using POS tags. Therefore, the following vectors were created for each document:</ns0:p><ns0:p>- a TfIdf vector,</ns0:p><ns0:p>- a PosF vector, which represents the relative frequencies of POS tags in the document.</ns0:p><ns0:p>Subsequently, the result of applying the merged technique is again a vector, which originated by merging the previous two vectors. Therefore, both vectors TfIdf(d) and PosF(d) are considered for document d, calculated using the mentioned TfIdf and PosF techniques:</ns0:p><ns0:formula xml:id='formula_3'>TfIdf(d) = (t1, t2, …, tn), PosF(d) = (p1, p2, …, pm).</ns0:formula><ns0:p>Then, the final vector merge(d) for document d calculated by the merge technique is</ns0:p><ns0:formula xml:id='formula_4'>merge(d) = (t1, t2, …, tn, p1, p2, …, pm).</ns0:formula><ns0:p>A set of techniques for pre-processing the input vectors for the selected knowledge discovery classification task was thus created. These techniques can be considered variations of the previous TF-IDF technique, in which the POS tags are taken into account in addition to the original terms.</ns0:p><ns0:p>As a result, the four techniques described above represent typical variations, which allow comparing and analysing the basic features of the techniques based on terms and POS tags.</ns0:p></ns0:div>
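A minimal sketch of the merged technique follows, assuming X_tfidf and X_posf are the (sparse) matrices produced by the two previous techniques for the same documents; scipy's hstack is one straightforward way to realise the concatenation merge(d) = (t1, ..., tn, p1, ..., pm).

from scipy.sparse import csr_matrix, hstack

# Placeholder matrices standing in for the TfIdf and PosF outputs.
X_tfidf = csr_matrix([[0.1, 0.0, 0.3], [0.0, 0.2, 0.0]])
X_posf = csr_matrix([[0.5, 0.5], [0.25, 0.75]])

# Column-wise concatenation of the two vector spaces per document.
X_merge = hstack([X_tfidf, X_posf])
print(X_merge.shape)  # (2, 5)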
<ns0:div><ns0:head>Decision Trees Modelling</ns0:head><ns0:p>Several classifiers, like decision tree classifiers, Bayesian classifiers, k-nearest-neighbour classifiers, case-based reasoning, genetic algorithms, rough sets, and fuzzy logic techniques, were considered. Finally, the decision trees were selected to evaluate the suitability of the proposed techniques for calculating the input vectors and to analyse their features. The decision trees allow not only a simple classification of cases, but they also create easily interpretable and understandable classification rules at the same time. In other words, they simultaneously represent functional classifiers and a tool for knowledge discovery and understanding. The same approach was partially used in other similar research papers <ns0:ref type='bibr'>(Kapusta et al., 2020;</ns0:ref><ns0:ref type='bibr'>Kapusta, Benko & Munk, 2020)</ns0:ref>.</ns0:p><ns0:p>The attribute selection measures like Information Gain, Gain Ratio, and Gini Index <ns0:ref type='bibr' target='#b21'>(Lubinsky, 1995)</ns0:ref>, used while a decision tree is created, are considered a further important factor in why decision trees were finally selected. The best feature is always selected in each step of the decision tree development. Moreover, this selection is virtually independent of the number of input attributes. It means that even if a larger number of attributes (elements of the input vector) is supplied to the input of the selected classifier, the accuracy remains unchanged.</ns0:p></ns0:div>
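The interpretability argument can be illustrated with a short sketch: a fitted scikit-learn decision tree can be printed as explicit if-then rules. The toy input vectors and feature names below are illustrative assumptions only.

from sklearn.tree import DecisionTreeClassifier, export_text

# Toy input vectors, e.g., relative frequencies of two POS tags per document.
X = [[0.10, 0.30], [0.40, 0.05], [0.12, 0.28], [0.38, 0.07]]
y = ['real', 'fake', 'real', 'fake']

clf = DecisionTreeClassifier(criterion='gini', random_state=0).fit(X, y)
# Human-readable classification rules extracted from the fitted tree.
print(export_text(clf, feature_names=['NN', 'VBP']))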
<ns0:div><ns0:head>K-fold validation</ns0:head><ns0:p>Comparing the decision trees created in the realised experiment is based on the essential characteristics of the decision trees, such as the number of nodes or leaves. These characteristics define the size of the tree, which should be suitably minimised. Simultaneously, the performance measures of the model, like accuracy, precision, recall and f1-score, are considered, together with the 10-fold cross-validation technique. K-fold validation was used for the evaluation of the models. It generally results in a less biased model compared to other methods because it ensures that every observation from the original dataset has the chance of appearing in both the training and the test set.</ns0:p></ns0:div>
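A minimal sketch of the applied validation scheme, 10-fold cross-validation of a decision tree over the pre-processed input vectors, is shown below; X and y are random placeholders for the feature matrix and the fake/real labels.

import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((100, 10))        # placeholder input vectors
y = rng.integers(0, 2, 100)      # placeholder fake/real labels

# Each observation appears in the test set exactly once across the 10 folds.
cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y,
                         cv=cv, scoring='accuracy')
print(scores.mean())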
<ns0:div><ns0:head>Setting the Most Suitable N-gram Length</ns0:head><ns0:p>All compared techniques for input vector pre-processing required identical conditions. Therefore, the highest value of n in the n-grams was determined as the first step. Most NLP tasks usually work with n = {1,2,3}. Higher values of n (4-grams, 5-grams, etc.) place significant demands on hardware and software, calculation time, and overall performance. On the other hand, the potential contribution of the higher n-grams to increasing the accuracy of the created models is limited. Several decision tree models were created to evaluate this consideration. N-grams (1-gram, 2-gram, …, 5-gram) for tokens/words and for POS tags were prepared. Subsequently, the TF-IDF technique was applied to the n-grams of tokens/words. At the same time, the PosF and PosfIdf techniques were applied to the n-grams of POS tags. As a result, 15 files with the input vectors were created (1-5-grams x 3 techniques). Figure <ns0:ref type='figure'>2</ns0:ref> visualises the individual steps of this process for better clarity. Ten-fold cross-validation led to creating ten decision tree models for each pre-processed file (15 files in total). In all cases, the accuracy was considered the measure of the model performance. Figure <ns0:ref type='figure'>3</ns0:ref> shows a visualisation of all models with different n-gram lengths. The values on the x-axis represent a range of used n-grams. For instance, n-gram (1,1) means that only unigrams were used. Other ranges of n-grams will be used in the next experiments. For example, the designation (1,4) will represent the 1-grams, 2-grams, 3-grams, and 4-grams included together in one input file in this case.</ns0:p><ns0:p>The results show that the accuracy declines with the length of the n-grams, most notably in the case of applying the TF-IDF technique. Although it was not possible to process longer n-grams (6-grams, 7-grams, etc.) due to the limited time and computational complexity, it can be assumed that their accuracy would decline similarly to the behaviour of the accuracy for 5-grams in the case of all applied techniques. Considering the process of decision tree model creation, it is not surprising that joining the n-grams into one input file achieved the highest accuracy, since the most suitable feature is selected during the creation of the decision tree. As a result, all following experiments will work with the file consisting of joined 1-grams, 2-grams, 3-grams and 4-grams (1,4).</ns0:p></ns0:div>
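The comparison of n-gram lengths can be reproduced with a loop such as the following hedged sketch, which evaluates the reference technique for several n-gram ranges; docs and labels are small placeholders for the pre-processed dataset.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

docs = ["fake claim about miracle cure", "official statement on testing",
        "fake rumour about lockdown", "verified report on vaccines"] * 10
labels = [0, 1, 0, 1] * 10

for ngram_range in [(1, 1), (2, 2), (3, 3), (1, 4)]:
    X = TfidfVectorizer(stop_words='english',
                        ngram_range=ngram_range).fit_transform(docs)
    acc = cross_val_score(DecisionTreeClassifier(random_state=0),
                          X, labels, cv=10).mean()
    print(ngram_range, round(acc, 3))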
<ns0:div><ns0:head>Setting the Maximum Depth of the Decision Tree</ns0:head><ns0:p>Overfitting represents a frequent issue. Although the training error decreases by default with the increasing size of the created tree, the test error often increases with the increasing size. As a result, the classification of new cases can be inaccurate. Techniques like pruning or hyperparameter tuning can overcome overfitting.</ns0:p><ns0:p>The maximal depth of the decision tree will be analysed to minimise the overfitting issue and to find understandable rules for fake news identification.</ns0:p><ns0:p>As was mentioned earlier, the main aim of the article is to evaluate the most suitable techniques for the preparation of input vectors. Simultaneously, the suitable setting of the parameter max_depth will be evaluated. Complete decision trees for n-grams from tokens/words and POS tags were created for finding suitable values of selected characteristics of decision trees (Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>).</ns0:p><ns0:p>The results show that the techniques working with the POS tags have a small number of input vectors compared to the reference TF-IDF technique. These findings were expected because, while TF-IDF takes all tokens/words, in the case of the PosfIdf as well as PosF techniques, each token/word had been assigned to one of 33 POS tags (Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>). This simplification is also visible in the size of the generated decision tree (depth, node count, number of leaves). The application of the PosfIdf and PosF techniques led to simpler decision trees. However, the maximal depth of the decision tree is the essential characteristic for further considerations. While it is equal to 30 for TF-IDF, the maximal depth is lower in the case of both remaining techniques. Therefore, decision trees with different depths will be further considered in the main experiment to ensure the same conditions for all compared techniques. The maximal depth will be set to 30.</ns0:p></ns0:div>
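The depths reported for the complete trees can be inspected as in the following sketch: a tree grown without a depth limit exposes its actual depth and size, which then bound the max_depth values worth tuning; the random data are placeholders.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 30))
y = rng.integers(0, 2, 200)

# Grow the complete tree first (no depth limit) ...
full_tree = DecisionTreeClassifier(random_state=0).fit(X, y)
print(full_tree.get_depth(), full_tree.tree_.node_count,
      full_tree.get_n_leaves())

# ... and then constrain max_depth in the main experiment.
pruned = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X, y)
print(pruned.get_depth())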
<ns0:div><ns0:head>The Methodology of the Main Experiment</ns0:head><ns0:p>The following experiment's main aim is to evaluate whether it is possible to classify fake news messages using POS tags and to compare the performance of the proposed techniques (PosfIdf, PosF, merge) with the reference TF-IDF technique, which uses tokens/words. The comparison of these four techniques is joined with the following questions:</ns0:p><ns0:p>Q1: What is the most suitable length of the n-grams for these techniques? Q2: How to create models using these techniques to prevent possible overfitting? Q3: How to compare the models with different hyperparameters, which tune the performance of the models?</ns0:p><ns0:p>The first question (Q1) was answered in the section Setting the Most Suitable N-gram Length. As its result, the use of joined 1-grams, 2-grams, 3-grams and 4-grams (1,4) is the most suitable. The second question (Q2) can be answered by experimenting with the maximum depth hyperparameter used in the decision tree classifier. The highest acceptable value of this hyperparameter was found in the section Setting the Maximum Depth of the Decision Tree. The main experiment described later will be realised regarding the last, third question (Q3). The following steps of the methodology will be applied:</ns0:p><ns0:p>1. Identification of POS tags in the dataset.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>Application of the examined pre-processing techniques and classifiers.</ns0:head><ns0:p>The quality of the model's predictions was tested on the testing subset. The following new characteristics were established:</ns0:p><ns0:p>- prec_fake (precision for group fake),</ns0:p><ns0:p>- prec_real (precision for group real),</ns0:p><ns0:p>- rec_fake (recall for group fake),</ns0:p><ns0:p>- rec_real (recall for group real),</ns0:p><ns0:p>- f1-score,</ns0:p><ns0:p>- time spent on one iteration,</ns0:p><ns0:p>- analysis of the results (evaluation of the models).</ns0:p><ns0:p>The results of steps 1-4 are four input vectors prepared using the before-mentioned four proposed techniques. The fifth step of the proposed methodology is focused on the evaluation of these four examined techniques. The application of the proposed methodology with 10-fold cross-validation resulted in the creation of 1200 different decision trees (30 max_depth values x 4 techniques x 10-fold validation). In other words, 40 decision trees with 10-fold cross-validation were created for each maximal depth. Figure <ns0:ref type='figure'>4</ns0:ref> depicts the individual steps of the methodology of the experiment. The last step of the proposed methodology, the analysis of the results, will be described in the section Results. All steps of the methodology were implemented in Python and its libraries. Text processing was realised using the NLTK library (https://www.nltk.org/). The tool TreeTagger <ns0:ref type='bibr' target='#b31'>(Schmid, 1994)</ns0:ref> was used for the identification of POS tags. Finally, the scikit-learn library (https://scikit-learn.org) was used for creating the decision tree models. The Gini impurity function was applied to measure the quality of a split of the decision trees. The strategy used to choose the split at each node was the 'best' split (the alternative is a random split). Subsequently, the maximum depth of the decision trees was examined to prevent overfitting. Other hyperparameters, such as the minimum number of samples required to split an internal node or the minimum number of samples required to be at a leaf node, were not applied.</ns0:p></ns0:div>
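A hedged sketch of the per-class evaluation within one cross-validation iteration is shown below: scikit-learn's classification_report yields precision and recall separately for the fake and real groups, matching the characteristics listed above, while time.perf_counter measures the time spent on one iteration. The data are random placeholders.

import time
import numpy as np
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 33))                 # e.g., PosF input vectors
y = rng.choice(['fake', 'real'], 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
start = time.perf_counter()
clf = DecisionTreeClassifier(criterion='gini', splitter='best',
                             max_depth=6, random_state=0).fit(X_tr, y_tr)
elapsed = time.perf_counter() - start     # time spent on one iteration
# Reports prec_fake, rec_fake, prec_real, rec_real and the f1-scores.
print(classification_report(y_te, clf.predict(X_te)))
print('training time:', elapsed)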
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Experiment</ns0:head><ns0:p>The quality of the proposed models (TfIdf(1,4), PosfIdf(1,4), PosF(1,4), merge(1,4)) was evaluated using the evaluation measures (prec, rec, f1-sc, prec_fake, rec_fake, prec_real, rec_real), as well as from the time efficiency point of view (time). A comparison of the depths of the complete decision trees (Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>) showed that there is no point in considering a depth greater than 29. Therefore, decision trees with a maximal depth less than 30 were created in line with the methodology referred to in section 3.8. The evaluation measures (Fig <ns0:ref type='figure'>6a</ns0:ref>) increase up to the depth of five and reach values smaller than 0.73 on average (rec < 0.727, f1-sc < 0.725, prec < 0.727). Subsequently, they reach stable values greater than 0.73 from the depth of six (rec > 0.732, f1-sc > 0.731, prec > 0.732) and less than 0.75 (rec < 0.742, f1-sc < 0.740, prec < 0.741). As a result, the PosF technique reaches better performance at small values of depth (up to 4) compared to the others. Since the merge technique originates from joining the PosF and TF-IDF techniques, its results will naturally be better.</ns0:p><ns0:p>The model performance (prec) for the given depths (< 30) reached above-average values from the depth of six (Fig <ns0:ref type='figure'>2b</ns0:ref>). The model performance measure prec did not show statistically significant differences from the depth of six (p > 0.05). Similar results were also obtained for the measures rec and f1-sc. As a result, the models' performance will be further examined for depths 6-10. The Kolmogorov-Smirnov test was applied to verify the normality assumption. This test was statistically significant in all cases of the examined evaluation measures and time (p < 0.05). It means that the assumption was violated; if the assumption of covariance matrix sphericity is not met, the type I error increases <ns0:ref type='bibr' target='#b0'>(Ahmad, 2013)</ns0:ref>, <ns0:ref type='bibr' target='#b11'>(Haverkamp & Beauducel, 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b24'>(Munkova et al., 2020)</ns0:ref>. Therefore, the degrees of freedom of the used F-test were adjusted using the Greenhouse-Geisser and Huynh-Feldt adjustments (Epsilon): df1 = (J - 1)(I - 1), df2 = (N - J)(I - 1),</ns0:p></ns0:div>
<ns0:div><ns0:p>where I is the number of levels of the factor model (dependent samples), J is the number of levels of the factor deep (independent samples), and N is the number of cases. As a result, the declared level of significance was reached. The Bonferroni adjustment was used to apply multiple comparisons. This adjustment is usually applied when several dependent and independent samples are simultaneously compared <ns0:ref type='bibr' target='#b18'>(Lee & Lee, 2018)</ns0:ref>, <ns0:ref type='bibr' target='#b9'>(Genç & Soysal, 2018)</ns0:ref>. The Bonferroni adjustment represents the most conservative approach, in which the level of significance (alpha) for a whole set of N cases is set so that the level of significance for each case is equal to alpha/N.</ns0:p></ns0:div>
<ns0:div><ns0:head>Model Performance</ns0:head><ns0:p>The first phase of the analysis focused on the performance of the models. The performance was analysed by the selected evaluation measures (prec, rec, f1-sc, prec_fake, rec_fake, prec_real, rec_real) according to the within-group factor, the between-groups factor and their interaction.</ns0:p><ns0:p>The models (TfIdf(1,4), PosfIdf(1,4), PosF(1,4), merge(1,4)) represented the levels of the within-group factor. The depths of the decision tree (6-10) represented the levels of the between-groups factor. Considering the violated assumption of covariance matrix sphericity, the modified tests for repeated measures were applied to assess the performance of the examined models <ns0:ref type='bibr' target='#b7'>(Dien, 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b23'>(Montoya, 2019)</ns0:ref>. Epsilon represented the degree of violation of this assumption. If Epsilon equals one, the assumption is fulfilled. The values of Epsilon were significantly lower than one in both cases (Epsilon < 0.69). The null hypotheses claim that there is no statistically significant difference in the quality of the examined models. The null hypotheses, which claimed that there is no statistically significant difference in the values of the evaluation measures prec, rec and f1-sc between the examined models, were rejected at the 0.001 significance level (prec: G-G Epsilon = 0.597, H-F Epsilon = 0.675, adj.p < 0.001; rec: G-G Epsilon = 0.604, H-F Epsilon = 0.684, adj.p < 0.001; f1-sc: G-G Epsilon = 0.599, H-F Epsilon = 0.678, adj.p < 0.001). On the contrary, the null hypotheses, which claimed that the performance of the models (prec/rec/f1-sc) does not depend on a combination of the within-group factor and the between-groups factor (model x deep), were not rejected (p > 0.05). The factor deep does not have any impact on the performance of the examined models.</ns0:p><ns0:p>After rejecting the global null hypotheses, the statistically significant differences between the models in the quality of the models' predictions were researched. Three homogeneous groups were identified based on prec, rec and f1-sc using the multiple comparisons. The PosF(1,4) and TfIdf(1,4) techniques reached the same quality of the model's predictions (p > 0.05). Similar results were obtained for the pair PosfIdf(1,4) and PosF(1,4), as well as for the pair TfIdf(1,4) and merge(1,4). Statistically significant differences in the quality of the model's predictions (Table <ns0:ref type='table'>3</ns0:ref>) were identified between the model merge(1,4) and the models PosF(1,4) and PosfIdf(1,4) (p < 0.05), as well as between the models TfIdf(1,4) and PosfIdf(1,4) (p < 0.05). The merge(1,4) model reached the highest quality, considering the evaluation measures. The values of Epsilon were smaller than one in the case of the partial evaluation measures prec_fake and rec_fake for the fake news. This finding was more notable in the case of the Greenhouse-Geisser correction (Epsilon < 0.78). The null hypotheses, which claimed that there is no significant difference in the values of the evaluation measures prec_fake and rec_fake between the examined models, were rejected (prec_fake: G-G Epsilon = 0.779, H-F Epsilon = 0.897, adj.p < 0.001; rec_fake: G-G Epsilon = 0.756, H-F Epsilon = 0.869, adj.p < 0.001). The impact of the between-groups factor deep has not been proven (p > 0.05). The performance of the models (prec_fake/rec_fake) does not depend on the interaction of the factors model and deep.
Two homogeneous groups were identified for prec_fake (Table <ns0:ref type='table' target='#tab_0'>4A</ns0:ref>). PosF(1,4) and TfIdf(1,4), as well as PosF(1,4) and PosfIdf(1,4), reached the same quality of the model's predictions (p > 0.05). The statistically significant differences in the quality of the model's predictions (Table <ns0:ref type='table'>4A</ns0:ref>) were identified between merge(1,4) and the other models (p < 0.05) and between TfIdf(1,4) and PosfIdf(1,4) (p < 0.05). The merge(1,4) model reached the best quality from the prec_fake point of view. On the other hand, the PosF(1,4) model reached the highest quality considering the results of the multiple comparison for rec_fake (Table <ns0:ref type='table'>4B</ns0:ref>). Two homogeneous groups (PosfIdf(1,4), merge(1,4), TfIdf(1,4)) and (merge(1,4), TfIdf(1,4), PosF(1,4)) were identified based on the evaluation measure rec_fake (Table <ns0:ref type='table'>4B</ns0:ref>). The statistically significant differences (Table <ns0:ref type='table'>4B</ns0:ref>) were identified only between the PosF(1,4) and PosfIdf(1,4) models (p < 0.05).</ns0:p><ns0:p>Similarly, the values of Epsilon were smaller than one (G-G Epsilon < 0.85) in the case of the evaluation measures prec_real and rec_real, which evaluate the quality of the prediction for the partial class of real news. The null hypotheses, which claimed that there is no statistically significant difference in the values of the evaluation measures prec_real and rec_real between the examined models, were rejected at the 0.001 significance level (prec_real: G-G Epsilon = 0.596, H-F Epsilon = 0.675, adj.p < 0.001; rec_real: G-G Epsilon = 0.842, H-F Epsilon = 0.975, adj.p < 0.001). The impact of the between-groups factor deep has also not been proven in this case (p > 0.05). It means that the performance of the models (prec_real/rec_real) does not depend on the interaction of the factors (model x deep). The model merge(1,4) reached the highest quality from the point of view of the evaluation measures prec_real and rec_real (Table <ns0:ref type='table'>5</ns0:ref>). Only one homogeneous group was identified from the multiple comparisons for prec_real (Table <ns0:ref type='table'>5A</ns0:ref>). PosF(1,4), TfIdf(1,4) and merge(1,4) reached the same quality of the model's predictions (p > 0.05). The statistically significant differences in the quality of the model's predictions (Table <ns0:ref type='table'>5A</ns0:ref>) were identified between the PosfIdf(1,4) model and the other models (p < 0.05). Two homogeneous groups (PosfIdf(1,4), PosF(1,4)) and (PosF(1,4), TfIdf(1,4)) were identified from the evaluation measure rec_real point of view (Table <ns0:ref type='table'>5B</ns0:ref>). The statistically significant differences (Table <ns0:ref type='table'>5B</ns0:ref>) were identified between the model merge(1,4) and the other models (p < 0.05).</ns0:p></ns0:div>
<ns0:div><ns0:head>Time Efficiency</ns0:head><ns0:p>The time efficiency of the proposed techniques was evaluated in the second phase of the analysis. Time efficiency (time) was analysed in dependence on the within-group factor, the between-groups factor and their interaction. The models represented the examined levels of the within-group factor, and the decision tree depths represented the between-groups factor. The modified tests for repeated measures were again applied to verify the time efficiency of the proposed models. The values of Epsilon were identical and significantly smaller than one for both corrections (Epsilon < 0.34). The null hypothesis, which claimed that there is no statistically significant difference in time between the examined models, was rejected at the 0.001 significance level (time: G-G Epsilon = 0.336, H-F Epsilon = 0.366, adj.p < 0.001). Similarly, the null hypothesis, which claimed that the time efficiency (time) does not depend on the interaction between the within-group factor and the between-groups factor (model x deep), was also rejected at the 0.001 significance level. The factor deep has a significant impact on the time efficiency of the examined models. Only one homogeneous group based on time was identified from the multiple comparisons (Table <ns0:ref type='table'>6</ns0:ref>). PosF(1,4) and TfIdf(1,4) reached the same time efficiency (p > 0.05). Statistically significant differences in time (Table <ns0:ref type='table'>6</ns0:ref>) were identified between the merge(1,4) model and the other models (p < 0.05), as well as between the PosfIdf(1,4) model and the other models (p < 0.05). As a result, the PosfIdf(1,4) model can be considered the most time-efficient model, while the merge(1,4) model was considered the least time-efficient one. Four homogeneous groups were identified after including the between-groups factor deep (Table <ns0:ref type='table'>7</ns0:ref>). The models PosfIdf(1,4) with depths 6-10 and TfIdf(1,4) with depth six have the same time efficiency (p > 0.05). The models TfIdf(1,4) and PosF(1,4) have the same time efficiency for all depths (p > 0.05). The models merge(1,4) with depths 6-8 (p > 0.05) and the models merge(1,4) with depths 7-10 (p > 0.05) were less time-efficient.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The paper analysed a unique dataset from the freely available fake and true news datasets written in English to evaluate whether n-grams created from POS tags could be used for reliable fake news classification. Two techniques based on POS tags were proposed and compared with the performance of the reference TF-IDF technique on a given classification task from the natural language processing research field. The results show statistically insignificant differences between the PosF and TF-IDF techniques. These differences were comparable in all observed performance metrics, including accuracy, precision, recall and f1-score. Therefore, it can be concluded that morphological analysis can be applied to fake news classification. Moreover, the charts of descriptive statistics show that the TF-IDF technique reaches better results, though statistically insignificantly. It is necessary to note for completeness that statistically significant differences in the observed performance metrics were identified between the morphological technique PosfIdf and TF-IDF. The reason is that the PosfIdf technique includes the ratio of the relative frequency of POS tags and the inverse document frequency function. This division by the number of documents in which the POS tag was observed caused the weak results of this technique. This is not surprising, since the selected 33 POS tags were included in almost all the dataset documents. Therefore, the value of the inverse document frequency was very high, which led to a very low value of the ratio. However, the failure of this technique does not diminish the importance of the finding that the applied morphological techniques are comparable with the traditional reference technique TF-IDF. The aim to find a morphological technique which would be better than TF-IDF was fulfilled in the case of the PosF technique. The Merged TF-IDF and PosF technique was included in the experiment to determine whether it is possible to improve the reference TF-IDF technique using POS tags. Considering the final performance measures, mainly precision, it can be concluded that they are higher. It means that the applied techniques of morphological analysis could improve the precision of the TF-IDF technique. However, it has not been proven that this improvement is statistically significant. The fact that the reference TF-IDF technique had been favoured in the presented experiment should be considered. In other words, removing the stop words from the input vector of the TF-IDF technique increased the classification accuracy. On the other hand, removing stop words is not suitable for the techniques based on the POS tags, because it can cause the loss of important information about the n-gram structure. This statement is substantiated by comparing the values of accuracy for individual n-grams (Fig. <ns0:ref type='figure'>3</ns0:ref>). The PosF technique achieved better results for 2-grams, 3-grams and 4-grams than for unigrams. Conversely, the stop words did not have to be removed from the input vector of the TF-IDF technique. However, the experiment aimed to compare the performance of the proposed improvements with the best-prepared TF-IDF technique. The time efficiency of the examined techniques was evaluated simultaneously with their performance. The negligible differences between the time efficiency of the TF-IDF and PosF techniques can be considered most surprising.
Although the PosF technique uses only 33 POS tags compared to the large vectors of tokens/words in TF-IDF, the time efficiency is similar. The reason is that the identification of POS tags in the text is more time-consuming than the identification of tokens. On the other hand, the merged technique, with the best performance results, was the most time-consuming. This finding was expected because the merged vector calculation requires calculating and joining the TF-IDF and PosF vectors. The compared classification models for fake and true news classification are based on the relative frequencies of the occurrences of the morphological tags. It is not important which morphological tags were identified in the rules (nodes of the decision tree) using the given selection measures. At the same time, the exact border values for the occurrence of morphological tags can also be considered unimportant, because the more important fact is that such differences exist and that it is possible to find values of occurrences of morphological tags which allow classifying fake and true news correctly. The realised set of experiments is unique in terms of the proposed pre-processing techniques used to prepare the input vectors for classifiers. The decision was to use as simple classifiers as possible, namely decision trees, because of their ability to easily interpret the obtained knowledge. In other words, decision trees provide additional information about which POS tags and consequent n-grams are important and characteristic for fake news and which for real news. On the other hand, it should be emphasised that their classification precision is worse than that of other types of classifiers, like neural networks.</ns0:p><ns0:p>Deepak and Chitturi (2020) used news content for fake news classification. The authors applied a similar approach based on converting the content to a vector (bag-of-words, Word2Vec, GloVe). The classification models reached a higher accuracy (0.8335-0.9132). However, the authors reached these results using additional secondary features, like the domain names that published the article under consideration, the authors, etc. These features were mined by search engines using the content of the news as keywords for querying. The data collected in this way were added to the body of the article. The application of neural networks is the second important difference, which led to the higher classification performance. However, as mentioned earlier, the experiment described in this paper was focused on assessing the suitability of the n-grams and POS tags for the pre-processing of input vectors. The priority was not to reach the best performance measures. It can be accepted that adding additional secondary features and using neural networks, similarly to the described work of <ns0:ref type='bibr' target='#b5'>(Deepak and Chitturi, 2020)</ns0:ref>, would lead to similar results. Similarly to the previous experiment, <ns0:ref type='bibr'>Meel and Vishwakarma (2021)</ns0:ref> used visual image features using image captioning and forensic analysis for improving fake news identification. However, the results obtained in this study are comparable with the results presented in this article if these additional features are not taken into account. Moreover, these authors exploited hidden pattern extraction capabilities from text, unlike the morphological analysis used in the research presented in this article.
The selection of a simple machine learning technique, like decision trees, can be considered a limitation of the research presented in this article. However, the reason why this technique was selected relates to the parallel research, which focused on finding the most frequent n-grams in fake and real news and their consequent linguistic analysis. This research extends the previous one and tries to determine whether the POS tags and n-grams can be further used for fake news classification. It is possible to assume that the morphological tags can be used as the input to fake news classifiers. Moreover, the pre-processed datasets are suitable for other classification techniques, improving the accuracy of fake news classification. It means that whether the relative frequencies of occurrences of the morphological tags are further used as the input layer of a neural network or added to the training dataset of other classifiers, the found information can improve the accuracy of that fake news classifier.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>Despite several authors' statements that the morphological characteristics of the text do not allow fake news classification with sufficient accuracy, the realised experiment proved that the selected morphological technique is comparable with the traditional reference technique TF-IDF, widely used in the natural language processing domain. The suitability of the techniques based on morphological analysis has been proven on a contemporary dataset including 1100 labelled real and fake news about Covid-19.</ns0:p></ns0:div>
<ns0:div><ns0:p>The experiment confirmed the validity of the newly proposed techniques based on the POS tags and n-grams against the traditional technique TF-IDF. The article describes the experiment with a set of pre-processing techniques used to prepare input vectors for a data mining classification task. The overall contribution of the proposed improvements was expressed by the characteristic performance measures of the classification task (accuracy, precision, recall and f1-score). Besides the variables defined by the input vectors, the hyperparameters max_depth and n-gram length were observed. K-fold validation was applied to consider the random errors. The global null hypotheses were evaluated using adjusted tests for repeated measures. Subsequently, multiple comparisons with Bonferroni adjustment were used to compare the models. Various performance measures ensured the robustness of the obtained results. The decision trees were chosen to classify fake news because they create easily understandable and interpretable results compared to other classifiers. Moreover, they allow the generalisation of the inputs. An insufficient generalisation can cause overfitting, which leads to the wrong classification of individual observations of the testing dataset. Different values of the parameter of maximal depth were researched to obtain the maximal value of precision. The most suitable value of this parameter was different for each of the proposed techniques. Therefore, the statistical evaluation was realised considering the maximal depth. Despite the fact that a statistically significant difference has not been found, the proposed techniques based on morphological analysis in combination with the created n-grams are comparable with traditional ones, for example, with the TF-IDF used in this experiment. Moreover, the advantages of the PosF technique can be listed as follows:</ns0:p><ns0:p>• A smaller size of the input vectors. The average number of vector elements was 69 775, while in the case of TF-IDF, it was 817 213.3 (Table 2).</ns0:p><ns0:p>• Faster creation of the input vector. The possibility of using the proposed techniques based on POS tags for the classification of new, yet untrained fake news datasets is considered the last advantage of the proposed techniques. The reason is that TF-IDF works with the words and counts their frequencies in fake news. However, the traditional classifiers can fail to correctly classify fake news about a new topic because they have not yet been trained on the frequencies of the new words. On the other hand, the PosF technique is more general and focuses on the primary relationships between POS tags, which are probably also similar in the case of new topics of fake news. This assumption will be evaluated in future research. The current most effective fake news classification is based predominantly on neural networks and a weighted combination of techniques, which deal with news content, social context, credit of a creator/spreader and analysis of target victims. It is clear that decision tree classifiers are not so </ns0:p></ns0:div>
present, non-3d),  TO (to),  VB (verb, base form),  JJ (adjective), PeerJ Comput. Sci. reviewing PDF | (CS-2021:01:57539:2:0:NEW 20 May 2021) Manuscript to be reviewed Computer Science  NNP (proper noun, singular),  IN (preposition/subordinating conjunction),  NNS (noun plural),  RB (adverb).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>PosF and PosfIdf input vector preparation techniques on identified POS tags. 3. Application of the reference TF-IDF technique to create input vectors. This technique uses tokens to represent the words modified by the stemming algorithm. Simultaneously, the stop words are removed. 4. Joining the PosF and TF-IDF techniques into the merged vector. 5. Iteration with different values of maximal depth (1, …, 30): randomised distribution of the input vectors of the PosF, PosfIdf, TfIdf, and merged techniques into training and testing subsets in accordance with the requirements of 10-fold cross-validation; calculation of a decision tree for each training subset with the given maximal depth.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>The examined variables (model x evaluation measure, model x time) have a normal distribution for all levels of the between-groups factor deep (6: max D < 0.326, p > 0.05; 7: max D < 0.247, p > 0.05; 8: max D < 0.230, p > 0.05; 9: max D < 0.298, p > 0.05; 10: max D < 0.265, p > 0.05). The Mauchly sphericity test was consequently applied for verifying the covariance matrix sphericity assumption for repeated measures with four levels (TfIdf(1,4), PosfIdf(1,4), PosF(1,4), merge(1,4)) with the following results (prec: W = 0.372, Chi-Square = 43.217, p < 0.001; rec: W = 0.374, Chi-Square = 43.035, p < 0.001; f1-sc: W = 0.375, Chi-Square = 42.885, p < 0.001; prec_fake: W = 0.594, Chi-Square = 22.809, p < 0.001; rec_fake: W = 0.643, Chi-Square = 19.329, p < 0.01; prec_real: W = 0.377, Chi-Square = 42.641, p < 0.001; rec_real: W = 0.716, Chi-Square = 14.599, p < 0.05; time: W = 0.0001, Chi-Square = 413.502, p < 0.001).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>adj.df1 = Epsilon(J - 1)(I - 1), adj.df2 = Epsilon(N - l)(I - 1)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>• A shorter training phase of the model. • A more straightforward and more understandable model. The model based on the PosF technique achieved the best results at smaller values of the maximal decision tree depth.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>frequently used. However, this article focused only on a very narrow part of the researched issues, and these classifiers have been used mainly for easier understanding of the problem. Therefore, future work will evaluate the proposed techniques in conjunction with other contemporary fake news classifiers.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>*** -homogeneous groups (p > 0.05)</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,178.87,525.00,360.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,199.12,525.00,380.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,199.12,525.00,360.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,199.12,525.00,196.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,199.12,525.00,196.50' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 : Morphological tags used for news classification (Schmid, 1994).</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>GTAG</ns0:cell><ns0:cell>POS Tags</ns0:cell></ns0:row><ns0:row><ns0:cell>group C</ns0:cell><ns0:cell>CC (coordinating conjunction), CD (cardinal number)</ns0:cell></ns0:row><ns0:row><ns0:cell>group D</ns0:cell><ns0:cell>DT (determiner)</ns0:cell></ns0:row><ns0:row><ns0:cell>group E</ns0:cell><ns0:cell>EX (existential there)</ns0:cell></ns0:row><ns0:row><ns0:cell>group F</ns0:cell><ns0:cell>FW (foreign word)</ns0:cell></ns0:row><ns0:row><ns0:cell>group I</ns0:cell><ns0:cell>IN (preposition, subordinating conjunction)</ns0:cell></ns0:row><ns0:row><ns0:cell>group J</ns0:cell><ns0:cell>JJ (adjective), JJR (adjective, comparative), JJS (adjective, superlative)</ns0:cell></ns0:row><ns0:row><ns0:cell>group M</ns0:cell><ns0:cell>MD (modal)</ns0:cell></ns0:row><ns0:row><ns0:cell>group N</ns0:cell><ns0:cell>NN (noun, singular or mass), NNS (noun plural), NNP (proper noun, singular), NNPS (proper noun, plural)</ns0:cell></ns0:row><ns0:row><ns0:cell>group P</ns0:cell><ns0:cell>PDT (predeterminer), POS (possessive ending), PP (personal pronoun)</ns0:cell></ns0:row><ns0:row><ns0:cell>group R</ns0:cell><ns0:cell>RB (adverb), RBR (adverb, comparative), RBS (adverb, superlative), RP (particle)</ns0:cell></ns0:row><ns0:row><ns0:cell>group T</ns0:cell><ns0:cell>TO (infinitive 'to')</ns0:cell></ns0:row><ns0:row><ns0:cell>group U</ns0:cell><ns0:cell>UH (interjection)</ns0:cell></ns0:row><ns0:row><ns0:cell>group V</ns0:cell><ns0:cell>VB (verb be, base form), VBD (verb be, past tense), VBG (verb be, gerund/present participle), VBN (verb be, past participle), VBP (verb be, sing. present, non-3d), VBZ (verb be, 3rd person sing. present)</ns0:cell></ns0:row><ns0:row><ns0:cell>group W</ns0:cell><ns0:cell>WDT (wh-determiner), WP (wh-pronoun), WP$ (possessive wh-pronoun), WRB (wh-adverb)</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 : Selected characteristics of the complete decision trees.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell /><ns0:cell>TfIdf(1,4)</ns0:cell><ns0:cell>PosfIdf(1,4)</ns0:cell><ns0:cell>PosF(1,4)</ns0:cell></ns0:row><ns0:row><ns0:cell>average (depth)</ns0:cell><ns0:cell>25.1</ns0:cell><ns0:cell>15.6</ns0:cell><ns0:cell>17.2</ns0:cell></ns0:row><ns0:row><ns0:cell>min (depth)</ns0:cell><ns0:cell>22</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>11</ns0:cell></ns0:row><ns0:row><ns0:cell>max (depth)</ns0:cell><ns0:cell>30</ns0:cell><ns0:cell>19</ns0:cell><ns0:cell>21</ns0:cell></ns0:row><ns0:row><ns0:cell>average (node count)</ns0:cell><ns0:cell>171.4</ns0:cell><ns0:cell>157.4</ns0:cell><ns0:cell>156</ns0:cell></ns0:row><ns0:row><ns0:cell>average (leaf count)</ns0:cell><ns0:cell>86.2</ns0:cell><ns0:cell>79.2</ns0:cell><ns0:cell>78.5</ns0:cell></ns0:row><ns0:row><ns0:cell>average (number of vector elements)</ns0:cell><ns0:cell>817213.3</ns0:cell><ns0:cell>69512.4</ns0:cell><ns0:cell>69775</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Associate Professor
doc. PaedDr. Jozef Kapusta, PhD.
Department of Informatics
Faculty of Natural Sciences
Constantine the Philosopher University in Nitra
Trieda A. Hlinku 1
94901 Nitra
Slovakia
jkapusta@ukf.sk
May 20th, 2021
Cover Letter
Dear Editors, Dear Reviewers,
Thank you for your valuable feedback and contribution to improving the article 'Using of N-Grams from Morphological Tags for Fake News Classification'.
We have improved our previous research question, research design and discussion based on all reviewers’ recommendations.
We believe that the improved manuscript meets the required quality standards and is now suitable for publication in PeerJ Computer Science.
Please, find below our response to the reviewer’s comments.
doc. PaedDr. Jozef Kapusta, PhD.
Associate Professor
On behalf of all authors.
The List of Responses to the Reviewers’ Comments
Review 1:
Thank you for your time, effort, and valuable feedback. We appreciate your contribution to improving the comprehensibility and scientific content of the paper. Please, find our answers to your comments and suggestions below.
The paper is well written and presented well.
Thank you for your positive evaluation of the paper, your comments, and your suggestions. We appreciate your contribution, how the results of the realised research could be improved.
In the abstract, the author did not mention about the rationale behind the proposed system. Also, the abstract has vague information, which need to be replaced with the important information. Also, it is necessary to add simulation parameters in the same section.
The abstract has been improved. Essential information about the research background, applied techniques, results, and performance measures of the model has been reconsidered and added [p. 1; line 15, lines 25-35].
Introduction is too generic and is lacking important information about the N-Grams from Morphological Tags and proposed scheme. Why the author proposed such scheme is missing in the introduction. Also, it is necessary to add contribution and advantages of the proposed scheme at the end of the proposed scheme.
Some sentences from the Introduction section were removed and replaced by the required information about the n-grams [p. 2; lines 59-63]. Simultaneously, the main aim of the paper was introduced in more detail to help the reader to understand the main contribution of the paper better [p. 2; lines 77-81]. Finally, the proposed schema was modified in line with the recommendations of the reviewers [p. 11, lines 444-456, p. 12; lines 481-483].
Background and related work is missing drawbacks of the existing scheme. Also, author of the paper has done some old survey. It is highly recommended to add latest references from reputable journals.
The related work section of the paper was extended with several articles indexed in the Web of Science database and published in high-quality journals or conferences [p. 4; lines 150-153, p. 5; 158-164, 169-171, 174-177, and p.7; 225-230]. The results of similar experiments have been analysed. The approaches used there were compared with the approach used in our article [p. 17; lines 698-723].
Proposed scheme section is missing an overview of the proposed scheme and also it is necessary to add overflow diagram in the same section.
The overflow diagram has been reworked so that the individual steps of the proposed methodology can be understood easily. Simultaneously, the list of the individual steps of the applied methodology has been checked to make the order of the partial experiments easier to comprehend [p. 11; lines 444-456, p. 12; lines 479-483].
In addition, figures are not very clear. Please re-draw figures and add important information because you have mentioned about various systems but you didn’t include it in the explanation of the proposed figure diagram.
All figures have been created with higher resolution. The section Methodology of the Main Experiment [p. 11; lines 440-455, p.12; 456-497] was reworked to improve its clarity and comprehensibility. The added paragraphs should improve the overall readability of the article. Simultaneously, they could contribute to a better understanding of the partial experiments.
Results and evaluation is missing various terminologies. It is highly recommended to add the working of the implementation in algorithm. In addition, which parameters are considered and why they are considered is necessary to be added in this section.
We appreciate your recommendations. We have added descriptions of the essential parameters and hyperparameters of the models to the article [p. 10; lines 375-383, p. 11, lines 451-457]. The algorithm itself was created as a set of Jupyter Python Notebooks (.ipynb) and included in supplemental files in the submission management system. We also added the list of Python libraries, which have been used in the experiments [p. 12; lines 488-497]. The most important parts of the algorithm were included in the section The Methodology of the Main Experiment [p. 11; lines 446-457]. Moreover, we added for better clarity Figure 4 – The individual steps of the experiment for comparing four proposed techniques, 30 values of maximal depth and 10-fold cross-validation. Finally, we extended the last section of the article and added several sentences to better illustrate the various terminologies for evaluation and the results [p. 18; lines 744-754].
Author should compare their scheme with existing schemes. Also, results are not enough to defend the proposed scheme. Please add more results and discussion.
We also modified the discussion section of the article. We compare our results with the outcomes of several other current research papers published on the same topic [p. 18; lines 744-754]. However, we would like to emphasise that this comparison in terms of the best-achieved performance metrics is not straightforward. As we have already mentioned in the article, we deliberately used the simplest classification model to understand the impact of individual features. At the same time, we intended to assess the suitability of the proposed pre-processing techniques using n-grams from POS tags for creating input vectors of this classifier. In other words, our priority was to find out if it is possible to improve the classification with proposed techniques, not to achieve the best performance metrics. Although using the most promising classification models based on neural networks would improve the performance metrics, comparing their features and understanding them would be more problematic in the case of neural networks. We also mention this reasoning and constraints in the discussion section [p. 18; lines 744-747, and 748-754].
Conclusion need to revise as it looks like the summary of the paper.
Thank you for your proposal on how to make the conclusion section of the article more concise. We tried to exclude some paragraphs that looked like a summary and focused on emphasising the overall contribution of the realised experiments, the proposed improvements of the pre-processing techniques, and future research directions.
Also, please add latest references.
Considering your recommendations, we have looked for high-quality articles newly added and indexed in the Web of Knowledge, IEEE Xplore and ACM library databases. Short summaries of their outcomes were added to the related work, the discussion, and the list of references [p. 4; lines 150-153, p. 5; lines 158-164, 169-171, and 174-177, p. 6; lines 225-230, p. 18; lines 744-747, p. 19; lines 748-754].
Review 2:
Thank you for your time, effort, and valuable feedback. We appreciate your contribution to improving the comprehensibility and scientific content of the paper. Please, find our answers to your comments and suggestions below.
The author has present work with quality and the suggested point can improve the quality of the manuscript.
Thank you for your positive evaluation of the paper, your comments, and your suggestions. We appreciate your contribution, how the results of the realised research could be improved.
The flow of the experiment is not clear author should write concisely but make it clear and easy to understand.
Thank you for your feedback and suggestions. We agree with you that the sequence of the partial experiments realised in the paper could be harder to read at first glance. Therefore, we added some visualisations, which explain graphically individual steps of the applied methodology and the reason for the partial experiments. Moreover, the section The Methodology of the Main Experiment has been revised for better clarity and understanding. Several paragraphs have been added, which we believe will improve the readability of this article [p. 11; lines 444-456, p.12; lines 481-483].
Training and testing records shown be mention in a table for each target class.
As was mentioned in the article, we used k-fold validation. The entire dataset is repeatedly divided into training and testing subsets, in our case, for 10-fold validation in a ratio of 9:1. As a result, the composition of the training and testing sets is always different for the ten measurements. Providing a table of training and testing records for each target class could be confusing because of the random nature of their creation, mainly because the training and testing sets are always different. However, we have given a more detailed description of the k-fold validation process by adding a separate subsection to the article for a better understanding [p. 10; lines 375-383].
Validation should be done by authors
Thank you for your recommendation. The discussion about the validation of the results has been added to the conclusion section [p.18; lines 744-754]. All proposed techniques have been applied to the real dataset of true and fake news. Simultaneously, we realized a validation focused on comparing the obtained results with the recent experiments realized in the examined research field [p. 15; lines 698-723].
Typos and grammar needed to be check throughout the paper.
A native speaker has thoroughly checked the typos and grammar. Some paragraphs have been rewritten for better readability.
Is there any results validation technique used by the author?
A summary of the validation of the results was also added to the conclusion section of the article [p. 18; lines 744-754]. Simultaneously, a note about the k-fold cross-validation was added where required [p.10; lines 375-383].
The quality of the figures should be improved.
The quality of the figures was improved.
Thank you again for the reviewers‘ effort and all suggestions.
We believe that we have correctly understood the reviewers' comments and recommendations and included them in this improved version of the manuscript.
Simultaneously, we believe we have explained the possible contribution of the paper and improved its readability and comprehensibility.
Besides the final decision, we appreciate the reviewers' valuable feedback, which motivated us to research fake news identification further using morphological analysis.
Best regards
Jozef Kapusta
Corresponding author
" | Here is a paper. Please give your review comments after reading it. |
180 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The research of the techniques for effective fake news detection has become very needed and attractive. These techniques have a background in many research disciplines, including morphological analysis. Several researchers stated that simple content-related ngrams and POS tagging had been proven insufficient for fake news classification. However, they did not realise any empirical research results, which could confirm these statements experimentally in the last decade. Considering this contradiction, the main aim of the paper is to experimentally evaluate the potential of the common use of n-grams and POS tags for the correct classification of fake and true news. The dataset of published fake or real news about the current Covid-19 pandemic was pre-processed using morphological analysis. As a result, n-grams of POS tags were prepared and further analysed. Three techniques based on POS tags were proposed and applied to different groups of n-grams in the pre-processing phase of fake news detection. The n-gram size was examined as the first. Subsequently, the most suitable depth of the decision trees for sufficient generalization was scoped. Finally, the performance measures of models based on the proposed techniques were compared with the standardised reference TF-IDF technique.</ns0:p><ns0:p>The performance measures of the model like accuracy, precision, recall and f1-score are considered, together with the 10-fold cross-validation technique. Simultaneously, the question, whether the TF-IDF technique can be improved using POS tags was researched in detail. The results showed that the newly proposed techniques are comparable with the traditional TF-IDF technique. At the same time, it can be stated that the morphological analysis can improve the baseline TF-IDF technique. As a result, the performance measures of the model, precision for fake news and recall for real news, were statistically significantly improved.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Fake News is currently the biggest bugbear of the developed world <ns0:ref type='bibr' target='#b14'>(Jang et al., 2018)</ns0:ref>. Although the spreading of false information or false messages for personal or political benefit is certainly nothing new, current trends such as social media enable every individual to create false information more easily than ever before <ns0:ref type='bibr' target='#b2'>(Allcott & Gentzkow, 2017)</ns0:ref>. The article deals with evaluating four proposed techniques for fake and true news classification using morphological analysis. Morphological analysis belongs to the basic means of natural language processing research. It deals with the parts of speech tags (POS tags) as morphological characteristics of the word in context, which can be considered a style-based fake news detection method <ns0:ref type='bibr' target='#b38'>(Zafarani et al., 2019)</ns0:ref>. Linguistic-based features are extracted from the text content in terms of document organisation at different levels, such as characters, words, sentences and documents. Sentence-level features refer to all the important attributes that are based on the sentence scale. They include parts of speech tagging (POS), the average sentence length, the average length of a tweet/post, the frequency of punctuation, function words and phrases in a sentence, the average polarity of the sentence (positive, neutral or negative), as well as the sentence complexity <ns0:ref type='bibr' target='#b17'>(Khan et al., 2019)</ns0:ref>. Existing research articles mainly investigate standard linguistic features, including lexical, syntactic, semantic and discourse features, to capture the intrinsic properties of misinformation. Syntactic features can be divided into shallow syntactic features, to which the frequency of POS tags and punctuation belongs, and deep syntactic features <ns0:ref type='bibr' target='#b9'>(Feng, Banerjee & Choi, 2012)</ns0:ref>. Morphological analysis of POS tags based on n-grams is used in this paper to evaluate its suitability for successful fake news classification. An n-gram is a sequence of N tokens (words). N-grams are also called multi-word expressions or lexical bundles. N-grams can be generated on any attribute, with word and lemma being the most frequently used ones. For example, 'New York' is a 2-gram and 'The Three Musketeers' is a 3-gram. The analysis of n-grams is considered more meaningful than the analysis of the individual words (tokens) which constitute the n-grams. Several research articles stated that simple content-related n-grams and POS tagging had been proven insufficient for the classification task <ns0:ref type='bibr' target='#b33'>(Shu et al., 2017)</ns0:ref> <ns0:ref type='bibr' target='#b5'>(Conroy, Rubin & Chen, 2015)</ns0:ref> <ns0:ref type='bibr' target='#b35'>(Su et al., 2020)</ns0:ref>. However, these findings mainly represent the authors' opinion because they did not carry out or publish any empirical research results confirming these statements in the last decade. Considering this contradiction, the main aim of the paper is to experimentally evaluate the potential of the combined use of n-grams and POS tags for the correct classification of fake and true news. Therefore, continuous sequences of n items from a given sample of POS tags (n-grams) were analysed. The techniques based on POS tags were proposed and used in order to meet this aim.
Subsequently, these techniques were compared with the standardised reference TF-IDF technique to evaluate their main performance characteristics. Simultaneously, the question of whether the TF-IDF technique can be improved using POS tags was researched in detail. All techniques have been applied in the pre-processing phase to different groups of n-grams. The resulting datasets have been analysed using decision tree classifiers. The article aims to present and evaluate the proposed techniques for the pre-processing of input vectors of a selected classifier. These techniques are based on creating n-grams from POS tags. The research question is whether the proposed techniques are more suitable than the traditional baseline technique TF-IDF or whether these techniques are able to improve the results of the TF-IDF technique. All proposed techniques have been applied to different levels of n-grams. Subsequently, the outcomes of these techniques were used as the input vectors of the decision tree classifier. The following methodology was used for the evaluation of the suitability of the proposed approach based on n-grams of POS tags:</ns0:p><ns0:p>• Identification of POS tags in the analysed dataset.</ns0:p><ns0:p>• Definition of n-grams (1-grams, 2-grams, 3-grams, 4-grams) from POS tags. An n-gram represents a sequence of POS tags.</ns0:p><ns0:p>• Calculation of the frequency of occurrence of an n-gram in documents. In other words, the relative frequency of each n-gram in the examined fake and true news is calculated.</ns0:p><ns0:p>• Definition of input vectors of classifiers using the three proposed techniques for POS tags and the reference TF-IDF technique.</ns0:p><ns0:p>• Application of decision tree classifiers and parameter tuning concerning the different depths and lengths of n-grams.</ns0:p><ns0:p>• Identification and comparison of the decision trees' characteristics, mainly the accuracy, depth of the trees and time performance.</ns0:p><ns0:p>The structure of the article is as follows. The current state of the research in the field of fake news identification is summarised in the second section. The dataset of Covid-19 news used in the research is described in the third section. This section also describes the process of n-gram extraction from POS tags. Simultaneously, three POS tag-based techniques are proposed for preparing input vectors for decision tree classifiers. Subsequently, the same section discusses the process of decision tree modelling and the importance of finding the most suitable n-gram length and maximum depth. Finally, the statistical evaluation of the performance of the modified techniques based on POS tags for fake news classification is explained in the same section. The most important results, together with an evaluation of model performance and time efficiency of the proposed techniques, are summarised in the fourth section. The detailed discussion of the obtained results and the conclusions form the content of the last section of the article.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>Related Work</ns0:head><ns0:p>There has been no universal definition of fake news. However, Zhou and Zafarani (Zhou & Zafarani, 2020) define fake news as intentionally false news published by a news outlet. Simultaneously, they explained related terms in detail and tried to define them with a discussion of the differences, based on a huge set of related publications. The same authors categorised automatic detection of fake news from four perspectives: knowledge, style, propagation and source. Considering this, the research described in this paper belongs to the style-based fake news detection category, whose methods try to assess news intention <ns0:ref type='bibr'>(Zhou & Zafarani, 2020)</ns0:ref>. According to their definition, fake news style can be defined as a set of quantifiable characteristics (features) that can well represent fake news content and differentiate it from true news content. <ns0:ref type='bibr' target='#b18'>Kumar and Shah (Kumar & Shah, 2018)</ns0:ref> provided a comprehensive review of many facets of fake news distributed over the Internet. They quantified the impact of fake news and characterised the algorithms used to detect and predict them. Moreover, they summarised the current state of the research and the approaches applied in the field of fake news content analysis from the linguistic, semantic and knowledge discovery points of view. Despite the overall scope of the review, they did not draw conclusions about the overall performance of the style-based methods using ML algorithms.</ns0:p><ns0:p>Other contemporary surveys <ns0:ref type='bibr'>(Zhou & Zafarani, 2020;</ns0:ref><ns0:ref type='bibr'>Zhang & Ghorbani, 2020;</ns0:ref><ns0:ref type='bibr' target='#b33'>Shu et al., 2017)</ns0:ref> provide further evidence that the research related to the field of fake news is very intense now, mainly due to the negative consequences of fake news for society. The authors analysed various aspects of the fake news research and discussed the reasons, creators, resources and methods of fake news dissemination, as well as its impact and the machine learning algorithms created to detect fake news effectively. Sharma et al. <ns0:ref type='bibr' target='#b32'>(Sharma et al., 2019)</ns0:ref> also published a comprehensive survey highlighting the technical challenges of fake news. They summarised the characteristic features of the datasets of news and outlined the directions for future research. They discussed existing methods and ML techniques applicable to identifying and mitigating fake news, focusing on the significant advances in each method and their advantages and limitations. They discussed the results of the application of different classification algorithms, including decision trees. They concluded that using n-grams alone cannot entirely capture the finer-grained linguistic information present in the fake news writing style. However, their application to a dataset containing items pre-processed using POS tagging is not mentioned.</ns0:p><ns0:p>Zhang and Ghorbani (Zhang & Ghorbani, 2020) stated that because online fake reviews and rumours are always compact and information-intensive, their content lengths are often shorter than those of online fake news. As a result, traditional linguistic processing and embedding techniques such as bag-of-words or n-grams are suitable for processing reviews or rumours. However, they are not powerful enough for extracting the underlying relationships for fake news.
For online fake news detection, sophisticated embedding approaches are necessary to capture the key opinion and sequential semantic order in news content. De Oliveira et al. <ns0:ref type='bibr' target='#b7'>(de Oliveira et al. 2021)</ns0:ref> realised a literature survey focused on the data pre-processing techniques used in natural language processing, vectorisation, dimensionality reduction, machine learning, and quality assessment of information retrieval. They discuss the role of n-grams and POS tags only partially.</ns0:p><ns0:p>On the other hand, Li et al. <ns0:ref type='bibr' target='#b21'>(Li et al., 2020)</ns0:ref> consider the n-gram approach the most effective linguistic analysis method applied to fake news detection. Apart from word-based features such as n-grams, syntactic features such as POS tags are also exploited to capture linguistic characteristics of texts.</ns0:p><ns0:p>Stoick <ns0:ref type='bibr' target='#b34'>(Stoick, 2019)</ns0:ref> stated that previous linguistic work suggests that part-of-speech and n-gram frequencies are often different between fake and real articles. He created two models and concluded that some aspects of the fake articles remained readily identifiable, even when the classifier was trained on a limited number of examples. The second model used n-gram frequencies and neural networks, which were trained on n-grams of different lengths. He stated that the accuracy was nearly the same for each n-gram size, which means that some of the same information may be ascertainable across n-grams of different sizes. Ahmed et al. <ns0:ref type='bibr' target='#b1'>(Ahmed, Traore & Saad, 2017)</ns0:ref> further argued that the latest advances in natural language processing (NLP) and deception detection could help to detect deceptive news. They proposed a fake news detection model that analyses n-grams using different feature extraction and ML classification techniques. The combination of TF-IDF as feature extraction, together with an LSVM classifier, achieved the highest accuracy. Similarly, Jain (Jain, 2020) extracted linguistic/stylometric features, bag-of-words TF and bag-of-words TF-IDF vectors and applied various machine learning models, including bagging and boosting methods, to achieve the best accuracy. However, they stated that the lack of available corpora for predictive modelling is an essential limiting factor in designing effective models to detect fake news. Wynne et al. <ns0:ref type='bibr' target='#b37'>(Wynne, 2019)</ns0:ref> investigated two machine learning algorithms using word n-gram and character n-gram analysis. They obtained better results using character n-grams with TF-IDF and a Gradient Boosting Classifier. They did not discuss the pre-processing phase of the n-grams, as will be described in this article.</ns0:p><ns0:p>Thorne and Vlachos <ns0:ref type='bibr' target='#b36'>(Thorne & Vlachos, 2018)</ns0:ref> surveyed automated fact-checking research stemming from natural language processing and related disciplines, unifying the task formulations and methodologies across papers and authors. They identified the subject-predicate-object triples from small knowledge graphs to fact-check numerical claims. Once the relevant triple had been found, a truth label was computed through a rule-based approach that considered the error between the claimed values and the retrieved values from the graph.
Shu, Silva, Wang, Jiliang and Liu <ns0:ref type='bibr' target='#b33'>(Shu et al., 2017)</ns0:ref> proposed to use linguistic-based features such as total words, characters per word, frequencies of large words, and frequencies of phrases (i.e., n-grams and bag-of-words). They stated that fake contents are generated intentionally by malicious online users, so it is challenging to distinguish between fake information and truthful information only by content and linguistic analysis. POS tags were also exploited to capture the linguistic characteristics of the texts. However, several works have found the frequency distribution of POS tags to be closely linked to the genre of the text being considered <ns0:ref type='bibr' target='#b32'>(Sharma et al., 2019)</ns0:ref>. <ns0:ref type='bibr' target='#b28'>Ott et al. (Ott et al., 2011)</ns0:ref> examined this variation in POS tag distribution in spam, intending to find out whether this distribution also exists with respect to text veracity. They obtained better classification performance with the n-gram approach but found that the POS tag approach is a strong baseline outperforming the best human judge. Later work has considered more in-depth syntactic features derived from probabilistic context-free grammar (PCFG) trees. They assumed that the approach based only on n-grams is simple and cannot model more complex contextual dependencies in the text. Moreover, syntactic features used alone are less powerful than word-based n-grams, and a naive combination of the two cannot capture their complex interdependence. They concluded that the weights learned by the classifier are mainly in agreement with the findings of existing theories on deceptive writing <ns0:ref type='bibr' target='#b26'>(Ott, Cardie & Hancock, 2013)</ns0:ref>. Some authors, for example, Conroy, Rubin, and Chen <ns0:ref type='bibr' target='#b5'>(Conroy, Rubin & Chen, 2015)</ns0:ref>, have noted that simple content-related n-grams and POS tagging have been proven insufficient for the classification task. However, they did not research the n-grams created from the POS tags. Instead, they suggested using deep syntax analysis based on Probabilistic Context-Free Grammars (PCFG) to distinguish rule categories (lexicalised, non-lexicalised, parent nodes, etc.) for deception detection with 85-91% accuracy. Su et al. <ns0:ref type='bibr' target='#b35'>(Su et al., 2020)</ns0:ref> also stated that simple content-related n-grams and shallow part-of-speech (POS) tagging have proven insufficient for the detection task, often failing to account for important context information. On the other hand, these methods have proven useful only when combined with more complex analysis methods.</ns0:p><ns0:p>Khan et al. <ns0:ref type='bibr' target='#b17'>(Khan et al., 2019)</ns0:ref> stated that the linguistic-based features extracted from the news content are not sufficient for revealing the in-depth underlying distribution patterns of fake news <ns0:ref type='bibr' target='#b33'>(Shu et al., 2017)</ns0:ref>. Auxiliary features, such as the news author's credibility and the spreading patterns of the news, play more important roles in online fake news prediction.</ns0:p><ns0:p>On the other hand, Qian et al. <ns0:ref type='bibr' target='#b29'>(Qian et al., 2018)</ns0:ref> proposed a similar approach, which is researched further in this paper, based on a convolutional neural network (TCNN) with a user response generator (URG). TCNN captures semantic information from text by representing it at the sentence and word level.
URG learns a generative user response model to a text from historical user responses in order to generate responses to new articles and assist fake news detection. They used POS tags in combination with n-grams as a baseline for comparing the accuracy of the proposed NN-based classification technique. <ns0:ref type='bibr' target='#b11'>Goldani et al. (Goldani, 2021)</ns0:ref> used capsule neural networks in the fake news detection task. They applied different levels of n-grams for feature extraction and subsequently used different embedding models for news items of different lengths. Static word embeddings were used for short news items, whereas non-static word embeddings that allow incremental uptraining and updating in the training phase were used for medium-length or long news statements. They did not consider POS tags in the pre-processing phase. Finally, Kapusta et al. <ns0:ref type='bibr'>(Kapusta et al., 2020)</ns0:ref> realised a morphological analysis of several news datasets. They analysed the morphological tags and compared the differences in their use in fake news and real news articles. They used morphological analysis for the classification of words into grammatical classes. Each word was assigned a morphological tag, and these tags were thoughtfully analysed. The first step consisted of creating groups of related morphological tags. The groups reflected the basic word classes. The authors identified statistically significant differences in the use of word classes. Significant differences were identified for the groups of foreign words, adjectives and nouns favouring fake news and the groups of wh-words, determiners, prepositions and verbs favouring real news. The third dataset was evaluated separately and was used for verification. As a result, significant differences for the groups of adverbs, verbs and nouns were identified. They concluded that it is important that the differences between groups of words exist. It is evident that morphological tags can be used as input into fake news classifiers.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Dataset</ns0:head><ns0:p>The dataset analysed by Li <ns0:ref type='bibr' target='#b20'>(Li, 2020)</ns0:ref> was used for the evaluation of the proposed techniques. This dataset collects more than 1100 articles (news) and posts from social networks related to Covid-19. It was created in cooperation with the projects Lead Stories, Poynter, FactCheck.org, Snopes, EuVsDisinfo, which monitor, identify and control misleading information. These projects define true news as an article or post whose truthfulness can be proven and which comes from trusted resources. Vice versa, all articles and posts that have been evaluated as false and come from known fake news resources trying to broadcast misleading information intentionally are considered fake news.</ns0:p></ns0:div>
<ns0:div><ns0:head>POS Tags</ns0:head><ns0:p>Morphological tags were assigned to all words of the news from the dataset using the tool called TreeTagger. Schmid <ns0:ref type='bibr' target='#b31'>(Schmid, 1994)</ns0:ref> developed this annotating tool, which uses the set of tags called the English Penn Treebank tagset. The final English Penn Treebank tagset contains 36 morphological tags. However, considering the aim of the research, the following tags were not included in the further analysis due to their low frequency of appearance or discrepancy:</ns0:p><ns0:p>• SYM (symbol),</ns0:p><ns0:p>• LS (list marker).</ns0:p><ns0:p>Therefore, the final number of morphological tags used in the analysis was 33. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> shows the morphological tags divided into groups.</ns0:p></ns0:div>
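<ns0:p>To illustrate this tagging step, the following minimal Python sketch assigns Penn Treebank POS tags to a sample headline. Note that the study itself used TreeTagger; NLTK's default tagger is used here only as a freely available stand-in that produces the same tagset, so the exact tags may differ slightly.</ns0:p>
import nltk
from nltk import pos_tag, word_tokenize

# one-time downloads of the tokenizer and tagger models
nltk.download('punkt', quiet=True)
nltk.download('averaged_perceptron_tagger', quiet=True)

# sample headline; the exact tags may differ slightly from TreeTagger's output
sentence = "Democrats Vote To Enhance Med Care for Illegals Now"
tags = [tag for _, tag in pos_tag(word_tokenize(sentence))]
print(tags)  # e.g. ['NNPS', 'NNP', 'TO', 'VB', 'NNP', 'NNP', 'IN', 'NNS', 'RB']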
<ns0:div><ns0:head>N-grams Extraction from POS Tags</ns0:head><ns0:p>N-grams were extracted from POS tags in this data pre-processing step. As a result, sequences of n-grams from a given sample of POS tags were created. Since 1-grams and identified POS tags are identical, the input file with 1-grams used in further research is identical to the file with identified POS tags. The n-grams for the TF-IDF technique were created in the same way. However, it is important to emphasise that this technique used so-called terms, which represent the lemmas or stems of words.</ns0:p></ns0:div>
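<ns0:p>The extraction of n-grams from a document's POS tag sequence can be sketched as follows; the helper function pos_ngrams is a hypothetical name introduced only for illustration.</ns0:p>
from nltk.util import ngrams

def pos_ngrams(pos_tags, n_min=1, n_max=4):
    # join each n-gram of POS tags into a single string "term"
    grams = []
    for n in range(n_min, n_max + 1):
        grams.extend(' '.join(g) for g in ngrams(pos_tags, n))
    return grams

doc_tags = ['NNPS', 'VBP', 'TO', 'VB']
print(pos_ngrams(doc_tags, 1, 2))
# ['NNPS', 'VBP', 'TO', 'VB', 'NNPS VBP', 'VBP TO', 'TO VB']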
<ns0:div><ns0:head>The Techniques Used to Pre-process the Input Vectors</ns0:head><ns0:p>The following four techniques were applied to pre-process the input vectors for the selected classifier.</ns0:p></ns0:div>
<ns0:div><ns0:head>Term Frequency - Inverse Document Frequency (TF-IDF) Technique</ns0:head><ns0:p>TF-IDF is a traditional technique leveraged to assess the importance of tokens to one of the documents in a corpus <ns0:ref type='bibr' target='#b30'>(Qin, Xu & Guo, 2016)</ns0:ref>. The TF-IDF approach creates a bias in that frequent terms highly related to a specific domain are typically identified as noise, thus leading to lower term weights, because the traditional TF-IDF technique is not specifically designed to address large news corpora. Typically, the TF-IDF weight is composed of two terms: the first computes the normalised Term Frequency (TF), and the second is the Inverse Document Frequency (IDF).</ns0:p><ns0:p>Let t be a term/word, d a document, and w any term in the document. Then the frequency of the term/word t in document d is calculated as follows:</ns0:p><ns0:formula>tf(t,d) = f(t,d) / f(w,d),</ns0:formula><ns0:p>where f(t,d) is the number of occurrences of the term/word t in document d and f(w,d) is the number of all terms in the document. Simultaneously, the number of all documents in which a particular term/word occurs is also taken into account in the TF-IDF calculation. This number is denoted as idf(t,D). It represents the inverse document frequency expressed as follows:</ns0:p><ns0:formula>idf(t,D) = ln( N / (∑(d ∈ D : t ∈ d) + 1) ),</ns0:formula><ns0:p>where D is the corpus of all documents and N is the number of documents in the corpus. The formula of TF-IDF can then be written as:</ns0:p><ns0:formula>tfidf(t,d,D) = tf(t,d) × idf(t,D).</ns0:formula><ns0:p>The tf formula has various variants, such as log(tf(t,d)) or log(tf(t,d) + 1). Similarly, there are several variants of how idf can be calculated <ns0:ref type='bibr' target='#b3'>(Chen, 2017)</ns0:ref>. Considering this fact, the calculation of TF-IDF was realised using the scikit-learn library in Python (https://scikit-learn.org). The TF-IDF technique applied in the following experiment is used as a reference technique for comparing selected characteristics of the new techniques described below. The same dataset was used as an input; however, in this case, the stop words were removed beforehand.</ns0:p></ns0:div>
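<ns0:p>A hedged sketch of this reference technique with scikit-learn follows; the documents are placeholders, and the exact vectorizer settings of the study (for example, the stemming step) are assumptions made for illustration.</ns0:p>
from sklearn.feature_extraction.text import TfidfVectorizer

# two placeholder documents standing in for the news dataset
docs = ["democrats vote to enhance med care",
        "who declares covid-19 outbreak a pandemic"]

# word n-grams from 1-grams to 4-grams, with English stop words removed
vectorizer = TfidfVectorizer(ngram_range=(1, 4), stop_words='english')
X = vectorizer.fit_transform(docs)  # sparse matrix: documents x n-gram terms
print(X.shape)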
<ns0:div><ns0:head>POS Frequency (PosF) Technique</ns0:head><ns0:p>This technique is an analogy of the Term Frequency technique. However, it calculates the frequency of POS tags. Let pos be an identified POS tag, d a document, and w any POS tag identified in the document. Then the frequency of POS tag pos in document d can be calculated as follows:</ns0:p><ns0:formula>PosF(pos,d) = f(pos,d) / f(w,d),</ns0:formula><ns0:p>where f(pos,d) is the number of occurrences of POS tag pos in document d and f(w,d) is the number of all identified POS tags in the document. As a result, PosF expresses the relative frequency of each POS tag within the analysed list of POS tags identified in the document.</ns0:p></ns0:div>
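<ns0:p>The PosF computation can be sketched in a few lines of Python; posf_vector is a hypothetical helper introduced only to illustrate the formula above.</ns0:p>
from collections import Counter

def posf_vector(doc_pos_ngrams, vocabulary):
    counts = Counter(doc_pos_ngrams)
    total = sum(counts.values())                    # f(w, d)
    return [counts[g] / total for g in vocabulary]  # f(pos, d) / f(w, d)

vocab = ['NNPS', 'VBP', 'TO', 'VB']
doc = ['NNPS', 'VBP', 'VBP', 'TO']
print(posf_vector(doc, vocab))  # [0.25, 0.5, 0.25, 0.0]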
<ns0:div><ns0:head>PosF-IDF Technique</ns0:head><ns0:p>This technique is the analogy of the TF-IDF technique. Similarly to the already introduced PosF technique, it considers the POS tags, which have been identified in each document in the analysed dataset based on individual words and sentences. The documents containing only identified POS tags represented the inputs for the calculation of PosF-IDF. Besides the relative frequency of POS tags in the document, the number of all documents in which a particular POS tag has also been identified is considered.</ns0:p></ns0:div>
<ns0:div><ns0:head>Merged TF-IDF and PosF Technique</ns0:head><ns0:p>This technique was proposed to confirm whether it is possible to improve the traditional TF-IDF technique by using POS tags. Therefore, the following vectors were created for each document:</ns0:p><ns0:p>• a TfIdf vector,</ns0:p><ns0:p>• a PosF vector, which represents the relative frequencies of POS tags in the document.</ns0:p><ns0:p>The result of applying the merged technique is again a vector, which originated by merging the two previous vectors. Both vectors TfIdf(d) and PosF(d), calculated using the TfIdf and PosF techniques mentioned above, are considered for document d:</ns0:p><ns0:formula>TfIdf(d) = (t1, t2, ..., tn), PosF(d) = (p1, p2, ..., pm).</ns0:formula><ns0:p>Then, the final vector merge(d) for document d calculated by the merge technique is</ns0:p><ns0:formula>merge(d) = (t1, t2, ..., tn, p1, p2, ..., pm).</ns0:formula><ns0:p>A set of techniques for pre-processing the input vectors for the selected knowledge discovery classification task was thus created. These techniques can be considered variations of the previous TF-IDF technique, in which the POS tags are taken into account in addition to the original terms. As a result, the four techniques described above represent typical variations, which allow comparing and analysing the basic features of the techniques based on terms and POS tags.</ns0:p></ns0:div>
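<ns0:p>A minimal sketch of the merge step, assuming the two per-document vectors have already been computed, follows.</ns0:p>
import numpy as np

tfidf_d = np.array([0.12, 0.00, 0.31])       # TfIdf(d) = (t1, ..., tn)
posf_d = np.array([0.25, 0.50, 0.25, 0.00])  # PosF(d) = (p1, ..., pm)

# concatenation yields the (n + m)-element merged vector merge(d)
merge_d = np.concatenate([tfidf_d, posf_d])
print(merge_d.shape)  # (7,)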
<ns0:div><ns0:head>Decision Trees Modelling</ns0:head><ns0:p>Several classifiers like decision tree classifiers, Bayesian classifiers, k-nearest-neighbour classifiers, case-based reasoning, genetic algorithms, rough sets and fuzzy logic techniques were considered. Finally, the decision trees were selected to evaluate the suitability of the proposed techniques for calculating the input vectors and to analyse their features. The decision trees allow not only a simple classification of cases, but they also create easily interpretable and understandable classification rules at the same time. In other words, they simultaneously represent functional classifiers and a tool for knowledge discovery and understanding. The same approach was partially used in other similar research papers <ns0:ref type='bibr'>(Kapusta et al., 2020;</ns0:ref><ns0:ref type='bibr'>Kapusta, Benko & Munk, 2020)</ns0:ref>.</ns0:p><ns0:p>The attribute selection measures like Information Gain, Gain Ratio and Gini Index <ns0:ref type='bibr' target='#b22'>(Lubinsky, 1995)</ns0:ref>, used while a decision tree is created, are considered a further important factor why decision trees were finally selected. The best feature is always selected in each step of decision tree development. Moreover, this selection is virtually independent of the number of input attributes. It means that even when a larger number of attributes (elements of the input vector) is supplied to the input of the selected classifier, the accuracy remains unchanged.</ns0:p></ns0:div>
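<ns0:p>As a small worked example, the Gini Index mentioned above can be computed directly from the class labels falling into a node; a split that minimises the weighted Gini impurity of the child nodes is preferred.</ns0:p>
def gini(labels):
    # Gini impurity: 1 - sum of squared class proportions
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

print(gini(['fake', 'fake', 'real', 'real']))  # 0.5 (maximally impure node)
print(gini(['fake', 'fake', 'fake', 'fake']))  # 0.0 (pure node)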
<ns0:div><ns0:head>K-fold validation</ns0:head><ns0:p>The comparison of the decision trees created in the realised experiment is based on the essential characteristics of the decision trees, such as the number of nodes or leaves. These characteristics define the size of the tree, which should be suitably minimised. Simultaneously, the performance measures of the model like accuracy, precision, recall and f1-score are considered, together with the 10-fold cross-validation technique. K-fold validation was used for the evaluation of the models. It generally results in a less biased model compared to other methods because it ensures that every observation from the original dataset has the chance of appearing in the training and testing sets.</ns0:p></ns0:div>
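<ns0:p>A hedged sketch of this validation step with scikit-learn follows; the synthetic data merely stands in for the input vectors and the binary-encoded fake/real labels.</ns0:p>
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_validate
from sklearn.tree import DecisionTreeClassifier

# placeholder data standing in for the pre-processed input vectors and labels
X, y = make_classification(n_samples=100, n_features=20, random_state=0)

clf = DecisionTreeClassifier(max_depth=10, random_state=0)
scores = cross_validate(clf, X, y, cv=10,
                        scoring=('accuracy', 'precision', 'recall', 'f1'))
print(scores['test_accuracy'].mean())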
<ns0:div><ns0:head>Setting the Most Suitable N-gram Length</ns0:head><ns0:p>All compared techniques for input vector pre-processing required identical conditions. Therefore, the highest value of n in n-grams was determined as the first step. Most NLP tasks usually work with n = {1, 2, 3}. Higher values of n (4-grams, 5-grams, etc.) place significant demands on hardware and software, calculation time and overall performance. On the other hand, the potential contribution of the higher n-grams to increasing the accuracy of the created models is limited. Several decision tree models were created to evaluate this consideration. N-grams (1-gram, 2-gram, ..., 5-gram) for tokens/words and for POS tags were prepared. Subsequently, the TF-IDF technique was applied to the n-grams of tokens/words. At the same time, the PosF and PosfIdf techniques were applied to the n-grams of POS tags. As a result, 15 files with the input vectors have been created (1-5-grams x 3 techniques). Figure <ns0:ref type='figure'>2</ns0:ref> visualises the individual steps of this process for better clarity. Ten-fold cross-validation led to the creation of ten decision tree models for each of the 15 pre-processed files. In all cases, the accuracy was considered the measure of the model performance. Figure <ns0:ref type='figure'>3</ns0:ref> shows a visualisation of all models with different n-gram lengths. The values on the x-axis represent a range of used n-grams. For instance, n-gram (1,1) means that only unigrams had been used. Other ranges of n-grams will be used in the next experiments. For example, the designation (1,4) will represent the 1-grams, 2-grams, 3-grams and 4-grams included together in one input file.</ns0:p><ns0:p>The results show that the accuracy declines with the length of the n-grams, mostly in the case of applying the TF-IDF technique. Although it was not possible to process longer n-grams (6-grams, 7-grams, etc.) due to the limited time and computational complexity, it can be assumed that their accuracy would decline similarly to the behaviour of the accuracy for 5-grams in the case of all applied techniques. Considering the process of decision tree model creation, it is not surprising that the highest accuracy was achieved by joining the n-grams into one input file, as the most suitable feature will then be selected during the creation of the decision tree. As a result, all following experiments will work with the file consisting of the joined 1-grams, 2-grams, 3-grams and 4-grams (1,4).</ns0:p></ns0:div>
<ns0:div><ns0:head>Setting the Maximum Depth of the Decision Tree</ns0:head><ns0:p>Overfitting represents a frequent issue. Although the training error by default decreases with the increasing size of the created tree, the test error often increases with it. As a result, the classification of new cases can be inaccurate. Techniques like pruning or hyperparameter tuning can overcome overfitting.</ns0:p><ns0:p>The maximal depth of the decision tree will therefore be analysed to minimise the overfitting issue and to find understandable rules for fake news identification.</ns0:p><ns0:p>As mentioned earlier, the main aim of the article is to evaluate the most suitable techniques for the preparation of input vectors. Simultaneously, a suitable setting of the parameter max_depth will be evaluated. Complete decision trees for n-grams from tokens/words and POS tags were created to find suitable values of selected decision tree characteristics (Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>).</ns0:p><ns0:p>The results show that the techniques working with the POS tags produce a small number of input vector elements compared to the reference TF-IDF technique. These findings were expected because while TF-IDF takes all tokens/words, in the case of the PosfIdf and PosF techniques each token/word is assigned to one of 33 POS tags (Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>). This simplification is also visible in the size of the generated decision tree (depth, node count, number of leaves): the application of the PosfIdf and PosF techniques led to simpler decision trees. However, the maximal depth of the decision tree is the essential characteristic for further considerations. While it is equal to 30 for TF-IDF, the maximal depth is lower in the case of both remaining techniques. Therefore, decision trees with different depths will be further considered in the main experiment to ensure the same conditions for all compared techniques; the maximal depth will be set to 30.</ns0:p></ns0:div>
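A minimal sketch of this depth-capping idea, again on synthetic stand-in data rather than the paper's dataset, compares training accuracy with the cross-validated accuracy across a few max_depth values:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=150, n_features=33, random_state=0)

# Training accuracy keeps rising with depth, while the cross-validated
# accuracy flattens or drops once the tree starts to overfit.
for depth in (5, 10, 20, 30):
    model = DecisionTreeClassifier(max_depth=depth, random_state=0)
    cv_acc = cross_val_score(model, X, y, cv=10).mean()
    train_acc = model.fit(X, y).score(X, y)
    print(f"max_depth={depth}: train={train_acc:.3f}, cv={cv_acc:.3f}")
```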
<ns0:div><ns0:head>The Methodology of the Main Experiment</ns0:head><ns0:p>The main aim of the following experiment is to evaluate whether fake news messages can be classified using POS tags and to compare the performance of the proposed techniques (PosfIdf, PosF, merge) with the reference TF-IDF technique, which uses tokens/words. The comparison of these four techniques raises the following questions:</ns0:p><ns0:p>Q1: What is the most suitable length of the n-grams for these techniques?</ns0:p><ns0:p>Q2: How to create models using these techniques to prevent possible overfitting?</ns0:p><ns0:p>Q3: How to compare the models with different hyperparameters, which tune the performance of the models?</ns0:p><ns0:p>The first question (Q1) was answered in the section Setting the Most Suitable N-gram Length: the use of joined 1-grams, 2-grams, 3-grams and 4-grams (1,4) is the most suitable. The second question (Q2) can be answered by experimenting with the maximum depth hyperparameter of the decision tree classifier; the highest acceptable value of this hyperparameter was found in the section Setting the Maximum Depth of the Decision Tree. The main experiment described below addresses the last question (Q3). The following steps of the methodology will be applied:</ns0:p><ns0:p>1. Identification of POS tags in the dataset.</ns0:p><ns0:p>2. Application of the PosF and PosfIdf input vector preparation techniques to the identified POS tags.</ns0:p><ns0:p>3. Application of the reference TF-IDF technique to create input vectors. This technique uses tokens representing the words modified by the stemming algorithm; simultaneously, the stop words are removed.</ns0:p><ns0:p>4. Joining the PosF and TF-IDF vectors into the merge vector.</ns0:p><ns0:p>5. Iteration with different values of maximal depth (1, …, 30):</ns0:p><ns0:p>- Randomised distribution of the input vectors of the PosF, PosfIdf, TfIdf, and merge techniques into training and testing subsets in accordance with the requirements of the 10-fold cross-validation.</ns0:p><ns0:p>- Calculation of a decision tree for each training subset with the given maximal depth.</ns0:p><ns0:p>- Testing the quality of the model's predictions on the testing subset. The following characteristics were established: prec_fake (precision for group fake), prec_real (precision for group real), rec_fake (recall for group fake), rec_real (recall for group real), f1-score, and time spent on one iteration.</ns0:p><ns0:p>6. Analysis of the results (evaluation of the models).</ns0:p><ns0:p>The results of steps 1-4 are four input vectors prepared using the four techniques mentioned above. The fifth step of the proposed methodology is focused on the evaluation of these four examined techniques. The application of the proposed methodology with 10-fold cross-validation resulted in the creation of 1200 different decision trees (30 max_depth values x 4 techniques x 10-fold validation). In other words, 40 decision trees with 10-fold cross-validation were created for each maximal depth. Figure <ns0:ref type='figure'>4</ns0:ref> depicts the individual steps of the methodology of the experiment. The last step of the proposed methodology, the analysis of the results, will be described in the section Results. All steps of the methodology were implemented in Python and its libraries. Text processing was realised using the NLTK library (https://www.nltk.org/). The tool TreeTagger <ns0:ref type='bibr' target='#b31'>(Schmid, 1994)</ns0:ref> was used for the identification of POS tags. Finally, the scikit-learn library (https://scikit-learn.org) was used for creating decision tree models (a condensed illustrative sketch of steps 1-4 is given after this section). The Gini impurity function was applied to measure the quality of a split of the decision trees. The strategy used to choose the split at each node was the 'best' split (the alternative being the best random split). Subsequently, the maximum depth of the decision trees was examined to prevent overfitting.
Other hyperparameters, such as the minimum number of samples required to split an internal node or the minimum number of samples required at a leaf node, were not tuned.</ns0:p></ns0:div>
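The following condensed sketch illustrates steps 1-4 of the pipeline above. It is not the authors' code: NLTK's tagger stands in for TreeTagger, stemming is omitted for brevity, PosF is assumed here to mean the relative frequencies of POS-tag n-grams, and the toy sentences (the first taken from Figure 1) replace the real dataset.

```python
import nltk
from scipy.sparse import hstack
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.preprocessing import normalize

# nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
docs = ["Democrats Vote To Enhance Med Care for Illegals Now",
        "Officials confirmed the report on Thursday"]

# Step 1: replace every token by its POS tag (TreeTagger in the paper).
tag_docs = [" ".join(tag for _, tag in nltk.pos_tag(nltk.word_tokenize(d)))
            for d in docs]

# Step 2: PosF, assumed here as relative frequencies of POS-tag n-grams.
posf_counts = CountVectorizer(ngram_range=(1, 4), token_pattern=r"\S+")
posf = normalize(posf_counts.fit_transform(tag_docs), norm="l1")

# Step 3: reference TF-IDF on tokens, with stop words removed.
tfidf_vec = TfidfVectorizer(ngram_range=(1, 4), stop_words="english")
tfidf = tfidf_vec.fit_transform(docs)

# Step 4: the merge technique joins the TF-IDF and PosF vectors.
merged = hstack([tfidf, posf])
print("merged input vector length:", merged.shape[1])
```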
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Experiment</ns0:head><ns0:p>The quality of the proposed models (TfIdf(1,4), PosfIdf(1,4), PosF(1,4), merge(1,4)) was evaluated using the evaluation measures (prec, rec, f1-sc, prec_fake, rec_fake, prec_real, rec_real), as well as from the point of view of time efficiency (time). A comparison of the depths of the complete decision trees (Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>) showed that depths greater than 29 need not be considered. Therefore, decision trees with a maximal depth less than 30 were created in line with the methodology referred to in section 3. The evaluation measures (Fig. 6a) increase up to a depth of five, where they reach average values smaller than 0.73 (rec < 0.727, f1-sc < 0.725, prec < 0.727). Subsequently, from a depth of six they reach stable values greater than 0.73 (rec > 0.732, f1-sc > 0.731, prec > 0.732) and less than 0.75 (rec < 0.742, f1-sc < 0.740, prec < 0.741). The PosF technique thus reaches better performance at small depths (up to 4) compared to the others; since the merge technique originates from joining the PosF and TF-IDF techniques, its results are naturally better. The model performance (prec) for the given depths (< 30) reached above-average values from a depth of six (Fig. 6b), and differences in the measure prec were not statistically significant from a depth of six onwards (p > 0.05). Similar results were also obtained for the measures rec and f1-sc. As a result, the models' performance was further examined for depths 6-10.</ns0:p><ns0:p>The Kolmogorov-Smirnov test was applied to verify the normality assumption. The examined variables (model x evaluation measure, model x time) have a normal distribution for all levels of the between-groups factor deep (6: max D < 0.326, p > 0.05; 7: max D < 0.247, p > 0.05; 8: max D < 0.230, p > 0.05; 9: max D < 0.298, p > 0.05; 10: max D < 0.265, p > 0.05). The Mauchly sphericity test was consequently applied to verify the covariance matrix sphericity assumption for repeated measures with four levels (TfIdf(1,4), PosfIdf(1,4), PosF(1,4), merge(1,4)), with the following results (prec: W = 0.372, Chi-Square = 43.217, p < 0.001; rec: W = 0.374, Chi-Square = 43.035, p < 0.001; f1-sc: W = 0.375, Chi-Square = 42.885, p < 0.001; prec_fake: W = 0.594, Chi-Square = 22.809, p < 0.001; rec_fake: W = 0.643, Chi-Square = 19.329, p < 0.01; prec_real: W = 0.377, Chi-Square = 42.641, p < 0.001; rec_real: W = 0.716, Chi-Square = 14.599, p < 0.05; time: W = 0.0001, Chi-Square = 413.502, p < 0.001). This test was statistically significant in all cases of the examined evaluation measures and time (p < 0.05), which means that the sphericity assumption was violated; when the assumption of covariance matrix sphericity is not met, the Type I error increases <ns0:ref type='bibr' target='#b0'>(Ahmad, 2013)</ns0:ref>, <ns0:ref type='bibr' target='#b12'>(Haverkamp & Beauducel, 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b25'>(Munkova et al., 2020)</ns0:ref>. Therefore, the degrees of freedom of the F-test, df1 = (J - 1)(I - 1) and df2 = (N - l)(I - 1), were adjusted using the Greenhouse-Geisser and Huynh-Feldt corrections (Epsilon), so that the declared level of significance was maintained with adj.df1 = Epsilon(J - 1)(I - 1) and adj.df2 = Epsilon(N - l)(I - 1), where I is the number of levels of the factor model (dependent samples), J is the number of levels of the factor deep (independent samples), and N is the number of cases.</ns0:p><ns0:p>The Bonferroni adjustment was used for the multiple comparisons. This adjustment is usually applied when several dependent and independent samples are compared simultaneously <ns0:ref type='bibr' target='#b19'>(Lee & Lee, 2018)</ns0:ref>, <ns0:ref type='bibr' target='#b10'>(Genç & Soysal, 2018)</ns0:ref>. The Bonferroni adjustment represents the most conservative approach, in which the level of significance (alpha) for a whole set of N comparisons is set so that the level of significance for each comparison is equal to alpha/N.</ns0:p></ns0:div>
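To illustrate the Bonferroni logic in isolation, the sketch below is an assumption-laden stand-in rather than the authors' analysis (which uses adjusted repeated-measures F-tests): paired t-tests on toy fold-level accuracies demonstrate the alpha/N rule.

```python
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Toy per-fold accuracies of the four models (stand-ins for real results).
scores = {name: rng.normal(0.73, 0.02, size=10)
          for name in ("TfIdf", "PosfIdf", "PosF", "merge")}

pairs = list(combinations(scores, 2))
alpha = 0.05 / len(pairs)  # Bonferroni: per-comparison level is alpha / N
for a, b in pairs:
    t_stat, p = stats.ttest_rel(scores[a], scores[b])
    verdict = "different" if p < alpha else "homogeneous"
    print(f"{a} vs {b}: p={p:.3f} -> {verdict}")
```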
<ns0:div><ns0:head>Model Performance</ns0:head><ns0:p>The first phase of the analysis focused on the performance of the models. The performance was analysed through the selected evaluation measures (prec, rec, f1-sc, prec_fake, rec_fake, prec_real, rec_real) according to the within-group factor, the between-groups factor and their interaction. The models (TfIdf(1,4), PosfIdf(1,4), PosF(1,4), merge(1,4)) represented the levels of the within-group factor. The depths of the decision tree (6-10) represented the levels of the between-groups factor. Considering the violated assumption of covariance matrix sphericity, the modified tests for repeated measures were applied to assess the effectiveness of the examined models <ns0:ref type='bibr' target='#b8'>(Dien, 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b24'>(Montoya, 2019)</ns0:ref>. Epsilon represents the degree of violation of this assumption; if Epsilon equals one, the assumption is fulfilled. The values of Epsilon were significantly lower than one in both cases (Epsilon < 0.69). The null hypotheses claim that there is no statistically significant difference in the quality of the examined models. The null hypotheses, which claimed that there is no statistically significant difference in the values of the evaluation measures prec, rec, and f1-sc between the examined models, were rejected at the 0.001 significance level (prec: G-G Epsilon = 0.597, H-F Epsilon = 0.675, adj.p < 0.001; rec: G-G Epsilon = 0.604, H-F Epsilon = 0.684, adj.p < 0.001; f1-sc: G-G Epsilon = 0.599, H-F Epsilon = 0.678, adj.p < 0.001). In contrast, the null hypotheses, which claimed that the performance of the models (prec/rec/f1-sc) does not depend on the interaction of the within-group and between-groups factors (model x deep), were not rejected (p > 0.05). The factor deep has no impact on the performance of the examined models.</ns0:p><ns0:p>After rejecting the global null hypotheses, the statistically significant differences between the models in the quality of the models' predictions were researched. Three homogeneous groups were identified based on prec, rec and f1-sc using the multiple comparisons. The PosF(1,4) and TfIdf(1,4) techniques reached the same quality of predictions (p > 0.05). Similar results were obtained for the pair PosfIdf(1,4) and PosF(1,4), as well as for the pair TfIdf(1,4) and merge(1,4). Statistically significant differences in the quality of the models' predictions (Table <ns0:ref type='table'>3</ns0:ref>) were identified between the merge(1,4) model and the POS-based models (p < 0.05), as well as between the TfIdf(1,4) and PosfIdf(1,4) models (p < 0.05). The merge(1,4) model reached the highest quality considering the evaluation measures. The values of Epsilon were smaller than one in the case of the partial evaluation measures prec_fake and rec_fake for the fake news; this finding was more notable in the case of the Greenhouse-Geisser correction (Epsilon < 0.78). The null hypotheses, which claimed that there is no significant difference in the values of the evaluation measures prec_fake and rec_fake between the examined models, were rejected (prec_fake: G-G Epsilon = 0.779, H-F Epsilon = 0.897, adj.p < 0.001; rec_fake: G-G Epsilon = 0.756, H-F Epsilon = 0.869, adj.p < 0.001). The impact of the between-groups factor deep has not been proven (p > 0.05).
The performance of the models (prec_fake/rec_fake) does not depend on the interaction of the factors model and deep. Two homogeneous groups were identified for prec_fake (Table <ns0:ref type='table'>4A</ns0:ref>). PosF(1,4) and TfIdf(1,4), as well as PosF(1,4) and PosfIdf(1,4), reached the same quality of predictions (p > 0.05). Statistically significant differences in the quality of the models' predictions (Table <ns0:ref type='table'>4A</ns0:ref>) were identified between merge(1,4) and the other models (p < 0.05) and between TfIdf(1,4) and PosfIdf(1,4) (p < 0.05). The merge(1,4) model reached the best quality from the prec_fake point of view. On the other hand, the PosF(1,4) model reached the highest quality considering the results of the multiple comparison for rec_fake (Table <ns0:ref type='table'>4B</ns0:ref>). Two homogeneous groups (PosfIdf(1,4), merge(1,4), TfIdf(1,4)) and (merge(1,4), TfIdf(1,4), PosF(1,4)) were identified based on the evaluation measure rec_fake (Table <ns0:ref type='table'>4B</ns0:ref>), and statistically significant differences (Table <ns0:ref type='table'>4B</ns0:ref>) were identified only between the PosF(1,4) and PosfIdf(1,4) models (p < 0.05).</ns0:p><ns0:p>Similarly, the values of Epsilon were smaller than one (G-G Epsilon < 0.85) in the case of the evaluation measures prec_real and rec_real, which evaluate the quality of the prediction for the partial class of real news. The null hypotheses, which claimed that there is no statistically significant difference in the values of the evaluation measures prec_real and rec_real between the examined models, were rejected at the 0.001 significance level (prec_real: G-G Epsilon = 0.596, H-F Epsilon = 0.675, adj.p < 0.001; rec_real: G-G Epsilon = 0.842, H-F Epsilon = 0.975, adj.p < 0.001). The impact of the between-groups factor deep has also not been proven in this case (p > 0.05); the performance of the models (prec_real/rec_real) does not depend on the interaction of the factors (model x deep). The merge(1,4) model reached the highest quality from the point of view of the evaluation measures prec_real and rec_real (Table <ns0:ref type='table'>5</ns0:ref>). Only one homogeneous group was identified in the multiple comparisons for prec_real (Table <ns0:ref type='table'>5A</ns0:ref>): PosF(1,4), TfIdf(1,4) and merge(1,4) reached the same quality of predictions (p > 0.05). Statistically significant differences in the quality of the models' predictions (Table <ns0:ref type='table'>5A</ns0:ref>) were identified between the PosfIdf(1,4) model and the other models (p < 0.05). Two homogeneous groups (PosfIdf(1,4), PosF(1,4)) and (PosF(1,4), TfIdf(1,4)) were identified from the point of view of the evaluation measure rec_real (Table <ns0:ref type='table'>5B</ns0:ref>). Statistically significant differences (Table <ns0:ref type='table'>5B</ns0:ref>) were identified between the merge(1,4) model and the other models (p < 0.05).</ns0:p></ns0:div>
<ns0:div><ns0:head>Time Efficiency</ns0:head><ns0:p>The time efficiency of the proposed techniques was evaluated in the second phase of the analysis. Time efficiency (time) was analysed in dependence on the within-group factor, the between-groups factor and their interaction. The models represented the examined levels of the within-group factor, and the decision tree depths represented the between-groups factor. The modified tests for repeated measures were again applied to verify the time efficiency of the proposed models. The values of Epsilon were identical and significantly smaller than one for both corrections (Epsilon < 0.34). The null hypothesis, which claimed that there is no statistically significant difference in time between the examined models, was rejected at the 0.001 significance level (time: G-G Epsilon = 0.336, H-F Epsilon = 0.366, adj.p < 0.001). Similarly, the null hypothesis, which claimed that the time efficiency (time) does not depend on the interaction between the within-group factor and the between-groups factor (model x deep), was also rejected at the 0.001 significance level. The factor deep has a significant impact on the time efficiency of the examined models. Only one homogeneous group based on time was identified in the multiple comparisons (Table <ns0:ref type='table'>6</ns0:ref>): PosF(1,4) and TfIdf(1,4) reached the same time efficiency (p > 0.05). Statistically significant differences in time (Table <ns0:ref type='table'>6</ns0:ref>) were identified between the merge(1,4) model and the other models (p < 0.05), as well as between the PosfIdf(1,4) model and the other models (p < 0.05). As a result, the PosfIdf(1,4) model can be considered the most time-efficient model, while the merge(1,4) model was the least time-efficient one. Four homogeneous groups were identified after including the between-groups factor deep (Table <ns0:ref type='table'>7</ns0:ref>). The PosfIdf(1,4) models with depths 6-10 and the TfIdf(1,4) model with depth six have the same time efficiency (p > 0.05). The TfIdf(1,4) and PosF(1,4) models have the same time efficiency for all depths (p > 0.05). The merge(1,4) models with depths 6-8 (p > 0.05) and the merge(1,4) models with depths 7-10 (p > 0.05) were less time-efficient.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The paper analysed a unique dataset compiled from freely available fake and true news datasets written in English to evaluate whether n-grams of POS tags can be used for reliable fake news classification. Two techniques based on POS tags were proposed and compared with the reference TF-IDF technique on a given classification task from the natural language processing research field. The results show statistically insignificant differences between the PosF and TF-IDF techniques; the techniques were comparable in all observed performance metrics, including accuracy, precision, recall and f1-score. Therefore, it can be concluded that morphological analysis can be applied to fake news classification. Moreover, the descriptive statistics show that the TF-IDF technique reaches slightly better results, though the difference is statistically insignificant. For completeness, it is necessary to note that statistically significant differences in the observed performance metrics were identified between the morphological technique PosfIdf and TF-IDF. The reason is that the PosfIdf technique includes the ratio of the relative frequency of POS tags and the inverse document function. This division by the number of documents in which the POS tag was observed caused the weak results of this technique. This is not surprising, since the selected 33 POS tags were present in almost all documents of the dataset; therefore, the value of the inverse document frequency was very high, which led to a very low value of the ratio. However, the failure of this technique does not diminish the importance of the finding that the applied morphological techniques are comparable with the traditional reference technique TF-IDF. The aim of finding a morphological technique that performs at least as well as TF-IDF was fulfilled in the case of the PosF technique.</ns0:p><ns0:p>The merged TF-IDF and PosF technique was included in the experiment to determine whether it is possible to improve the reference TF-IDF technique using POS tags. Considering the final performance measures, mainly precision, it can be concluded that they are higher for the merged technique. It means that the applied techniques of morphological analysis can improve the precision of the TF-IDF technique; however, it has not been proven that this improvement is statistically significant. The fact that the reference TF-IDF technique was favoured in the presented experiment should also be considered. In other words, removing the stop words from the input vector of the TF-IDF technique increased the classification accuracy. On the other hand, removing stop words is not suitable for the techniques based on POS tags because their removal can cause the loss of important information about the n-gram structure. This statement is substantiated by comparing the values of accuracy for individual n-grams (Fig. <ns0:ref type='figure'>3</ns0:ref>): the PosF technique achieved better results for 2-grams, 3-grams and 4-grams than for unigrams. Conversely, the stop words could have been left in the input vector of the TF-IDF technique; however, the experiment aimed to compare the performance of the proposed improvements with the best-prepared TF-IDF technique. The time efficiency of the examined techniques was evaluated simultaneously with their performance. The negligible differences between the time efficiency of the TF-IDF and PosF techniques can be considered most surprising.
Although the PosF technique uses only 33 POS tags compared to the large vectors of tokens/words in TF-IDF, the time efficiency is similar; the reason is that the identification of POS tags in the text is more time-consuming than the identification of tokens. On the other hand, the merged technique with the best performance results was the most time-consuming. This finding was expected because the calculation of the merged vector requires calculating and joining the TF-IDF and PosF vectors. The compared classification models for fake and true news classification are based on the relative frequencies of the occurrences of the morphological tags. It is not important which morphological tags were identified in the rules (nodes of the decision tree) by the given selection measures. At the same time, the exact border values for the occurrence of morphological tags can also be considered unimportant; the more important fact is that such differences exist and that it is possible to find values of occurrences of morphological tags which allow classifying fake and true news correctly. The realised set of experiments is unique in terms of the proposed pre-processing techniques used to prepare the input vectors for the classifiers. The decision was to use classifiers as simple as possible, thus decision trees, because of their ability to make the obtained knowledge easily interpretable. In other words, decision trees provide additional information about which POS tags and consequent n-grams are important and characteristic for fake news and which for real news. On the other hand, it should be emphasised that their classification precision is worse than that of other types of classifiers, such as neural networks. Table <ns0:ref type='table'>8</ns0:ref> shows the comparison of similar methods for fake news classification <ns0:ref type='bibr' target='#b6'>(Deepak and Chitturi, 2020;</ns0:ref><ns0:ref type='bibr' target='#b23'>Meel and Vishwakarma, 2021)</ns0:ref>. The classification models in these works reached a higher accuracy; however, the authors reached these results using additional secondary features. The application of neural networks is the second important difference, which led to the higher classification performance. However, as mentioned earlier, the experiment described in this paper was focused on assessing the suitability of the n-grams and POS tags for the pre-processing of input vectors; the priority was not to reach the best performance measures. The selection of a simple machine learning technique, such as decision trees, can be considered a limitation of the research presented in this article. However, the reason why this technique was selected is related to the parallel research focused on finding the most frequent n-grams in fake and real news and their subsequent linguistic analysis. This research extends the previous one and tries to determine whether the POS tags and n-grams can be further used for fake news classification. It is possible to assume that the morphological tags can be used as the input to fake news classifiers. Moreover, the pre-processed datasets are suitable for other classification techniques, improving the accuracy of fake news classification. It means that whether the relative frequencies of occurrences of the morphological tags are further used as the input layer of a neural network or added to the training dataset of other classifiers, the found information can improve the accuracy of that fake news classifier.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>Despite several authors' statements that the morphological characteristics of a text do not allow fake news classification with sufficient accuracy, the realised experiment proved that the selected morphological technique is comparable with the traditional reference technique TF-IDF, widely used in the natural language processing domain. The suitability of the techniques based on morphological analysis has been proven on a contemporary dataset including 1100 labelled real and fake news items about Covid-19. The experiment confirmed the validity of the newly proposed techniques based on POS tags and n-grams against the traditional TF-IDF technique. The article describes the experiment with a set of pre-processing techniques used to prepare input vectors for a data mining classification task. The overall contribution of the proposed improvements was expressed by the characteristic performance measures of the classification task (accuracy, precision, recall and f1-score). Besides the variables defined by the input vectors, the hyperparameters max_depth and n-gram length were observed. K-fold validation was applied to account for random errors. The global null hypotheses were evaluated using adjusted tests for repeated measures. Subsequently, multiple comparisons with the Bonferroni adjustment were used to compare the models. Various performance measures ensured the robustness of the obtained results. The decision trees were chosen to classify fake news because they create easily understandable and interpretable results compared to other classifiers. Moreover, they allow the generalisation of the inputs; an insufficient generalisation can cause overfitting, which leads to the wrong classification of individual observations of the testing dataset.</ns0:p><ns0:p>Different values of the parameter of maximal depth were researched to obtain the maximal value of precision. The most suitable value of this parameter was different for each of the proposed techniques; therefore, the statistical evaluation was realised with respect to the maximal depth. Although a statistically significant difference has not been found, the proposed techniques based on morphological analysis in combination with the created n-grams are comparable with traditional ones, such as the TF-IDF technique used in this experiment. Moreover, the advantages of the PosF technique can be listed as follows:</ns0:p><ns0:p>- A smaller size of the input vectors: the average number of vector elements was 69 775, while in the case of TF-IDF it was 817 213.3 (Table 2).</ns0:p><ns0:p>- A faster creation of the input vector.</ns0:p><ns0:p>- A shorter training phase of the model.</ns0:p><ns0:p>- A more straightforward and more understandable model: the model based on the PosF technique achieved its best results at smaller values of the maximal decision tree depth.</ns0:p><ns0:p>The possibility of using the proposed techniques based on POS tags for the classification of new, yet untrained fake news datasets is considered the last advantage of the proposed techniques. The reason is that TF-IDF works with the words and counts their frequencies in fake news; traditional classifiers can therefore fail to correctly classify fake news about a new topic because they have not yet been trained on the frequencies of the new words. On the other hand, the PosF technique is more general and focuses on the primary relationships between POS tags, which are probably similar also in the case of new topics of fake news. This assumption will be evaluated in future research.
The current most effective fake news classification is based predominantly on neural networks and on a weighted combination of techniques dealing with the news content, the social context, the credibility of the creator/spreader and the analysis of the target victims. It is clear that decision tree classifiers are not used so frequently. However, this article focused only on a very narrow part of the researched issues, and these classifiers have been used mainly for an easier understanding of the problem. Therefore, future work will evaluate the proposed techniques in conjunction with other contemporary fake news classifiers.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Figure 1 demonstrates this process using the sentence from the tenth most viewed fake news story shared on Facebook in 2019. The following POS tags were identified from the sentence 'Democrats Vote To Enhance Med Care for Illegals Now': NNPS (proper noun, plural), VBP (verb, sing. present, non-3d), TO (to), VB (verb, base form), JJ (adjective), NNP (proper noun, singular), IN (preposition/subordinating conjunction), NNS (noun plural), RB (adverb).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>*** -homogeneous groups (p > 0.05)</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,178.87,525.00,258.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,178.87,525.00,360.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,199.12,525.00,380.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,199.12,525.00,360.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,199.12,525.00,196.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,199.12,525.00,196.50' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 : Morphological tags used for news classification (Schmid, 1994).</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>GTAG</ns0:cell><ns0:cell>POS Tags</ns0:cell></ns0:row><ns0:row><ns0:cell>group C</ns0:cell><ns0:cell>CC (coordinating conjunction), CD (cardinal number)</ns0:cell></ns0:row><ns0:row><ns0:cell>group D</ns0:cell><ns0:cell>DT (determiner)</ns0:cell></ns0:row><ns0:row><ns0:cell>group E</ns0:cell><ns0:cell>EX (existential there)</ns0:cell></ns0:row><ns0:row><ns0:cell>group F</ns0:cell><ns0:cell>FW (foreign word)</ns0:cell></ns0:row><ns0:row><ns0:cell>group I</ns0:cell><ns0:cell>IN (preposition, subordinating conjunction)</ns0:cell></ns0:row><ns0:row><ns0:cell>group J</ns0:cell><ns0:cell>JJ (adjective), JJR (adjective, comparative), JJS (adjective, superlative)</ns0:cell></ns0:row><ns0:row><ns0:cell>group M</ns0:cell><ns0:cell>MD (modal)</ns0:cell></ns0:row><ns0:row><ns0:cell>group N</ns0:cell><ns0:cell>NN (noun, singular or mass), NNS (noun plural), NNP (proper noun, singular), NNPS (proper noun, plural)</ns0:cell></ns0:row><ns0:row><ns0:cell>group P</ns0:cell><ns0:cell>PDT (predeterminer), POS (possessive ending), PP (personal pronoun)</ns0:cell></ns0:row><ns0:row><ns0:cell>group R</ns0:cell><ns0:cell>RB (adverb), RBR (adverb, comparative), RBS (adverb, superlative), RP (particle)</ns0:cell></ns0:row><ns0:row><ns0:cell>group T</ns0:cell><ns0:cell>TO (infinitive 'to')</ns0:cell></ns0:row><ns0:row><ns0:cell>group U</ns0:cell><ns0:cell>UH (interjection)</ns0:cell></ns0:row><ns0:row><ns0:cell>group V</ns0:cell><ns0:cell>VB (verb be, base form), VBD (verb be, past tense), VBG (verb be, gerund/present participle), VBN (verb be, past participle), VBP (verb be, sing. present, non-3d), VBZ (verb be, 3rd person sing. present)</ns0:cell></ns0:row><ns0:row><ns0:cell>group W</ns0:cell><ns0:cell>WDT (wh-determiner), WP (wh-pronoun), WP$ (possessive wh-pronoun), WRB (wh-adverb)</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 : Selected characteristics of the complete decision trees.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='2'>TfIdf(1,4) PosfIdf(1,4)</ns0:cell><ns0:cell>PosF(1,4)</ns0:cell></ns0:row><ns0:row><ns0:cell>average(deep)</ns0:cell><ns0:cell>25.1</ns0:cell><ns0:cell>15.6</ns0:cell><ns0:cell>17.2</ns0:cell></ns0:row><ns0:row><ns0:cell>min(deep)</ns0:cell><ns0:cell>22</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>11</ns0:cell></ns0:row><ns0:row><ns0:cell>max(deep)</ns0:cell><ns0:cell>30</ns0:cell><ns0:cell>19</ns0:cell><ns0:cell>21</ns0:cell></ns0:row><ns0:row><ns0:cell>average (node count)</ns0:cell><ns0:cell>171.4</ns0:cell><ns0:cell>157.4</ns0:cell><ns0:cell>156</ns0:cell></ns0:row><ns0:row><ns0:cell>average (leaf count)</ns0:cell><ns0:cell>86.2</ns0:cell><ns0:cell>79.2</ns0:cell><ns0:cell>78.5</ns0:cell></ns0:row><ns0:row><ns0:cell>average (number of vector elements)</ns0:cell><ns0:cell>817213.3</ns0:cell><ns0:cell>69512.4</ns0:cell><ns0:cell>69775</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Associate Professor
doc. PaedDr. Jozef Kapusta, PhD.
Department of Informatics
Faculty of Natural Sciences
Constantine the Philosopher University in Nitra
Trieda A. Hlinku 1
94901 Nitra
Slovakia
jkapusta@ukf.sk
June 9th, 2021
Cover Letter
Dear Editors,
Thank you for your valuable feedback and contribution to improving the article 'Using of N-Grams from Morphological Tags for Fake News Classification'.
We have improved our previous discussion based on your recommendation.
We believe that the improved manuscript meets the required quality standards and is now suitable for publication in PeerJ Computer Science.
Please find our responses to the comments below.
Jozef Kapusta
Associate Professor
On behalf of all authors.
Responses to the Editor’s Comment
Thank you for your time, effort, and valuable feedback. We appreciate your contribution to improving the comprehensibility and scientific content of the paper. Please find our answer to your comment and suggestion below.
Kindly add a Table for comparison of your work with the previous work to increase the readability of the paper. All other comments are resolved.
Thank you for your positive evaluation of the paper. We have modified the discussion part of the paper [p. 1; lines 703-711]. We added Table 8 for comparison of the results of our work with similar works of other authors to emphasise the differences and contribution of the paper.
Thank you again for all suggestions.
We appreciate the editor’s and reviewers' valuable feedback, which motivated us to research fake news identification further using morphological analysis.
Best regards
Jozef Kapusta
Corresponding author
" | Here is a paper. Please give your review comments after reading it. |
181 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Concrete is the main material in building. Since its poor structural integrity may cause accidents, it is significant to detect defects in concrete. However, it is a challenging topic as the unevenness of concrete would lead to the complex dynamics with uncertainties in the ultrasonic diagnosis of defects. Note that the detection results mainly depend on the direct parameters, e.g. the time of travel through the concrete. The current diagnosis accuracy and intelligence level are difficult to meet the design requirement for automatic and increasingly high-performance demands. To solve the mentioned problems, our contribution of this paper can be summarized as establishing a diagnosis model based on the GA-BPNN method and ultrasonic information extracted that helps engineers identify concrete defects. Potentially, the application of this model helps to improve the working efficiency, diagnostic accuracy and automation level of ultrasonic testing instruments. In particular, we propose a simple and effective signal recognition method for small-size concrete hole defects. This method can be divided into two parts: 1) signal effective information extraction based on wavelet packet transform (WPT), where mean value, standard deviation, kurtosis coefficient, skewness coefficient and energy ratio are utilized as features to characterize the detection signals based on the analysis of the main frequency node of the signals, and 2) defect signal recognition based on GA optimized back propagation neural network (GA-BPNN), where the cross-validation method has been used for the stochastic division of the signal dataset and it leads to the BPNN recognition model with small bias. Finally, we implement this method on 150 detection signal data which are obtained by the ultrasonic testing system with 50kHz working frequency. The experimental test block is a C30 class concrete block with 5mm, 7mm, and 9mm penetrating holes. The information of the experimental environment, algorithmic parameters setting and signal processing procedure are described in detail. The average</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Concrete materials are widely used in modern buildings. It is a non-uniform material mixed with cement, sand, gravel and water. The random distributions of coarse aggregate and cement mortar are the causes of the heterogeneity of concrete. There exist various forms of deterioration and defects of concrete structures because of aging and environmental damage, such as internal voids <ns0:ref type='bibr'>(Bien, Kaminski & Kuzawa,2019)</ns0:ref>. Among the main health problems of concrete, surface defects are relatively easy to be detected. However, internal defects are hidden in the concrete, which is difficult to detect and is more harmful. It is significant to detect and analyze the internal defects of concrete structures in time to avoid the potential related accidents. Commonly used methods of non-destructive testing include electromagnetic, radiological and ultrasound <ns0:ref type='bibr' target='#b1'>(Schabowicz, 2019)</ns0:ref>. Ultrasonic has the advantages of strong penetrating power and high sensitivity, so it is mostly used in material defect detection <ns0:ref type='bibr' target='#b2'>(Janku, et al., 2019)</ns0:ref>. In actual inspection tasks, ultrasonic detection of concrete defects is based on the observation of acoustic parameters, propagation time, amplitude and main frequency of ultrasonic detection signals, etc. <ns0:ref type='bibr' target='#b3'>(Ushakov, Davydov, 2013;</ns0:ref><ns0:ref type='bibr' target='#b5'>Ozsoy, Koyunlu & Ugweje, 2017)</ns0:ref>. For example, NDT James V-C-400 V-Meter MK IV still uses the ultrasonic pulse velocity method to characterize the detection signal. These applied methods are susceptible to individual subjective factors and experience levels. The applications of modern signal processing and artificial intelligence algorithms can achieve the automatic recognition of signals and also improve recognition efficiency and accuracy <ns0:ref type='bibr' target='#b6'>(Tibaduiza-Burgos & Torres-Arredondo, 2015;</ns0:ref><ns0:ref type='bibr' target='#b44'>Javed et al., 2020)</ns0:ref>. It is necessary to obtain effective information to characterize different types of signals before performing detection signal recognition. At present, many methods have been used to find the effective features of complex signals <ns0:ref type='bibr' target='#b7'>(Cheema & Singh, 2019)</ns0:ref>. These signal analysis methods are mainly sparse representation, Hilbert-Huang transform, Fourier transform, wavelet transform, and so on <ns0:ref type='bibr' target='#b10'>(Liu et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b11'>Kumar & Kumar, 2019;</ns0:ref><ns0:ref type='bibr' target='#b12'>Bochud et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b18'>Rodriguesa et al., 2019)</ns0:ref>. However, it is difficult to extract the key information from concrete ultrasonic detection signals due to the noise influence and many mutational components, such as mutational amplitude <ns0:ref type='bibr' target='#b7'>(Cheema & Singh 2019;</ns0:ref><ns0:ref type='bibr' target='#b8'>Combet, Gelman, & LaPayne, 2012)</ns0:ref>. Among these signal preprocessing methods, wavelet transform can effectively deal with the non-stationary and high-noise complex signals. This method has been applied to process ultrasonic signals <ns0:ref type='bibr' target='#b17'>(Acciani et al.,2010)</ns0:ref>. 
Machine learning models are established with simple structures which are suitable for small sample dataset, while the scholars often choose these methods to identify detection signals <ns0:ref type='bibr' target='#b13'>(Iyer et al., 2012)</ns0:ref>. For now, commonly used machine learning algorithms include support vector PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60010:1:1:NEW 9 Jun 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science machine, neural network, etc. <ns0:ref type='bibr' target='#b15'>(Rajput et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b16'>Xu & Jin, 2018)</ns0:ref>. As a class of neural networks, BP neural network (BPNN) is a classic model. It has strong nonlinear mapping ability and simple structure <ns0:ref type='bibr' target='#b25'>(Wang, 2015)</ns0:ref>. After optimization by genetic algorithm, the fitting ability and running speed can be improved. Note that BPNN is widely used in the field of pattern recognition, where deep learning is one of the most popular methods in pattern recognition. In the field of ultrasonic inspection, deep learning has been used to identify inspection images <ns0:ref type='bibr' target='#b26'>(Slonski, Schabowicz, & Krawczyk, 2020)</ns0:ref>. However, deep learning was rarely used to recognize concrete ultrasonic detection signals due to the high hardware performance requirements <ns0:ref type='bibr' target='#b24'>(Shrestha & Mahmood, 2019)</ns0:ref>. Saechai S. et al. <ns0:ref type='bibr' target='#b19'>(Saechai, Kongprawechnon & Sahamitmongkol, 2012)</ns0:ref> used the support vector machine to identify the defect detection signals of concrete which obtained higher recognition accuracy. Chen Y. et al. <ns0:ref type='bibr' target='#b22'>(Chen & Ma, 2011)</ns0:ref> extracted features of the weld detection signal by wavelet packet transform and used radial basis function neural network to recognize defects. Zhang K.X. et al. <ns0:ref type='bibr' target='#b23'>(Zhang et al., 2020)</ns0:ref> used genetic algorithm-back propagation neural network to evaluate the laser ultrasonic fault signals of uniform metal structures. The composition of the concrete selected in our paper is more complex than the research objects in the literatures. When these methods are used directly to identify concrete detection signals, the performance would be deteriorated. Therefore, a novel ultrasonic-based solution should be developed for concrete defect detection. In this paper, we propose an intelligent method to process the ultrasonic lateral detection signals of penetrating holes in concrete. The main contributions and objective are summarized as follows:  To improve the performance of more effective calculation and high identification accuracy, the ultrasonic detection signals are decomposed by WPT in order to extract the useful information in the detection signal. As a result, we extract the 5 effective features of the processed signal.  Genetic algorithm has been used to optimize the structural parameters of the BP neural network. In the experiments with measured data, the average classification accuracy of GA-BPNN is increased by 4.66%, 4%, and 5.33% compared with BPNN, SVM, and RBF, respectively.  This paper presents a generalized research framework on the processing and recognition of concrete ultrasonic detection signals, which lays the technical foundation for achieving the intelligent and automatic detection of concrete. 
The rest of the paper has been organized as follows: In section 2, we describe the whole algorithmic procedure and principles briefly. And we present the experimental system and algorithmic parameters setting in the 3rd section. The test results and analysis are presented in the 4th section. We draw a conclusion in the 5th section.</ns0:p></ns0:div>
<ns0:div><ns0:head>THE PROCESS OF THE PROPOSED ALGORITHM</ns0:head><ns0:p>The ultrasonic pulse velocity (UPV) method is widely used in ultrasonic testing instruments which cannot meet the needs of small-size concrete defect detection. The levels of intelligence and automation of concrete testing instruments need to be improved urgently. To solve this problem, PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60010:1:1:NEW 9 Jun 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>we propose a method based on WPT and GA-BPNN. In particular, the presented algorithm in this paper consists of three parts. First, wavelet packet transform is used to attenuate noise and retain effective information from the non-stationary concrete ultrasonic detection signals. Then, the features of processed signals are extracted as the feature vector. Finally, we use the BPNN optimized by the improved GA to identify the detection signals and the K-fold cross-validation is introduced to verify the stability and generalization of GA-BPNN. We describe the main steps in the following subsections.</ns0:p></ns0:div>
<ns0:div><ns0:head>Wavelet packet transform</ns0:head><ns0:p>Wavelet transform is a multi-resolution analysis method <ns0:ref type='bibr' target='#b27'>(Babouri et al., 2019)</ns0:ref>, which is a process of using wavelet basis functions to decompose a signal into components of different frequency bands, processing the wavelet packet coefficients, and reconstructing these components into a complete signal <ns0:ref type='bibr' target='#b28'>(Kim et al., 2020)</ns0:ref>. When using the wavelet transform to process a non-stationary signal, there are different resolutions at different locations. WPT can accurately obtain the high and low-frequency components of the signal <ns0:ref type='bibr' target='#b29'>(Schimmack & Mercorelli, 2018)</ns0:ref>. Therefore, WPT can be considered as an effective pre-processing algorithm for feature extraction. However, the wavelet transform cannot extract the detailed information of detection signals.</ns0:p></ns0:div>
<ns0:div><ns0:head>Decomposition and reconstruction of WPT</ns0:head><ns0:p>The structure diagram of the three-layer decomposition of wavelet packet is given in Fig. <ns0:ref type='figure' target='#fig_5'>1</ns0:ref>. In Fig. <ns0:ref type='figure' target='#fig_5'>1</ns0:ref>, S means an original signal. Then, S can be decomposed according to the equation ( <ns0:ref type='formula'>1</ns0:ref> </ns0:p><ns0:formula xml:id='formula_0'>                2 2 1 2 1 2 1 [ ] [ ] [ ] [ ] n n j l k j l Z n n j l k j l Z d k h d l d k g d l (1)           2 1 2 2 1 2 2 [ ] [ ] [ ] n n n j k l j k l j l Z l Z d k h d l g d l (2)</ns0:formula><ns0:p>where represents a wavelet packet coefficient sequence of the signal to be decomposed, j after decomposition, h represents orthogonal real coefficients matrices of low-pass filters, and g represents orthogonal real coefficients matrices of high-pass filters. We use the cost function to select the wavelet packet basis for the signal decomposition. At present, the Shannon entropy <ns0:ref type='bibr' target='#b42'>(Shi et al., 2021)</ns0:ref> is widely used where the entropy of the wavelet packet coefficient sequence d={d j } is defined by equation (3). <ns0:ref type='bibr' target='#b32'>& Li, 2014)</ns0:ref> only extracted one type of feature including the energy ratio of each node after wavelet packet decomposition as the feature vector of identifying defects in carbon-fiber-reinforced polymer. Wang X.K. et al. <ns0:ref type='bibr' target='#b33'>(Wang et al., 2019)</ns0:ref> selected 9 features including the average peak spacing, dominant frequency, etc., to identify weld quality defects. Furthermore, scholars also choose features such as mean value, standard deviation, kurtosis, etc. as the inputs of ultrasonic detection signal recognition models <ns0:ref type='bibr'>(Virmani et al, 2017)</ns0:ref>.</ns0:p><ns0:p>Based on commonly used features in the field of ultrasonic testing, we have selected useful and non-redundant features by analyzing the calculation formulas of the features and conducting experimental tests. For example, the calculation formulas and physical meaning of mean square value and energy are very similar, and they are not used as features collectively. Finally, the five features of mean value, standard deviation, kurtosis coefficient, skewness coefficient and energy ratio are retained <ns0:ref type='bibr' target='#b32'>(Wan & Li, 2014;</ns0:ref><ns0:ref type='bibr' target='#b43'>Zhang, Duffy & Orlandi, 2017)</ns0:ref>.</ns0:p><ns0:p>In order to make the feature values in the same order of magnitude and improve the convergence speed of the model, we normalize the extracted features <ns0:ref type='bibr' target='#b35'>(Bagan et al., 2009)</ns0:ref>. The five feature values are mapped to [-1,1] and the normalized feature values are taken as input variables of the BPNN model in this paper.</ns0:p></ns0:div>
<ns0:div><ns0:head>GA-BP neural network (GA-BPNN) BP Neural Network (BPNN)</ns0:head><ns0:p>A BPNN is made up of an input layer, a hidden layer, and an output layer. The input signal of BPNN propagates forward, and the error propagates backward. It can approximate the function of finite discontinuities <ns0:ref type='bibr' target='#b25'>(Wang, 2015)</ns0:ref>. In addition, it has a powerful ability to deal with nonlinear problems. The structure is shown in Fig. <ns0:ref type='figure'>2</ns0:ref>. In Fig. <ns0:ref type='figure'>2</ns0:ref>, x is input data, ω ij is the weight between the input layer i and the hidden layer j, α is biases of the hidden layer, f 1 is activation functions used for the hidden layer, ω jk is the weight between the hidden layer j and the output layer k, β is biases of the output layer, f 2 is the activation function used for the output layer, and y is the output of the network.</ns0:p></ns0:div>
<ns0:div><ns0:head>Genetic algorithm (GA)</ns0:head><ns0:p>The outputs of the BPNN are calculated according to its input-output function built on the generated weights, biases and number of hidden nodes. In this paper, the improved GA <ns0:ref type='bibr' target='#b37'>(Peng et al., 2013)</ns0:ref> has been used to optimize initial weights, biases and the number of hidden layer nodes. This method can make BPNN convergent fast with higher precision <ns0:ref type='bibr' target='#b38'>(Han & Huang, 2019)</ns0:ref>.</ns0:p><ns0:p>According to the description of the improved GA <ns0:ref type='bibr' target='#b37'>(Peng et al., 2013)</ns0:ref>, we use binary to code variables of the number of hidden layer node and use real numbers to code variables of the corresponding weights and biases for building candidate solutions in GA. We assume the maximum number of hidden layer node in the BPNN is l, and the number of input and output layer nodes in the network are n and m, respectively. Then the total number of optimization variables is</ns0:p><ns0:p>. The coding of all parameters in a candidate solution is shown in Fig. <ns0:ref type='figure'>3</ns0:ref>. For</ns0:p><ns0:formula xml:id='formula_1'>      1 ( ) ( ) l n l l m m</ns0:formula><ns0:p>instance, if the number of hidden layer nodes is represented by a q-bit 0 to 1 string, the range of the number of hidden layer node is 0 to 2 q -1, and l = 2 q -1. In Fig. <ns0:ref type='figure'>3</ns0:ref>, q is the number of candidates hidden layer node, l is the maximum number of hidden layer node, 0 < q ≤ l, n is the number of nodes in the input layer, m is the number of nodes in the output layer. Furthermore, errors between the actual values and the output values of the BPNN are calculated, then the reciprocal of the sum of squared errors is used as the fitness function in the GA. In this paper, the roulette wheel method is used as the selection operator. Two individuals are selected by the selection operator. Then we use the one-point crossover method to process the binary coding arrays. The arithmetic crossover operator is used for the real number encoding sequences. For binary coding arrays, the simple mutation operator is used. We apply the nonuniform mutation operator to the real coding sequences in this paper. Briefly, we first encode the variables that need to be optimized; next, the fitness values in the initial population are calculated; then, we perform selection, crossover, and mutation operations to generate a new generation of population and obtain the maximum fitness value of each generation; finally, the optimal variable values are obtained by decoding the individual with the largest fitness value among all offspring.</ns0:p></ns0:div>
<ns0:div><ns0:head>Overall steps of the WPT and GA-BPNN method</ns0:head><ns0:p>To describe the proposed concrete ultrasonic detection signal identification method, the main steps are summarized as follows.</ns0:p><ns0:p>Step1: The ultrasonic detection signal is decomposed into three layers by WPT sub-algorithm, and the wavelet packet coefficients in the main frequency node are extracted to reconstruct the signal; Step2: The five feature variables of the reconstructed signals are calculated to establish the feature dataset;</ns0:p><ns0:p>Step3: Adopt K-fold cross-verification method to divide the dataset; Step4: The genetic algorithm is executed to calculate the optimal configuration parameter of BPNN; select the optimal parameters of BPNN from the optimal solutions of GA, then obtain an optimized BPNN; Step5: Use the test dataset to test the BPNN, output the recognition results. The flowchart of the proposed method is given in Fig. <ns0:ref type='figure'>4</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>EXPERIMENTAL ENVIRONMENT AND TEST</ns0:head></ns0:div>
<ns0:div><ns0:head>Experimental set-up and dataset</ns0:head><ns0:p>As an engineering application, we apply the ultrasonic transmission detection method in a practical ultrasonic detection system, in which P28F ultrasonic probes with a 50 kHz working frequency generate the ultrasonic signals. A ±80 V square-wave pulse signal is generated at the transmitting end to excite the ultrasonic probe vibration. The signal sampling frequency at the receiving end is 1 MHz, and the analog-to-digital conversion module used for data acquisition has 12-bit resolution. Each detection signal used in this paper contains a total of 18,000 sampling points over 6 cycles. Fig. <ns0:ref type='figure'>6</ns0:ref> shows the concrete test sample. Concrete is mainly composed of cement, sand, coarse aggregate and water; the material ratio of the C30 class concrete is 461, 175, 512 and 1252 kg/m 3 . The sample block is pre-specified to be 30 cm long, 20 cm wide and 20 cm high. The experimental data are obtained by sampling repeatedly at the different positions shown in Fig. <ns0:ref type='figure'>7</ns0:ref>; the white dots in Fig. <ns0:ref type='figure'>7</ns0:ref> mark the evenly spaced test points. Hole defects come in three sizes. The distance between two penetrating holes is 85 mm, and the diameters of the penetrating holes are 5 mm, 7 mm and 9 mm, respectively. Three test points are placed on the surface over each size of hole defect, and six test points of the defect-free structure are located between the points over the holes. The horizontal and vertical distances between detection positions are both 45 mm. Fifteen detection positions are arranged on the concrete surface, and 10 detection signals are obtained at each point. In this case study, a total of 150 ultrasonic transmission detection data samples are obtained through the experimental device in Fig. <ns0:ref type='figure'>8</ns0:ref>, including 60 signals from the non-defective structure and 90 signals from the defective structure. Fig. <ns0:ref type='figure'>5</ns0:ref> shows the experimental data acquisition process of the detection system.</ns0:p></ns0:div>
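As a small illustration of how the 150-sample dataset described above might be assembled, the following layout is a hypothetical labeling consistent with the stated counts (60 defect-free signals, 90 defect signals, 10 signals per test point); it is not the authors' actual data file.

```python
import numpy as np

# 6 defect-free test points and 9 points over the holes, 10 signals each
labels = np.array([0] * 60 + [1] * 90)   # 0 = no defect, 1 = hole defect
hole_mm = np.repeat([5, 7, 9], 30)       # 3 points x 10 signals per diameter
assert labels.size == 150 and hole_mm.size == 90
```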
<ns0:div><ns0:head>Algorithm parameter settings</ns0:head><ns0:p>According to the wavelet basis function selection rule and the time-frequency characteristics of these detection signals, the db15 wavelet function is selected to perform the three-layer WPT. Most of the valid information of the signal is contained in the first node of the third layer after decomposing the detection signals.</ns0:p><ns0:p>In the algorithm experiments, our computer runs a 64-bit Windows operating system. The hardware configuration includes a 2.08 GHz Intel Core i5-8400 CPU with 6 cores and 32 GB of 2400 MHz DDR4 memory. The application software is MATLAB R2014a. The main parameter settings of the proposed algorithm are as follows.</ns0:p><ns0:p>The WPT parameters are: the wavelet basis function is db15, the number of decomposition levels is 3, and Shannon entropy is used to select the optimal wavelet basis. The GA parameters are: the maximum number of generations g is 100, the population size p is 50, the binary code length q is 5, the crossover probability Pc is 0.7, and the mutation probability Pm is 0.05. The BPNN parameters are: the number of input nodes is 5, the number of output nodes is 2, the training stop condition is that the model error reaches 0.001 or the number of training epochs reaches 1000, and the learning rate is 0.01. Cross-validation is used for training and testing the GA-BPNN model; the hidden-layer activation is the hyperbolic tangent sigmoid transfer function (tansig), and the output-layer activation is the log-sigmoid transfer function (logsig). The K-fold cross-validation parameters are: K = 3, N = 150. That is, the 150 samples of experimental data are randomly divided into 3 groups; 2 groups are selected in turn as the training data of the GA-BPNN, and the remaining group is used as the testing data. The recognition rate of each test is recorded, and the final result is the average of the 3 recognition rates.</ns0:p></ns0:div>
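A minimal sketch of the stated WPT step using the PyWavelets library (the paper itself uses MATLAB): a three-layer db15 decomposition that keeps only the first node of the third layer ('aaa') and reconstructs the signal from it. The function name and the random stand-in signal are assumptions for illustration.

```python
import numpy as np
import pywt

def wpt_reconstruct(signal, wavelet="db15", level=3, node="aaa"):
    # Three-layer wavelet packet decomposition of the detection signal
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    # New packet tree containing only the dominant low-frequency node
    out = pywt.WaveletPacket(data=None, wavelet=wavelet, maxlevel=level)
    out[node] = wp[node].data
    return out.reconstruct(update=False)

sig = np.random.randn(18000)   # stand-in for one 18,000-point detection signal
rec = wpt_reconstruct(sig)
```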
<ns0:div><ns0:head>Results & Discussion</ns0:head></ns0:div>
<ns0:div><ns0:head>Experimental data analysis</ns0:head><ns0:p>Four typical waveform samples of the raw detection signals are randomly selected from the experimental data, and their last-period data are drawn in Fig. <ns0:ref type='figure'>9</ns0:ref>. The figure shows the similarities and differences of the ultrasonic waves propagating in the concrete test block. Based on the physical mechanism of ultrasonic propagation, the different hole diameters are the main reason for the differences between the ultrasonic detection signal waveforms. In addition, the sizes and shapes of the gravel differ at different locations in the concrete, which is another important reason for the differing detection waveforms <ns0:ref type='bibr' target='#b40'>(Garnier et al., 2009)</ns0:ref>. Based on the reconstructed data, the five features are calculated for all 150 signals and are shown separately in Figs. 10-14. As these figures show, the five features of the reconstructed defective and defect-free signals do not exhibit obvious regularity or organization. The feature values differ more or less even when they are extracted from defects with the same penetrating-hole diameter, or at the same detection points. The five features overlap, and the reconstructed signals are not linearly separable on the basis of any single feature. On the one hand, the uneven distribution of coarse aggregate in concrete generates acoustic measurement uncertainty, which makes the ultrasonic detection signal complex: it is a non-linear, non-stationary signal containing many abrupt components. On the other hand, the stability and accuracy of the hardware system affect the output deviation, so the detection signals inevitably contain a certain distortion. Nevertheless, some feature data are centrally distributed, such as the kurtosis coefficient of the 9 mm defect detection data in Fig. <ns0:ref type='figure' target='#fig_5'>12</ns0:ref>. Although different detection signals are similar in any single feature, their differences can be distinguished by fusing multiple features. The five features are therefore regarded as the essential characteristics for defect classification in this paper.</ns0:p></ns0:div>
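For illustration, the sketch below computes five statistical features per reconstructed signal. Only the kurtosis coefficient is explicitly named in this section; the other four here (mean, standard deviation, skewness and peak factor) are assumptions standing in for the actual feature set.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def extract_five_features(x):
    """Five per-signal statistics; only kurtosis is confirmed by the paper."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2)) + 1e-12
    return np.array([
        x.mean(),                    # mean value (assumed)
        x.std(),                     # standard deviation (assumed)
        skew(x),                     # skewness (assumed)
        kurtosis(x),                 # kurtosis coefficient (Fig. 12)
        np.max(np.abs(x)) / rms,     # peak factor (assumed)
    ])
```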
<ns0:div><ns0:head>Comparison</ns0:head><ns0:p>The optimal solution is used to initialize the configuration parameters of the proposed GA-BPNN algorithm. The optimal number of hidden-layer nodes of the BPNN calculated by the GA with three-fold cross-validation is 12, so the numbers of nodes in the three layers are 5, 12 and 2. To assess the advantages and disadvantages of the GA-BPNN, a BPNN without optimization is used for algorithmic performance analysis, and the convergence curves of both are drawn. We use a default function in the neural network toolbox to initialize the weights and biases, with 11 hidden-layer nodes according to the empirical rule (2*5+1) in the paper of <ns0:ref type='bibr' target='#b39'>Guo Z.H. et al. (Guo et al., 2011)</ns0:ref>. The remaining parameters of the BPNN, including the numbers of input and output nodes, the training stop condition, the learning rate and the activation functions, are the same as the GA-BPNN settings above. The training error curves and test error curves of the computational processes are plotted in Figs. 15-16, which show how the outputs of the models converge to the actual tag values. The feature data used for drawing the curves are randomly selected from the training dataset and the test dataset, respectively. The target error of the BPNN in this paper is 0.001, and the epochs required by the BPNN are more than twice those of the GA-BPNN, so the computational cost of the BPNN is higher. In addition, the GA-BPNN converges faster in the early stage of operation. As the mean squared error curves in Figs. 15-16 show, the GA-BPNN takes fewer epochs under the same termination conditions and thus has higher operating efficiency and convergence speed in approaching the model's predictive values. The statistical results on the 100 training data calculated by the GA-BPNN with three-fold cross-validation are shown in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>, and the statistical results on the 50 test data are shown in Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>. Although the convergence speed of the GA-BPNN is higher, it must spend considerable time solving for the optimum in the training stage: about 489.049 seconds to search for the optimum. Its average training time is about 0.0993 seconds and its average test time is about 0.0053 seconds. Correspondingly, the average training time of the plain BPNN is about 0.1413 seconds and its average test time is about 0.0057 seconds; its test recognition accuracy is about 86.67%, which is lower than that of the GA-BPNN. The three defect-recognition accuracies from the three-fold cross-validation are all higher than 90% in the statistical results, which proves that the extracted features are effective in characterizing the presence or absence of defects in concrete and that the GA-BPNN is feasible as a concrete defect-recognition model. Furthermore, the proposed method identifies defects automatically from detection data, so operators do not need professional detection knowledge to read and interpret the recognition results, which is quite important for practical engineering applications. According to previous research on recognition methods for concrete ultrasonic detection signals, we choose the radial basis function network (RBF) <ns0:ref type='bibr' target='#b22'>(Chen & Ma, 2011)</ns0:ref> and the support vector machine (SVM) <ns0:ref type='bibr' target='#b19'>(Saechai, Kongprawechnon & Sahamitmongkol, 2012)</ns0:ref> to carry out classification experiments on the data in this paper. Likewise, under 3-fold cross-validation, the 150 concrete ultrasonic data samples consisting of 5 features are used.
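A rough Python stand-in for the 5-12-2 BPNN described above, using scikit-learn's MLPClassifier instead of the MATLAB neural network toolbox; tanh approximates tansig, and the tolerance and epoch limit mirror the stated stopping rules. This is a sketch under those assumptions, not the authors' implementation.

```python
from sklearn.neural_network import MLPClassifier

def train_bpnn(X, y, hidden=12, init=None):
    # 'init' would carry the GA-optimized starting point; scikit-learn does not
    # expose direct weight initialization, so it is ignored in this sketch.
    net = MLPClassifier(hidden_layer_sizes=(hidden,), activation="tanh",
                        solver="sgd", learning_rate_init=0.01,
                        max_iter=1000, tol=1e-3, random_state=0)
    return net.fit(X, y)
```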
The results of the comparative experiment are shown in Table <ns0:ref type='table'>3</ns0:ref>. The recognition accuracies of the SVM, RBF and BPNN methods differ little, but none of them reaches 90%. Compared with previous studies, the concrete defects in this paper are smaller, so the detection signals are more challenging to identify. The proposed method is more accurate than the above three methods, showing that it achieves high recognition accuracy. During acoustic measurement, the degree of adhesion and the contact force of the ultrasonic probe on the concrete surface may introduce recognition errors, since concrete is a complex, multi-phase medium; the obtained detection signals are therefore complex and diverse. Although it is hard to completely identify all modes of the complex ultrasonic detection signals from concrete, more defect types will be investigated in our future work.</ns0:p></ns0:div>
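The comparative experiment could be reproduced in spirit with scikit-learn, as sketched below. Note that scikit-learn provides no RBF network; the SVC with an RBF kernel is only a stand-in for the two cited methods, so the accuracies would not match Table 3 exactly.

```python
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def compare(X, y):
    models = {
        "SVM (RBF kernel)": SVC(kernel="rbf"),
        "BPNN (no GA, 11 hidden nodes)": MLPClassifier(
            hidden_layer_sizes=(11,), activation="tanh", max_iter=1000),
    }
    for name, clf in models.items():
        acc = cross_val_score(clf, X, y, cv=3).mean()   # same 3-fold protocol
        print(f"{name}: mean 3-fold accuracy = {acc:.2%}")
```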
<ns0:div><ns0:head>Conclusions and future work</ns0:head><ns0:p>To recognize concrete defects with high reliability and accuracy from ultrasonic testing signals, we propose an intelligent method that includes a signal-processing sub-algorithm and a recognition sub-algorithm. We extract the fundamental information from the first node of the third layer using the wavelet packet transform (WPT) and calculate five feature variables of the reconstructed signals. The GA-BPNN-based sub-algorithm then identifies the concrete defects, where a GA-optimized BP neural network (GA-BPNN) model embedding a K-fold cross-validation method is proposed. As a practical application to a typical type of hole defect in concrete, we use the method to identify defects in a C30 class concrete test block. Based on the test points, we obtained 150 ultrasonic detection signals covering defect-free and hole-defect cases at various locations, and then performed identification experiments on these datasets using the method in this paper. The GA-BPNN has higher diagnosis accuracy and faster running speed than existing methods. The experimental results show the effectiveness of the proposed method, with the concrete hole defects recognized with high accuracy.</ns0:p><ns0:p>In the future, we will further verify the effectiveness of this method on ultrasonic detection data of more types of concrete defects (e.g., cracks and foreign matter) and develop more effective techniques, such as new characteristic indexes, optimizers and machine learning models, to solve complex problems in the field <ns0:ref type='bibr' target='#b45'>(Mittal et al., 2020)</ns0:ref>. These effective methods will then be extended to more detection signal fields. The sensor network solution is also a future direction of ours for information fusion <ns0:ref type='bibr' target='#b14'>(Naeem et al., 2021)</ns0:ref>. In addition, the uneven distribution of coarse aggregate could be considered a stochastic distribution optimization problem <ns0:ref type='bibr' target='#b41'>(Ren, Zhang & Zhang, 2019)</ns0:ref>; its influence on the accuracy of detection signal recognition is another theoretical perspective for our future work.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>) to obtain A and D. A is a low-frequency component and D is a high-frequency component after each decomposition of an original signal. We then decompose A and D in the same way. Finally, S is decomposed into eight components at different frequency bands. The basic calculation formulas of WPT are shown in equations (1-2).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>factor, n represents the number of frequency bands, k and l are the positions of coefficients sequences</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Besides decomposing ultrasonic detection signals precisely with WPT, choosing an appropriate wavelet basis function is vital. <ns0:ref type='bibr' target='#b30'>Samaratunga D. et al. (Samaratunga, Jha & Gopalakrishnan, 2016)</ns0:ref> note that the time-frequency variation of a non-stationary signal is well represented by the Daubechies wavelet function in the time-frequency domain. In this paper, the db15 wavelet is selected as the basis function of WPT according to the decomposition experiment analysis. Ultrasonic signal features selection In pattern recognition, feature extraction normally serves two processes: object feature data collection and classification. The quality and properties of feature data (e.g., monotonicity) greatly affect the design and performance of pattern recognition classifiers, which is a key problem in pattern recognition. Scholars have used wavelet coefficients after the wavelet transform as feature vectors, which resulted in very high-dimensional input data for the recognition model <ns0:ref type='bibr' target='#b31'>(Cruz et al., 2016)</ns0:ref>. Wan P. et al. (Wan </ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='17,42.52,178.87,525.00,322.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='18,42.52,178.87,525.00,115.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,178.87,525.00,344.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,178.87,525.00,426.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,178.87,525.00,262.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='30,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='31,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='32,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Training dataset recognition results.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Recognition</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>result No. Class True False Accuracy(%) Average Accuracy(%)</ns0:head><ns0:label /><ns0:figDesc /><ns0:table><ns0:row><ns0:cell /><ns0:cell>No defect</ns0:cell><ns0:cell /><ns0:cell>38</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>Defect</ns0:cell><ns0:cell>5mm 7mm</ns0:cell><ns0:cell>58</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>96</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>9mm</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>No defect</ns0:cell><ns0:cell /><ns0:cell>36</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>Defect</ns0:cell><ns0:cell>5mm 7mm</ns0:cell><ns0:cell>59</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>95</ns0:cell><ns0:cell>95.33</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>9mm</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>No defect</ns0:cell><ns0:cell /><ns0:cell>37</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>Defect</ns0:cell><ns0:cell>5mm 7mm</ns0:cell><ns0:cell>58</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>95</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>9mm</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>1</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Test dataset recognition results.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Recognition</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>result No. Class True False Accuracy(%) Average Accuracy(%)</ns0:head><ns0:label /><ns0:figDesc /><ns0:table><ns0:row><ns0:cell /><ns0:cell>No defect</ns0:cell><ns0:cell /><ns0:cell>19</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>Defect</ns0:cell><ns0:cell>5mm 7mm</ns0:cell><ns0:cell>27</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>92</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>9mm</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>No defect</ns0:cell><ns0:cell /><ns0:cell>18</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>Defect</ns0:cell><ns0:cell>5mm 7mm</ns0:cell><ns0:cell>27</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>90</ns0:cell><ns0:cell>91.33</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>9mm</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>No defect</ns0:cell><ns0:cell /><ns0:cell>20</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>Defect</ns0:cell><ns0:cell>5mm 7mm</ns0:cell><ns0:cell>26</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>92</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>9mm</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60010:1:1:NEW 9 Jun 2021)</ns0:note></ns0:figure>
</ns0:body>
" | "Dear Editors
We would like to thank the editor and the reviewers for their generous comments on the manuscript; it has been further revised to address their concerns.
In particular, we would like to thank the editor for giving us the chance to revise the paper, and the reviewers for their valuable suggestions, which helped us improve both the English writing and the quality of the paper.
We are very pleased to submit a revised version of our manuscript entitled “Ultrasonic based Concrete Defects Identification via Wavelet Packet Transform and GA-BP Neural Network”. The revised manuscript has been modified thoroughly according to the reviewers’ suggestions. We have also proofread the manuscript, corrected all the typos and improved its readability. We have responded to all the reviewers' comments, and the point-by-point responses are given below. The modifications in the revised manuscript are highlighted in red to reflect the comments and our responses.
We believe the manuscript has been improved and hope it now meets the criteria for publication in PeerJ Computer Science.
Dr. Jinhui Zhao
Associate Professor of Computer Science and Biology
On behalf of all authors.
Reviewer 1 (Anonymous)
Basic reporting
The paper is written in a good manner. Some minor touches can improve this paper more.
The quality of the figures can be improved more
The contributions of the authors are not clear. They have mentioned in first contribution.
Several paragraphs contain trivial information and should be dropped.
I found some English mistakes please check them.
Future research directions section is core, however, it is not good at all.
Thanks for your comment. We have updated the figures with higher quality. Since the figures are compressed during PDF conversion, the reviewer probably encountered pixel loss. If necessary, we would be glad to upload all the high-quality figure source files separately to enhance readability.
We are very sorry for the insufficient description of our contribution. The main contribution of our work is a simple and effective method for automatically identifying defects in concrete. Although ultrasonic technology has been widely used in concrete inspection projects, existing ultrasonic inspection instruments still rely on manual judgment, and there is a lack of research on the identification of small-size defects. The method in this paper improves recognition efficiency. Accordingly, we have added a description of the main contribution to the introduction.
We carefully checked every section and paragraph of the paper and deleted some trivial information. At the same time, we proofread the full text and corrected the language errors; we hope the paper now meets the readability criterion. In the conclusion, we carefully revised the discussion of future research, clarifying the goals and laying out reasonable, feasible directions, hoping to make further contributions to the intelligent inspection of concrete. In particular, the reviewer recommended several interesting results, which are now cited in the conclusion as potential extensions of our future work.
Experimental design
- What are the computational resources reported in the state of the art for the same purpose?
There are currently no public data resources for ultrasonic testing of concrete, and previous studies in the same field did not provide a dataset. The data used in this paper are all measured on concrete test blocks by our ultrasonic detection system. In the future, after obtaining further research results, we will provide readers with our datasets and algorithm code.
- Please cite each equation and clearly explain its terms.
In this paper, we have explained the terms used in the equations, and all our algorithms and equations are now declared and cited.
- Clearly highlight the terms used in the algorithm and explain them in the text.
Thank you for your comment. Your suggestion makes our paper easier for readers to understand. We have checked and explained the terms used in the algorithm.
Validity of the findings
- What are the evaluations used for the verification of results?
This paper mainly presents the working process of the proposed identification algorithm, that is, how the concrete ultrasonic defect signals are processed and the procedures of the GA-BPNN. Finally, we compare the computational results with those of other methods, which also serves to verify the feasibility of the proposed method.
Actually, the concrete defect detection experiments described in this paper are carried out on standard concrete test blocks. The data are obtained from actual tests with the ultrasonic inspection system, rather than from datasets provided by others or from synthetic simulations. The recognition results in this paper are therefore based on real measured values.
Comments for the Author
More recent papers must be included in all the sections and subsections.
1) Anomaly Detection in Automated Vehicles Using Multistage Attention-Based Convolutional Neural Network, IEEE Transactions on Intelligent Transportation Systems
2) Analysis of security and energy efficiency for shortest route discovery in low‐energy adaptive clustering hierarchy protocol using Levenberg‐Marquardt neural network, Transactions on Emerging Telecommunications Technologies, e3997
Thank you for providing us with references, which are inspiring for our work. In the revised manuscript, we have updated the reference list and our future work in the conclusions. Moreover, an additional reference entitled ‘DARE-SEP: A Hybrid Approach of Distance Aware Residual Energy-Efficient SEP for WSN’ is also included in the revised paper. This paper introduces a sensor network solution, which is also a future direction of ours for information fusion. Thank you again for your kind suggestions.
Reviewer 2 (Anonymous)
Basic reporting
1. In the abstract, the background knowledge on the problem addressed need to be added.
Thank you for your comment. We have added background knowledge of the problem solving to the abstract.
2. In the abstract, the wide range of applications and its possible solutions need to be added.
At present, we have only used this method in the field of concrete ultrasonic testing and proved it to be effective. Of course, we believe that this method will also yield accurate identification results in other diagnostic signal fields. Based on your comment, we have added the range of applications and possible solutions to the abstract.
3. In the abstract, the problem addressed need to be justified with more details.
We have revised the abstract: the problem description has been improved, and the proposed approach to solving the problem is described in more detail.
4. In the Introduction section, the drawbacks of each conventional technique should be described clearly.
According to previous studies, ultrasonic technology, in particular the ultrasonic pulse velocity method, is the most widely used in the field of concrete inspection. The main content of this paper is research on intelligent recognition based on concrete ultrasonic detection signals. The application of the many other detection algorithms is not the focus of this paper, so an extensive description of the traditional techniques would be out of scope.
5. Introduction section can be extended to add the issues in the context of the existing work
We have modified the relevant content to make the paper more organized. In the introduction, we describe the problems in existing research and point out the problems in the diagnostic methods of the ultrasonic instruments currently used for concrete defect detection. Meanwhile, we analyze the shortcomings of previous research on concrete ultrasonic detection signal recognition, including the difficulty of processing complex signals and the low accuracy of automatic recognition.
6. Literature review techniques have to be strengthened by including the issues in the current system and how the author proposes to overcome the same.
In the introduction, we have analyzed the problems in current systems: concrete ultrasonic testing equipment based on the ultrasonic pulse velocity (UPV) method is insufficient and is not an open system, and research on the corresponding signal processing and intelligent recognition technology remains deficient, even though opinions differ. Based on these problems, we have proposed our solutions.
7. What is the motivation of the proposed work?
Based on our study of current concrete testing equipment and related research papers, we find several problems, such as low detection efficiency and accuracy in actual projects. Moreover, the concrete hole defects used in the relevant literature are larger than 2 cm. The motivation of this paper is to find a simple, easy-to-use and effective automatic identification method that improves the efficiency and accuracy of concrete detection and realizes ultrasonic detection signal recognition for small-size defects in concrete.
8. Research gaps, objectives of the proposed work should be clearly justified.
In the introduction, we analyzed the current state of research on concrete ultrasonic detection signal recognition. There are deficiencies in the field of intelligent inspection, especially in concrete defect inspection. Based on previous research, we proposed an intelligent recognition method to realize the automatic recognition of concrete ultrasonic detection signals. We have revised the article so that the research gaps and the objectives of the proposed work are clearly justified.
9. The authors should consider more recent research done in the field of their study (especially in the years 2018 and 2020 onwards).
Thanks for your suggestion. We investigated the current state of research on concrete defect detection, especially the use of ultrasonic detection technology. We refer to the latest literature in the same field as much as possible, and the state of research in similar fields is also taken into consideration.
In addition to the latest individual methods, other algorithms are also indispensable for systematic study and comparative analysis, which accumulates experience for promoting intelligent algorithm products.
10. An error and statistical analysis of data should be performed.
Realizing pattern recognition of ultrasonic signals is the main purpose of this article. Wavelet packet filtering is performed first, then feature extraction, and finally BPNN classification and recognition. Each step of the procedure has been evaluated and analyzed in the article, including the recognition error. Accurate error analysis of wavelet packet decomposition or other filtering methods is difficult because the signals used are not synthetic and thus cannot be quantified exactly; nevertheless, the error range is analyzed in the article, which can be considered the analysis the reviewer requested.
11. The conclusion should state scope for future work.
Thank you for your comment. We have revised the future work content of the conclusion.
12. Discuss the future plans with respect to the research state of progress and its limitations.
The discussion of future research work shows the direction of our further research in this field. Based on your comments, we have discussed the state of research progress and its limitations at the end of the conclusion.
13. Kindly refer the below paper:
1. Rajput, D.S., Basha, S.M., Xin, Q. et al. Providing diagnosis on diabetes using cloud computing environment to the people living in rural areas of India. J Ambient Intell Human Comput (2021). https://doi.org/10.1007/s12652-021-03154-4
Thank you for providing this paper. Some of the methods involved are also used as comparison methods in our paper. The paper about diabetes diagnosis offers further methods that may be feasible, and we believe that many of the machine learning algorithms in this reference can be studied in the field of concrete inspection. Therefore, we have updated the reference list with the suggested paper.
In the case study, our proposed method achieves higher recognition accuracy than the methods commonly used in the field of concrete ultrasonic inspection. In the future, we will further study machine learning algorithms applicable to this field.
Experimental design
1. The authors should consider more recent research done in the field of their study (especially in the years 2018 and 2020 onwards). 6. The paper needs to provide significant experimental details to correctly assess its contribution: What is the validation procedure used?
When we started our research work, we fully investigated the latest literature from recent years.
The data used in this article are actual measurements collected from the real detection system, not synthetic simulations. This article focuses on algorithms that accurately identify defects, and the comparison experiments allow the different algorithms to verify one another. Newer, higher-performance practical algorithms will be continuously studied and verified for validity and accuracy in the future.
Yes, indispensable experimental details can be added to the article, such as a description of the experimental process including duration, interval, temperature and humidity.
2. Kindly provide several references to substantiate the claim made in the abstract (that is, provide references to other groups who do or have done research in this area).
References related to the claims made in the abstract and introduction are cited in the paper. We list the references below again.
Ozsoy U, Koyunlu G, Ugweje OC. 2017. Nondestructive Testing of Concrete using Ultrasonic Wave Propagation. In Proceedings of 13th International Conference on Electronics, Computer and Computation (ICECCO), Abuja, Nigeria, 28-29.
Iyer S, Sinha SK, Tittmann BR, Pedrick MK. 2012. Ultrasonic signal processing methods for detection of defects in concrete pipes. Automation in Construction 22:135-148.
Saechai S, Kongprawechnon W, Sahamitmongkol R. 2012. Test system for defect detection in construction materials with ultrasonic waves by support vector machine and neural network. In Proceedings of 6th International Conference on Soft Computing and Intelligent Systems (SCIS), Kobe, Japan, 20-22, November, pp.1034-1039.
Xu YD, Jin RY. 2018. Measurement of reinforcement corrosion in concrete adopting ultrasonic tests and artificial neural network. Construction and Building Materials 177:125-133.
Janku M, Cikrle P, Grosek J, Anton O, Stryk J. 2019. Comparison of infrared thermography, ground-penetrating radar and ultrasonic pulse echo for detecting delaminations in concrete bridges. Construction and Building Materials 225:1098-1111.
Validity of the findings
An error and statistical analysis of data should be performed.
We have already answered this question in basic reporting above, please see the response to the basic comment.
Comments for the Author
1. In the abstract, the background knowledge on the problem addressed need to be added.
2. In the abstract, the wide range of applications and its possible solutions need to be added.
3. In the abstract, the problem addressed need to be justified with more details.
4. In the Introduction section, the drawbacks of each conventional technique should be described clearly.
We have already answered these comments above. Thank you for your review.
5. The introduction needs to explain the main contributions of the work more clearly.
Considering the reviewer’s suggestion, we have added a description of the main contributions of the work in the introduction of the paper.
6. The author should emphasize the difference between other methods to clarify the position of this work further.
Thank you for your suggestion. We have revised the content of the paper to emphasize the difference between the proposed method and other methods, and we describe the shortcomings of existing methods in this field in the introduction. Furthermore, the comparative analysis shows that the algorithm proposed in this paper outperforms the other algorithms, for example in recognition accuracy.
7. The Wide ranges of applications need to be addressed in Introductions
The purpose of our research is to address the core identification problems that impede the automation of defect recognition in ultrasonic testing of concrete. The main test object is concrete, and the detection of concrete defects is a difficult problem in the field of ultrasonic inspection. The method in our paper is mainly used for the identification of concrete ultrasonic detection signals, and we prove that it is effective. Of course, we believe that applying this method to the identification of other fault signals will also yield accurate diagnostic results.
8. The objective of the research should be clearly defined in the last paragraph of the introduction section.
Thank you for your suggestion again. We have revised the introduction, and the research objective is now more clearly defined.
9. Add the advantages of the proposed system in one quoted line for justifying the proposed approach in the Introduction section.
Thank you for your suggestion. It is true that ultrasound has the advantages of strong penetrating power and high sensitivity, so it is the most widely used technique in concrete inspection. Machine learning has increasingly proven effective in signal recognition, and the microcontrollers of ultrasonic inspectors can now deploy intelligent algorithms, which helps realize intelligent recognition of the test results.
10. The motivation for the present research would be clearer, by providing a more direct link between the importance of choosing your own method.
By analyzing current concrete ultrasonic instruments, we found that their identification methods have shortcomings; however, research on the recognition of concrete ultrasonic detection signals remains scarce. With the development of artificial intelligence technology alongside ultrasonic sensor detection systems, intelligent recognition of detection signals is a clear trend. For this purpose, we propose a recognition method based on the wavelet packet transform and GA-BPNN that improves the efficiency and accuracy of concrete defect detection.
11. In the introduction, the findings of the present research work should be compared with the recent work of the same field towards claiming the contribution made.
Thank you for your suggestion. In the introduction, we analyzed the current state of research in the field of concrete ultrasonic testing and summarized the deficiencies of previous studies. We conducted our research after comparing the research objects and methods of recent works, and we have added more comparative descriptions of recent work in order to substantiate our contributions.
12. Introduction section can be extended to add the issues in the context of the existing work
13. Literature review techniques have to be strengthened by including the issues in the current system and how the author proposes to overcome the same.
14. The paper needs to provide significant experimental details to correctly assess its contribution: What is the validation procedure used?
15. Kindly provide several references to substantiate the claim made in the abstract (that is, provide references to other groups who do or have done research in this area).
16. An error and statistical analysis of data should be performed.
17. The conclusion should state scope for future work.
18. Discuss the future plans with respect to the research state of progress and its limitations.
You have mentioned these problems in Basic reporting, Experimental design and Validity of the findings. To avoid duplicate responses, please see the responses to these comments above. Thank you again for your kind comments.
" | Here is a paper. Please give your review comments after reading it. |
182 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Owing to mathematical theory and computational power evolution, modern cryptosystems demand ingenious trapdoor functions as their foundation to extend the gap between an enthusiastic interceptor and sensitive information. This paper introduces an adaptive block encryption scheme. This system is based on product, exponent, and modulo operation on a finite field. At the heart of this algorithm lies an innovative and robust trapdoor function that operates in the Galois Field and is responsible for the superior speed and security offered by it. Prime number theorem plays a fundamental role in this system, to keep unwelcome adversaries at bay. This is a self-adjusting cryptosystem that autonomously optimizes the system parameters thereby reducing effort on the user's side while enhancing the level of security. This paper provides an extensive analysis of a few notable attributes of this cryptosystem such as its exponential rise in security with an increase in the length of plaintext while simultaneously ensuring that the operations are carried out in feasible runtime. Additionally, an experimental analysis is also performed to study the trends and relations between the cryptosystem parameters, including a few edge cases.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Cryptography is the art of hiding messages to provide it with a certain level of security to maintain confidentiality and integrity. This new idea, whether it was to hide secret messages, or to transform the original message to make it look fancy, dignified, etc. continued through the medieval ages, the renaissance period saw the birth of the polyalphabetic substitution cipher, called the Vigenère Cipher <ns0:ref type='bibr' target='#b14'>Rubinstein-Salzedo (2018)</ns0:ref>. An encryption device called the Enigma machine <ns0:ref type='bibr' target='#b19'>Singh (1999)</ns0:ref> was used by the Nazi Germans during World War II. Although history suggests that it has been in use for ages, systematic study of cryptology as a science (and perhaps an art) just started around one hundred years ago <ns0:ref type='bibr' target='#b18'>Sidhpurwala (2013)</ns0:ref>.</ns0:p><ns0:p>But it was not until the 1970s, that studies in cryptography got serious. Data Encryption Standard (DES) was introduced by IBM in 1976 <ns0:ref type='bibr' target='#b23'>Tuchman (1997)</ns0:ref> followed by Diffie Hellman Key Exchange in the same year <ns0:ref type='bibr' target='#b12'>Kallam (2015)</ns0:ref>. In 1977, RSA came along <ns0:ref type='bibr' target='#b2'>Calderbank (2007)</ns0:ref> and in 2002, AES was accepted as a standard security protocol to be used in both hardware and software <ns0:ref type='bibr' target='#b7'>Dworkin et al. (2001)</ns0:ref>. And thus cryptography became popular.</ns0:p><ns0:p>The strength or foundation of a modern encryption protocol relies upon the inherent Trapdoor Function.</ns0:p><ns0:p>As classical cryptography evolved, it has become clear that some key components are essential in making stronger trapdoor functions, also known as one-way functions. Studies have shown that prime numbers are an essential part of numerous cryptosystems, and with a bit of effort, numerous mathematical concepts can be used to generate stronger cryptosystems.</ns0:p><ns0:p>Cryptography algorithms rely on integer mathematics, in particular, number theory to perform invertible operations such as addition, multiplication, exponentiation, etc. over a finite set of integers. Finite Fields, also known as Galois Fields, are fundamental to any cryptographic understanding. A field can be defined as a set of numbers that we can add, subtract, multiply and divide together and only ever end up with a result that exists in our set of numbers. This is mainly advantageous in cryptography since PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:59766:1:1:NEW 26 May 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>we can only work with a small number of incredibly huge numbers. <ns0:ref type='bibr' target='#b13'>Kohli (2019)</ns0:ref>. When cryptography algorithms rely solely on converting raw string data in ASCII format, we are restricted to 256 different characters only. Doing such leaves us with only a handful amount of invertible operations in modulo 256. On the other hand, the Galois Field GF(2 8 ) offers numerous such operations. In fact, Advanced Encryption Standard (AES) <ns0:ref type='bibr' target='#b4'>Daemen and Rijmen (2001)</ns0:ref> uses the multiplicative inverse in GF(2 8 ). Using the Galois Field also shines forth the opportunity to use the concepts of irreducible polynomials <ns0:ref type='bibr' target='#b17'>Shoup (1990)</ns0:ref>. In AES, addition and subtraction is a simple XOR operation. For multiplication, it uses the product modulo an irreducible polynomial. For example, the integer 283 refers to the irreducible polynomial f (x) = x 8 +x 4 +x 3 +x+1 in GF(2 8 ) whose coefficients are in GF(2) <ns0:ref type='bibr' target='#b6'>Desoky and Ashikhmin (2006)</ns0:ref>.</ns0:p><ns0:p>Threshold cryptography is a form of security lock where private keys are distributed among multiple clients or systems. They are even asked to provide digital signature authentication for verification purposes. Only when these keys are combined, can information be effectively decrypted. In practice, this lock is an electronic cryptosystem that protects confidential information, such as a bank account number or an authorization to transfer money from that account <ns0:ref type='bibr' target='#b9'>Henderson (2020)</ns0:ref>. The encryption scheme described in this paper has traits that resemble threshold cryptography.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>LITERATURE REVIEW</ns0:head><ns0:p>Recently in the field of Internet of Things (IoT), research has been conducted on flexible privacy-preserving data publishing schemes in the sector of smart agriculture. Their study shows that over the years protection and privacy concerns for smart agriculture have grown in importance. In these IoT-enabled systems, the internet is used for communicating with participants. Since the cloud is often untrustworthy, higher privacy standards are needed <ns0:ref type='bibr' target='#b20'>Song et al. (2020)</ns0:ref>. Numerous IoT devices have been introduced and successfully deployed in recent years to address a variety of people's needs <ns0:ref type='bibr' target='#b27'>Zhang et al. (2021)</ns0:ref>.</ns0:p><ns0:p>IoT networks are typically comprised of a network of interconnected sensors and information relaying units that communicate in real-time with one another. Individual nodes typically have specialized sensor units for detecting specific environmental attributes and have fewer computing resources available.</ns0:p><ns0:p>For example, in a house, various technologies such as facial recognition, video monitoring, smart lighting, and so on will all function in tandem. Security and privacy are key impediments to the realistic deployment of smart home technologies <ns0:ref type='bibr' target='#b16'>Shen et al. (2018)</ns0:ref>.</ns0:p><ns0:p>The majority of the network's elements use sensitive user data and seamlessly exchange information with one another in real-time. To keep intruders out of such a network, a dependable and stable solution based on edge computing is preferred. Another real-world application of secure edge computing lies in the domain of smart grids. Smart grids are recognized as the next-generation intelligent network that maximizes energy efficiency <ns0:ref type='bibr' target='#b25'>Wang et al. (2020)</ns0:ref>. Smart grid solutions help to monitor, measure and control power flow in real-time that can contribute to the identification of losses, and thereby appropriate technical and managerial actions can be taken to prevent the same. Smart grids generally rely on data recorded by energy meters from different houses. Since electricity usage data can be classified as confidential user metrics, there is a need for implementing a layer of security before transmitting this data to other parties for further analysis. Encryption of this data at the smart energy meter stage itself can be beneficial.</ns0:p><ns0:p>However, this requires the development of a lightweight encryption protocol that can be easily integrated with microprocessors with minimal compute power.</ns0:p><ns0:p>The amount of data provided by users during numerous online activities has increased dramatically over the last decade. Celestine <ns0:ref type='bibr'>Iwendi et al. performed</ns0:ref> research that used a model-based data analysis technique for handling applications with Big Data Streaming to glean useful information from this massive amount of data. The method suggested in their research has been tested to add value to large text data processing <ns0:ref type='bibr' target='#b10'>Iwendi et al. (2019)</ns0:ref>. Our proposed schematic leverages an ingenious trapdoor function based on the finite field to handle data encryption in scenarios involving large string lengths within feasible runtime, proving it to be considerably lightweight.</ns0:p></ns0:div>
<ns0:div><ns0:head>2/14</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:59766:1:1:NEW 26 May 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head n='3'>TRAPDOOR FUNCTION</ns0:head><ns0:p>The essence of any cryptosystem relies on some special mathematical trapdoor function that makes it practically impossible for an unwelcome interceptor to gain access to secretive information. Simultaneously, these functions also ensure that the authorized parties (who know the secret key) can continue sharing data among themselves.</ns0:p><ns0:p>A trapdoor function is a mathematical transformation that is easy to compute in one direction, but extremely difficult (practically impossible) to compute in the opposite direction in feasible runtime unless some special information is known (private key). Analogously, this can be thought of like the lock and key in modern cryptography where until and unless someone has access to the exact key, they can't open the lock. In mathematical terms, if f is a trapdoor function, then y = f (x) easy to calculate but x = f −1 (y) is tremendously hard to compute without some special knowledge k (called key). In case k is known, it becomes easy to compute the inverse x = f −1 (x, k).</ns0:p><ns0:p>The components of the proposed system in this paper that act as the trapdoor function is the modulo operation on a Galois field <ns0:ref type='bibr' target='#b0'>Benvenuto (2012)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>PRIME NUMBER THEOREM</ns0:head><ns0:p>Positive integers that are divisible by 1 and itself, are known as prime numbers. The sequence begins like the following... <ns0:ref type='bibr'>2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37,</ns0:ref> • • • and has held untold fascination for mathematicians, both professionals and amateurs alike. A result that gives an idea about an asymptotic distribution of primes is known as the prime number theorem <ns0:ref type='bibr' target='#b8'>Goldstein (1973)</ns0:ref>.</ns0:p><ns0:p>π(x) is the prime-counting function that gives the number of primes less than or equal to x, for any real number x. This can be written as</ns0:p><ns0:formula xml:id='formula_0'>π(x) = ∑ p≤x 1 (1)</ns0:formula><ns0:p>It is seen via graphing, that x ln x is a good approximation to π(x), in the sense that the limit of the quotient of the two functions π(x) and x ln x as x increases without bound is 1.</ns0:p><ns0:formula xml:id='formula_1'>lim x→∞ π(x) ln x x = 1 (2)</ns0:formula><ns0:p>This result can be rewritten in asymptotic notation as</ns0:p><ns0:formula xml:id='formula_2'>π(x) ∼ x ln x (3)</ns0:formula><ns0:p>The logarithmic integral provides a good estimate to the prime density function.</ns0:p><ns0:formula xml:id='formula_3'>π(x) x ∼ li(x) = x 2 1 lnt dt (4)</ns0:formula><ns0:p>To get an idea of the distribution of primes, it is important to count the number of primes in a given range and find the percentage of primes. Consider an infinitely tall tree, losing its leaves. The leaves represent prime numbers. Most leaves are found near the root, and the number of leaves reduces as we walk away from the center. But no matter how far we are from the center, we always find more leaves.</ns0:p><ns0:p>These leaves are unpredictably scattered in an infinite area surrounding the tree. This is the situation with the distribution of primes as resembled by table 1. Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> shows the prime density and logarithmic integral on the left, and the asymptotic nature of the prime counting function on the right. </ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>GALOIS FIELD IN CRYPTOGRAPHY</ns0:head><ns0:p>Galois Field, named after Evariste Galois, also known as finite field, refers to a field in which there exist finitely many elements. A computer only understands the binary data format, which consists of a combination of 0's and 1's. If we consider GF(2), which is simply the Galois Field of order 2, this representation becomes possible which enables us to apply mathematical operations for functional data scrambling. The elements of Galois Field GF(p n ) is defined as</ns0:p><ns0:formula xml:id='formula_4'>GF(p n ) =(0, 1, 2, • • • , )∪ (p, p + 1, p + 2, • • • , p + p − 1)∪ (p 2 , p 2 + 1, • • • , p 2 + p − 1) ∪ • • • ∪ (p n−1 , p n−1 + 1, p n−1 + 2, • • • , p n−1 + p − 1)</ns0:formula><ns0:p>where p ∈ P and n ∈ Z + . The order of the field is given by p n while p is called the characteristic of the field. On the other hand, GF, as one may have guessed it, stands for Galois Field. Also note that the degree of polynomial of each element is at most n − 1 Benvenuto (2012).</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1'>Properties of finite field</ns0:head><ns0:p>For arbitrary elements a, b, c and binary operations (+, •) in a finite field F, the following properties hold <ns0:ref type='bibr' target='#b22'>Stallings (2006)</ns0:ref>; <ns0:ref type='bibr' target='#b13'>Kohli (2019)</ns0:ref>.</ns0:p><ns0:p>1. Closure: For any two elements a, b, a Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_5'>+ b ∈ F, a • b ∈ F 2. Associativity: (a + b) + c = a + (b + c), (a • b) • c = a • (b • c) 3. Commutativity: a + b = b + a, a • b = b • a 4/</ns0:formula></ns0:div>
<ns0:div><ns0:p>4. Identity: There exists a 0 such that for any element a in the field a + 0 = 0 + a = a, known as the additive identity. There exists a 1 such that for any element a in the field a • 1 = 1 • a = a, known as the multiplicative identity.</ns0:p><ns0:p>5. Inverses: Every element a has an additive inverse −a such that a + (−a) = (−a) + a = 0, and every nonzero element a has a multiplicative inverse a^{−1} such that a • a^{−1} = a^{−1} • a = 1</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.2'>Finite Field Operations</ns0:head><ns0:p>Let f(p) and g(p) be two polynomials in the Galois field GF(p^n) with respective coefficients A = a_0, a_1, • • • , a_n and B = b_0, b_1, • • • , b_n. Then the following operations are valid</ns0:p><ns0:p>1. Addition and Subtraction</ns0:p><ns0:formula xml:id='formula_8'>c_k ≡ a_k ± b_k mod p (5)</ns0:formula><ns0:p>2. Multiplication and Multiplicative Inverse. For an irreducible polynomial m(p) with a degree of at least n, we have the following</ns0:p><ns0:formula xml:id='formula_9'>h(p) ≡ f(p) • g(p) mod m(p) (6)</ns0:formula><ns0:p>and polynomials x(p), y(p) are called multiplicative inverses of each other iff</ns0:p><ns0:formula xml:id='formula_10'>x(p)y(p) ≡ 1 mod m(p) (7)</ns0:formula></ns0:div>
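<ns0:div><ns0:p>To make equations (5) and (6) concrete, here is a minimal Python sketch of coefficient arithmetic over GF(p); it is our illustration, not the paper's implementation, and it assumes the modulus polynomial m(x) is monic and irreducible:</ns0:p><ns0:p>
# Polynomials are coefficient lists in ascending order of degree
# (index k holds the coefficient of x^k).
def poly_add(a, b, p):
    """Coefficient-wise addition a_k + b_k mod p (equation 5)."""
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [(x + y) % p for x, y in zip(a, b)]

def poly_mul_mod(f, g, m, p):
    """Product f(x) * g(x) reduced modulo a monic m(x) over GF(p) (equation 6)."""
    prod = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            prod[i + j] = (prod[i + j] + fi * gj) % p
    while len(prod) >= len(m):          # polynomial long division by m(x)
        lead = prod[-1]
        if lead:
            shift = len(prod) - len(m)
            for k, mk in enumerate(m):
                prod[shift + k] = (prod[shift + k] - lead * mk) % p
        prod.pop()
    return prod

# (3 + 2x)(5 + 4x) mod (x^2 + 1) over GF(7) reduces to the polynomial x
print(poly_mul_mod([3, 2], [5, 4], [1, 0, 1], 7))   # [0, 1]
</ns0:p></ns0:div>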
<ns0:div><ns0:head n='5.3'>Applications in Cryptography</ns0:head><ns0:p>Cryptography is the most prominent and extensively used application of the Galois Field. There are many different representations of data. One such representation is a vector in a finite field. Once the data is in this desired format, finite field arithmetic easily facilitates calculations during encryption and decryption <ns0:ref type='bibr' target='#b0'>Benvenuto (2012)</ns0:ref>. In the 1970's, IBM developed the Data Encryption Standard (DES) <ns0:ref type='bibr' target='#b23'>Tuchman (1997)</ns0:ref>. However, its humble 56-bit key never posed a serious challenge to a supercomputer, which was able to break the key in less than 24 hours. Thus arose the need for a refined algorithm to replace the existing DES. Rijndael, a much more sophisticated algorithm devised by Vincent Rijmen and Joan Daemen in 2001, has been known as the Advanced Encryption Standard (AES) ever since. An issue regarding this breakthrough was published by Federal Information Processing Standards Publications (FIPS) on November 26, 2001 Dworkin et al. (2001).</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>GENERATING UPPER BOUND FOR Q</ns0:head><ns0:p>The proposed algorithm requires generating a list of primes p_s ∈ P whose size equals the block size s. It also requires a prime q ∈ P which is greater than all p_s's. The algorithm involves generating inverses of all p_s's within a Galois field GF(q^m), where m is an arbitrarily chosen positive integer.</ns0:p><ns0:p>Generation of the shuffled list of primes requires us to know the prime q. On the other hand, determining the value of q requires us to know the largest prime present in the shuffled list. This poses a paradoxical problem.</ns0:p><ns0:p>To get around this paradox, we consider the following.</ns0:p><ns0:p>• The number of primes to be stored in the shuffled list p_s is equivalent to the block size s.</ns0:p><ns0:p>• The value q must be chosen such that it is prime and larger than the maximum prime present in p_s.</ns0:p><ns0:p>• The Prime Number Theorem is used to find the number below which s primes are available. Let this required number be denoted by x.</ns0:p></ns0:div>
<ns0:div><ns0:formula xml:id='formula_11'>s = x / ln x
s ln x = x
s ln x = e^{ln x}
ln x · e^{−ln x} = 1/s
−ln x · e^{−ln x} = −1/s
−ln x = W_n(−1/s)
x = e^{−W_n(−1/s)}</ns0:formula><ns0:p>where W_n is the Lambert W function, also known as the product log function. This means that it is possible to generate the upper limit for a list of primes for an arbitrary block size. Since the number of primes is positive, x must be positive <ns0:ref type='bibr' target='#b26'>Weisstein (2002)</ns0:ref>; <ns0:ref type='bibr' target='#b3'>Corless et al. (1996)</ns0:ref>. This means the Lambert W function may be taken on the branch n = 0 or n = −1, since e^y ≥ 0 ∀y ∈ R. If x = a + ib, that is x ∈ C, we consider ⌊ℜ(x)⌋ to generate the upper bound.</ns0:p><ns0:p>• The prime larger than this x is set to be q.</ns0:p></ns0:div>
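<ns0:div><ns0:p>A direct way to realize this computation, assuming the mpmath and sympy libraries are available, is to evaluate x = e^{−W_{−1}(−1/s)} on the n = −1 branch and then take the next prime; this sketch is ours:</ns0:p><ns0:p>
from mpmath import lambertw, exp
from sympy import nextprime

def upper_bound_and_q(s):
    """Return (x, q): x is the bound below which roughly s primes lie,
    following x = exp(-W_{-1}(-1/s)); q is the first prime above x.
    Assumes s is at least 3 so -1/s stays in (-1/e, 0) and the n = -1
    branch is real."""
    x = exp(-lambertw(-1.0 / s, -1))
    x = int(x.real)                  # floor of the real part, per the text
    return x, int(nextprime(x))

print(upper_bound_and_q(31))         # e.g. for a block size of 31
</ns0:p></ns0:div>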
<ns0:div><ns0:head n='7'>FERMAT'S FACTORIZATION</ns0:head><ns0:p>Fermat's factorization method, named after Pierre de Fermat, is based on the representation of an odd integer as the difference of two squares De Fermat (1891). For a given number N, the objective is to find a, b such that</ns0:p><ns0:formula xml:id='formula_12'>N = a^2 − b^2 (8)</ns0:formula><ns0:p>To start, the square root of N is taken, and the nearest integer a at or above it is squared and N subtracted from it. If the resulting number is a square, then a and b have been found. If not, a is increased by 1 and the process is repeated. This is what is used to generate an algorithm for the block size depending on the size of the plaintext.</ns0:p></ns0:div>
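<ns0:div><ns0:p>A minimal Python implementation of this search (ours) follows; it starts at the ceiling of √N and increments a until a² − N is a perfect square:</ns0:p><ns0:p>
import math

def fermat_factor(N):
    """Return (a, b) with N = a^2 - b^2, hence N = (a - b)(a + b), for odd N."""
    a = math.isqrt(N)
    if a * a < N:
        a += 1                       # start from the ceiling of sqrt(N)
    while True:
        b2 = a * a - N
        b = math.isqrt(b2)
        if b * b == b2:              # a^2 - N is a perfect square
            return a, b
        a += 1

a, b = fermat_factor(5959)           # 5959 = 59 * 101
print(a, b, (a - b) * (a + b))       # 80 21 5959
</ns0:p></ns0:div>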
<ns0:div><ns0:head n='8'>OPTIMAL CHOICE FOR BLOCK SIZE</ns0:head><ns0:p>The following algorithm automatically chooses an optimal value for the block size so as to ensure minimum execution time for the algorithm.</ns0:p><ns0:p>1. Get the length N of the plaintext. Take the square root and consider the ceiling of the resulting real number, i.e. ⌈√N⌉</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>If</ns0:head><ns0:p>⌈√N⌉^2 ≤ N and N ≡ 1 mod 2, then the optimal block size is ⌈√N⌉, as sketched below.</ns0:p></ns0:div>
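<ns0:div><ns0:p>The published description leaves the fallback rule implicit, so the following sketch is only our reading of it: start from ⌈√N⌉ and, since the permutation step in section 9.3 needs an odd block size, move up to the nearest odd candidate when necessary:</ns0:p><ns0:p>
import math

def optimal_block_size(N):
    """Smallest odd block size derived from ceil(sqrt(N)); the odd-number
    fallback is our assumption, not spelled out in the original text."""
    s = math.isqrt(N)
    if s * s < N:
        s += 1                # ceiling of sqrt(N)
    if s % 2 == 0:
        s += 1                # force an odd block size for the permutation step
    return s
</ns0:p></ns0:div>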
<ns0:div><ns0:head n='9'>PROPOSED ALGORITHM</ns0:head><ns0:p>The proposed algorithm for this cryptosystem involves numerous sections, making it a robust and impenetrable layer of security. Note that this algorithm autonomously sets the critical private key parameters of the system to their optimal values based on the user's secret message. Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref> illustrates a flow chart for the proposed schematic.</ns0:p></ns0:div>
<ns0:div><ns0:head n='9.1'>Plaintext pre-processing</ns0:head><ns0:p>1. An input message is provided and split into characters.</ns0:p><ns0:p>2. The optimum block size is evaluated as discussed in section 8.</ns0:p><ns0:p>3. The plaintext is split into blocks and padding is applied to maintain consistency of the modified plaintext.</ns0:p><ns0:p>4. Each character in each block is converted to its designated ASCII equivalent, as sketched below.</ns0:p></ns0:div>
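<ns0:div><ns0:p>A minimal sketch of these pre-processing steps (ours; the padding character is an illustrative assumption, any agreed filler works):</ns0:p><ns0:p>
PAD = " "   # illustrative padding character

def preprocess(message, block_size):
    """Split the message into equal blocks of ASCII codes, padding the last block."""
    if len(message) % block_size:
        message += PAD * (block_size - len(message) % block_size)
    return [[ord(c) for c in message[i:i + block_size]]
            for i in range(0, len(message), block_size)]

print(preprocess("hello world", 5))
</ns0:p></ns0:div>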
<ns0:div><ns0:head n='9.2'>Key generation</ns0:head><ns0:p>1. The user inputs a non-negative integer m, which is used to determine the Galois Field GF(q^m).</ns0:p><ns0:p>2. The value of q is obtained via the calculation described in section 6.</ns0:p><ns0:p>3. A function sets an optimal block size depending on the length of the plaintext via the algorithm described in section 8.</ns0:p><ns0:p>4. A list of unique primes is randomly generated from [p_1, p_blocksize] such that p_k^{−1} mod q^m exists for each prime, and each block is assigned the same primes. The size of this list is equal to the block size. A sketch of these steps follows the list.</ns0:p></ns0:div>
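<ns0:div><ns0:p>Combining this with the section 6 sketch gives a possible key-generation routine; upper_bound_and_q is the helper defined there, and the whole sketch is our reading of the steps above:</ns0:p><ns0:p>
import random
from sympy import primerange

def generate_key(block_size, m):
    """Return (primes, inverses, modulus) for GF(q^m)-style arithmetic."""
    x, q = upper_bound_and_q(block_size)        # from the section 6 sketch
    pool = list(primerange(2, x + 1))           # roughly block_size primes below x
    # For very small block sizes the pool can be too small -- the edge case
    # discussed in section 13 -- so callers should ensure block_size is at least 5.
    primes = random.sample(pool, block_size)    # unique primes, one per position
    modulus = q ** m
    # q is prime and every p in the pool is below q, so gcd(p, q^m) = 1
    # and the modular inverse always exists.
    inverses = [pow(p, -1, modulus) for p in primes]
    return primes, inverses, modulus
</ns0:p></ns0:div>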
<ns0:div><ns0:head n='9.3'>Key permutation algorithm</ns0:head><ns0:p>1. Since each block is assigned the same primes, entropy is introduced into the system by rearranging the order of primes for each block in the list of primes.</ns0:p><ns0:p>2. A central element is kept fixed; the primes on the left side are left-shifted and the ones on the right are right-shifted a certain number of times. The shift factor increases linearly as the block index increases. It is crucial that the block size be an odd number for such a rearrangement procedure to take place, as in the sketch below.</ns0:p></ns0:div>
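<ns0:div><ns0:p>Our reading of this rotation scheme, as a sketch (the exact shift schedule is not fully pinned down in the text, so the block_index % mid rule is an assumption):</ns0:p><ns0:p>
def permute_key(primes, block_index):
    """Middle prime stays fixed; left half rotated left, right half rotated
    right, with the rotation amount growing with the block index."""
    mid = len(primes) // 2               # odd block size gives a true center
    left, center, right = primes[:mid], primes[mid], primes[mid + 1:]
    k = block_index % mid if mid else 0  # shift factor grows with the index
    left = left[k:] + left[:k]           # left rotation by k
    right = right[-k:] + right[:-k] if k else right   # right rotation by k
    return left + [center] + right
</ns0:p></ns0:div>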
<ns0:div><ns0:head n='9.4'>Encryption</ns0:head><ns0:p>For each block, take the product of the corresponding ASCII values a_k and the prime number from the permuted prime list, then reduce this product modulo q^m:</ns0:p><ns0:formula>a_k p_k ≡ c_k mod q^m (9)</ns0:formula><ns0:p>These lists of c_k are arranged in a matrix of order (n/s) × s, where n = length of the padded plaintext and s = block size.</ns0:p></ns0:div>
<ns0:div><ns0:head n='9.5'>Decryption</ns0:head><ns0:p>For each encrypted value c_k in the block, multiply by the inverse of the corresponding prime in the permuted list in the field of order q^m:</ns0:p><ns0:formula>c_k p_k^{−1} ≡ a_k mod q^m (10)</ns0:formula><ns0:p>Padding is removed, and the remaining characters are joined to return the original message.</ns0:p></ns0:div>
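<ns0:div><ns0:p>Putting the sketches of sections 9.1 through 9.5 together gives an end-to-end round trip; preprocess, generate_key, permute_key and PAD refer to the earlier sketches:</ns0:p><ns0:p>
def encrypt(blocks, primes, modulus):
    """Equation (9): multiply each ASCII value by its permuted prime mod q^m."""
    return [[(a * p) % modulus
             for a, p in zip(block, permute_key(primes, i))]
            for i, block in enumerate(blocks)]

def decrypt(cipher, primes, modulus):
    """Equation (10): multiply by the modular inverse and strip the padding."""
    out = []
    for i, block in enumerate(cipher):
        perm = permute_key(primes, i)
        out.extend(chr((c * pow(p, -1, modulus)) % modulus)
                   for c, p in zip(block, perm))
    return "".join(out).rstrip(PAD)

blocks = preprocess("she sells sea shells", 5)
primes, inverses, modulus = generate_key(5, m=3)
cipher = encrypt(blocks, primes, modulus)
print(decrypt(cipher, primes, modulus))   # "she sells sea shells"
</ns0:p></ns0:div>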
<ns0:div><ns0:head n='10'>OBSERVED SECURITY FEATURES</ns0:head><ns0:p>In cryptography, confusion and diffusion are two properties of the operation of a secure cipher which were identified by Claude Shannon in his paper Communication Theory of Secrecy Systems, published in 1949 <ns0:ref type='bibr' target='#b15'>Shannon (1949)</ns0:ref>.</ns0:p><ns0:p>Confusion is a technique that ensures confidentiality, that is, a ciphertext gives no clue about the plaintext. This is commonly used in the block and stream cipher methods. It can be achieved by the substitution method.</ns0:p><ns0:p>An extensive analysis was performed to study the behavior of the encryption scheme when the same plaintext was encrypted twice with only one single character changed. Two simple messages ('abcdefghijklmnopqrstuvwxyz') and ('abcdefghijklmnopqrstuvwxyp') were considered. Note that both messages are identical except for one single character at the end ('z' and 'p'). An identical set of encryption parameters (m = 7) was set up for this analysis, and the two resulting ciphertext matrices were noted down (not reproduced here). If the elements of both ciphertext matrices are compared element-wise (order matters), one can easily notice that there is 0% similarity. This implies that cryptanalysis techniques that rely on the similarity of elements in ciphertexts will fail to crack this cryptosystem.</ns0:p><ns0:p>In diffusion, the statistical structure of the plaintext is dissipated into long-range statistics of the ciphertext <ns0:ref type='bibr' target='#b22'>Stallings (2006)</ns0:ref>. This increases the redundancy of the plaintext by spreading it across rows and columns. It is only used in block cipher protocols. This phenomenon can be achieved by a permutation technique known as transposition. A perfect example of diffusion and confusion is the AES cryptosystem.</ns0:p><ns0:p>Additionally, the encryption scheme was also tested with strings in which the same character repeats multiple times. For instance, the plaintext ('she sells sea shells on the sea shore') was encrypted using regular encryption parameters (m = 3). From an unauthorized eavesdropper's perspective, the returned ciphertext matrix (not reproduced here) gives the impression of being just a random sequence of numbers, which makes it all the more difficult to come up with a logical approach to retrieve the secret message without any knowledge of the private key.</ns0:p></ns0:div>
<ns0:div><ns0:head n='11'>EXPERIMENTAL ANALYSIS</ns0:head></ns0:div>
<ns0:div><ns0:head n='11.1'>String length vs time for encrypt-decrypt cycle</ns0:head><ns0:p>For this benchmark test, the value of m was fixed at 7 different values m = 2^n where n ∈ [0, 6], n ∈ Z, while the length of the plaintext was successively increased in powers of 10, from 100 up to 1,000,000, noting down the time taken for successful encrypt-decrypt cycles. The tabular data shown in tables <ns0:ref type='table' target='#tab_4'>2</ns0:ref> and 3 shows the variation of runtime upon altering string lengths. Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref> shows the implementation of the program on an Intel ® Core TM i7-10750H CPU @ 2.60GHz and a Raspberry Pi 4 Model B (Quad core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5GHz), respectively. Here m is the exponent of the prime finite field. Any arbitrary value a mod q generates non-negative integers within [0, q − 1], but for any m > 0, m ∈ Z, we have q^m > q, which means a mod q^m generates non-negative integers within [0, q^m − 1], i.e. larger values that take longer to process. Hence smaller values of m are less impactful on the time constraint.</ns0:p></ns0:div>
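<ns0:div><ns0:p>A harness along these lines (ours, reusing the earlier sketches; the string lengths and m value are illustrative) reproduces the shape of the benchmark:</ns0:p><ns0:p>
import random, string, time

def time_cycle(n_chars, m):
    """Time one encrypt-decrypt cycle on a random alphanumeric string."""
    msg = "".join(random.choices(string.ascii_letters + string.digits, k=n_chars))
    s = optimal_block_size(n_chars)
    blocks = preprocess(msg, s)
    primes, _, modulus = generate_key(s, m)
    t0 = time.perf_counter()
    assert decrypt(encrypt(blocks, primes, modulus), primes, modulus) == msg
    return time.perf_counter() - t0

for n in (10**2, 10**3, 10**4):
    print(n, round(time_cycle(n, m=4), 4))
</ns0:p></ns0:div>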
<ns0:div><ns0:p>To study the behavior of this algorithm on devices with low compute power, it was benchmarked on a Raspberry Pi 4 Model B (8 GB RAM variant). The Raspberry Pi is a low-cost, credit-card-sized single-board computer. Raspberry Pi models can boot directly from the network, but in general, file-system storage, such as a micro SD card, is necessary <ns0:ref type='bibr' target='#b11'>Johnston and Cox (2017)</ns0:ref>. The Raspberry Pi features GPIO (general purpose input/output) pins that allow one to manipulate electronic components and low-powered sensors for physical computing and explore the Internet of Things.</ns0:p><ns0:p>In this case, it was observed that if we limit the string length to 20,000 characters, the encryption-decryption cycle completes within a mere 5 seconds. Altering the m-values and repeating the benchmark made a negligible difference, as seen on the graph. It should be noted that executing this benchmark on the Raspberry Pi for 100,000 characters takes up to 35 seconds or more. However, most IoT applications involve the collection of data from various sensors and transmitting them in discrete chunks to servers across multiple timesteps for further processing. In such scenarios dealing with limited batches of data, the proposed cryptosystem can achieve feasible encryption in real-time.</ns0:p></ns0:div>
<ns0:div><ns0:head n='11.2'>Variation of runtime with value of m</ns0:head><ns0:p>This section analyzes how changing the exponent m of the prime finite field influences the operational runtime of the encrypt and decrypt functions when a fixed string of random alphanumeric values having a size of 1000 characters is fed into the proposed algorithm. The algorithm autonomously sets the block size to 31. This is clearly visible from the data in table <ns0:ref type='table' target='#tab_7'>4</ns0:ref>. It consequently means that a larger block size makes it nearly impossible for an attacker to correctly guess all the permuted blocks.</ns0:p></ns0:div>
<ns0:div><ns0:head n='12.2'>Hensel's Lifting Lemma</ns0:head><ns0:p>Lemma 1 (Hensel's Lemma) Given a prime p, e ≥ 2, and f(x) ∈ Z[x], suppose a is a solution to f(x) ≡ 0 (mod p^{e−1}). Then, if gcd(p, f′(a)) = 1, there exists a solution to f(x) ≡ 0 (mod p^e) of the form b = a + kp^{e−1}, where k satisfies f(a)/p^{e−1} + k f′(a) ≡ 0 (mod p)</ns0:p><ns0:p>This cryptosystem requires the user to pick an exponent m for an automatically generated q value, allowing the use of the field of order q^m. An interceptor only has access to a number b where b = q^m, and has no idea of q and m separately. Hence, they would have to apply a heuristic or brute-force approach to solve for q and m given the value of b, as there are no known deterministic methods to solve a single equation with two unknowns.</ns0:p></ns0:div>
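<ns0:div><ns0:p>As a small illustration of the lemma (ours), the Newton-style step b = a − f(a) · f′(a)^{−1} lifts a root of f(x) = x² − 2 from mod 7 up to mod 7³:</ns0:p><ns0:p>
def hensel_lift(f, fprime, a, p, e):
    """Lift a root a of f mod p to a root mod p^e (needs gcd(p, f'(a)) = 1)."""
    pk = p
    for _ in range(e - 1):
        pk *= p
        a = (a - f(a) * pow(fprime(a), -1, pk)) % pk
    return a

f = lambda x: x * x - 2
fp = lambda x: 2 * x
root = hensel_lift(f, fp, 3, 7, 3)       # 3^2 = 9 = 2 (mod 7)
print(root, (root * root - 2) % 7**3)    # 108, and the second value is 0
</ns0:p></ns0:div>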
<ns0:div><ns0:head n='13'>REMARKS ON EDGE CASES</ns0:head><ns0:p>An imperative consequence of the block size optimization function described in section 8 is that</ns0:p><ns0:p>No. of Blocks ≥ Block size</ns0:p><ns0:p>The key generation paradigm demands that the size of the final list of unique primes be the same as the block size and that all elements of this list be smaller than the value of q that is evaluated in the background using the prime number theorem, as discussed in section 6. This implies that if a plaintext with very few characters is chosen such that Block size ≤ 3, the shuffled list of unique random primes can hold only 2 possible elements, i.e. 2 and 3, which are both less than the calculated value of q = 5. In this scenario, since 3 primes are not available, the system gets hung up in an infinite loop and fails to encrypt the message. For instance, when the message 'hello' was passed, it resulted in the optimal block size being set to 3 and q = 5. Irrespective of the value of the prime finite field exponent m chosen by the user, it was observed that the key generation algorithm breaks down.</ns0:p><ns0:p>One simple way to solve this issue would be to add a different special padding scheme in case the message entered by the user is too small, to ensure that the optimal block size evaluates to a number greater than or equal to 5. This way, the key generation algorithm has enough prime numbers available to work with.</ns0:p></ns0:div>
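<ns0:div><ns0:p>The suggested fix can be a one-line guard; this sketch is ours, with the filler and threshold as illustrative assumptions:</ns0:p><ns0:p>
MIN_BLOCK = 5   # smallest block size for which enough primes are available

def guard_short_message(message):
    """Pad very short messages so the optimal block size never drops below 5.
    Block size is roughly ceil(sqrt(N)), so N of at least MIN_BLOCK^2 keeps
    it at or above MIN_BLOCK."""
    need = MIN_BLOCK * MIN_BLOCK - len(message)
    return message + PAD * need if need > 0 else message
</ns0:p></ns0:div>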
<ns0:div><ns0:head n='14'>CONCLUSION</ns0:head><ns0:p>In this paper, a new block cipher encryption scheme was discussed in detail. It was observed that longer messages provide better security, whereas shorter messages provide faster execution assuming sufficient padding. This system can come in handy, especially on social media sites where the short messaging system (SMS) is common. For example, Twitter, which has a current maximum string length of 280 characters <ns0:ref type='bibr' target='#b24'>Twitter (2021)</ns0:ref>. The time of execution was benchmarked on a modern-day computer CPU (Intel ® Core TM i7-10750H processor) as well as on a Raspberry Pi 4 Model B. It was found that the proposed schematic can easily be integrated into IoT networks involving low-compute microprocessors to provide a layer of security. Other applications of this system could be in encrypting confidential military files that are large. As a threshold cryptosystem candidate, this system can find multiple applications in swarm robotics in cases where slaves communicate with a master robot over an insecure network. The list of permuted primes that constitutes the private key of this system could be scattered across multiple slaves and used collectively to ensure that none of the nodes in the system gets attacked by an unauthorized party and/or fails at any given time. Whether or not this could be used as an industry standard is beyond the scope of this paper. The progress so far has been compiled into a GitHub repository <ns0:ref type='bibr' target='#b1'>Bhowmik (2020)</ns0:ref>.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Algorithm Flow Diagram</ns0:figDesc><ns0:graphic coords='8,141.73,430.49,413.57,262.70' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. String length vs runtime (seconds) for fixed values of m. Figures (A) and (B) show the demonstration on an Intel ® Core TM i7-10750H and a Raspberry Pi 4 Model B respectively</ns0:figDesc><ns0:graphic coords='11,141.73,63.78,413.60,216.40' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 demonstrates that m vs runtime follows a fairly linear trend, with a slight imperfection in the otherwise linear pattern.</ns0:figDesc><ns0:graphic coords='12,141.73,356.24,413.56,216.46' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Prime density and approximation to the logarithmic integral</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Search Size x</ns0:cell><ns0:cell># of Primes</ns0:cell><ns0:cell>Density (%)</ns0:cell><ns0:cell>li(x)</ns0:cell><ns0:cell>li(x) − π(x)</ns0:cell><ns0:cell>(li(x) − π(x)) / π(x) × 100</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Effect of length of plaintext on runtime (seconds) for different values of m for Intel ® Core TM i7-10750H</ns0:figDesc><ns0:table><ns0:row><ns0:cell>String Length</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>64</ns0:cell></ns0:row><ns0:row><ns0:cell>10 2</ns0:cell><ns0:cell>0.0005</ns0:cell><ns0:cell>0.0002</ns0:cell><ns0:cell>0.0002</ns0:cell><ns0:cell>0.0003</ns0:cell><ns0:cell>0.0004</ns0:cell><ns0:cell>0.0007</ns0:cell><ns0:cell>0.0004</ns0:cell></ns0:row><ns0:row><ns0:cell>10 3</ns0:cell><ns0:cell>0.0024</ns0:cell><ns0:cell>0.0024</ns0:cell><ns0:cell>0.0024</ns0:cell><ns0:cell>0.0027</ns0:cell><ns0:cell>0.0030</ns0:cell><ns0:cell>0.0037</ns0:cell><ns0:cell>0.0038</ns0:cell></ns0:row><ns0:row><ns0:cell>10 4</ns0:cell><ns0:cell>0.0311</ns0:cell><ns0:cell>0.0393</ns0:cell><ns0:cell>0.0321</ns0:cell><ns0:cell>0.0342</ns0:cell><ns0:cell>0.0358</ns0:cell><ns0:cell>0.0418</ns0:cell><ns0:cell>0.0470</ns0:cell></ns0:row><ns0:row><ns0:cell>10 5</ns0:cell><ns0:cell>0.3143</ns0:cell><ns0:cell>0.3201</ns0:cell><ns0:cell>0.3596</ns0:cell><ns0:cell>0.3580</ns0:cell><ns0:cell>0.3975</ns0:cell><ns0:cell>0.4288</ns0:cell><ns0:cell>0.5501</ns0:cell></ns0:row><ns0:row><ns0:cell>10 6</ns0:cell><ns0:cell>3.2605</ns0:cell><ns0:cell>3.3822</ns0:cell><ns0:cell>3.9073</ns0:cell><ns0:cell>4.0346</ns0:cell><ns0:cell>4.4580</ns0:cell><ns0:cell>5.0636</ns0:cell><ns0:cell>6.0518</ns0:cell></ns0:row></ns0:table><ns0:note>Columns give the values for m.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Effect of length of plaintext on runtime (seconds) for different values of m for Raspberry Pi</ns0:figDesc><ns0:table><ns0:row><ns0:cell>String Length</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>64</ns0:cell></ns0:row><ns0:row><ns0:cell>10 2</ns0:cell><ns0:cell>0.0010</ns0:cell><ns0:cell>0.0011</ns0:cell><ns0:cell>0.0012</ns0:cell><ns0:cell>0.0012</ns0:cell><ns0:cell>0.0013</ns0:cell><ns0:cell>0.0015</ns0:cell><ns0:cell>0.0018</ns0:cell></ns0:row><ns0:row><ns0:cell>10 3</ns0:cell><ns0:cell>0.0101</ns0:cell><ns0:cell>0.0102</ns0:cell><ns0:cell>0.0116</ns0:cell><ns0:cell>0.0122</ns0:cell><ns0:cell>0.0138</ns0:cell><ns0:cell>0.0160</ns0:cell><ns0:cell>0.0205</ns0:cell></ns0:row><ns0:row><ns0:cell>10 4</ns0:cell><ns0:cell>0.1176</ns0:cell><ns0:cell>0.1248</ns0:cell><ns0:cell>0.1379</ns0:cell><ns0:cell>0.1399</ns0:cell><ns0:cell>0.1587</ns0:cell><ns0:cell>0.1911</ns0:cell><ns0:cell>0.2529</ns0:cell></ns0:row><ns0:row><ns0:cell>10 5</ns0:cell><ns0:cell>1.3059</ns0:cell><ns0:cell>1.4381</ns0:cell><ns0:cell>1.4981</ns0:cell><ns0:cell>1.6747</ns0:cell><ns0:cell>1.8104</ns0:cell><ns0:cell>2.1556</ns0:cell><ns0:cell>2.9447</ns0:cell></ns0:row><ns0:row><ns0:cell>10 6</ns0:cell><ns0:cell>14.3205</ns0:cell><ns0:cell>16.3886</ns0:cell><ns0:cell>16.7864</ns0:cell><ns0:cell>18.4380</ns0:cell><ns0:cell>20.6584</ns0:cell><ns0:cell>24.5239</ns0:cell><ns0:cell>34.1969</ns0:cell></ns0:row></ns0:table><ns0:note>Columns give the values for m.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Effect of exponent of prime finite field m on runtime for a fixed string (size = 1000)</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "Department of Mathematics
160 Convent Ave
New York, NY 10031
May 26, 2021
Dear Editors
We thank the reviewers for their generous and invaluable remarks on the manuscript and have edited the
manuscript accordingly to address their concerns.
We believe that the manuscript is now suitable for publication in PeerJ.
Awnon Bhowmik
Graduate Student
Department of Mathematics
The City College of New York
abhowmik901@york.cuny.edu
On behalf of all authors.
Reviewer 1
Basic reporting
The paper introduces an adaptive block encryption scheme. This system is based on product, exponent and
modulo operation on a finite field. At the heart of this algorithm lies an innovative and robust trapdoor
function that operates in the Galois Field and is responsible for the superior speed and security offered by
it. Prime number theorem plays a fundamental role in this system, to keep unwelcome adversaries at bay.
This is a self-adjusting cryptosystem that autonomously optimizes the system parameters thereby reducing
effort on the user's side while enhancing the level of security. This paper provides an extensive analysis of a
few notable attributes of this cryptosystem such as its exponential rise in security with an increase in the
length of plaintext while simultaneously ensuring that the operations are carried out in feasible runtime.
Additionally, an experimental analysis is also performed to study the trends and relations between the
different parameters of the cryptosystem, including a few edge cases.
Experimental design
no comment
Validity of the findings
no comment
Comments for the author
1. Please improve overall readability of the paper.
We have thoroughly revised the paper several times and made numerous grammatical and structural
adjustments.
2. The objectives of this paper need to be polished.
As mentioned in the abstract, this is a design for a robust, lightweight cryptosystem. We have added a
comparison across two different platforms: a workstation running on an Intel® Core™ i7-10750H, and a Raspberry
Pi 4 Model B. Since this is an autonomously adjusting security protocol, we have covered the various possible
sectors where the system would be able to replace existing ones.
3. Introduction is poorly written.
We have corrected the typos and added some mathematical details, which suggest that using a
Galois Field foundation for our system enables us to perform calculations with irreducible polynomials,
rather than being limited to simple arithmetic operations such as add, subtract, product, etc.
4. Relevant literature review of latest similar research studies on the topic at hand must be
discussed.
We have added a literature review section and added in the suggested references, explaining how our system
can be integrated into Big Data analysis to ensure the security and privacy of large amounts of personal data.
5. Result section need to be polished.
We have added tables and graphs showing the performance comparison of the system benchmarked on
two different platforms.
6. There are some grammar and typo errors.
All grammatical issues have been resolved.
7. Improve the quality of figures
We have rerun the entire program, recollected the tabular and graphical data from the benchmarks, and
added them into the manuscript accordingly. We have renamed the figures and tables as per the PeerJ author
guideline standards. The figure resolutions have been rescaled to meet PeerJ standards.
8. Define all the variables before using
Care has been taken to address details about each variable before using them.
The authors can cite the following
1. A Novel PCA-Firefly based XGBoost classification model for Intrusion Detection in Networks using GPU
2. Fake Review Classification Using Supervised Machine Learning
The suggested references have been added into the literature review section of the manuscript.
Reviewer 2
Basic reporting
no comment
Experimental design
no comment
Validity of the findings
no comment
Comments for the author
In this paper, the authors designed a novel symmetric encryption method based on finite field. Although the
proposed protocol has no obvious mistakes and questions, some revisions are still needed for this paper. The
authors may refer to the following comments for revision.
1. Related work part cannot be found in this paper.
The literature review section has been added.
2. No comparative experiments were conducted; the authors should compare their proposed protocol with other
similar encryption protocols in terms of computation cost and other metrics.
The program was re-executed on an Intel® Core™ i7-10750H and a Raspberry Pi 4 Model B. This was due
to the popularity of the Raspberry Pi in IoT and smart devices, where our encryption protocol can be a
potential alternative candidate. This is especially compatible with scenarios where sensor data is transmitted in
chunks over a network in discrete timesteps. Tables and figures presenting the comparison have been
added.
3. Some typos and grammar errors exist in this paper, the authors should double check the paper.
The paper has been revised and all grammatical issues have been addressed.
4. More supplementary references should be used in the manuscript. For authors' convenience, we list the
related references as follows.
1. Song J, Zhong Q, Wang W, et al. FPDP: Flexible privacy-preserving data publishing scheme for smart
agriculture. IEEE Sensors Journal, 2020, doi: 10.1109/JSEN.2020.3017695.
2. Wang W, Su C. Ccbrsn: a system with high embedding capacity for covert communication in bitcoin//IFIP
International Conference on ICT Systems Security and Privacy Protection. Springer, Cham, 2020: 324-337.
3. Wang W, Huang H, Zhang L, et al. Secure and efficient mutual authentication protocol for smart grid
under blockchain. Peer-to-Peer Networking and Applications, 2020: 1-13.
4. Zhang L, Zou Y, Wang W, et al. Resource Allocation and Trust Computing for Blockchain-Enabled Edge
Computing System. Computers & Security, 2021: 102249.
5. Zhang L, Zhang Z, Wang W, et al. Research on a Covert Communication Model Realized by Using Smart
Contracts in Blockchain Environment. IEEE Systems Journal, doi: 10.1109/JSYST.2021.3057333.
All the references have been added in as per suggestions.
Reviewer 3
Basic reporting
Professional English is required to improve the paper before publication.
Literature review is not sufficient. Latest and related papers, like enhancements of the cryptosystem, should be
compared with https://ieeexplore.ieee.org/abstract/document/6294370
How Big data and cryptography should be analysed with https://www.mdpi.com/2079-9292/8/11/1331
You can apply optimization approach
https://onlinelibrary.wiley.com/doi/abs/10.1002/spe.2797
Figures need improvement
Equations are not numbered
More clarity in results needed
Experimental design
Research questions need to be refined
Methods okay in mathematical analysis
Validity of the findings
Findings are okay
Conclusion can be improved
Comments for the author
Professional English is required to improve the paper before publication.
We have revised the entire manuscript and improved the paper quality to the best of our abilities. Typos,
grammatical and structural issues have been addressed.
Literature review is not sufficient. Latest and related paper like enhancement of the Cryptosystem should be
compared with https://ieeexplore.ieee.org/abstract/document/6294370
How Big data and cryptography should be analysed with https://www.mdpi.com/2079-9292/8/11/1331
Literature review section and the suggested references have been added.
Figures need improvement
The code was re-executed, and the tables and figures from the results have been re-entered into the manuscript
with high-resolution images meeting the PeerJ standards.
Equations are not numbered
All useful equations have now been numbered.
More clarity in results needed
Content in the literature review now shows that our devised schematic could be used as a potential alternative in
sectors that deal with IoT-enabled smart devices. To demonstrate the same, we benchmarked our program
on a Raspberry Pi 4 Model B for comparison purposes.
Research questions need to be refined
Section 12.2 now talks about Hensel's Lifting Lemma, which shows a possible way an interceptor could
break our system, but it would require a nondeterministic, brute-force approach to solve an equation with two
unknowns. No such methods are currently known without the necessary assumptions at hand.
Methods okay in mathematical analysis
Technical changes
We have taken care to address all issues with the figures as pointed out by the editor(s).
1. We have added the raw data according to the PeerJ guidelines, following some sample published
articles.
2. We have now cited all Figures and Tables in order before they appear on the manuscript.
3. We have removed subfigures and combined, for example, Figure 2A and Figure 2B into a single
Figure 2, and so on.
4. We have rescaled the images to fit a minimum of 900 pixels and a maximum of 3000 pixels on all
sides as per the PeerJ standard.
5. In order to address accessibility difficulties for persons who are color blind, we eliminated color
legends and replaced them with stylized lines.
" | Here is a paper. Please give your review comments after reading it. |
183 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Owing to mathematical theory and computational power evolution, modern cryptosystems demand ingenious trapdoor functions as their foundation to extend the gap between an enthusiastic interceptor and sensitive information. This paper introduces an adaptive block encryption scheme. This system is based on product, exponent, and modulo operation on a finite field. At the heart of this algorithm lies an innovative and robust trapdoor function that operates in the Galois Field and is responsible for the superior speed and security offered by it. Prime number theorem plays a fundamental role in this system, to keep unwelcome adversaries at bay. This is a self-adjusting cryptosystem that autonomously optimizes the system parameters thereby reducing effort on the user's side while enhancing the level of security. This paper provides an extensive analysis of a few notable attributes of this cryptosystem such as its exponential rise in security with an increase in the length of plaintext while simultaneously ensuring that the operations are carried out in feasible runtime. Additionally, an experimental analysis is also performed to study the trends and relations between the cryptosystem parameters, including a few edge cases.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Cryptography is the art of hiding messages to provide them with a certain level of security, maintaining confidentiality and integrity. This idea, whether used to hide secret messages or to transform the original message to make it look fancy, dignified, etc., continued through the medieval ages. The renaissance period saw the birth of the polyalphabetic substitution cipher, called the Vigenère Cipher <ns0:ref type='bibr' target='#b21'>Rubinstein-Salzedo (2018)</ns0:ref>. An encryption device called the Enigma machine <ns0:ref type='bibr' target='#b26'>Singh (1999)</ns0:ref> was used by the Nazi Germans during World War II. Although history suggests that it has been in use for ages, the systematic study of cryptology as a science (and perhaps an art) started only around one hundred years ago <ns0:ref type='bibr' target='#b25'>Sidhpurwala (2013)</ns0:ref>.</ns0:p><ns0:p>But it was not until the 1970s that studies in cryptography got serious. The Data Encryption Standard (DES) was introduced by IBM in 1976 <ns0:ref type='bibr' target='#b29'>Tuchman (1997)</ns0:ref>, followed by the Diffie-Hellman key exchange in the same year <ns0:ref type='bibr' target='#b18'>Kallam (2015)</ns0:ref>. In 1977, RSA came along <ns0:ref type='bibr' target='#b5'>Calderbank (2007)</ns0:ref>, and in 2002, AES was accepted as a standard security protocol to be used in both hardware and software <ns0:ref type='bibr' target='#b13'>Dworkin et al. (2001)</ns0:ref>. And thus cryptography became popular.</ns0:p><ns0:p>The strength or foundation of a modern encryption protocol relies upon the inherent trapdoor function. As classical cryptography evolved, it has become clear that some key components are essential in making stronger trapdoor functions, also known as one-way functions. Studies have shown that prime numbers are an essential part of numerous cryptosystems, and with a bit of effort, numerous mathematical concepts can be used to generate stronger cryptosystems. Conventional, widely used algorithms such as RSA rely on integer products involving large primes. Breaking this system is essentially an attempt to solve the integer factorization problem, which can be attacked using Shor's algorithm, Pollard's rho algorithm, etc. <ns0:ref type='bibr' target='#b0'>Aminudin and Cahyono (2021)</ns0:ref>; <ns0:ref type='bibr' target='#b11'>de Lima Marquezino et al. (2019)</ns0:ref>.</ns0:p><ns0:p>Cryptography algorithms rely on integer mathematics, in particular number theory, to perform invertible operations such as addition, multiplication, exponentiation, etc. over a finite set of integers.</ns0:p></ns0:div>
<ns0:div><ns0:p>Finite fields, also known as Galois fields, are fundamental to any cryptographic understanding. A field can be defined as a set of numbers that we can add, subtract, multiply and divide together and only ever end up with a result that exists in our set of numbers. This is mainly advantageous in cryptography, since every computation stays within one finite set of numbers, however large its elements may be <ns0:ref type='bibr' target='#b19'>Kohli (2019)</ns0:ref>. When cryptography algorithms rely solely on converting raw string data in ASCII format, we are restricted to 256 different characters only, which leaves us with only a handful of invertible operations in modulo 256. On the other hand, the Galois field GF(2^8) offers numerous such operations. In fact, the Advanced Encryption Standard (AES) <ns0:ref type='bibr' target='#b8'>Daemen and Rijmen (2001)</ns0:ref> uses the multiplicative inverse in GF(2^8). Using the Galois field also opens up the opportunity to use the concepts of irreducible polynomials <ns0:ref type='bibr' target='#b24'>Shoup (1990)</ns0:ref>. In AES, addition and subtraction are a simple XOR operation. For multiplication, it uses the product modulo an irreducible polynomial. For example, the integer 283 refers to the irreducible polynomial f(x) = x^8 + x^4 + x^3 + x + 1 in GF(2^8) whose coefficients are in GF(2) <ns0:ref type='bibr' target='#b12'>Desoky and Ashikhmin (2006)</ns0:ref>.</ns0:p><ns0:p>Threshold cryptography is a form of security lock where private keys are distributed among multiple clients or systems. They may even be asked to provide digital signature authentication for verification purposes. Only when these keys are combined can information be effectively decrypted. In practice, this lock is an electronic cryptosystem that protects confidential information, such as a bank account number or an authorization to transfer money from that account <ns0:ref type='bibr' target='#b15'>Henderson (2020)</ns0:ref>. The encryption scheme described in this paper has traits that resemble threshold cryptography. Existing threshold cryptosystem protocols might benefit from the positive aspects of our system, making it a viable alternative contender soon. The suggested approach can also be integrated into intelligent systems that use master-slave communication topologies, such as swarm robots <ns0:ref type='bibr' target='#b6'>Chen and Ng (2021)</ns0:ref>.</ns0:p><ns0:p>The technique suggested in this study uses an inventive trapdoor function based on the finite field to handle data encryption in cases with enormous string lengths in a reasonable amount of time, demonstrating that it is extremely light. This is a self-adjusting cryptosystem that optimizes the system parameters on its own, saving the user time and effort while increasing security. The inherent lightness of this cryptosystem makes it an ideal contender for applications involving IoT devices with limited computational power. Confusion and diffusion, covered in section 10, are two aspects of a safe cipher's functioning in cryptography. Due to the system's demonstration of confusion and diffusion properties, it could potentially be used in scenarios such as encrypting bank transaction details, where a high degree of variance in the ciphertext is desirable upon altering a few characters in the plaintext.</ns0:p></ns0:div>
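<ns0:div><ns0:p>The GF(2^8) arithmetic mentioned above is easy to illustrate; in the following sketch (ours, not the paper's code) addition is XOR and multiplication is a carry-less multiplication reduced by the AES polynomial 0x11B, i.e. decimal 283:</ns0:p><ns0:p>
def gf256_mul(a, b):
    """Multiply two bytes in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a          # add (XOR) the current multiple of a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B            # reduce by the AES polynomial
        b >>= 1
    return result

print(hex(gf256_mul(0x57, 0x83)))  # 0xc1, the standard AES test vector
</ns0:p></ns0:div>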
<ns0:div><ns0:p>Furthermore, data from our benchmarks in section 11 shows promising results when tested on large chunks of data, proving that, given sufficient computing power, this system could potentially be used for confidential military applications or as a layer of security for the compilation of large datasets in Big Data analytics.</ns0:p><ns0:p>The remainder of this paper is organized as follows. 'Literature Review' gives a brief description of numerous sectors where the proposed system can be introduced. The 'Trapdoor Function' section explains in brief the working of a traditional trapdoor function from a mathematical perspective. Sections 4, 5, 6 and 7 describe the required preliminaries for a better understanding of the algorithm that follows in section 9. The next section talks about two essential properties of the operation of a secure cipher, before moving on to 'Experimental Analysis'. Next, we cover a few ways in which an adversary might try to break into systems running this cryptosystem; section 12 shows that it would be near impossible for them to achieve their goal. Section 13 addresses an edge case of the system that revolves around the inbuilt block size optimization function. The paper concludes by briefly summarizing the study's overall accomplishments and providing important insights into future research directions.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>LITERATURE REVIEW</ns0:head><ns0:p>Security and privacy at the physical layer have become a serious challenge in recent years for numerous communication technologies, including the internet of things (IoT) and, most notably, the coming fifth-generation (5G) cellular network. Joseph Henry Anajemba et al. investigated an efficient sequential convex estimation optimization technique to overcome this difficulty and improve physical layer security in a three-node wireless communication network <ns0:ref type='bibr' target='#b1'>Anajemba et al. (2020)</ns0:ref>. IoT networks typically comprise interconnected sensors and information-relaying units that communicate in real-time with one another. Individual nodes typically have specialized sensor units for detecting specific environmental attributes and have fewer computing resources available. For example, in a house, various technologies such as facial recognition, video monitoring, smart lighting, and so on will all function in tandem. Security and privacy are key impediments to the realistic deployment of smart home technologies <ns0:ref type='bibr' target='#b23'>Shen et al. (2018)</ns0:ref>.</ns0:p><ns0:p>The majority of the network's elements use sensitive user data and seamlessly exchange information with one another in real-time. To keep intruders out of such a network, a dependable and stable solution based on edge computing is preferred. Another real-world application of secure edge computing lies in the domain of smart grids. Smart grids are recognized as the next-generation intelligent network that maximizes energy efficiency <ns0:ref type='bibr' target='#b31'>Wang et al. (2020)</ns0:ref>. Smart grid solutions help to monitor, measure and control power flow in real-time, which can contribute to the identification of losses so that appropriate technical and managerial actions can be taken to prevent them. Smart grids generally rely on data recorded by energy meters from different houses. Since electricity usage data can be classified as confidential user metrics, there is a need for implementing a layer of security before transmitting this data to other parties for further analysis. Encryption of this data at the smart energy meter stage itself can be beneficial. However, this requires the development of a lightweight encryption protocol that can be easily integrated with microprocessors with minimal compute power.</ns0:p><ns0:p>The amount of data provided by users during numerous online activities has increased dramatically over the last decade. Celestine Iwendi et al. performed research that used a model-based data analysis technique for handling applications with Big Data Streaming to glean useful information from this massive amount of data. The method suggested in their research has been tested to add value to large text data processing <ns0:ref type='bibr' target='#b16'>Iwendi et al. (2019)</ns0:ref>. Our proposed schematic leverages an ingenious trapdoor function based on the finite field to handle data encryption in scenarios involving large string lengths within feasible runtime, proving it to be considerably lightweight.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>TRAPDOOR FUNCTION</ns0:head><ns0:p>The essence of any cryptosystem relies on some special mathematical trapdoor function that makes it practically impossible for an unwelcome interceptor to gain access to secretive information. Simultaneously, these functions also ensure that the authorized parties (who know the secret key) can continue sharing data among themselves.</ns0:p><ns0:p>A trapdoor function is a mathematical transformation that is easy to compute in one direction, but extremely difficult (practically impossible) to compute in the opposite direction in feasible runtime unless some special information is known (the private key). Analogously, this can be thought of as the lock and key of modern cryptography: until and unless someone has access to the exact key, they cannot open the lock. In mathematical terms, if f is a trapdoor function, then y = f(x) is easy to calculate but x = f^{−1}(y) is tremendously hard to compute without some special knowledge k (called the key). In case k is known, it becomes easy to compute the inverse</ns0:p><ns0:formula xml:id='formula_0'>x = f^{−1}(y, k).</ns0:formula><ns0:p>The component of the proposed system in this paper that acts as the trapdoor function is the modulo operation on a Galois field <ns0:ref type='bibr' target='#b2'>Benvenuto (2012)</ns0:ref>.</ns0:p></ns0:div>
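<ns0:div><ns0:p>A toy numeric illustration of this asymmetry (ours; the parameters are arbitrary, and this is not the full scheme of section 9):</ns0:p><ns0:p>
q, m, p = 101, 3, 97
modulus = q ** m                     # 1,030,301
key = pow(p, -1, modulus)            # the special knowledge k

x = 65                               # e.g. an ASCII code
y = (x * p) % modulus                # forward: one multiplication
assert (y * key) % modulus == x      # the inverse is just as cheap with the key
# Without p, an attacker must try every candidate multiplier modulo q^m,
# which is what the modulo operation on the field makes impractical at scale.
</ns0:p></ns0:div>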
<ns0:div><ns0:head n='4'>PRIME NUMBER THEOREM</ns0:head><ns0:p>Positive integers greater than 1 that are divisible only by 1 and themselves are known as prime numbers. The sequence begins 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, … and has held untold fascination for mathematicians, professionals and amateurs alike. A result that describes the asymptotic distribution of primes is known as the prime number theorem <ns0:ref type='bibr' target='#b14'>Goldstein (1973)</ns0:ref>.</ns0:p><ns0:p>π(x) is the prime-counting function that gives the number of primes less than or equal to x, for any real number x. This can be written as</ns0:p><ns0:formula xml:id='formula_1'>π(x) = ∑_{p≤x} 1 (1)</ns0:formula><ns0:p>Graphing suggests that x/ln x is a good approximation to π(x), in the sense that the limit of the quotient of the two functions π(x) and x/ln x as x increases without bound is 1:</ns0:p><ns0:formula xml:id='formula_2'>lim_{x→∞} π(x) ln x / x = 1 (2)</ns0:formula><ns0:p>This result can be rewritten in asymptotic notation as</ns0:p><ns0:formula xml:id='formula_3'>π(x) ∼ x / ln x (3)</ns0:formula><ns0:p>The logarithmic integral provides an even better estimate of the prime-counting function:</ns0:p><ns0:formula xml:id='formula_4'>π(x) ∼ li(x) = ∫_2^x dt / ln t<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>To get an idea of the distribution of primes, it is instructive to count the number of primes in a given range and find the percentage of primes. Consider an infinitely tall tree, losing its leaves. The leaves represent prime numbers. Most leaves are found near the root, and the number of leaves thins out as we walk away from the center. But no matter how far we are from the center, we always find more leaves. These leaves are unpredictably scattered in an infinite area surrounding the tree. This is the situation with the distribution of primes, as resembled by table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>. Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> shows the prime density and logarithmic integral on the left, and the asymptotic nature of the prime counting function on the right.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>GALOIS FIELD IN CRYPTOGRAPHY</ns0:head><ns0:p>Galois Field, named after Évariste Galois and also known as a finite field, refers to a field in which there exist finitely many elements. A computer only understands the binary data format, which consists of combinations of 0's and 1's. If we consider GF(2), which is simply the Galois Field of order 2, this representation becomes possible, enabling us to apply mathematical operations for functional data scrambling. The elements of the Galois Field GF(p^n) are defined as</ns0:p><ns0:formula xml:id='formula_5'>GF(p^n) = (0, 1, 2, • • • , p − 1) ∪ (p, p + 1, p + 2, • • • , p + p − 1) ∪ (p^2, p^2 + 1, • • • , p^2 + p − 1) ∪ • • • ∪ (p^{n−1}, p^{n−1} + 1, p^{n−1} + 2, • • • , p^{n−1} + p − 1)</ns0:formula><ns0:p>where p ∈ P and n ∈ Z^+. The order of the field is given by p^n, while p is called the characteristic of the field. GF, as one may have guessed, stands for Galois Field. Also note that the degree of the polynomial representing each element is at most n − 1 Benvenuto (2012).</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1'>Properties of finite field</ns0:head><ns0:p>For arbitrary elements a, b, c and binary operations (+, •) in a finite field F, the following properties hold <ns0:ref type='bibr' target='#b28'>Stallings (2006)</ns0:ref>; <ns0:ref type='bibr' target='#b19'>Kohli (2019)</ns0:ref>.</ns0:p><ns0:p>1. Closure: For any two elements a, b: a + b ∈ F, a • b ∈ F</ns0:p><ns0:p>2. Associativity: (a + b) + c = a + (b + c), (a • b) • c = a • (b • c)</ns0:p><ns0:p>3. Commutativity: a + b = b + a, a • b = b • a</ns0:p><ns0:p>4. Identity: There exists a 0 such that for any element a in the field a + 0 = 0 + a = a, known as the additive identity. There exists a 1 such that for any element a in the field a • 1 = 1 • a = a, known as the multiplicative identity.</ns0:p></ns0:div>
<ns0:div><ns0:p>5. Inverses: Every element a has an additive inverse −a such that a + (−a) = (−a) + a = 0, and every nonzero element a has a multiplicative inverse a^{−1} such that a • a^{−1} = a^{−1} • a = 1</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.2'>Finite Field Operations</ns0:head><ns0:p>Let f(p) and g(p) be two polynomials in the Galois field GF(p^n) with respective coefficients A = a_0, a_1, • • • , a_n and B = b_0, b_1, • • • , b_n. Then the following operations are valid</ns0:p><ns0:p>1. Addition and Subtraction</ns0:p><ns0:formula>c_k ≡ a_k ± b_k mod p (5)</ns0:formula><ns0:p>2. Multiplication and Multiplicative Inverse. For an irreducible polynomial m(p) with a degree of at least n, we have the following</ns0:p><ns0:formula>h(p) ≡ f(p) • g(p) mod m(p) (6)</ns0:formula><ns0:p>and polynomials x(p), y(p) are called multiplicative inverses of each other iff</ns0:p><ns0:formula>x(p)y(p) ≡ 1 mod m(p) (7)</ns0:formula></ns0:div>
<ns0:div><ns0:head n='6'>GENERATING UPPER BOUND FOR Q</ns0:head><ns0:p>The proposed algorithm requires generating a list of primes p_s ∈ P whose size equals the block size s. It also requires a prime q ∈ P which is greater than all p_s's. The algorithm involves generating inverses of all p_s's within a Galois field GF(q^m), where m is an arbitrarily chosen positive integer.</ns0:p><ns0:p>Generation of the shuffled list of primes requires us to know the prime q. On the other hand, determining the value of q requires us to know the largest prime present in the shuffled list. This poses a paradoxical problem.</ns0:p><ns0:p>To get around this paradox, we consider the following.</ns0:p><ns0:p>• The number of primes to be stored in the shuffled list p_s is equivalent to the block size s.</ns0:p><ns0:p>• The value q must be chosen such that it is prime and larger than the maximum prime present in p_s.</ns0:p><ns0:p>• The Prime Number Theorem is used to find the number below which s primes are available. Let this required number be denoted by x.</ns0:p><ns0:formula xml:id='formula_9'>s = x / ln x
s ln x = x
s ln x = e^{ln x}
ln x · e^{−ln x} = 1/s
−ln x · e^{−ln x} = −1/s
−ln x = W_n(−1/s)
x = e^{−W_n(−1/s)}</ns0:formula><ns0:p>where W_n is the Lambert W function, also known as the product log function. This means that it is possible to generate the upper limit for a list of primes for an arbitrary block size. Since the number of primes is positive, x must be positive <ns0:ref type='bibr' target='#b32'>Weisstein (2002)</ns0:ref>; <ns0:ref type='bibr' target='#b7'>Corless et al. (1996)</ns0:ref>. This means the Lambert W function may be taken on the branch n = 0 or n = −1, since e^y ≥ 0 ∀y ∈ R. If x = a + ib, that is x ∈ C, we consider ⌊ℜ(x)⌋ to generate the upper bound.</ns0:p><ns0:p>• The prime larger than this x is set to be q.</ns0:p></ns0:div>
<ns0:div><ns0:head n='7'>FERMAT'S FACTORIZATION</ns0:head><ns0:p>Fermat's factorization method, named after Pierre de Fermat, is based on the representation of an odd integer as the difference of two squares De Fermat (1891). For a given number N, the objective is to find a, b such that</ns0:p><ns0:formula xml:id='formula_10'>N = a^2 − b^2 (8)</ns0:formula><ns0:p>To start, the square root of N is taken, and the nearest integer a at or above it is squared and N subtracted from it. If the resulting number is a square, then a and b have been found. If not, a is increased by 1 and the process is repeated. This is what is used to generate an algorithm for the block size depending on the size of the plaintext.</ns0:p></ns0:div>
<ns0:div><ns0:head n='8'>OPTIMAL CHOICE FOR BLOCK SIZE</ns0:head><ns0:p>The following algorithm automatically chooses an optimal value for the block size so as to ensure minimum execution time for the algorithm.</ns0:p><ns0:p>1. Get the length N of the plaintext. Take the square root and consider the ceiling of the resulting real number, i.e. ⌈√N⌉</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>If</ns0:head><ns0:p>⌈√N⌉^2 ≤ N and N ≡ 1 mod 2, then the optimal block size is ⌈√N⌉</ns0:p></ns0:div>
<ns0:div><ns0:head n='9'>PROPOSED ALGORITHM</ns0:head><ns0:p>The proposed algorithm for this cryptosystem involves numerous sections, making it a robust and impenetrable layer of security. Note that this algorithm autonomously sets the critical private key parameters of the system to their optimal values based on the user's secret message. Figure <ns0:ref type='figure'>2</ns0:ref> illustrates a flow chart for the proposed schematic.</ns0:p><ns0:p>For IoT-enabled networking devices, an additional layer of intrusion detection protocol can be appended to the proposed scheme to enhance the existing security. The Internet of Medical Things (IoMT) is a subset of IoT in which medical equipment connects to share sensitive data. In such scenarios, machine learning methods are commonly employed in Intrusion Detection Systems (IDS) for dynamically identifying and categorizing threats at the network and host levels <ns0:ref type='bibr' target='#b20'>RM et al. (2020)</ns0:ref>. This feature can also be achieved by using PCA-Firefly-based XGBoost classification models with GPUs <ns0:ref type='bibr' target='#b3'>Bhattacharya et al. (2020)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='9.1'>Plaintext pre-processing</ns0:head><ns0:p>1. An input message is provided and split into characters.</ns0:p><ns0:p>2. The optimum block size is evaluated as discussed in section 8.</ns0:p><ns0:p>3. The plaintext is split into blocks and padding is applied to maintain consistency of the modified plaintext.</ns0:p><ns0:p>4. Each character in each block is converted to its designated ASCII equivalent.</ns0:p></ns0:div>
<ns0:div><ns0:head n='9.2'>Key generation</ns0:head><ns0:p>1. The user inputs a non-negative integer m, which is used to determine the Galois field GF(q^m).</ns0:p><ns0:p>2. The value of q is obtained via the calculation described in section 6.</ns0:p><ns0:p>3. A function sets an optimal block size depending on the length of the plaintext via the algorithm described in section 8.</ns0:p><ns0:p>4. A list of unique primes is randomly generated in the range [p_1, p_blocksize], such that each p_k^{−1} mod q^m exists and each block is assigned the same primes. The size of this list is equal to the block size.</ns0:p></ns0:div>
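<ns0:div><ns0:p>A sketch of the key generation step is shown below, reusing the helper sketches choose_q and optimal_block_size defined earlier (our constructions, not the authors' code). The inverses mod q^m exist here because every sampled prime is smaller than the prime q, but the assertion makes the requirement explicit. By the prime number theorem the pool holds roughly s primes; Section 13 notes that it can fall short for very small messages.</ns0:p><ns0:p>
import math
import random
from sympy import primerange

def generate_key(N: int, m: int):
    s = optimal_block_size(N)
    q = choose_q(s)
    pool = list(primerange(2, q))        # primes smaller than q
    primes = random.sample(pool, s)      # s unique primes, shuffled
    for p in primes:
        assert math.gcd(p, q ** m) == 1  # p^{-1} mod q^m exists
    return s, q, primes
</ns0:p></ns0:div>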
<ns0:div><ns0:head n='9.3'>Key permutation algorithm</ns0:head><ns0:p>1. Since each block is assigned the same primes, entropy is introduced into the system by rearranging the order of the primes for each block.</ns0:p><ns0:p>2. A central element is kept fixed; the primes on its left are left-shifted and the ones on its right are right-shifted a certain number of times. The shift factor increases linearly as the block index increases. It is crucial that the block size be an odd number, so that a well-defined central element exists for such a rearrangement.</ns0:p></ns0:div>
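<ns0:div><ns0:p>A sketch of this per-block permutation follows. The exact shift schedule is not fully specified in the text, so the choice of shift factor (the block index, reduced modulo the half-length) is our assumption.</ns0:p><ns0:p>
def permute_primes(primes, block_index):
    mid = len(primes) // 2                  # index of the fixed centre
    left, centre, right = primes[:mid], primes[mid], primes[mid + 1:]
    k = block_index % mid if mid else 0     # shift grows with block index
    left = left[k:] + left[:k]              # cyclic left shift by k
    right = right[-k:] + right[:-k]         # cyclic right shift by k
    return left + [centre] + right
</ns0:p></ns0:div>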
<ns0:div><ns0:head n='9.4'>Encryption</ns0:head><ns0:p>For each block, take the product of the corresponding ASCII value a_k and the prime p_k from the permuted prime list, and reduce this product modulo q^m:</ns0:p><ns0:formula>a_k p_k ≡ c_k mod q^m (9)</ns0:formula><ns0:p>These lists of c_k are arranged in a matrix of order (n/s) × s, where n is the length of the padded plaintext and s is the block size.</ns0:p></ns0:div>
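<ns0:div><ns0:p>An illustrative encryption routine implementing Eq. (9) is sketched below; it assumes the blocks produced by preprocess() and the permute_primes() helper defined above.</ns0:p><ns0:p>
def encrypt(blocks, primes, q, m):
    Q = q ** m
    cipher = []
    for i, block in enumerate(blocks):
        perm = permute_primes(primes, i)    # block-specific prime order
        cipher.append([(a * p) % Q for a, p in zip(block, perm)])
    return cipher                           # (n/s) x s ciphertext matrix
</ns0:p></ns0:div>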
<ns0:div><ns0:head>Figure 2. Algorithm Flow Diagram</ns0:head></ns0:div>
<ns0:div><ns0:head n='9.5'>Decryption</ns0:head><ns0:p>For each encrypted value c_k in the block, multiply it by the inverse of the corresponding prime from the permuted list, modulo q^m.</ns0:p><ns0:formula xml:id='formula_11'>c_k p_k^{−1} ≡ a_k mod q^m (10)</ns0:formula><ns0:p>The padding is removed, and the remaining characters are joined to return the original message.</ns0:p></ns0:div>
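<ns0:div><ns0:p>The corresponding decryption sketch is given below. Python 3.8+'s pow(p, -1, Q) computes the modular inverse, and '~' is the pad character assumed in the preprocess() sketch.</ns0:p><ns0:p>
def decrypt(cipher, primes, q, m):
    Q = q ** m
    chars = []
    for i, row in enumerate(cipher):
        perm = permute_primes(primes, i)
        chars += [chr((c * pow(p, -1, Q)) % Q) for c, p in zip(row, perm)]
    return ''.join(chars).rstrip('~')       # strip the assumed padding
</ns0:p></ns0:div>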
<ns0:div><ns0:head n='10'>OBSERVED SECURITY FEATURES</ns0:head><ns0:p>In cryptography, confusion and diffusion are two properties of the operation of a secure cipher which were identified by Claude Shannon in his paper Communication Theory of Secrecy Systems, published in 1949 <ns0:ref type='bibr' target='#b22'>Shannon (1949)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:p>Confusion is a technique that ensures confidentiality, that is, that the ciphertext gives no clue about the plaintext. It is commonly used in both block and stream ciphers and can be achieved by substitution.</ns0:p><ns0:p>An extensive analysis was performed to study the behavior of the encryption scheme when the same plaintext was encrypted twice with only a single character changed. Two simple messages ('abcdefghijklmnopqrstuvwxyz') and ('abcdefghijklmnopqrstuvwxyp') were considered. Note that both messages are identical except for one single character at the end ('z' and 'p'). An identical set of encryption parameters (m = 7) was set up for this analysis, and the generated ciphertext matrices in both cases were noted down (the matrix entries are not reproduced here).</ns0:p><ns0:p>If the elements of both ciphertext matrices are compared element-wise (order matters), one can easily notice that there is 0% similarity. This implies that cryptanalysis techniques that rely on the similarity of elements in ciphertexts will fail to crack this cryptosystem.</ns0:p><ns0:p>In diffusion, the statistical structure of the plaintext is dissipated into long-range statistics of the ciphertext <ns0:ref type='bibr' target='#b28'>Stallings (2006)</ns0:ref>. This increases the redundancy of the plaintext by spreading it across rows and columns. It is only used in block cipher protocols. It can be achieved by a permutation technique known as transposition. A classic example of both diffusion and confusion is the AES cryptosystem.</ns0:p><ns0:p>Additionally, the encryption scheme was also tested with strings in which the same character repeats multiple times. For instance, the plaintext ('she sells sea shells on the sea shore') was encrypted using regular encryption parameters (m = 3), and a ciphertext matrix was returned by the algorithm (again not reproduced here).</ns0:p><ns0:p>From an unauthorized eavesdropper's perspective, the ciphertext matrix gives the impression of a random sequence of numbers, which makes it all the more difficult to come up with a logical approach to retrieve the secret message without any knowledge of the private key.</ns0:p></ns0:div>
<ns0:div><ns0:head n='11'>EXPERIMENTAL ANALYSIS 11.1 String length vs time for the encrypt-decrypt cycle</ns0:head><ns0:p>For this benchmark test, the value of m was fixed at 7 different values m = 2^n, where n ∈ [0, 6], n ∈ Z, while the length of the plaintext was successively increased in powers of 10, from 100 to 1,000,000, noting down the time taken for successful encrypt-decrypt cycles. The tabular data in tables <ns0:ref type='table' target='#tab_4'>2</ns0:ref> and 3 show how the runtime varies with string length. Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref> shows the implementation of the program on an Intel ® Core TM i7-10750H CPU @ 2.60GHz and a Raspberry Pi 4 Model B (Quad core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5GHz), respectively. Here m is the exponent of the prime finite field. Any arbitrary value a mod q generates non-negative integers within [0, q − 1]. But for any m > 1, m ∈ Z, we have q^m > q, which means a mod q^m generates non-negative integers within the larger range [0, q^m − 1]. Arithmetic on these larger values takes more time to process, hence smaller values of m are less impactful on the runtime. This is clearly visible from the data in table 4.</ns0:p></ns0:div>
<ns0:div><ns0:p>To study the behavior of this algorithm on devices with low compute power, it was benchmarked on a Raspberry Pi 4 Model B (8 GB RAM variant). The Raspberry Pi is a low-cost, credit-card-sized device that connects to a computer monitor or TV and operates with a regular keyboard and mouse. It is sometimes referred to as a Single Board Computer (SBC) because it runs a complete operating system and has enough peripherals (memory, processor, power regulation) to begin execution without additional hardware. The Raspberry Pi can run various operating systems and needs only power to boot. Some Raspberry Pi models can boot directly from the network, but in general, file-system storage, such as a micro SD card, is necessary <ns0:ref type='bibr' target='#b17'>Johnston and Cox (2017)</ns0:ref>. The Raspberry Pi features GPIO (general purpose input/output) pins that allow one to manipulate electronic components and low-powered sensors for physical computing and to explore the Internet of Things.</ns0:p><ns0:p>In this case, it was observed that if we limit the string length to 20,000 characters, the encryption-decryption cycle completes within a mere 5 seconds. Altering the m-values and repeating the benchmark made a negligible difference, as seen on the graph. It should be noted that executing this benchmark on the Raspberry Pi for 100,000 characters takes up to 35 seconds or more. However, most IoT applications involve collecting data from various sensors and transmitting them in discrete chunks to servers across multiple timesteps for further processing. In such scenarios dealing with limited batches of data, the proposed cryptosystem can achieve feasible encryption in real time.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_3'>4</ns0:ref> demonstrates that m vs runtime follows a fairly linear trend. There is a slight imperfection in the trend since, at this magnified time scale, changes in the memory usage patterns of the system can lead to noticeable changes in the efficiency of the algorithm. Note that these runtimes are of the order of 10^−3 seconds when executed on a workstation. Testing this on the Raspberry Pi, however, took 10 times longer than on the computer.</ns0:p></ns0:div>
<ns0:div><ns0:p>A larger block size consequently makes it nearly impossible for an attacker to correctly guess all the permuted blocks.</ns0:p></ns0:div>
<ns0:div><ns0:head n='12.2'>Hensel's Lifting Lemma</ns0:head><ns0:p>Lemma 1 (Hensel's Lemma) Given a prime p, an integer e ≥ 2, and f(x) ∈ Z[x], suppose a is a solution to f(x) ≡ 0 (mod p^{e−1}). If gcd(p, f′(a)) = 1, then there exists a solution to f(x) ≡ 0 (mod p^e) of the form b = a + kp^{e−1}, where k satisfies</ns0:p><ns0:formula xml:id='formula_17'>\frac{f(a)}{p^{e−1}} + k f′(a) ≡ 0 (mod p)</ns0:formula><ns0:p>This cryptosystem requires the user to pick an exponent m for an automatically generated q value, allowing the use of the field of order q^m. An interceptor only has access to a number b where b = q^m, and has no knowledge of q and m separately. Hence, they would have to apply a heuristic or brute-force approach to solve for q and m given the value of b, as there is no known deterministic method to recover the two unknowns from this single equation.</ns0:p></ns0:div>
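<ns0:div><ns0:p>To make the lemma concrete, here is a small illustrative implementation of one Hensel lifting step (the standard algorithm, included for exposition only): a root of f modulo p^{e−1} is lifted to a root modulo p^e, assuming gcd(p, f′(a)) = 1.</ns0:p><ns0:p>
def hensel_lift(f, df, a, p, e):
    pe1 = p ** (e - 1)
    assert f(a) % pe1 == 0                # a solves f(x) = 0 mod p^(e-1)
    t = (f(a) // pe1) % p
    k = (-t * pow(df(a) % p, -1, p)) % p  # solve t + k f'(a) = 0 mod p
    return a + k * pe1                    # root modulo p^e

f = lambda x: x * x - 2                   # example: a square root of 2
df = lambda x: 2 * x
a = 3                                     # 3^2 = 9 = 2 (mod 7)
a2 = hensel_lift(f, df, a, 7, 2)
print(a2, (a2 * a2) % 49)                 # 10, 2: lifted root mod 49
</ns0:p></ns0:div>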
<ns0:div><ns0:head n='13'>REMARKS ON EDGE CASES</ns0:head><ns0:p>The key generation paradigm demands that the size of the final list of unique primes be the same as the block size, and that all elements of this list be smaller than the value of q that is evaluated in the background using the prime number theorem, as discussed in section 6. This implies that if a plaintext with very few characters is chosen such that the block size is ≤ 3, the shuffled list of unique random primes can hold only 2 possible elements, namely 2 and 3, which are both less than the calculated value of q = 5. In this scenario, since 3 primes are not available, the system hangs in an infinite loop and fails to encrypt the message. For instance, when the message 'hello' was passed, the optimal block size was set to 3 and q = 5. Irrespective of the value of the prime finite field exponent m chosen by the user, it was observed that the key generation algorithm breaks down.</ns0:p><ns0:p>One simple way to solve this issue would be to add a special padding scheme when the message entered by the user is too small, ensuring that the optimal block size evaluates to a number greater than or equal to 5. This way, the key generation algorithm has enough prime numbers available to work with.</ns0:p></ns0:div>
<ns0:div><ns0:head n='14'>CONCLUSION</ns0:head><ns0:p>In this paper, a new block cipher encryption scheme was discussed in detail. It was observed that longer messages provide better security, whereas shorter messages provide faster execution, assuming sufficient padding. This system can come in handy especially on social media sites where short messaging is common; for example, Twitter, which has a current maximum string length of 280 characters <ns0:ref type='bibr' target='#b30'>Twitter (2021)</ns0:ref>. The time of execution was benchmarked on a modern-day computer CPU (Intel ® Core TM i7-10750H processor) as well as on a Raspberry Pi 4 Model B. It was found that the proposed scheme can easily be integrated into IoT networks involving low-compute microprocessors to provide a layer of security. Other applications of this system could be in encrypting large confidential military files. As a threshold cryptosystem candidate, this system can find multiple applications in swarm robotics in cases where slaves communicate with a master robot over an insecure network. The list of permuted primes that constitutes the private key of this system could be scattered across multiple slaves and used collectively, so that the compromise or failure of any single node does not expose the key or halt the system. Whether or not this could be used as an industry standard is beyond the scope of this paper. The progress so far has been compiled into a GitHub repository <ns0:ref type='bibr' target='#b4'>Bhowmik (2020)</ns0:ref>.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>g. Then the following operations are valid. 1. Addition and subtraction: c_k ≡ a_k ± b_k mod p (5). 2. Multiplication and multiplicative inverse: for an irreducible polynomial m(p) with a degree of at least n, we have h(p) ≡ f(p) • g(p) mod m(p) (6), and polynomials x(p), y(p) are called multiplicative inverses of each other iff x(p)y(p) ≡ 1 mod m(p) (7). Cryptography is the most prominent and extensively used application of the Galois field. There are many different representations of data; one such representation is a vector in a finite field. Once the data is in this desired format, finite field arithmetic easily facilitates calculations during encryption and decryption Benvenuto (2012). In the 1970s, IBM developed the Data Encryption Standard (DES) Tuchman (1997). However, its humble 56-bit key never posed a serious challenge to a supercomputer, which was able to break the key in less than 24 hours. Thus arose the need for a refined algorithm to replace the existing DES. Rijndael, a much more sophisticated algorithm devised by Vincent Rijmen and Joan Daemen, has been known as the Advanced Encryption Standard (AES) since 2001. An issue regarding this breakthrough was published by the Federal Information Processing Standards Publications (FIPS) on November 26, 2001 Dworkin et al. (2001).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. String length vs runtime (seconds) for fixed values of m. Panels (A) and (B) show the demonstration on an Intel ® Core TM i7-10750H and a Raspberry Pi 4 Model B, respectively.</ns0:figDesc><ns0:graphic coords='12,141.73,63.78,413.59,180.49' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Prime finite field exponent m vs runtime for a plaintext of 1000 characters. Panels (A) and (B) show the demonstration on an Intel ® Core TM i7-10750H and a Raspberry Pi 4 Model B, respectively.</ns0:figDesc><ns0:graphic coords='13,141.73,63.78,413.59,165.93' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='9,141.73,272.99,413.57,262.70' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Prime density and approximation to the logarithmic integral</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Search size x</ns0:cell><ns0:cell># of primes π(x)</ns0:cell><ns0:cell>Density (%)</ns0:cell><ns0:cell>li(x)</ns0:cell><ns0:cell>(li(x) − π(x)) / π(x) × 100</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>40</ns0:cell><ns0:cell>6.16</ns0:cell><ns0:cell>54.14</ns0:cell></ns0:row><ns0:row><ns0:cell>10^2</ns0:cell><ns0:cell>25</ns0:cell><ns0:cell>25</ns0:cell><ns0:cell>30.13</ns0:cell><ns0:cell>20.50</ns0:cell></ns0:row><ns0:row><ns0:cell>10^3</ns0:cell><ns0:cell>168</ns0:cell><ns0:cell>16.8</ns0:cell><ns0:cell>177.61</ns0:cell><ns0:cell>5.72</ns0:cell></ns0:row><ns0:row><ns0:cell>10^4</ns0:cell><ns0:cell>1229</ns0:cell><ns0:cell>12.3</ns0:cell><ns0:cell>1246.14</ns0:cell><ns0:cell>1.39</ns0:cell></ns0:row><ns0:row><ns0:cell>10^5</ns0:cell><ns0:cell>9592</ns0:cell><ns0:cell>9.6</ns0:cell><ns0:cell>9629.81</ns0:cell><ns0:cell>0.39</ns0:cell></ns0:row><ns0:row><ns0:cell>10^6</ns0:cell><ns0:cell>78498</ns0:cell><ns0:cell>7.8</ns0:cell><ns0:cell>78625</ns0:cell><ns0:cell>0.17</ns0:cell></ns0:row><ns0:row><ns0:cell>10^7</ns0:cell><ns0:cell>664579</ns0:cell><ns0:cell>6.6</ns0:cell><ns0:cell>664918</ns0:cell><ns0:cell>0.05</ns0:cell></ns0:row><ns0:row><ns0:cell>10^8</ns0:cell><ns0:cell>5761455</ns0:cell><ns0:cell>5.8</ns0:cell><ns0:cell>5.76 × 10^6</ns0:cell><ns0:cell>0.01</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Effect of length of plaintext on runtime (seconds) for different values of m for Intel ® Core TM i7-10750H</ns0:figDesc><ns0:table><ns0:row><ns0:cell>String length</ns0:cell><ns0:cell>m=1</ns0:cell><ns0:cell>m=2</ns0:cell><ns0:cell>m=4</ns0:cell><ns0:cell>m=8</ns0:cell><ns0:cell>m=16</ns0:cell><ns0:cell>m=32</ns0:cell><ns0:cell>m=64</ns0:cell></ns0:row><ns0:row><ns0:cell>10^2</ns0:cell><ns0:cell>0.0005</ns0:cell><ns0:cell>0.0002</ns0:cell><ns0:cell>0.0002</ns0:cell><ns0:cell>0.0003</ns0:cell><ns0:cell>0.0004</ns0:cell><ns0:cell>0.0007</ns0:cell><ns0:cell>0.0004</ns0:cell></ns0:row><ns0:row><ns0:cell>10^3</ns0:cell><ns0:cell>0.0024</ns0:cell><ns0:cell>0.0024</ns0:cell><ns0:cell>0.0024</ns0:cell><ns0:cell>0.0027</ns0:cell><ns0:cell>0.0030</ns0:cell><ns0:cell>0.0037</ns0:cell><ns0:cell>0.0038</ns0:cell></ns0:row><ns0:row><ns0:cell>10^4</ns0:cell><ns0:cell>0.0311</ns0:cell><ns0:cell>0.0393</ns0:cell><ns0:cell>0.0321</ns0:cell><ns0:cell>0.0342</ns0:cell><ns0:cell>0.0358</ns0:cell><ns0:cell>0.0418</ns0:cell><ns0:cell>0.0470</ns0:cell></ns0:row><ns0:row><ns0:cell>10^5</ns0:cell><ns0:cell>0.3143</ns0:cell><ns0:cell>0.3201</ns0:cell><ns0:cell>0.3596</ns0:cell><ns0:cell>0.3580</ns0:cell><ns0:cell>0.3975</ns0:cell><ns0:cell>0.4288</ns0:cell><ns0:cell>0.5501</ns0:cell></ns0:row><ns0:row><ns0:cell>10^6</ns0:cell><ns0:cell>3.2605</ns0:cell><ns0:cell>3.3822</ns0:cell><ns0:cell>3.9073</ns0:cell><ns0:cell>4.0346</ns0:cell><ns0:cell>4.4580</ns0:cell><ns0:cell>5.0636</ns0:cell><ns0:cell>6.0518</ns0:cell></ns0:row></ns0:table><ns0:note>4/14PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:59766:2:0:NEW 28 May 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Effect of length of plaintext on runtime (seconds) for different values of m for Raspberry Pi</ns0:figDesc><ns0:table><ns0:row><ns0:cell>String length</ns0:cell><ns0:cell>m=1</ns0:cell><ns0:cell>m=2</ns0:cell><ns0:cell>m=4</ns0:cell><ns0:cell>m=8</ns0:cell><ns0:cell>m=16</ns0:cell><ns0:cell>m=32</ns0:cell><ns0:cell>m=64</ns0:cell></ns0:row><ns0:row><ns0:cell>10^2</ns0:cell><ns0:cell>0.0010</ns0:cell><ns0:cell>0.0011</ns0:cell><ns0:cell>0.0012</ns0:cell><ns0:cell>0.0012</ns0:cell><ns0:cell>0.0013</ns0:cell><ns0:cell>0.0015</ns0:cell><ns0:cell>0.0018</ns0:cell></ns0:row><ns0:row><ns0:cell>10^3</ns0:cell><ns0:cell>0.0101</ns0:cell><ns0:cell>0.0102</ns0:cell><ns0:cell>0.0116</ns0:cell><ns0:cell>0.0122</ns0:cell><ns0:cell>0.0138</ns0:cell><ns0:cell>0.0160</ns0:cell><ns0:cell>0.0205</ns0:cell></ns0:row><ns0:row><ns0:cell>10^4</ns0:cell><ns0:cell>0.1176</ns0:cell><ns0:cell>0.1248</ns0:cell><ns0:cell>0.1379</ns0:cell><ns0:cell>0.1399</ns0:cell><ns0:cell>0.1587</ns0:cell><ns0:cell>0.1911</ns0:cell><ns0:cell>0.2529</ns0:cell></ns0:row><ns0:row><ns0:cell>10^5</ns0:cell><ns0:cell>1.3059</ns0:cell><ns0:cell>1.4381</ns0:cell><ns0:cell>1.4981</ns0:cell><ns0:cell>1.6747</ns0:cell><ns0:cell>1.8104</ns0:cell><ns0:cell>2.1556</ns0:cell><ns0:cell>2.9447</ns0:cell></ns0:row><ns0:row><ns0:cell>10^6</ns0:cell><ns0:cell>14.3205</ns0:cell><ns0:cell>16.3886</ns0:cell><ns0:cell>16.7864</ns0:cell><ns0:cell>18.4380</ns0:cell><ns0:cell>20.6584</ns0:cell><ns0:cell>24.5239</ns0:cell><ns0:cell>34.1969</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Effect of the exponent of the prime finite field m on runtime for a fixed string (size = 1000)</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Values of m</ns0:cell><ns0:cell>Runtime (s), Intel ® Core TM i7-10750H</ns0:cell><ns0:cell>Runtime (s), Raspberry Pi 4 Model B</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>0.0025</ns0:cell><ns0:cell>0.0099</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>0.0025</ns0:cell><ns0:cell>0.0010</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>0.0024</ns0:cell><ns0:cell>0.0111</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>0.0025</ns0:cell><ns0:cell>0.0120</ns0:cell></ns0:row><ns0:row><ns0:cell>16</ns0:cell><ns0:cell>0.0028</ns0:cell><ns0:cell>0.0134</ns0:cell></ns0:row><ns0:row><ns0:cell>32</ns0:cell><ns0:cell>0.0035</ns0:cell><ns0:cell>0.0160</ns0:cell></ns0:row><ns0:row><ns0:cell>64</ns0:cell><ns0:cell>0.0036</ns0:cell><ns0:cell>0.0204</ns0:cell></ns0:row><ns0:row><ns0:cell>128</ns0:cell><ns0:cell>0.0054</ns0:cell><ns0:cell>0.0301</ns0:cell></ns0:row><ns0:row><ns0:cell>256</ns0:cell><ns0:cell>0.0076</ns0:cell><ns0:cell>0.0556</ns0:cell></ns0:row><ns0:row><ns0:cell>512</ns0:cell><ns0:cell>0.0153</ns0:cell><ns0:cell>0.1294</ns0:cell></ns0:row><ns0:row><ns0:cell>1024</ns0:cell><ns0:cell>0.0416</ns0:cell><ns0:cell>0.3402</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Department of Mathematics
160 Convent Ave
New York, NY 10031
May 28, 2021
Dear Editors,
We thank the reviewers for their generous and invaluable remarks on the manuscript and have edited the
manuscript accordingly to address their concerns.
We believe that the manuscript is now suitable for publication in PeerJ.
Awnon Bhowmik
Graduate Student
Department of Mathematics
The City College of New York
abhowmik901@york.cuny.edu
On behalf of all authors.
Reviewer 1
Basic reporting
The paper introduces an adaptive block encryption scheme. This system is based on product, exponent and
modulo operation on a finite field. At the heart of this algorithm lies an innovative and robust trapdoor
function that operates in the Galois Field and is responsible for the superior speed and security offered by
it. Prime number theorem plays a fundamental role in this system, to keep unwelcome adversaries at bay.
This is a self-adjusting cryptosystem that autonomously optimizes the system parameters thereby reducing
effort on the user's side while enhancing the level of security. This paper provides an extensive analysis of a
few notable attributes of this cryptosystem, such as its exponential rise in security with an increase in the
length of the plaintext, while simultaneously ensuring that the operations are carried out in feasible runtime.
Additionally, an experimental analysis is also performed to study the trends and relations between the
different parameters of the cryptosystem including a few edge cases.
Experimental design
no comment
Validity of the findings
no comment
Comments for the author
1. Please improve overall readability of the paper.
We have thoroughly revised the paper several times and made numerous grammatical and structural
adjustments.
2. The objectives of this paper need to be polished.
As mentioned in the abstract, this is a design for a robust, lightweight cryptosystem. We have added a
comparison on two different platforms: a workstation running on an Intel® Core™ i7-10750H, and a Raspberry
Pi 4 Model B. Since this is an autonomously adjusting security protocol, we have covered the various possible
sectors where the system would be able to replace existing ones.
3. Introduction is poorly written.
We have made corrections to the typos and added some mathematical details, which show that using a
Galois field foundation for our system enables us to perform calculations with irreducible polynomials,
rather than being limited to simple arithmetic operations such as addition, subtraction, and multiplication.
4. Relevant literature review of latest similar research studies on the topic at hand must be
discussed.
We have added a literature review section with the suggested references, explaining how our system
can be integrated into big data analysis to ensure the security and privacy of large amounts of personal data.
5. Result section need to be polished.
We have added tables and graphs showing the performance comparison of the system benchmarked on
two different platforms.
6. There are some grammar and typo errors.
All grammatical issues have been resolved.
7. Improve the quality of figures
We have rerun the entire program, recollected the tabular and graphical data from the benchmarks, and
added them to the manuscript accordingly. We have renamed the figures and tables as per the PeerJ author
guideline standards. The figure resolutions have been rescaled to meet PeerJ standards.
8. Define all the variables before using
Care has been taken to define each variable before it is used.
The authors can cite the following
1. A Novel PCA-Firefly based XGBoost classification model for Intrusion Detection in Networks using GPU
2. Fake Review Classification Using Supervised Machine Learning
The suggested references has been added into the literature review section of the manuscript.
Reviewer 2
Basic reporting
no comment
Experimental design
no comment
Validity of the findings
no comment
Comments for the author
In this paper, the authors designed a novel symmetric encryption method based on finite fields. Although the
proposed protocol has no obvious mistakes or questions, some revisions are still needed for this paper. The
authors may refer to the following comments for revision.
1. Related work part cannot be found in this paper.
The literature review section has been added.
2. No comparative experiments were conducted; the authors should compare their proposed protocol with other
similar encryption protocols in terms of computation cost and other metrics.
The program was re-executed on an Intel® Core™ i7-10750H and a Raspberry Pi 4 Model B. This was due
to the popularity of the Raspberry Pi in IoT and smart devices, where our encryption protocol can be a
potential alternative candidate. This is especially compatible with scenarios where sensor data is transmitted in
chunks over a network in discrete timesteps. Tables and figures presenting the comparison have been
added.
3. Some typos and grammar errors exist in this paper; the authors should double-check the paper.
The paper has been revised, and all grammatical issues have been addressed.
4. More supplementary references should be used in the manuscript. For authors' convenience, we list the
related references as follows.
1. Song J, Zhong Q, Wang W, et al. FPDP: Flexible privacy-preserving data publishing scheme for smart
agriculture. IEEE Sensors Journal, 2020, doi: 10.1109/JSEN.2020.3017695.
2. Wang W, Su C. Ccbrsn: a system with high embedding capacity for covert communication in bitcoin//IFIP
International Conference on ICT Systems Security and Privacy Protection. Springer, Cham, 2020: 324-337.
3. Wang W, Huang H, Zhang L, et al. Secure and efficient mutual authentication protocol for smart grid
under blockchain. Peer-to-Peer Networking and Applications, 2020: 1-13.
4. Zhang L, Zou Y, Wang W, et al. Resource Allocation and Trust Computing for Blockchain-Enabled Edge
Computing System. Computers \& Security, 2021: 102249.
5. Zhang L, Zhang Z, Wang W, et al. Research on a Covert Communication Model Realized by Using Smart
Contracts in Blockchain Environment. IEEE Systems Journal, doi: 10.1109/JSYST.2021.3057333.
All the references have been added in as per suggestions.
Reviewer 3
Basic reporting
Professional English is required to improve the paper before publication.
Literature review is not sufficient. Latest and related paper like enhancement of the Cryptosystem should be
compared with https://ieeexplore.ieee.org/abstract/document/6294370
How Big data and cryptography should be analysed with https://www.mdpi.com/2079-9292/8/11/1331
You can apply optimization approach
https://onlinelibrary.wiley.com/doi/abs/10.1002/spe.2797
Figures need improvement
Equations are not numbered
More clarity in results needed
Experimental design
Research questions need to be refined
Methods okay in mathematical analysis
Validity of the findings
Findings are okay
Conclusion can be improved
Comments for the author
Professional English is required to improve the paper before publication.
We have revised the entire manuscript and improved the paper quality to the best of our abilities. Typos,
grammatical and structural issues have been addressed.
Literature review is not sufficient. Latest and related paper like enhancement of the Cryptosystem should be
compared with https://ieeexplore.ieee.org/abstract/document/6294370
How Big data and cryptography should be analysed with https://www.mdpi.com/2079-9292/8/11/1331
Literature review section and the suggested references have been added.
Figures need improvement
Code was re-executed, and tables and figures from the results have been re-entered into the manuscript with
high-resolution images meeting the PeerJ standards.
Equations are not numbered
All useful equations have now been numbered.
More clarity in results needed
Content in the literature review now shows that our devised scheme could be used as a potential alternative in
sectors that deal with IoT-enabled smart devices. To demonstrate this, we benchmarked our program
on a Raspberry Pi 4 Model B for comparison purposes.
Research questions need to be refined
Section 12.2 now discusses Hensel's Lifting Lemma, which shows a possible way an interceptor could
break our system; however, it would require a nondeterministic, brute-force approach to solve an equation with two
unknowns. No such methods are currently known without the necessary assumptions at hand.
Methods okay in mathematical analysis
Technical changes
We have taken care to address all issues with the figures as pointed out by the editor(s).
1. We have added the raw data according to the PeerJ guidelines, following sample published
articles.
2. We have now cited all figures and tables in order before they appear in the manuscript.
3. We have removed sub figures and combined, for example, Figure 2A and Figure 2B into a single
Figure 2, and so on.
4. We have rescaled the images to fit a minimum of 900 pixels and a maximum of 3000 pixels on all
sides as per the PeerJ standard.
5. To address accessibility difficulties for readers who are color blind, we eliminated color
legends and replaced them with stylized line styles.
Minor Revision (2)
Reviewer 1
Basic reporting
The paper introduces an adaptive block encryption scheme. This system is
based on product, exponent, and modulo operation on a finite field. At the heart of this algorithm lies
an innovative and robust trapdoor function that operates in the Galois Field and is responsible for the
superior speed and security offered by it. Prime number theorem plays a fundamental role in this system,
to keep unwelcome adversaries at bay. This is a self-adjusting cryptosystem that autonomously optimizes
the system parameters thereby reducing effort on the user’s side while enhancing the level of security.
This paper provides an extensive analysis of a few notable attributes of this cryptosystem such as its
exponential rise in security with an increase in the length of plaintext while simultaneously ensuring that
the operations are carried out in feasible runtime. Additionally, an experimental analysis is also performed
to study the trends and relations between the cryptosystem parameters, including a few edge cases
Experimental design
.
Validity of the findings
.
Comments for the author
• In Introduction section, the drawbacks of each conventional technique should be described clearly.
The introduction section has been edited thoroughly and all discussed issues have been addressed.
• Introduction needs to explain the main contributions of the work more clearly.
We have added a discussion of how existing conventional systems that depend on various mathematical trapdoors, such as
the integer factorization problem, can be broken by Shor's and Pollard's rho algorithms. Our system,
however, does not depend on such trapdoors, and thus the aforementioned algorithms should have no effect whatsoever if
employed by an adversary.
• The authors should emphasize the difference between other methods to clarify the position of this work
further.
The shortcomings of existing methods were discussed, along with where our system could soon be a strong contender.
• The Wide ranges of applications need to be addressed in Introductions
We have added in explanations as to how this system can be used in bank transaction details, confidential
military applications, Big Data Analytics, secure swarm robotics, etc.
• The objective of the research should be clearly defined in the last paragraph of the introduction section.
We have restated the fact that our proposed security scheme could be used in numerous devices and
systems stemming from various branches of computer science, due to its lightweight design. We have also
outlined its inherent properties, which are later discussed in detail in the sections that follow.
• Add the advantages of the proposed system in one quoted line for justifying the proposed approach in the
Introduction section.
Our system is lightweight and fast, and could be very useful in IoT, IoMT, and even in threshold cryptographic
protocols. This has now been explained in more detail.
In preprocessing, the authors can refer to the following
A Novel PCA-Firefly based XGBoost classification model for Intrusion Detection in Networks using GPU
An effective feature engineering for DNN using hybrid PCA-GWO for intrusion detection in IoMT
architecture
We have added the suggested references to the literature review and data preprocessing sections.
Reviewer 2
Basic reporting
The authors have well addressed my comments, so it is ready for publication now
Experimental design
The authors have well addressed my comments, so it is ready for publication now
Validity of the findings
The authors have well addressed my comments, so it is ready for publication now
Comments for the author
The authors have well addressed my comments, so it is ready for publication now
Reviewer 3
Basic reporting
The Paper is improved but lack recent References
Contribution is still lacking at the end of Introduction
Figures still not clear enough. Some further improvement needed
Experimental design
improved
Validity of the findings
Improved
Comments for the author
The Paper is improved but lack recent References
Add
https://www.mdpi.com/1424-8220/20/9/2609
Suggested reference has been cited.
Contribution is still lacking at the end of Introduction
Figures still not clear enough. Some further improvement needed
We have recreated the images in high resolution and improved line colors and styles as per PeerJ standards.
" | Here is a paper. Please give your review comments after reading it. |
184 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Hearing deficiency is the world's most common sensation of impairment and impedes human communication and learning. Early and precise hearing diagnosis using electroencephalogram (EEG) is referred to as the optimum strategy to deal with this issue.</ns0:p><ns0:p>Among a wide range of EEG control signals, the most relevant modality for hearing loss diagnosis is auditory evoked potential (AEP) which is produced in the brain's cortex area through an auditory stimulus. This study aims to develop a robust intelligent auditory sensation system utilizing a pre-train deep learning framework by analyzing and evaluating the functional reliability of the hearing based on the AEP response. First, the raw AEP data is transformed into time-frequency images through the wavelet transformation. Then, lower-level functionality is eliminated using a pre-trained network.</ns0:p><ns0:p>Here, an improved-VGG16 architecture has been designed based on removing some convolutional layers and adding new layers in the fully connected block. Subsequently, the higher levels of the neural network architecture are fine-tuned using the labelled timefrequency images. Finally, the proposed method's performance has been validated by a reputed publicly available AEP dataset, recorded from sixteen subjects when they have heard specific auditory stimuli in the left or right ear. The proposed method outperforms the state-of-art studies by improving the classification accuracy to 96.87% (from 57.375%), which indicates that the proposed improved-VGG16 architecture can significantly deal with AEP response in early hearing loss diagnosis.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Hearing deficiency is the most widespread form of human sensory disability; it is the partial or complete inability to hear sound in the ear. The World Health Organization (WHO) reports that 466 million people were living with hearing loss in 2018, a number projected to exceed 630 million by 2030 and more than 900 million by 2050 ('Deafness and hearing loss'). An early and effective hearing screening test is essential to address this vast population concern, as it helps reduce hearing deficiency by enabling the necessary steps to be taken at an appropriate time. Conventional listening tests and audiograms are subjective assessments that place significant demands on medical and health services. The audiogram reflects the hearing threshold across the speech frequency spectrum, usually between 125 and 8000 Hz. The traditional hearing impairment testing technique is very time-consuming and takes considerable clinical time and expertise to interpret and administer, since it requires the person to respond directly. In hearing aid applications, other issues remain open questions, such as the consequences of hearing loss <ns0:ref type='bibr' target='#b20'>(Holmes, Kitterick & Summerfield, 2017)</ns0:ref>, the circumstances of the auditory stimulus (such as its background noise and location <ns0:ref type='bibr'>(Das, Bertrand & Francart, 2018)</ns0:ref> <ns0:ref type='bibr' target='#b9'>(Das et al., 2016)</ns0:ref>), and attention-altering techniques.</ns0:p><ns0:p>Various hearing impairment testing techniques have been developed to address these issues, and among them, EEG-based auditory evoked potentials (AEPs) are the most widely used <ns0:ref type='bibr' target='#b55'>(Zhang et al., 2006)</ns0:ref> <ns0:ref type='bibr' target='#b31'>(Mahmud et al., 2019)</ns0:ref>. Nowadays, the classification of AEP signals is commonly used in many brain-computer interface (BCI) applications <ns0:ref type='bibr' target='#b15'>(Gao, Wang & Gao, 2014)</ns0:ref> and in studying brain hearing issues <ns0:ref type='bibr' target='#b48'>(Sriraam, 2012)</ns0:ref>. In fact, the AEP signal is widely used for recognizing hearing capability, hearing assessment, and identifying neurological hearing impairment. AEP signals reflect changes in the brain's electrical activity in the body's sensory mechanisms in response to an auditory stimulus. The diagnosis of hearing loss typically involves four main stages: data acquisition, data pre-processing, feature extraction and selection, and classification. Feature extraction is traditionally conducted by analyzing time-domain, frequency-domain, and time-frequency-domain techniques, which help extract information from the original raw data. The extracted features are then used as input to machine learning or deep learning models for training. However, traditional diagnosis methods have some drawbacks. For example, traditional hearing loss approaches are often based on manual feature selection. As a consequence, if the manually chosen features are ineffective for the task, the hearing loss recognition performance will decrease considerably. Furthermore, handcrafted features are task-specific, meaning that features that yield correct predictions in one scenario are not acceptable under other conditions <ns0:ref type='bibr' target='#b0'>(Acir, Erkan & Bahtiyar, 2013)</ns0:ref> <ns0:ref type='bibr' target='#b2'>(Acir, Özdamar & Güzeliş, 2006)</ns0:ref>.</ns0:p><ns0:p>Although researchers have employed a wide range of machine learning and deep learning algorithms, identifying the most effective classifier is still an open question. Among machine learning-based classifiers, the support vector machine (SVM) <ns0:ref type='bibr' target='#b31'>(Mahmud et al., 2019)</ns0:ref>, k-nearest neighbors (k-NN) <ns0:ref type='bibr' target='#b51'>(Thorpe & Dussard, 2018)</ns0:ref> <ns0:ref type='bibr' target='#b41'>(Rashid et al., 2021)</ns0:ref>, artificial neural network (ANN) <ns0:ref type='bibr' target='#b32'>(Mccullagh et al., 1996)</ns0:ref>, linear discriminant analysis (LDA) <ns0:ref type='bibr'>(Grent-'t-Jong et al., 2021)</ns0:ref>, and Naïve Bayes (NB) <ns0:ref type='bibr' target='#b46'>(Shirzhiyan et al., 2019)</ns0:ref> are widely used in neurological response classification. Nowadays, convolutional neural networks (CNNs) are the preferred approach in many classification tasks, particularly image classification <ns0:ref type='bibr' target='#b26'>(Lecun, Bengio & Hinton, 2015)</ns0:ref>. In some recent studies, CNNs have shown promising performance in EEG signal classification: in seizure detection <ns0:ref type='bibr' target='#b3'>(Ansari et al., 2019)</ns0:ref>, depression detection <ns0:ref type='bibr' target='#b29'>(Liu et al., 2018)</ns0:ref>, and sleep stage classification <ns0:ref type='bibr' target='#b4'>(Ansari et al., 2018)</ns0:ref>. <ns0:ref type='bibr' target='#b7'>Ciccarelli et al. (Ciccarelli et al., 2019)</ns0:ref> proposed a novel neural network architecture and showed that their approach outperforms linear methods in decision windows of 10 s. They used eleven subjects in the experiment: with wet EEG, the decoding accuracy improved from 66% to 81%, and with dry EEG, from 59% to 87%. <ns0:ref type='bibr' target='#b33'>McKearney et al. (McKearney & MacKinnon, 2019)</ns0:ref> used a deep neural network approach to classify paired auditory brainstem responses. They used 232 paired ABR waveforms (190 for training the model and 42 for performance evaluation) from eight normal-hearing subjects and achieved 92.9% testing accuracy. Although they achieved excellent performance in identifying the auditory brainstem response, the test set was too small, and more data are needed to evaluate the model's performance. <ns0:ref type='bibr' target='#b32'>Mccullagh et al. (Mccullagh et al., 1996)</ns0:ref> reported 73.7% accuracy using an artificial neural network to classify 166 auditory brainstem responses (ABRs) with 2000 repetitions. Ibrahim et al. <ns0:ref type='bibr' target='#b23'>(Ibrahim, Ting & Moghavvemi, 2019)</ns0:ref> used multiple classification techniques for detecting the hearing condition; the SVM algorithm outperformed the other algorithms by achieving a classification accuracy of 90%. They used a nonlinear feature extraction method to extract adequate information from the AEP signals. Dietl et al. <ns0:ref type='bibr' target='#b13'>(Dietl & Weiss, 2004)</ns0:ref> evaluated an application for the detection of frequency-specific hearing loss, in which they used the wavelet packet transform (WPT) as a feature extraction method and a support vector machine (SVM) classifier on transient evoked otoacoustic emissions (TEOAE). They achieved a maximum of 74.7% accuracy on the testing dataset. Nonetheless, the overall accuracy is not favourable enough to be utilized in real-life applications. Tang et al. <ns0:ref type='bibr' target='#b50'>(Tang & Lee, 2019)</ns0:ref> proposed a novel hearing deficiency diagnosis method using three-level wavelet entropy, followed by an MLP trained by a hybrid Tabu search-Particle Swarm Optimization (TS-PSO). Their approach achieved 86.17% testing accuracy, which still needs improvement for real-time applications. Sanjay et al. <ns0:ref type='bibr' target='#b43'>(Sanjay et al., 2020)</ns0:ref> used machine learning approaches for human auditory threshold prediction. The absolute threshold test (ATT) method was used for feature extraction from the auditory signals, and the extracted features were then classified using multiple classification methods. Among all the classification methods, a maximum of 93.94% accuracy was achieved with the SVM classifier. Xue et al. <ns0:ref type='bibr' target='#b53'>(Xue et al., 2018)</ns0:ref> used the articulatory movements of participants with or without hearing impairment during nasal finals for hearing impairment diagnosis. Six kinematic features (standard deviation of velocity, minimum velocity, maximum velocity, mean velocity, duration, and displacement) were used to extract information from hearing-impaired (HI) patients and normal-hearing (NH) participants. The classification was conducted with a support vector machine, a radial basis function network, random forest, and C4.5. The maximum accuracy was 87.12%, achieved using a random forest classifier with the displacement and duration features. Zhang et al. <ns0:ref type='bibr' target='#b55'>(Zhang et al., 2006)</ns0:ref> proposed an auditory brainstem response classification method. They used wavelet analysis for feature extraction and Bayesian networks to classify the auditory responses. The discrete wavelet transform (DWT) was used to extract the time-frequency information from the raw signals. A maximum of 78.80% testing accuracy was achieved with their proposed approach, which needs further improvement.</ns0:p><ns0:p>The emphasis in our study is on a concise decision window. A short window contains less information, which makes high performance more difficult to achieve, but it provides an effective route to early detection of hearing disorders. A short decision window is considered one of the prerequisites for developing real-life applications, but only limited studies have investigated this issue <ns0:ref type='bibr' target='#b12'>(Deckers et al., 2018)</ns0:ref>. Moreover, selecting a short decision window makes the system faster by reducing its computational complexity.</ns0:p><ns0:p>On the other hand, deep learning (DL) approaches can overcome the above limitations owing to their effective feature-learning capability <ns0:ref type='bibr' target='#b24'>(Krizhevsky, Sutskever & Hinton, 2017)</ns0:ref> <ns0:ref type='bibr' target='#b39'>(Nossier et al., 2019)</ns0:ref> <ns0:ref type='bibr' target='#b45'>(Shao et al., 2019)</ns0:ref> <ns0:ref type='bibr' target='#b5'>(Bari et al., 2021</ns0:ref><ns0:ref type='bibr' target='#b30'>)(Mahendra Kumar et al., 2021)</ns0:ref>. Deep learning models have several hidden layers that can explicitly learn hierarchical representations. Through model training, deep architectures can select discriminative representations, which are helpful for precise predictions in subsequent classification stages. Although DL models have been applied successfully to hearing loss diagnosis tasks, some issues remain. A few investigations <ns0:ref type='bibr' target='#b7'>(Ciccarelli et al., 2019</ns0:ref><ns0:ref type='bibr' target='#b33'>) (McKearney & MacKinnon, 2019)</ns0:ref> have used deep models with more than ten hidden layers for hearing loss diagnosis. A large amount of labelled data and computational resources are typically required to train such a model from scratch. In the proposed study, we used the transfer learning (TL) method to address the challenges of training a deep model from scratch. The TL method expedites the deep learning model training phase and effectively learns the hierarchical representations. This is accomplished by using a model that has been pre-trained on a vast dataset of natural images. The pre-trained model provides the lower-level weights for the target neural network, while the higher-level weights are fine-tuned for the hearing deficiency diagnosis task. Consequently, the proposed TL method offers a rational initialization for the target model and decreases the number of trainable parameters. In this manner, TL significantly enhances the performance of the training process. Here, we summarize the main contributions of this paper.</ns0:p><ns0:p>• We present a hearing deficiency identification system based on a deep CNN, where a transfer learning strategy is used to improve the training process. To fit the AEP dataset to our model, we fine-tune the high-level parameters, which consists of unfreezing part of the pre-trained model and re-training it. The lower-level parameters are transferred from the previously trained deep architecture.</ns0:p><ns0:p>• In the proposed approach, we also change some high-level parameters and reduce the number of parameters and the complexity of the TL architecture, which helps improve the performance of the VGG16 model on our dataset and reduces the computational time of the training process.</ns0:p><ns0:p>• The experiments are conducted on short decision windows (1 s and 2 s), minimizing the impact of additional features and reducing time consumption, which shows the proposed system's robustness and applicability to real-life applications.</ns0:p><ns0:p>The rest of the manuscript is arranged as follows: a detailed data description, data pre-processing, and the transformation process of the CWT are presented in the Materials and Methodology section. A detailed description of the development of the proposed pre-trained model and the fine-tuning procedure for hearing deficiency diagnosis is also given in this section. Experiments validating the models are described in the Result of the Experiment and Analysis section. The Discussion section compares the proposed model with related studies and highlights the key advantages of our method over previous studies. The Conclusion section presents the outcome of the present study.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials and Methodology</ns0:head><ns0:p>The aim of this study is to build a high-performance intelligent auditory sensation system for hearing loss diagnosis. The overall procedure of the proposed hearing loss diagnosis method is demonstrated in Figure 1. The proposed framework consists of a few steps, including data collection, pre-processing, time-frequency analysis, and building a pre-trained model with fine-tuning. In the data collection phase, we used a publicly available online dataset instead of collecting data ourselves. We converted the raw signals into time-frequency images using the continuous wavelet transform (CWT). Then, the proposed deep CNN (improved-VGG16) method is applied to the time-frequency images to diagnose hearing loss. In the TL model, the network is pre-trained on the ImageNet dataset, whose images are 224 × 224 RGB pixels. After data collection, the entire dataset was converted into time-frequency images and resized to height 224 × width 224 × depth 3. VGG16 uses natural images, which are different from the time-frequency images of AEP, so to fit the AEP dataset to the TL model, we replaced some VGG16 layers with new layers and then fine-tuned the improved VGG16 model.</ns0:p><ns0:p>Figure 1: The overall procedure of the hearing deficiency diagnosis method.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data Description</ns0:head><ns0:p>The experimental AEP datasets are provided by ExpORL, Dept. Neurosciences, KULeuven, and Dept. Electrical Engineering (ESAT), KULeuven <ns0:ref type='bibr' target='#b10'>(Das, Francart & Bertrand, 2020)</ns0:ref>. A 64-channel BioSemi Active Two system with a sampling rate of 8192 Hz was used for recording the AEP data. The data were collected from 16 normal-hearing subjects, with 20 trials recorded from each subject. The recordings were conducted in a soundproof, electromagnetically shielded room. The auditory stimuli were presented at 60 dBA through Etymotic ER3 insert earphones and were low-pass filtered with a cut-off frequency of 4 kHz. APEX 3 was used as stimulation software <ns0:ref type='bibr' target='#b14'>(Francart, van Wieringen & Wouters, 2008)</ns0:ref>. Three male Flemish speakers narrated four Dutch stories as auditory stimulation ('Radio books for children'). Every story lasted 12 minutes and was divided into two segments of 6 minutes each. Silent segments that lasted more than 500 milliseconds were shortened to 500 milliseconds. The stimuli were equal in root-mean-square intensity and perceived as equally loud. The experiment was divided into eight sections, each lasting six minutes. Subjects were presented with parts of two storylines in each trial: the left ear received one part, while the right ear received the other. To prevent the lateralization bias described by <ns0:ref type='bibr' target='#b9'>(Das et al., 2016)</ns0:ref>, the attended ear was alternated over successive trials to ensure that each ear received an equal volume of data. Each subject received the stimuli in the same order, either dichotically or after head-related transfer function (HRTF) filtering (simulating sound coming from ±90°). As with the attended ear, the HRTF/dichotic condition was randomized and balanced within and over subjects.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data Preprocessing</ns0:head><ns0:p>The pre-processing of the AEP data is the first phase after data collection. In this study, the trials were high-pass filtered (0.5 Hz cut-off) and downsampled from the sampling rate of 8192 Hz to 128 Hz. We investigated sixteen subjects, and each trial was segmented into segments of the same length. The entire dataset was segmented into short decision windows (1 s and 2 s), and each decision window was considered an observation. The straightforward reason for selecting concise decision windows is to reduce the computational complexity and make the system faster, which helps in the early detection of hearing disorders. From each subject, 200 observations were picked, giving a total of 3200 observations. After data filtering and window selection, the AEP data of subject 1, channel 1 in the time domain is shown in Figure <ns0:ref type='figure' target='#fig_0'>2</ns0:ref>, recorded while the subject heard an auditory stimulus through headphones, labelled as left or right. The CWT is a time-frequency feature extraction approach that offers multi-scale signal refinement through scaling and translating operations. After the data pre-processing step, the segmented dataset is transformed from the time domain to the time-frequency domain using the CWT. The CWT can automatically adapt the time-frequency signal analysis criteria and clearly describe the change of signal frequency with time <ns0:ref type='bibr' target='#b54'>(Yan, Gao & Chen, 2014</ns0:ref>). The CWT is widely used for feature extraction and can be considered a mathematical tool for transforming time series into a different feature space. This study uses the CWT as a feature extraction method that converts the raw 1-D time-domain signal into a 2-D time-frequency image. The wavelet transform performs an inner-product operation between the signal and a series of wavelets. The mother wavelet ψ(t) is scaled and translated to create the wavelet set, which is a family of wavelets ψ_{S,τ}(t), shown as</ns0:p><ns0:formula xml:id='formula_0'>\psi_{S,\tau}(t) = \frac{1}{\sqrt{S}}\, \psi\!\left(\frac{t-\tau}{S}\right)<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>Here, S represents the scale parameter, inversely related to frequency, and τ represents the translation parameter.</ns0:p><ns0:p>The transform of the signal x(t) is obtained by a complex-conjugate convolution operation, mathematically defined as follows <ns0:ref type='bibr' target='#b22'>(Huang & Wang, 2018)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_1'>W(S,\tau) = \langle x(t), \psi_{S,\tau} \rangle = \frac{1}{\sqrt{S}} \int x(t)\, \psi^{*}\!\left(\frac{t-\tau}{S}\right) dt<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where ψ*(·) denotes the complex conjugate of ψ(·). This operation decomposes the signal x(t) into a series of wavelet coefficients, in which the basis functions are the wavelet family. In the equations, S and τ are the two parameters of the family of wavelets; the signal x(t) is transformed and projected onto the time and scale dimensions of the wavelet family.</ns0:p><ns0:p>In this study, we use wavelet basis functions (mother wavelets). The time-frequency images are then used as the input of the proposed TL model. The transformation process of the CWT is shown in Figure <ns0:ref type='figure' target='#fig_1'>3</ns0:ref>.</ns0:p></ns0:div>
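<ns0:div><ns0:p>A minimal Python sketch (not the authors' code) of turning one 1-D AEP decision window into a time-frequency image with the CWT is given below. The 'morl' mother wavelet and the scale range are our assumptions; the paper does not name the wavelet it uses.</ns0:p><ns0:p>
import numpy as np
import pywt

fs = 128                                   # sampling rate after downsampling
t = np.arange(0, 1, 1 / fs)                # one 1-second decision window
x = np.random.randn(t.size)                # stand-in for one AEP channel

scales = np.arange(1, 65)                  # 64 scales (inverse to frequency)
coeffs, freqs = pywt.cwt(x, scales, 'morl', sampling_period=1 / fs)
image = np.abs(coeffs)                     # |W(S, tau)| as a 2-D image
print(image.shape)                         # (64, 128): scales x time
</ns0:p></ns0:div>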
<ns0:div><ns0:p>Finally, we concatenate the data of the 64 channels into an (M×M) grid to form one observation, where the value of M is set to 8. Each observation thus provides the time-frequency information of all 64 channels. Figure <ns0:ref type='figure' target='#fig_2'>4</ns0:ref> shows the time-frequency image of the 64 channels. The proposed system uses a deep TL method based on an improved VGG16 architecture for hearing loss diagnosis. VGG16 was trained on natural images, which differ from the time-frequency images of AEPs. The improvement consists of replacing some VGG16 layers with new layers and then fine-tuning them so that the time-frequency AEP dataset fits the model.</ns0:p><ns0:p>Convolutional Neural Network Architecture. LeCun <ns0:ref type='bibr' target='#b27'>(LeCun et al., 1998)</ns0:ref> proposed convolutional neural networks (CNNs), one of the most successful pattern recognition methods. Locally trained filters are used to extract visual features from the input image. The internal layer structure of a CNN consists of convolution layers, pooling layers, and fully connected layers. The complete procedure of a CNN is shown in Figure <ns0:ref type='figure' target='#fig_4'>5</ns0:ref>.</ns0:p></ns0:div>
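<ns0:div><ns0:p>The channel concatenation just described can be sketched as follows: the 64 per-channel scalograms are tiled into an 8×8 (M×M) grid to form a single observation. The per-tile size of 28×28 pixels is an assumption chosen so that the grid matches the 224×224 model input mentioned later; any per-channel normalization is likewise not specified in the paper.</ns0:p><ns0:p>
import numpy as np

def tile_channels(scalograms, m=8):
    """Tile (64, h, w) per-channel images into one (m*h, m*w) grid (M = 8)."""
    n, h, w = scalograms.shape
    assert n == m * m, 'expects exactly m*m channel images'
    rows = [np.hstack(scalograms[r * m:(r + 1) * m]) for r in range(m)]
    return np.vstack(rows)

observation = tile_channels(np.random.rand(64, 28, 28))   # placeholder -> (224, 224)
</ns0:p></ns0:div>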
<ns0:div><ns0:head>Convolution Layer</ns0:head><ns0:p>The convolution operations provide higher-level feature representations. Several fixed-size filters are applied across the input image <ns0:ref type='bibr' target='#b42'>(Ravi et al., 2017)</ns0:ref>. Within each filter, the same weights and bias values are used over the whole image. This technique is called the weight-sharing mechanism, and it makes it possible to detect the same characteristic anywhere in the image. A neuron's local receptive field corresponds to the neuron's region in the previous layer. Let c × c be the size of the kernel (filter), and let i represent the time-frequency image. The weight and bias of the filter are denoted by w and b, respectively. The output O_{0,0} can be computed using Eq. (<ns0:ref type='formula'>3</ns0:ref>), where f denotes the activation function. This study uses the ReLU activation function <ns0:ref type='bibr'>(Nielsen)</ns0:ref>; in most classification tasks, ReLU has demonstrated superior performance in terms of accelerating convergence and mitigating the vanishing-gradient problem <ns0:ref type='bibr' target='#b24'>(Krizhevsky, Sutskever & Hinton, 2017)</ns0:ref>. The ReLU activation function is given in Eq. (<ns0:ref type='formula' target='#formula_3'>4</ns0:ref>):</ns0:p><ns0:formula xml:id='formula_3'>O_{0,0} = f\left(b + \sum_{t=0}^{c}\sum_{r=0}^{c} w_{t,r}\, i_{0+t,\,0+r}\right)<ns0:label>(3)</ns0:label></ns0:formula><ns0:formula>f(x) = \begin{cases} x & x > 0 \\ 0 & \text{else} \end{cases}<ns0:label>(4)</ns0:label></ns0:formula></ns0:div>
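<ns0:div><ns0:p>A direct numpy transcription of Eqs. (3) and (4) may make the weight-sharing idea concrete: one output activation of a c × c filter with shared weights w, bias b, and the ReLU activation f. This is purely illustrative; real models use optimized library implementations.</ns0:p><ns0:p>
import numpy as np

def relu(x):                               # Eq. (4)
    return np.maximum(x, 0.0)

def conv_output(i, w, b, y=0, x=0):        # Eq. (3) at output position (y, x)
    c = w.shape[0]
    patch = i[y:y + c, x:x + c]            # local receptive field of the neuron
    return relu(b + np.sum(w * patch))     # shared weights w and bias b

img = np.random.rand(8, 8)                 # toy time-frequency patch
kernel = np.random.randn(3, 3)
print(conv_output(img, kernel, b=0.1))
</ns0:p></ns0:div>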
<ns0:div><ns0:head>Pooling Layer</ns0:head><ns0:p>The pooling operation is applied to the feature maps that have passed through the convolution and activation function. The pooling layer computes a local average or maximum value, reducing the complexity while retaining the essential features, and thus enhances the feature extraction performance.</ns0:p><ns0:p>Fully Connected Layer</ns0:p><ns0:p>The convolutional and pooling layers alternately transform the image features; afterwards, the fully connected layer receives the image features as input. The fully connected part may contain one or more hidden layers. Each neuron multiplies the data from the previous layer by its connection weights and adds a bias value. Before transmission to the next layer, the computed value is passed through the activation function. Eq. (<ns0:ref type='formula' target='#formula_4'>5</ns0:ref>) shows the neuron computation in this layer:</ns0:p><ns0:formula xml:id='formula_4'>fc_1 = f\left(b + \sum_{q=1}^{M} w_{1,q}\, O_q\right)<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>where f is the activation function, w is the weight vector, O_q is the q-th input of the neuron, and b is the bias value.</ns0:p></ns0:div>
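<ns0:div><ns0:p>The following numpy sketch illustrates the two layer types above: a local maximum pooling operation and the fully connected neuron of Eq. (5). The 2×2 pool size is an assumption, as the paper does not state it.</ns0:p><ns0:p>
import numpy as np

def max_pool(fmap, k=2):
    """Local maximum over non-overlapping k x k regions of a feature map."""
    h, w = fmap.shape
    trimmed = fmap[:h - h % k, :w - w % k]
    return trimmed.reshape(h // k, k, w // k, k).max(axis=(1, 3))

def dense_neuron(o, w, b, f=lambda x: np.maximum(x, 0.0)):
    """Eq. (5): fc1 = f(b + sum_q w_(1,q) * O_q)."""
    return f(b + np.dot(w, o))

pooled = max_pool(np.random.rand(6, 6))
print(dense_neuron(pooled.ravel(), np.random.randn(pooled.size), b=0.0))
</ns0:p></ns0:div>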
<ns0:div><ns0:head>SoftMax</ns0:head><ns0:p>The SoftMax activation function generalizes logistic regression to multiple classes and is used in the output layer for classification purposes. It is defined by Eq. (<ns0:ref type='formula' target='#formula_6'>6</ns0:ref>) <ns0:ref type='bibr' target='#b44'>(Sermanet et al., 2013)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_6'>class_j = \frac{\exp(sf_j)}{\sum_q \exp(sf_q)}<ns0:label>(6)</ns0:label></ns0:formula></ns0:div>
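<ns0:div><ns0:p>Eq. (6) can be written in a few lines of numpy; the subtraction of the maximum is a standard numerical-stability detail not mentioned in the text.</ns0:p><ns0:p>
import numpy as np

def softmax(sf):
    e = np.exp(sf - np.max(sf))    # exp(sf_j), stabilized
    return e / e.sum()             # class_j = exp(sf_j) / sum_q exp(sf_q)

print(softmax(np.array([2.0, 0.5])))   # e.g., a two-class left/right output
</ns0:p></ns0:div>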
<ns0:div><ns0:head>Proposed Pretrained Model Building and Fine-tuning</ns0:head><ns0:p>In a convolutional neural network, the convolutional layers extract features from the dataset in different ways, whereas the fully connected layers classify the extracted features. The most straightforward approach for enhancing the feature learning capability is to increase the depth or width of the deep neural network. However, this can lead to two issues: the first concern is that a deeper or wider model typically has more parameters, rendering the expanded network more vulnerable to overfitting. The second concern is that it substantially raises the use of computing resources.</ns0:p><ns0:p>To overcome these issues and extract the AEP features efficiently, the VGG16 network utilizes several parallel layers with different convolutional kernel sizes and concatenates the outputs at the end of the pre-trained network. In the proposed TL model, we replace some layers of VGG16 with new layers to fit the AEP dataset to the pre-trained network, which enhances the hearing loss identification performance. The replacement process consists of adding some dense layers to the fully connected block of the VGG16 architecture and adding a dropout layer after every dense layer. A densely connected layer learns features from all the features of the previous layer. The dense layer performs a matrix-vector multiplication, and with the help of backpropagation, its parameters can be trained and updated. The dense layer is used to change the dimensions of the vector and applies other operations such as rotation, scaling, and translation. <ns0:ref type='bibr' target='#b34'>Mele & Altarelli (Mele & Altarelli, 1993)</ns0:ref> reported an error rate of 16.6% when testing a convolutional neural network on the CIFAR-10 dataset; the performance improved to an error rate of 15.6% when a dropout layer was utilized in the last hidden layer. We add a dropout layer after every dense layer in the fully connected block to reduce the model complexity and prevent overfitting. A neuron is temporarily dropped with probability p at each iteration; at every training step, the dropped-out neurons are resampled with probability p, so a neuron dropped at one step may be active at the next. Here, the hyperparameter p is the dropout rate. Since VGG16 uses the 'ImageNet' weights, which were trained on natural images that are not similar to our time-frequency images, more layers need to be fine-tuned, i.e., their 'ImageNet' weights are updated during training. This process helps fit the time-frequency images to the TL architecture. The proposed fine-tuning consists of unfreezing some pre-trained network layers and re-training them with the AEP dataset.</ns0:p><ns0:p>In the proposed approach, we first remove all layers of VGG16 after the first 3x3 convolution layer of convolutional block 5, as shown in Figure <ns0:ref type='figure' target='#fig_7'>6</ns0:ref>, and place the fully connected block there. We then add multiple dense layers at the end of the VGG16 model and insert a dropout layer after every dense layer. In the case of a CNN, the convolutional layers extract the features from the dataset, whereas the fully connected layers classify the extracted features.
Consequently, adding more layers to the dense section can strengthen the network's robustness and improve the classification accuracy. Therefore, instead of using the two dense layers of VGG16, we add three new dense layers with 1024, 512, and 288 units to the fully connected block. We then add a dropout layer after each dense layer, with the dropout values set to 0.2, 0.4, and 0.6, respectively. The reason for adding the dropout layers is that a deep learning model loses performance due to overfitting, and the dropout layers reduce the model complexity and prevent overfitting. These techniques help enhance the performance of the hearing loss diagnosis. We also remove the top layer and add a SoftMax (output) layer sized to the targeted classes. Based on hyperparameter tuning, the proposed approach uses the 'Adam' optimizer to adjust the network weights with a batch size of 64, and the learning rate is set to 0.0001. The parameter selection is made with the help of the 'Keras-Tuner' library, which helps select the most suitable set of hyperparameters for our architecture. Hyperparameters are the variables that govern the training process and the structure of the DL model. There are two types of hyperparameters: model hyperparameters, which help select the number and width of the hidden layers, and algorithm hyperparameters, which influence the speed and quality of the learning algorithm. All hyperparameters selected to build the proposed architecture are based on ten different runs of the model. The steps used to train the model for hearing loss identification are shown in Box 1, and a sketch of the resulting architecture is given below. Detailed information on the parameters of the proposed TL architecture is shown in Table <ns0:ref type='table'>1</ns0:ref>. Here, C denotes the targeted classes.</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref>: Parameters of the proposed TL architecture.</ns0:p><ns0:p>During the training process, all layers before convolutional block four are frozen. The weights are updated in the trainable layers, which helps minimize the error between the predicted labels and the actual labels. The complete architecture of the proposed TL is shown in Figure <ns0:ref type='figure' target='#fig_7'>6</ns0:ref>.</ns0:p></ns0:div>
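<ns0:div><ns0:p>The architecture described above can be sketched in Keras as follows: VGG16 with 'ImageNet' weights is cut after the first 3×3 convolution of block 5, three dense layers (1024/512/288 units) each followed by dropout (0.2/0.4/0.6) are added, a SoftMax output layer is attached, all layers before convolutional block 4 are frozen, and training uses Adam with a learning rate of 0.0001 and a batch size of 64. The Flatten layer and the categorical cross-entropy loss are assumptions not stated in the paper.</ns0:p><ns0:p>
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

base = tf.keras.applications.VGG16(weights='imagenet', include_top=False,
                                   input_shape=(224, 224, 3))
x = base.get_layer('block5_conv1').output          # discard all later layers
x = layers.Flatten()(x)                            # assumed bridge to the dense block
for units, rate in [(1024, 0.2), (512, 0.4), (288, 0.6)]:
    x = layers.Dense(units, activation='relu')(x)
    x = layers.Dropout(rate)(x)
outputs = layers.Dense(2, activation='softmax')(x) # C = 2 target classes
model = models.Model(base.input, outputs)

for layer in base.layers:                          # freeze everything before block 4
    layer.trainable = not layer.name.startswith(('block1', 'block2', 'block3'))

model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
              loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(x_train, y_train, batch_size=64, epochs=100,
#           validation_data=(x_test, y_test))
</ns0:p></ns0:div>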
<ns0:div><ns0:head>Result of the Experiment and Analysis</ns0:head><ns0:p>This section presents the performance of the proposed hearing loss diagnosis method based on the CWT and the deep CNN architecture (improved-VGG16). First, we converted the time-domain signals to time-frequency-domain images. The images were then resized to 224 (height) × 224 (width) × 3 (depth), the input size of the proposed model. In this study, two different decision windows were tested: 1s and 2s. This term refers to the quantity of data required to make a single left/right decision. The practical reason for selecting the shorter decision window is to detect the hearing condition quickly. The entire dataset was randomly split into a training set and a testing set: 70% of the dataset was used to train the architecture, and the rest was used to validate the model. The experiment was conducted with sixteen subjects listening to the auditory tracks. Based on the ear with which the subject hears the auditory track, the dataset was divided into two classes: 'Class1' means the subject hears the auditory track with the left ear, and 'Class2' means the subject hears it with the right ear. For both the 1s and 2s decision windows, we randomly selected 200 observations from each subject. A total of 2240 observations were used for training the model and 960 observations for testing its performance.</ns0:p><ns0:p>For the 1s window length, the performance of the proposed approach for each subject, in terms of accuracy, precision, recall, F1-score, and Cohen's kappa, is shown in Table <ns0:ref type='table'>2</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table'>2</ns0:ref>: Performance of the proposed model for the 1s decision window.</ns0:p><ns0:p>Table <ns0:ref type='table'>2</ns0:ref> shows that in the case of subject-5, subject-7, and subject-16, our network achieves 100% accuracy. Except for six subjects (3, 6, 9, 11, 13, and 14), all subjects achieved more than 90% accuracy. Comparatively lower classification accuracy was observed for Subject-3 (86.67%), Subject-6 (83.33%), Subject-9 (76.67%), Subject-11 (81.67%), Subject-13 (78.33%), and Subject-14 (76.67%).</ns0:p></ns0:div>
<ns0:div><ns0:p>For the 1s decision window length, the average classification accuracy is 91.56%, with a standard deviation of 8.91%. Besides classification accuracy, other performance measures (precision, recall, F1-score, and Cohen's kappa) were also calculated to assess the proposed model. The average values of precision, recall, F1-score, and Cohen's kappa over the sixteen subjects are 90.74%, 93.63%, 91.92%, and 82.71%, respectively, with standard deviations of 10.47%, 8.25%, 8.79%, and 18.34%, respectively. Figure <ns0:ref type='figure'>7</ns0:ref> shows the overall accuracy and loss curves of the proposed TL method for the 1s decision window.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>7</ns0:ref>: The overall accuracy and loss curve of the proposed TL method for the 1s decision window.</ns0:p><ns0:p>For the 2s window length, the performance of the proposed architecture is shown in Table <ns0:ref type='table' target='#tab_0'>3</ns0:ref>. In this case, 100% accuracy was achieved for subject-6, subject-7, subject-10, and subject-16. For subject-16, we achieved 1.67% higher accuracy than in the 1s window analysis. Moreover, the proposed architecture achieves at least 90% accuracy for every subject with the 2s decision windows; the lowest accuracy, 90%, was obtained for subject-13. With the 2s decision window, the average accuracy, precision, recall, F1-score, and Cohen's kappa over the sixteen subjects are 96.87%, 96.49%, 97.57%, 97%, and 93.73%, respectively; the corresponding standard deviations are 2.78%, 3.50%, 2.76%, 2.64%, and 5.57%. Figure <ns0:ref type='figure'>8</ns0:ref> shows the overall accuracy and loss curves of the proposed TL method.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>8</ns0:ref>: The overall accuracy and loss curve of the proposed TL method for the 2s decision window.</ns0:p><ns0:p>To illustrate the performance of the proposed TL model in depth, the confusion matrices of all subjects are given separately. A confusion matrix can be used to visually assess the classification performance of a model. Figure <ns0:ref type='figure' target='#fig_8'>9</ns0:ref> shows the confusion matrices for the 1s decision window analysis, whereas Figure <ns0:ref type='figure' target='#fig_9'>10</ns0:ref> shows those for the 2s decision window analysis. In both figures, the letters A to P denote the confusion matrices of subject-1 to subject-16, respectively. The correct predictions lie on the diagonal of the confusion matrix, while the incorrect predictions lie off the diagonal. For example, in the case of Figure <ns0:ref type='figure' target='#fig_9'>10</ns0:ref>(A), which corresponds to subject-1, a total of 59 observations (29 observations of class1, 30 observations of class2) were recognized accurately among 60 observations. In both decision-window settings, the total testing set for the sixteen subjects consists of 960 observations, of which 464 belong to 'Class1' and 496 to 'Class2'. For the 1s decision windows, our network correctly detects 876 observations, whilst 84 observations are misclassified (shown in Figure <ns0:ref type='figure' target='#fig_8'>9</ns0:ref>).
For the 2s decision windows, on the other hand, 930 observations are detected accurately, whereas only 30 observations are misclassified (shown in Figure <ns0:ref type='figure' target='#fig_9'>10</ns0:ref>). Therefore, the 2s decision windows provide significantly better performance than the 1s decision windows.</ns0:p></ns0:div>
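<ns0:div><ns0:p>A hedged sketch of the evaluation pipeline described above: the 70%/30% split and the five reported measures, computed here with scikit-learn. The variable names (X, y, model) are illustrative placeholders continuing the model sketch given earlier.</ns0:p><ns0:p>
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, cohen_kappa_score, confusion_matrix)

# X: (3200, 224, 224, 3) tiled scalograms; y: 0 = Class1 (left), 1 = Class2 (right)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
y_pred = model.predict(X_test).argmax(axis=1)   # model from the sketch above

print('accuracy :', accuracy_score(y_test, y_pred))
print('precision:', precision_score(y_test, y_pred))
print('recall   :', recall_score(y_test, y_pred))
print('f1-score :', f1_score(y_test, y_pred))
print('kappa    :', cohen_kappa_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))         # diagonal = correct decisions
</ns0:p></ns0:div>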
<ns0:div><ns0:p>Furthermore, to study the relationship between window length and detection performance, this study includes a comparison. Figure <ns0:ref type='figure' target='#fig_10'>11</ns0:ref> visualizes the average performance of the two decision windows for our network. Figure <ns0:ref type='figure' target='#fig_10'>11</ns0:ref> shows that the proposed TL network with a 2s decision window significantly improves the recognition accuracy compared to the 1s decision window analysis. The main goal of this study is to enhance the performance of detecting the hearing condition with a concise decision window, so that the system can be used efficiently in real-life applications. For this purpose, we first analyzed the 1s decision window and achieved 91.56% recognition accuracy, which is still not high enough for real-life application. To further enhance the performance of our proposed diagnosis system, we moved on to the 2s decision window length, and this time we achieved a 5.31% improvement in accuracy compared to the 1s decision window length. For the other performance measures, namely precision, recall, F1-score, and Cohen's kappa, we achieved improvements of 5.74%, 3.94%, 5.08%, and 11.02%, respectively. This improvement indicates the robustness and applicability of our proposed system.</ns0:p><ns0:p>Despite the impressive performance of the proposed system, in some cases the performance of our network is unsatisfactory. A possible reason for this poorer performance compared to the other, successful cases is that in EEG-based BCI application studies, a small SNR and various noise sources are among the greatest challenges. Unwanted signals contained in the main signal can be termed noise, artifacts, or interference. Sometimes, the brain may produce unwanted noise due to a lack of proper attention or muscle movements of the subject, affecting the detection results. In the experiment, we selected concise decision windows (1s and 2s); working with a short window has many advantages but remains very challenging <ns0:ref type='bibr' target='#b16'>(Geirnaert, Francart & Bertrand, 2020)</ns0:ref>. For these possible reasons, some subjects may show lower accuracy than the other subjects (shown in Table 2 and Table <ns0:ref type='table' target='#tab_0'>3</ns0:ref>). For the 2s decision window length, if we exclude the two subjects that performed more poorly than the other subjects (shown in Table <ns0:ref type='table' target='#tab_0'>3</ns0:ref>), we would achieve 97.62% recognition accuracy. Overall, the average training and testing accuracy of the sixteen subjects with the 2s window length after 100 epochs is 100% and 96.87%, respectively, with a standard deviation of 2.78%.</ns0:p><ns0:p>Furthermore, to study the robustness of the proposed method with a 2s decision window (the 1s decision window is not considered in the subsequent analysis), the performance of the proposed model has been compared with other widely used TL architectures.
Six popular transfer learning algorithms, namely InceptionResNetV2 <ns0:ref type='bibr' target='#b25'>(Längkvist, Karlsson & Loutfi, 2014)</ns0:ref>, MobileNet <ns0:ref type='bibr' target='#b40'>(Pan et al., 2020)</ns0:ref>, ResNet50 <ns0:ref type='bibr' target='#b19'>(He et al., 2016)</ns0:ref>, VGG16 <ns0:ref type='bibr' target='#b47'>(Simonyan & Zisserman, 2015)</ns0:ref>, VGG19 <ns0:ref type='bibr' target='#b47'>(Simonyan & Zisserman, 2015)</ns0:ref>, and Xception <ns0:ref type='bibr' target='#b6'>(Chollet, 2017)</ns0:ref>, were applied to the time-frequency images of the AEP dataset for hearing loss diagnosis. The input size is the same (224 × 224 × 3) for all TL architectures. Figure <ns0:ref type='figure' target='#fig_0'>12</ns0:ref> illustrates the performance comparison of the six popular TL models with the proposed model. According to Figure <ns0:ref type='figure' target='#fig_0'>12</ns0:ref>, the proposed model achieved higher accuracy than the other TL models.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_0'>12</ns0:ref>: The performance comparison with other pre-trained architectures.</ns0:p><ns0:p>We also reduced the number of model parameters of VGG16, which helps reduce the model complexity and the required computational resources. The total number of parameters and the performance of all models are reported in Table <ns0:ref type='table'>4</ns0:ref>. Table <ns0:ref type='table'>4</ns0:ref> shows that the overall accuracy is less than 61% for all pre-trained networks when the models use plain pre-trained 'ImageNet' weights for hearing impairment identification.</ns0:p><ns0:p>Table <ns0:ref type='table'>4</ns0:ref>: Performance comparison with six popular TL models.</ns0:p><ns0:p>In the proposed TL method (improved-VGG16), we reduced the total number of parameters of VGG16 from 134,268,738 to 113,429,666. Although we reduced the number of parameters, the testing accuracy still improved from 57.37% to 96.87%. The reason for the higher accuracy of the proposed model compared to the other TL models is the replacement of some VGG16 layers with new layers and the fine-tuning of the higher-level parameters, which helps fit the AEP dataset to the pre-trained network. This replacement consists of adding some dense layers to the fully connected block of the VGG16 architecture and adding a dropout layer after every dense layer (shown in Figure <ns0:ref type='figure' target='#fig_7'>6</ns0:ref>). In the fine-tuning block, the 'ImageNet' weights are updated using the time-frequency images. This technique helps fit the dataset to the proposed TL architecture and enhances the overall performance of the hearing loss diagnosis. The experiments were carried out in Python on Google Colab (Windows 10, Intel(R) Xeon(R) CPU @ 2.30GHz, Tesla K80, CUDA version 10.1).</ns0:p></ns0:div>
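<ns0:div><ns0:p>The baseline comparison can be reproduced along the following lines: each backbone is used with plain pre-trained 'ImageNet' weights as a frozen feature extractor on the same 224×224×3 input. The pooling head and training settings are assumptions, since Table 4 only reports parameter counts and accuracies.</ns0:p><ns0:p>
from tensorflow.keras import layers, models, applications

backbones = {
    'InceptionResNetV2': applications.InceptionResNetV2,
    'MobileNet': applications.MobileNet,
    'ResNet50': applications.ResNet50,
    'VGG16': applications.VGG16,
    'VGG19': applications.VGG19,
    'Xception': applications.Xception,
}

for name, build in backbones.items():
    base = build(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
    base.trainable = False                        # pre-trained weights kept fixed
    x = layers.GlobalAveragePooling2D()(base.output)
    out = layers.Dense(2, activation='softmax')(x)
    model = models.Model(base.input, out)
    model.compile('adam', 'categorical_crossentropy', metrics=['accuracy'])
    # model.fit(...); model.evaluate(...)         # yields a Table 4 style comparison
</ns0:p></ns0:div>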
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>A hearing deficiency detection method based on the CWT and an improved VGG16 is proposed in this paper; with short decision windows (2s), it significantly outperforms the previous state-of-the-art studies. The proposed improved-VGG16 architecture achieved an average accuracy, precision, recall, F1-score, and Cohen's kappa of 96.87%, 96.50%, 97.58%, 97.01%, and 93.74%, respectively.</ns0:p><ns0:p>From Figure <ns0:ref type='figure' target='#fig_0'>12</ns0:ref>, it is clear that our network achieved an improvement of more than 35% over the other TL algorithms. In this experiment, we also found a significant effect of the decision window length on the overall performance. Compared to the 1s decision window, the 2s decision window improved accuracy by 5.31%, precision by 5.74%, recall by 3.94%, F1-score by 5.08%, and Cohen's kappa by 11.02%. The improvement arises because the concise (1s) decision windows contain less information and sometimes yield unsatisfactory performance. However, this study aims to build an efficient network that can detect the hearing condition within a concise decision window, so that decisions can be reached quickly and the system can be more effective in real-life applications. Furthermore, a comparison of the proposed model with existing related studies is presented in Table <ns0:ref type='table' target='#tab_1'>5</ns0:ref>. As seen in Table <ns0:ref type='table' target='#tab_1'>5</ns0:ref>, <ns0:ref type='bibr' target='#b18'>Hallac et al. (Hallac et al., 2019)</ns0:ref> and Dass et al. <ns0:ref type='bibr' target='#b11'>(Dass, Holi & Soundararajan, 2016)</ns0:ref> utilized convolutional neural-network-based classification approaches and achieved higher accuracy than the other related studies. Hallac et al. <ns0:ref type='bibr' target='#b18'>(Hallac et al., 2019)</ns0:ref> reported that with raw AEP data and a CNN, they achieved 94.1% accuracy. Dass et al. <ns0:ref type='bibr' target='#b11'>(Dass, Holi & Soundararajan, 2016</ns0:ref>) used both time- and frequency-domain features to extract information from the raw AEP data; they used a feed-forward multilayer network to classify the AEP signals and achieved 90.74% testing accuracy. Both studies achieved very encouraging performance but need more testing observations to validate the robustness of the models.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_1'>5</ns0:ref>: Performance comparison of related AEP studies.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b13'>(Dietl & Weiss, 2004)</ns0:ref>, <ns0:ref type='bibr' target='#b31'>(Mahmud et al., 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b49'>(Tan et al., 2013)</ns0:ref>, and <ns0:ref type='bibr' target='#b28'>(Li et al., 2019)</ns0:ref>, SVM classifiers were used to classify the AEP datasets. These approaches achieved 78.80%, 85.71%, 87%, and 78.7% accuracy, respectively, which is not sufficient for applying the models in real-life applications. Tang et al. <ns0:ref type='bibr' target='#b50'>(Tang & Lee, 2019)</ns0:ref> proposed a TS-PSO hybrid model to classify a two-class AEP dataset; they used wavelet entropy as a feature extraction method and achieved 86.17% testing accuracy. Zhang et al. <ns0:ref type='bibr' target='#b55'>(Zhang et al., 2006)</ns0:ref> proposed a combination of wavelet analysis and Bayesian networks to classify auditory brainstem response (ABR) signals; for the wavelet analysis, they used the DWT method.
Although they conducted an excellent analysis, the reported overall accuracy of 78.80% needs improvement.</ns0:p><ns0:p>The experimental outcomes demonstrate that the proposed architecture attains more impressive performance than the other related studies on hearing deficiency diagnosis reported in the literature. Although the proposed approach outperforms state-of-the-art hearing deficiency detection methods, some difficulties were also faced during the experimental analysis. For example, to perform cross-validation and prove the feasibility of our proposed network, a wide range of similar datasets would be needed; however, we did not find such a dataset for further validation of the proposed method. Another issue is the absence of clear speech envelopes in the dataset. In previous research, several types of EEG headsets with different numbers of electrodes (1-256) were used to detect hearing conditions; therefore, the number of electrodes, and which electrodes are required to achieve acceptable performance, should be determined <ns0:ref type='bibr' target='#b35'>(Mirkovic et al., 2015</ns0:ref><ns0:ref type='bibr' target='#b36'>)(Montoya-Martínez, Bertrand & Francart, 2019)</ns0:ref> <ns0:ref type='bibr' target='#b37'>(Narayanan & Bertrand, 2018)</ns0:ref>. Most studies have been carried out with conventional machine learning algorithms, and only a few have investigated deep learning approaches <ns0:ref type='bibr' target='#b24'>(Krizhevsky, Sutskever & Hinton, 2017)</ns0:ref> <ns0:ref type='bibr' target='#b39'>(Nossier et al., 2019)</ns0:ref> <ns0:ref type='bibr' target='#b45'>(Shao et al., 2019)</ns0:ref>. However, the testing accuracy of most studies is not sufficient to use the models in real-time and real-life applications. A fast and more accurate approach can be an efficient tool for future hearing devices and offers great potential for real-life use. Our study combines the time-frequency distribution with a deep learning method and achieves performance superior to the other related hearing loss diagnosis approaches reported in the literature. The key advantages of our proposed method compared to previous studies are as follows:</ns0:p><ns0:p>Instead of training the AEP dataset on a deep learning architecture from scratch, the proposed study is conducted with a transfer learning strategy, which helps in faster training and better accuracy.</ns0:p><ns0:p>To fit our time-frequency AEP dataset to the pre-trained model weights, we fine-tuned some higher-level parameters, whereby the pre-trained weights are updated with the provided dataset. This strategy helps enhance the overall performance for detecting hearing deficiency.</ns0:p><ns0:p>We compared the model's performance with six popular TL methods, including VGG16 <ns0:ref type='bibr' target='#b47'>(Simonyan & Zisserman, 2015)</ns0:ref>, VGG19 <ns0:ref type='bibr' target='#b47'>(Simonyan & Zisserman, 2015)</ns0:ref>, MobileNet <ns0:ref type='bibr' target='#b40'>(Pan et al., 2020)</ns0:ref>, ResNet50 <ns0:ref type='bibr' target='#b19'>(He et al., 2016)</ns0:ref>, InceptionResNetV2 <ns0:ref type='bibr' target='#b25'>(Längkvist, Karlsson & Loutfi, 2014)</ns0:ref>, and Xception <ns0:ref type='bibr' target='#b6'>(Chollet, 2017)</ns0:ref>; the proposed architecture is superior for hearing deficiency diagnosis.
We also changed some higher-level parameters: after the first layer of convolutional block 5, we remove all layers and add the new fully connected block (shown in Figure <ns0:ref type='figure' target='#fig_4'>5</ns0:ref>). This approach also helps reduce the number of VGG16 parameters and increase the performance of the proposed improved-VGG16 model.</ns0:p><ns0:p>The proposed approach achieved the highest classification accuracy of 96.87% compared to the previous studies <ns0:ref type='bibr' target='#b7'>(Ciccarelli et al., 2019</ns0:ref><ns0:ref type='bibr' target='#b33'>) (McKearney & MacKinnon, 2019)</ns0:ref> <ns0:ref type='bibr' target='#b23'>(Ibrahim, Ting & Moghavvemi, 2019)</ns0:ref> <ns0:ref type='bibr' target='#b13'>(Dietl & Weiss, 2004)</ns0:ref> <ns0:ref type='bibr' target='#b50'>(Tang & Lee, 2019)</ns0:ref> <ns0:ref type='bibr' target='#b43'>(Sanjay et al., 2020)</ns0:ref> <ns0:ref type='bibr' target='#b53'>(Xue et al., 2018)</ns0:ref> <ns0:ref type='bibr' target='#b55'>(Zhang et al., 2006)</ns0:ref> <ns0:ref type='bibr' target='#b50'>(Tang & Lee, 2019)</ns0:ref> <ns0:ref type='bibr' target='#b31'>(Mahmud et al., 2019</ns0:ref>) <ns0:ref type='bibr' target='#b13'>(Dietl & Weiss, 2004)</ns0:ref> <ns0:ref type='bibr' target='#b55'>(Zhang et al., 2006)</ns0:ref> <ns0:ref type='bibr' target='#b49'>(Tan et al., 2013)</ns0:ref> <ns0:ref type='bibr' target='#b28'>(Li et al., 2019)</ns0:ref> <ns0:ref type='bibr' target='#b18'>(Hallac et al., 2019)</ns0:ref> <ns0:ref type='bibr' target='#b11'>(Dass, Holi & Soundararajan, 2016)</ns0:ref>.</ns0:p><ns0:p>The impact of different decision windows is also examined in the proposed study, and our network provides a significant outcome even with a concise decision window.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>The proposed hearing loss diagnosis framework consists of two major steps: signal-to-image transformation and building a hearing deficiency diagnosis system using a deep CNN. In the proposed study, the CWT is used to convert the raw signals into time-frequency images. Then, the CNN-based improved-VGG16 is used to classify the time-frequency images. This approach achieves better outcomes with fewer trainable parameters, which helps reduce the training time of the model. The applicability and effectiveness of the proposed method were verified on a publicly available AEP dataset, on which it achieved 96.87% testing accuracy with a concise decision window. Moreover, this study will help identify early hearing disorders efficiently. Because of the unstable and subject-specific characteristics of the AEP signal, identification of the AEP signal is challenging. Thus, to enhance the accuracy of the detection system, other AEP features need to be investigated, and the use of more data variance and conditions could also improve the outcome.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 1</ns0:head><ns0:p>The overall procedure of the hearing deficiency diagnosis method.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3: The Transformation process from time-domain signal to time-frequency domain image.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: The time-frequency image of 64 channels data.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5: Typical Convolutional Neural Network Architecture.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Box 1: Training procedure of the proposed TL architecture.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6: Transfer Learning Procedure of the Proposed Method.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 9 :</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9: Confusion matrix for 1s decision windows (A)subject1 (B)subject2 (C)subject3 (D)subject4 (E)subject5 (F)subject6 (G)subject7 (H)subject8 (I)subject9 (J)subject10 (K)subject11 (L)subject12 (M)subject13 (N)subject14 (O)subject15 (P)subject16</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 10 :</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10: Confusion matrix for 2s decision windows (A)subject1 (B)subject2 (C)subject3 (D)subject4 (E)subject5 (F)subject6 (G)subject7 (H)subject8 (I)subject9 (J)subject10 (K)subject11 (L)subject12 (M)subject13 (N)subject14 (O)subject15 (P)subject16</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 11 :</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11: Hearing deficiency detection performance of the proposed TL architecture for two different window lengths.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,178.87,525.00,435.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,199.12,525.00,337.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,178.87,525.00,167.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,178.87,525.00,141.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,70.87,309.15,672.95' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='30,42.52,178.87,525.00,292.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='31,42.52,178.87,525.00,292.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='32,42.52,219.37,525.00,420.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='34,42.52,199.12,525.00,301.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='35,42.52,178.87,525.00,294.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Performance of proposed model for 2s decision window.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Performance comparison of related AEP studies.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Reference</ns0:cell><ns0:cell>Year</ns0:cell><ns0:cell>Subjects</ns0:cell><ns0:cell>Classes</ns0:cell><ns0:cell>Feature Extraction</ns0:cell><ns0:cell>Classification Method</ns0:cell><ns0:cell>Classification Accuracy (%)</ns0:cell></ns0:row><ns0:row><ns0:cell>[15]</ns0:cell><ns0:cell>2018</ns0:cell><ns0:cell>180</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>WE</ns0:cell><ns0:cell>TS-PSO</ns0:cell><ns0:cell>86.17</ns0:cell></ns0:row><ns0:row><ns0:cell>[32]</ns0:cell><ns0:cell>2018</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>Global and nodal graph</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>85.71</ns0:cell></ns0:row><ns0:row><ns0:cell>[14]</ns0:cell><ns0:cell>2004</ns0:cell><ns0:cell>200</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>WPT</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>74.7</ns0:cell></ns0:row><ns0:row><ns0:cell>[18]</ns0:cell><ns0:cell>2006</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>DWT</ns0:cell><ns0:cell>Bayesian network classification</ns0:cell><ns0:cell>78.80</ns0:cell></ns0:row><ns0:row><ns0:cell>[33]</ns0:cell><ns0:cell>2013</ns0:cell><ns0:cell>39</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>SIFT</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>87</ns0:cell></ns0:row><ns0:row><ns0:cell>[19]</ns0:cell><ns0:cell>2019</ns0:cell><ns0:cell>Observations: 671</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>FFT</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>78.7</ns0:cell></ns0:row><ns0:row><ns0:cell>[34]</ns0:cell><ns0:cell>2019</ns0:cell><ns0:cell>Observations: 671</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>Raw AEP</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell>94.1</ns0:cell></ns0:row><ns0:row><ns0:cell>[35]</ns0:cell><ns0:cell>2016</ns0:cell><ns0:cell>Observations: 280; Subjects: 151</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>Latency, FFT and DWT</ns0:cell><ns0:cell>Feed-forward multilayer perceptron</ns0:cell><ns0:cell>90.74</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed Work</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>Observations: 3200</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>CWT</ns0:cell><ns0:cell>Improved-VGG16</ns0:cell><ns0:cell>96.87</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Editor’s Comment:
The paper was evaluated by two reviewers who provided thorough comments and suggestions. Both of them find that the paper contains effort to conduct the proposed tasks. The reviewers have given some additional comments and suggestions on how the paper could be improved. Please see their full comments for that. I suggest the authors highlight all modifications in the revised version and answer point-by-point the reviewer's comments. It is also expected that the authors can clearly highlight the feasibility of the experiments.
Author Response:
We would like to express our gratitude for giving us the opportunity to submit a revised draft of our manuscript titled 'Diagnosis of hearing deficiency using EEG based AEP signals: CWT and improved-VGG16 pipeline'. We appreciate the time and effort that you and the reviewers have dedicated to providing your valuable feedback on our manuscript, and we are grateful to the reviewers for their insightful comments. We have incorporated changes to reflect most of the suggestions provided by the reviewers. We are uploading (a) our point-by-point response to the comments (below, 'Author Response') and (b) a revised manuscript in which the feasibility of our experiments is highlighted. We have highlighted all changes within the manuscript.
Here is a point-by-point response to the reviewers’ comments and concerns.
Reviewer’s Comment (Reviewer 1):
Basic reporting
In this study authors aimed to develop a robust intelligent auditory sensation system utilizing a pre-train deep learning framework by analyzing and evaluating the functional reliability of the hearing based on the AEP response. The proposed method outperformed the state-of-art studies by improving the classification accuracy from 57.375% to 96.87%. The subject of the paper is interesting and hopefully it will be eventually useful.
Author Response
We appreciate the reviewer’s comments.
Reviewer’s Comment-1
In line 179, please add the units of “224*224”.
Author Response-1
We agree with the reviewer's concern. In the revised manuscript, we added the units of “224*224” (Line 200).
Reviewer’s Comment-2
In line 211, please explain why the raw signal was converted into a time-frequency image using a “2-second” time window.
Author Response-2
We appreciate the reviewer's concern. Our study aims to enhance the performance with a concise time window. The practical reason behind selecting the shorter time window is to detect the hearing condition quickly. A concise window contains less information, and it is more difficult to achieve high performance with it, but it provides an effective solution for the early detection of hearing disorders. Moreover, selecting a short decision window makes the system faster by reducing its computational complexity. Additionally, we include the analysis of the 1s decision window in the revised manuscript (section ‘Result of the Experiment and Analysis’). In the revised manuscript, we explain the effect of different time windows on hearing deficiency diagnosis and include a comparison and the reason for selecting a concise time window (the changes can be found in line: 143-148, line: 178-180, line: 233-236, Table 2, Figure 7, Figure 9, Figure 11, line: 489-499, line: 557-564).
Reviewer’s Comment-3
In equation 3, please the explain the “b”, “c”, “t” and “r”.
Author Response-3
Thank you for your suggestion. In the revised manuscript, we have included an explanation of the convolutional function (line: 300-307).
Reviewer’s Comment-4
In line 334-340, please explain why the new dense layers and dropout layer were added in VGG16, and how to select the parameters like dropout value, batch size and learning rate. Or authors should compare different architectures and parameters to test model performance.
Author Response-4
Thank you for pointing this out; we greatly appreciate this comment. In the revised manuscript, we have explained the reasons for adding new layers to the VGG16 architecture and included the parameter selection technique used to build our proposed architecture (line: 351-355, line: 368-384).
Reviewer’s Comment-5
In Discussion, the content of algorithms comparisons should be moved in “Result of the Experiment and Analysis”.
Author Response-5
We appreciate the reviewer’s suggestion. In the revised manuscript, we have moved the algorithms comparisons content in ‘Result of the Experiment and Analysis’ from the ‘Discussion’ section.
Reviewer’s Comment-6
There are some typos like “This operation” in line 239.
Author Response-6
We agree with the reviewer's concern and have addressed the issues accordingly in the revised manuscript.
Reviewer’s Comment (Reviewer 2):
Basic reporting
a. the article must be clearly written and use unambiguous, technically correct text. Several issues have been marked in the attachment. Please modify other paragraphs that are easy to be ambiguous.
b. the article should include sufficient introduction and background to demonstrate how the work fits into the broader field of knowledge. Relevant prior literature should be appropriately referenced. There are already many commercial products, and their problems have not appeared in this article.
Author Response
(a) We greatly appreciate this comment. The annotated attachment provided here was really helpful for us. We have revised the entire manuscript and tried to avoid ambiguous text.
(b) In the revised manuscript, we added a detailed background along with the existing technology for hearing deficiency diagnosis to make the introduction comprehensive. In the introduction, we also describe the existing problems in the application of hearing aids.
Validity of the findings
The experimental results obtained by the author are higher than those of previous studies.
However, the author's experimental cases are too few to prove the feasibility scientifically, which needs to be highlighted in the Discussion part.
Author Response
We appreciate the reviewer's suggestion. In the discussion section, we included this concern as well as some existing limitations of hearing deficiency diagnosis (Line: 583-596).
Reviewer’s Comment-1
Scientific research on hearing impairment is of great practical significance. Early hearing screening is helpful to improve the problem. The study of this paper aims to propose a robust intelligent auditory sensation system utilizing a pre-train deep learning framework by analyzing and evaluating the functional reliability of the hearing based on the AEP response. The experimental results obtained by the author are higher than those of previous studies. However, the author's experimental cases are too few to prove the feasibility scientifically, which needs to be highlighted in the Discussion part.
Author Response-1
Thank you for raising an important point. In the discussion section, we included this concern as well as some existing limitations of hearing deficiency diagnosis (Line: 583-596).
" | Here is a paper. Please give your review comments after reading it. |
185 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Three-dimensional (3D) data of many kinds is produced at an increasing rate throughout all scientific disciplines. The Portable Document Format (PDF) is the de-facto standard for the exchange of electronic documents and allows for embedding three-dimensional models. Therefore, it is a well suited medium for the visualization and the publication of this kind of data. The generation of the appropriate files has been cumbersome so far. This article presents the first release of a software toolbox which integrates the complete workflow for generating 3D model files and ready-to-publish 3D PDF documents for scholarly publications in a consolidated working environment. It can be used out-of-the-box as a simple working tool or as a basis for specifically tailored solutions. A comprehensive documentation, an example project and a project wizard facilitate the customization. It is available royalty-free and for Windows, MacOS and Linux.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Throughout many scientific disciplines, the availability -and thus the importance -of three-dimensional (3D) data has grown in recent years. Consequently, this data is often the basis for scientific publications, and in order to avoid a loss of information, the visualization of this data should be 3D whenever possible (Tory & Möller, 2004). In contrast, almost all contemporary visualization means (paper printouts, computer screens, etc.) only provide a two-dimensional (2D) interface.</ns0:p><ns0:p>The most common workaround for this limitation is to project the 3D data onto the available 2D plane <ns0:ref type='bibr' target='#b31'>(Newe, 2015)</ns0:ref>, which results in the so-called '2.5D visualization' (Tory & Möller, 2004). This projection yields two main problems: limited depth perception and objects that occlude each other. A simple but effective solution to these problems is interaction: by changing the projection angle of a 2.5D visualization (i.e., by changing the point of view), depth perception is improved (Tory & Möller, 2004), and at the same time objects that had previously been occluded (e.g., the backside) can be brought into sight.</ns0:p><ns0:p>A means of applying this simple solution has been available for many years: the Portable Document Format (PDF) from Adobe (Adobe, 2014). This file format is the de-facto standard for the exchange of electronic documents, and almost every scientific article that is published nowadays is available as PDF -as are even articles from the middle of the last century (Hugh-Jones, 1955). PDF allows for embedding 3D models, and the Adobe Reader (http://get.adobe.com/reader/otherversions/) can be used to display these models interactively.</ns0:p><ns0:p>Nevertheless, this technology seems not to have found broad acceptance among the scientific community until now, although journals encourage authors to use it <ns0:ref type='bibr' target='#b27'>(Maunsell, 2010;</ns0:ref><ns0:ref type='bibr'>Elsevier, 2015)</ns0:ref>. One reason might be that the creation of the appropriate model files and of the final PDF documents is still cumbersome. Not everything that is technically possible is accepted by those who are expected to embrace an innovation if its application is hampered by inconveniences <ns0:ref type='bibr' target='#b18'>(Hurd, 2000)</ns0:ref>. Generally suitable protocols and procedures have been proposed by a number of authors before, but they all required a toolchain of at least three different software tools.</ns0:p><ns0:p>This article presents a comprehensive and highly integrated software tool for the creation of both the 3D model files (which can be embedded into PDF documents) and the final, ready-to-publish PDF documents with embedded interactive 3D figures. The presented solution is based on MeVisLab, available for all major operating systems (Windows, MacOS and Linux) and requires no commercial license. The source code is available but does not necessarily need to be compiled, since binary add-on installers for all platforms are available. A detailed online documentation, an example project and an integrated wizard facilitate re-use and customization.</ns0:p></ns0:div>
<ns0:div><ns0:head>Background and Related Work</ns0:head></ns0:div><ns0:div><ns0:head>The Portable Document Format</ns0:head><ns0:p>The Portable Document Format is a document description standard for the definition of electronic documents independently of the software, the hardware or the operating system that is used for creating or consuming (displaying, printing…) it (Adobe, 2008a). A PDF file can comprise all necessary information and all resources to completely describe the layout and the content of an electronic document, including texts, fonts, images and multimedia elements like audio, movies or 3D models. Therefore, it fulfils all requirements for an interactive publication document as proposed by (Thoma et al., 2010).</ns0:p><ns0:p>Although it is an ISO standard <ns0:ref type='bibr'>(ISO 32000-1:2008</ns0:ref><ns0:ref type='bibr'>(ISO, 2008)</ns0:ref>), the specification is available to the full extent from the original developer Adobe (Adobe, 2015) and can be used royalty-free.</ns0:p></ns0:div>
<ns0:div><ns0:head>Embedding 3D Models into PDF</ns0:head><ns0:p>The fifth edition of the PDF specification (PDF version 1.6 (Adobe, 2004)), published in 2004, was the first to support so-called '3D Artwork' as an embedded multimedia feature. In January <ns0:ref type='bibr' target='#b7'>(Barnes & Fluke, 2008)</ns0:ref>. Since then, the number of publications that apply PDF 3D technology either in theory or in practice has increased almost every year (Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>). The most sophisticated implementation so far is the reporting of planning results for liver surgery where the PDF roots are hidden behind a user interface which emulates a stand-alone software application (Newe, Becker & Schenk, 2014).</ns0:p></ns0:div>
<ns0:div><ns0:head>The Universal 3D (U3D) file format</ns0:head></ns0:div>
<ns0:div><ns0:head>Creating 3D Model Files and PDF Documents</ns0:head><ns0:p>Although many tools and libraries are available that support the creation of 3D model files and of final PDF documents, the whole process is still cumbersome. The problems are manifold: some tools require programming skills; some do not support features that are of interest for scientific 3D data (like polylines (Newe, 2015) and point clouds (Barnes & Fluke, 2008)).</ns0:p><ns0:p>Operating system platform support is another issue, as well as royalty-free use.</ns0:p><ns0:p>As regards the creation of the 3D model files, most of these problems have been addressed in a previous article (Newe, 2015). The main problem, however, remains the creation of the final PDFs. Specifying the content and (in particular) the layout of a document can be a complex task and is usually the domain of highly specialized word processor software.</ns0:p><ns0:p>… descriptive text. If the figure is intended to be provided as a supplemental information file instead of being integrated into the main article text, some additional information is necessary as well: at least a general headline and an optional reference to the main article should be provided. If the document content is modularized into these five key elements (Figure <ns0:ref type='figure' target='#fig_3'>1</ns0:ref>), the creation of the PDF itself becomes a rather simple task, because the layout can be pre-defined.</ns0:p></ns0:div>
<ns0:div><ns0:head>MeVisLab</ns0:head><ns0:p>MeVisLab is a framework for image processing and an environment for visual development. Prototyped processing networks can be converted with little effort into complete applications with their own GUI.</ns0:p></ns0:div>
<ns0:div><ns0:head>Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Elicitation of Requirements</ns0:head><ns0:p>As described above, the generation of the necessary 3D model data, and particularly of the final PDF, is still subject to a number of difficulties. Therefore, the first step was the creation of a list of requirement specifications with the aim of creating a tool that overcomes these known drawbacks.</ns0:p></ns0:div>
<ns0:figure type='table'><ns0:head>Requirement specifications</ns0:head><ns0:table><ns0:row><ns0:cell>ID</ns0:cell><ns0:cell>Requirement Specification</ns0:cell></ns0:row><ns0:row><ns0:cell>R1</ns0:cell><ns0:cell>The software shall create ready-to-publish PDF documents with embedded 3D models.</ns0:cell></ns0:row><ns0:row><ns0:cell>R1.1</ns0:cell><ns0:cell>The software shall offer an option to specify the activation mode and the deactivation mode for the 3D models.</ns0:cell></ns0:row><ns0:row><ns0:cell>R2</ns0:cell><ns0:cell>The software shall provide an integrated, single-window user interface that comprises all necessary steps.</ns0:cell></ns0:row><ns0:row><ns0:cell>R3</ns0:cell><ns0:cell>The software shall be executable under Windows, MacOS and at least one Linux distribution.</ns0:cell></ns0:row><ns0:row><ns0:cell>R4</ns0:cell><ns0:cell>The software shall be executable without the need to purchase a commercial license.</ns0:cell></ns0:row><ns0:row><ns0:cell>R5</ns0:cell><ns0:cell>The software shall create 3D model files in U3D format.</ns0:cell></ns0:row><ns0:row><ns0:cell>R5.1</ns0:cell><ns0:cell>The software shall create view definitions for the 3D model.</ns0:cell></ns0:row><ns0:row><ns0:cell>R5.2</ns0:cell><ns0:cell>The software shall create poster images for the PDF document.</ns0:cell></ns0:row><ns0:row><ns0:cell>R6</ns0:cell><ns0:cell>The software shall import mesh geometry from files in OBJ, STL and PLY format.</ns0:cell></ns0:row><ns0:row><ns0:cell>R6.1</ns0:cell><ns0:cell>The software should import mesh geometry from other file formats as well.</ns0:cell></ns0:row><ns0:row><ns0:cell>R6.2</ns0:cell><ns0:cell>The software shall offer an option to reduce the number of triangles of imported meshes.</ns0:cell></ns0:row><ns0:row><ns0:cell>R6.3</ns0:cell><ns0:cell>The software shall offer an option to specify the U3D object name and the color of imported meshes.</ns0:cell></ns0:row><ns0:row><ns0:cell>R7</ns0:cell><ns0:cell>The software shall import line set geometry from files in text format.</ns0:cell></ns0:row><ns0:row><ns0:cell>R7.1</ns0:cell><ns0:cell>The software shall offer an option to specify the U3D object name and the color of imported line sets.</ns0:cell></ns0:row><ns0:row><ns0:cell>R8</ns0:cell><ns0:cell>The software shall import point set geometry from files in text format.</ns0:cell></ns0:row><ns0:row><ns0:cell>R8.1</ns0:cell><ns0:cell>The software shall offer an option to specify the U3D object name of imported point sets.</ns0:cell></ns0:row></ns0:table></ns0:figure><ns0:div><ns0:p>Two requirements have been identified as the most important ones: 1) the demand for a tool that creates 'ready-to-publish' PDF documents without the need for commercial software, and 2) the integration of all necessary steps into a single and easy-to-use interface. Besides these two main requirements, a number of additional requirements have been identified as well. See the table above for the full list of requirement specifications that guided the development.</ns0:p></ns0:div>
<ns0:div><ns0:head>Creation of an 'App' for MeVisLab</ns0:head><ns0:p>MeVisLab-based solutions presented in previous work (Newe & Ganslandt, 2013; Newe, 2015) already provide the possibility to create U3D files without requiring programming skills and without the need for intensive training. However, they still needed some basic training as regards assembling the necessary processing chains in MeVisLab. Furthermore, the creation of the final PDF was not possible so far.</ns0:p><ns0:p>Therefore, a new macro module was created for MeVisLab. A macro module encapsulates complex processing networks and can provide an integrated user interface. In this way, the internal processes can be hidden from the user, who can focus on a streamlined workflow instead. Designed in an appropriate way, a macro module can also be considered an 'app' inside of MeVisLab.</ns0:p><ns0:p>In order to provide the necessary functionality, some auxiliary tool modules (e.g., for the creation of the actual PDF file) needed to be developed as well. Along with the modules for U3D export mentioned above, these auxiliary tool modules were integrated into the internal processing network of the main 'app' macro. The technical details of these internal modules are not within the scope of this article. However, the source code is available and interested readers are free to explore the code and to use it for their own projects.</ns0:p><ns0:p>The user interface of the app was designed in a way that guides novice users step by step without patronizing experienced users. Finally, a comprehensive documentation including an example project, a wizard for creating tailored PDF modules and a detailed help text was set up.</ns0:p></ns0:div>
<ns0:div><ns0:head>Deployment of Core Functionality</ns0:head><ns0:p>For the creation of the actual PDF files, version 2.2.0 of the cross-platform, open source library libHaru (http://libharu.org/) was selected, slightly modified and integrated as a third-party contribution into MeVisLab.</ns0:p><ns0:p>Next, the application programming interface (API) of libHaru was wrapped into an abstract base module for MeVisLab in order to provide easy access to all functions of the library and in order to hide away standard tasks like creating a document or releasing memory. A large number of convenience functions were added to this base module and an exemplary MeVisLab project was set up in order to demonstrate how to use the base module for tailored applications. This base module also served as the basis for the PDF creation of the app macro described above. Finally, a project wizard was integrated into the MeVisLab GUI.</ns0:p></ns0:div>
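To give an impression of the wrapped library, a minimal sketch of plain libHaru usage follows, assuming the stock 2.x C API called from C++. The 3D-specific calls of the modified library are deliberately not shown, since they are not part of the official libHaru release; the file name and the caption text are placeholders.

```cpp
// Minimal libHaru sketch: create a one-page A4 PDF with a caption line.
#include <hpdf.h>
#include <cstdio>

static void errorHandler(HPDF_STATUS errorNo, HPDF_STATUS detailNo, void* /*userData*/)
{
    std::fprintf(stderr, "libHaru error: 0x%04X (detail %u)\n",
                 static_cast<unsigned>(errorNo), static_cast<unsigned>(detailNo));
}

int main()
{
    HPDF_Doc pdf = HPDF_New(errorHandler, nullptr);  // create the document
    if (!pdf) return 1;

    HPDF_Page page = HPDF_AddPage(pdf);              // add one page
    HPDF_Page_SetSize(page, HPDF_PAGE_SIZE_A4, HPDF_PAGE_PORTRAIT);

    HPDF_Font font = HPDF_GetFont(pdf, "Helvetica", nullptr);
    HPDF_Page_SetFontAndSize(page, font, 12);

    HPDF_Page_BeginText(page);                       // write a caption where a
    HPDF_Page_TextOut(page, 50, 100,                 // 3D annotation would sit above
                      "Figure S1. Interactive 3D model.");
    HPDF_Page_EndText(page);

    HPDF_SaveToFile(pdf, "figure.pdf");              // write to disk
    HPDF_Free(pdf);                                  // release all memory
    return 0;
}
```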
<ns0:div><ns0:head>Results</ns0:head><ns0:p>The 'Scientific3DFigurePDFApp' module</ns0:p><ns0:p>The new macro module 'Scientific3DFigurePDFApp' for MeVisLab provides an integrated user interface for all steps that are necessary for the creation of U3D model files and for the creation of the final PDF documents with embedded 3D models. The app produces U3D model files that are compatible with version 4 of the ECMA-363 standard, poster images in Portable Network Graphics (PNG) format and PDF documents that are compliant with PDF version 1.7 (ISO 32000-1:2008). An example PDF is available as Supplemental File S1.</ns0:p><ns0:p>The user interface is arranged in tabs, whereas each tab comprises all functions for one step of the workflow. By processing the tabs consecutively, the user can assemble and modify 3D models, save them in U3D format, create views and poster images for the PDF document, and finally create the PDF itself step by step (Figure <ns0:ref type='figure' target='#fig_4'>2</ns0:ref>).</ns0:p><ns0:p>The raw model data can be collected in two ways: either by feeding it to the input connectors or by assembling it by means of the built-in assistant. The former option is intended for experienced MeVisLab users that want to attach the module at the end of a processing chain. The latter option addresses users that simply want to apply the app for converting existing 3D models and for creating an interactive figure for scholarly publishing.</ns0:p><ns0:p>The software allows for importing 39 different 3D formats, including point clouds and line sets from files in character-separated value (CSV) format (see Table <ns0:ref type='table' target='#tab_6'>3</ns0:ref> for a full list). Objects from different sources can be combined and their U3D properties (colour, name, position in the object tree) can be specified. The density of imported meshes can be adjusted interactively and multiple views (i.e., the specification of camera, lighting and render mode) can be pre-defined interactively as well. Finally, it is also possible to create a poster image which can replace an inactive 3D model in the PDF document if the model itself is disabled or if it cannot be displayed for some reason (e.g., because the reading software does not provide the necessary features).</ns0:p></ns0:div>
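For authors who want to prepare raw point or line data outside of MeVisLab, such a CSV file can be produced with a few lines of code. The following sketch writes a helix as a point cloud; the assumed layout (one comma-separated x,y,z triple per line) is an assumption and should be checked against the module documentation.

```cpp
// Sketch: write a helix as a raw point cloud in CSV form (x,y,z per line).
#include <cmath>
#include <fstream>

int main()
{
    std::ofstream csv("helix_points.csv");
    for (int i = 0; i < 500; ++i) {
        const double t = 0.05 * i;
        csv << std::cos(t) << "," << std::sin(t) << "," << 0.1 * t << "\n";
    }
    return 0;  // the resulting file can then be imported by the app
}
```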
<ns0:div><ns0:p>All functions are explained in detail in a comprehensive documentation which can be accessed directly inside MeVisLab. A stand-alone copy of the documentation is available as Supplementary File S2. In order to use the app, it simply needs to be instantiated (via the MeVisLab menu: Modules → PDF → Apps → Scientific3DFigurePDFApp). A full feature list is available in Table <ns0:ref type='table' target='#tab_7'>4</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Additional Features for Tailored PDF Creation</ns0:head><ns0:p>The abstract module which wraps the API of the PDF library libHaru into a MeVisLab module was made public ('PDFGenerator' module) and can be used for the development of tailored MeVisLab modules. In order to facilitate the re-use of this abstract base module, an exemplary project was set up (/Projects/PDFExamples/SavePDFTemplate). This project demonstrates how to derive a customized module from the PDFGenerator base module and how to specify the content of the PDF file that will be created by means of the new module. The output of the SavePDFTemplate module is illustrated in Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref>.</ns0:p></ns0:div>
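The derivation pattern can be outlined as follows. This is a sketch only: 'PDFGenerator' and 'SavePDFTemplate' are the real names mentioned above, but the interface shown here is a stand-in and every member name is a hypothetical placeholder; the authoritative reference remains the example project itself.

```cpp
// Sketch of deriving a tailored PDF module. The PDFGenerator class below is
// a stand-in; the real base module defines the actual hooks and convenience
// functions (see the SavePDFTemplate example project).
#include <string>
#include <iostream>

class PDFGenerator {                      // stand-in for the real base module
public:
    virtual ~PDFGenerator() {}
protected:
    virtual void assembleDocument() = 0;  // hypothetical hook
    void writeText(double x, double y, const std::string& s) {   // hypothetical helper
        std::cout << "text at (" << x << "," << y << "): " << s << "\n";
    }
    void embed3DModel(const std::string& u3d, const std::string& poster) { // hypothetical helper
        std::cout << "embed " << u3d << " with poster " << poster << "\n";
    }
};

// A customized module overrides the hook and fills the pre-defined layout.
class SaveMyReportPDF : public PDFGenerator {
protected:
    void assembleDocument() override {
        writeText(50, 780, "Supplement to: Example et al. 2016");
        embed3DModel("model.u3d", "poster.png");
        writeText(50, 120, "Figure S1. Interactive 3D model.");
    }
};
```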
<ns0:div><ns0:head>Availability</ns0:head><ns0:p>The whole PDF project for MeVisLab (which includes the Scientific3DFigurePDFApp, the PDFGenerator base module and the auxiliary modules) is available as source code from the MeVisLab community repository at GitHub (https://github.com/MeVisLab/communitymodules/tree/master/Community). This approach, however, requires compiling the source code and is intended only for experienced users or for users that are willing to become acquainted with MeVisLab. Note that there are multiple versions available for Windows, depending on the compiler that is intended to be used.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head></ns0:div>
<ns0:div><ns0:head>A Toolbox for the Creation of 3D PDFs</ns0:head><ns0:p>The utilization of 3D PDF technology for scholarly publishing has been shown to be both useful and necessary by several authors in the past years. The mainstream application of 3D PDF in science, however, is yet to come.</ns0:p><ns0:p>One reason might be the difficult process that has so far been necessary to create the appropriate data and the relevant electronic documents. This article presents an all-in-one solution for the creation of such files which requires no extraordinary skills. It can be used by low-end users as an out-of-the-box tool as well as a basis for sophisticated tailored solutions for high-end users.</ns0:p><ns0:p>Many typical problems as regards the creation of 3D model files have been addressed and solved. All steps of the workflow are integrated seamlessly. The software is available for all OS platforms and can import and process objects from many popular 3D formats, including polylines and point clouds (Table <ns0:ref type='table' target='#tab_6'>3</ns0:ref>). The density of imported meshes can be adjusted interactively, which enables the user to find the best balance between the desired level of detail and the file size.</ns0:p><ns0:p>The main contribution, however, is the possibility to create ready-to-publish PDF documents with a minimum of steps. This approach was proposed to be the ideal solution by Kumar et al. (2010). To the best of the author's knowledge, this is the first time that such an integrated and comprehensive solution is made available for the scientific community.</ns0:p></ns0:div>
<ns0:div><ns0:head>Applications</ns0:head><ns0:p>The areas of application (see an example in Supplemental File S1) are manifold and not limited to a specific scientific discipline. On the contrary: every field of research that produces three-dimensional data can and should harness this technology in order to get the best out of that data.</ns0:p><ns0:p>One (arbitrary) example for the possible use of mesh models from the recent literature is 3D ultrasound. Dahdouh et al. recently published the results of a segmentation of obstetric 3D ultrasound images (Dahdouh et al., 2015). That article contains several figures that project three-dimensional models onto the available two-dimensional surface. A presentation in native 3D would have enabled the reader to interactively explore the whole models instead of just one pre-defined snapshot. Another example is the visualization of molecular structures as demonstrated by (Kumar et al., 2008).</ns0:p><ns0:p>Polylines can be used to illustrate nervous fibre tracking. (Mitter et al., 2015) used 2D projections of association fibres in the foetal brain to visualize their results. A real 3D visualization would have been very helpful in this case as well: While some basic knowledge about a depicted object helps to understand 2D projections of 3D structures, the possibility to preserve at least a little depth perception decreases with an increasing level of abstraction (mesh objects vs. polylines). This particularly applies to point clouds, which can be observed, for example, in an article by <ns0:ref type='bibr' target='#b33'>(Qin et al., 2015)</ns0:ref>: Although these authors added three-dimensional axes to their figure (no. 6), it is still hard to get an impression of depth and therefore of the real position of the points in 3D space.</ns0:p></ns0:div>
<ns0:div><ns0:head>Limitations</ns0:head><ns0:p>Although the presented software removes the major hurdles that impede the creation of interactive figures for scholarly publishing, some limitations still need to be considered.</ns0:p><ns0:p>A general concern is the suitability of PDF as a means to visualize and to exchange 3D models. PDF and U3D (or PRC) do not support all features that other modern 3D formats provide and that would be of interest for the scientific community (e.g., volumetric models). On the other hand, PDF is commonly accepted and de-facto the only file format that is used for the electronic exchange of scholarly articles. Therefore, PDF may not be the perfect solution, but it is the best solution that is currently available.</ns0:p><ns0:p>The presented software requires MeVisLab as background framework and the installation of MeVisLab requires a medium-sized download of about 1 GB (depending on the operating system), which could be considered rather large for a PDF creator. On the other hand, MeVisLab integrates a large library for the processing and the visualization of (biomedical) image data. Furthermore, other frameworks (like MeshLab) do not provide all necessary features (e.g., polylines or point clouds) and therefore were not considered to meet the basic requirements for the development of the software tool.</ns0:p><ns0:p>The import of 3D models is based on the Open Asset Import Library (http://www.assimp.org/) which does not support all features of all 3D formats. For example, textures and animations cannot be imported and should thus not be embedded into a model file that is intended to be imported. Very large model files should also be avoided. If a large model fails to import, it should be separated into several sub-models. A mesh reduction can be applied after the import, but a previously reduced mesh speeds up the import process.</ns0:p></ns0:div>
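Where the import behaviour of a particular file is in doubt, the model can be probed directly with the same library. A minimal sketch follows, assuming the Assimp C++ API; the file name is a placeholder.

```cpp
// Probe a model file with the Open Asset Import Library before importing it.
#include <assimp/Importer.hpp>
#include <assimp/scene.h>
#include <assimp/postprocess.h>
#include <iostream>

int main()
{
    Assimp::Importer importer;
    // Triangulate on load and merge duplicate vertices; unsupported features
    // of a given format (e.g., some textures or animations) are ignored.
    const aiScene* scene = importer.ReadFile(
        "model.obj", aiProcess_Triangulate | aiProcess_JoinIdenticalVertices);
    if (!scene) {
        std::cerr << "Import failed: " << importer.GetErrorString() << "\n";
        return 1;
    }
    unsigned triangles = 0;
    for (unsigned m = 0; m < scene->mNumMeshes; ++m)
        triangles += scene->mMeshes[m]->mNumFaces;
    std::cout << scene->mNumMeshes << " meshes, " << triangles << " triangles\n";
    return 0;
}
```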
<ns0:div><ns0:head>Suitable Reading Software</ns0:head><ns0:p>The Adobe Reader (http://get.adobe.com/reader/otherversions/) is available free of charge for all major operating systems (MS Windows, Mac OS, Linux). It is currently the only 100% standard-compliant software that can be used to display embedded 3D models and to let the user interact with them (zooming, panning, rotating, selection of components). However, even the Adobe Reader does not support all U3D features (Adobe, 2007), e.g., Glyphs and View Nodes.</ns0:p><ns0:p>Experience shows that many users do not expect a PDF document to be interactive. Therefore, possible consumers should be notified that it is possible to interact with the document, and they should also be notified that the original Adobe Reader is required for this. Although poster images are a workaround to avoid free areas in PDF readers that are not capable of rendering 3D scenes, missing 3D features of a certain reader could still be confusing for a user.</ns0:p></ns0:div>
<ns0:div><ns0:head>A Basis for Own Modules</ns0:head><ns0:p>As pointed out in previous work (Newe, 2015), the authoring of a PDF document is usually a complex task, and thus in most cases it cannot be avoided to separate the generation of the 3D model files from the authoring of the final document. The presented app is therefore limited to a certain use case and a pre-defined PDF layout.</ns0:p><ns0:p>However, the API of the core PDF functionality is public and designed in a way that facilitates the creation of own PDF export modules. The large number of convenience functions of the abstract base module (PDFGenerator) facilitates the creation of derived modules. These functions massively lighten the programmer's workload by providing simple access to routine tasks like writing text at a defined position or embedding a 3D model, which would normally require a whole series of API calls. Finally, the built-in wizard generates all necessary project files and source code files to create a fully functional module barebone which only needs to be outfitted with the desired functionality.</ns0:p></ns0:div>
<ns0:div><ns0:head>Outlook</ns0:head><ns0:p>Although this article represents an important milestone, the development of the PDF project for MeVisLab is ongoing. Future goals are the integration of virtual volume rendering (Barnes et al., 2013), animations (van de Kamp et al., 2014) and the parsing of U3D files that have been created with external software. The progress can be tracked via GitHub (https://github.com/MeVisLab/communitymodules/tree/master/Community) and updates to the binary files will be published regularly.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusion</ns0:head><ns0:p>Three-dimensional data is produced at an increasing rate throughout all scientific disciplines. The Portable Document Format is a well-suited medium for the visualization and the publication of this kind of data. With the software presented in this article, the complete workflow for generating 3D model files and 3D PDF documents for scholarly publications can be processed in a consolidated working environment, free of license costs and on all major operating systems.</ns0:p><ns0:p>The software addresses novices as well as experienced users: On the one hand, it provides an out-of-the-box solution that can be used like a stand-alone application; on the other hand, all sources and APIs are freely available for specifically tailored extensions.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>List of abbreviations </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>(Kumar et al., 2010; Danz & Katsaros, 2011) or even four PeerJ Comput. Sci. reviewing PDF | (CS-2016:02:9236:1:0:NEW 4 Apr 2016) Manuscript to be reviewed Computer Science (Phelps, Naeger & Marcovici, 2012; Lautenschlager, 2014) different software applications and up to 22 single steps until the final PDF was created. Furthermore, some of the proposed workflows were limited to a certain operating system (OS) (Phelps, Naeger & Marcovici, 2012), required programming skills (Barnes et al., 2013) or relied on commercial software (Ruthensteiner & Heß, 2008). Especially the latter might be an important limiting factor which hampers the proliferation of the 3D PDF format in scientific publishing (</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. General layout of a scholarly figure if provided as supplemental material.</ns0:figDesc><ns0:graphic coords='7,72.00,181.86,448.50,472.50' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. User interface of the app. The user interface comprises all necessary steps for the creation of 3D model files and PDF files. It is arranged in tabs for each step.</ns0:figDesc><ns0:graphic coords='12,72.00,72.00,468.00,351.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Output of the SavePDFTemplate module.</ns0:figDesc><ns0:graphic coords='15,72.00,93.97,309.77,436.46' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Project wizard for creating customized PDF modules.</ns0:figDesc><ns0:graphic coords='16,72.00,72.00,468.00,140.25' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1. Number of publications related to 3D PDFs in biomedical sciences since 2005 (not comprehensive).</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Year</ns0:cell><ns0:cell>Number of publications with embedded/supplemental 3D PDF</ns0:cell><ns0:cell>Number of publications dealing with/mentioning 3D PDF</ns0:cell></ns0:row><ns0:row><ns0:cell>2005</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>2008</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>2009</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>2010</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>7</ns0:cell></ns0:row><ns0:row><ns0:cell>2011</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>6</ns0:cell></ns0:row><ns0:row><ns0:cell>2012</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell>2013</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>2014</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>7</ns0:cell></ns0:row><ns0:row><ns0:cell>2015</ns0:cell><ns0:cell>31</ns0:cell><ns0:cell>2</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Requirements for the development of the software tool. The two main requirements are highlighted in bold font.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 3.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>List of supported 3D formats for import.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>File format</ns0:cell><ns0:cell>Typical File Extension(s)</ns0:cell></ns0:row><ns0:row><ns0:cell>Stereolithography</ns0:cell><ns0:cell>*.stl</ns0:cell></ns0:row><ns0:row><ns0:cell>Stanford Polygon Library</ns0:cell><ns0:cell>*.ply</ns0:cell></ns0:row><ns0:row><ns0:cell>Wavefront Object</ns0:cell><ns0:cell>*.obj</ns0:cell></ns0:row><ns0:row><ns0:cell>Object File Format</ns0:cell><ns0:cell>*.off</ns0:cell></ns0:row><ns0:row><ns0:cell>Blender</ns0:cell><ns0:cell>*.blend</ns0:cell></ns0:row><ns0:row><ns0:cell>Raw Triangles</ns0:cell><ns0:cell>*.raw</ns0:cell></ns0:row><ns0:row><ns0:cell>Raw Point Clouds</ns0:cell><ns0:cell>*.csv; *.txt</ns0:cell></ns0:row><ns0:row><ns0:cell>Raw Line Sets</ns0:cell><ns0:cell>*.csv; *.txt</ns0:cell></ns0:row><ns0:row><ns0:cell>3D GameStudio Model</ns0:cell><ns0:cell>*.mdl</ns0:cell></ns0:row><ns0:row><ns0:cell>3D GameStudio Terrain</ns0:cell><ns0:cell>*.hmp</ns0:cell></ns0:row><ns0:row><ns0:cell>3D Studio Max 3DS</ns0:cell><ns0:cell>*.3ds</ns0:cell></ns0:row><ns0:row><ns0:cell>3D Studio Max ASE</ns0:cell><ns0:cell>*.ase</ns0:cell></ns0:row><ns0:row><ns0:cell>AC3D</ns0:cell><ns0:cell>*.ac</ns0:cell></ns0:row><ns0:row><ns0:cell>AutoCAD DXF</ns0:cell><ns0:cell>*.dxf</ns0:cell></ns0:row><ns0:row><ns0:cell>Autodesk DXF</ns0:cell><ns0:cell>*.dxf</ns0:cell></ns0:row><ns0:row><ns0:cell>Biovision BVH</ns0:cell><ns0:cell>*.bvh</ns0:cell></ns0:row><ns0:row><ns0:cell>CharacterStudio Motion</ns0:cell><ns0:cell>*.csm</ns0:cell></ns0:row><ns0:row><ns0:cell>Collada</ns0:cell><ns0:cell>*.dae; *.xml</ns0:cell></ns0:row><ns0:row><ns0:cell>DirectX X</ns0:cell><ns0:cell>*.x</ns0:cell></ns0:row><ns0:row><ns0:cell>Doom 3</ns0:cell><ns0:cell>*.md5mesh; *.md5anim; *.md5camera</ns0:cell></ns0:row><ns0:row><ns0:cell>Irrlicht Mesh</ns0:cell><ns0:cell>*.irrmesh; *.xml</ns0:cell></ns0:row><ns0:row><ns0:cell>Irrlicht Scene</ns0:cell><ns0:cell>*.irr; *.xml</ns0:cell></ns0:row><ns0:row><ns0:cell>LightWave Model</ns0:cell><ns0:cell>*.lwo</ns0:cell></ns0:row><ns0:row><ns0:cell>LightWave Scene</ns0:cell><ns0:cell>*.lws</ns0:cell></ns0:row><ns0:row><ns0:cell>Milkshape 3D</ns0:cell><ns0:cell>*.ms3d</ns0:cell></ns0:row><ns0:row><ns0:cell>Modo Model</ns0:cell><ns0:cell>*.lxo</ns0:cell></ns0:row><ns0:row><ns0:cell>Neutral File Format</ns0:cell><ns0:cell>*.nff</ns0:cell></ns0:row><ns0:row><ns0:cell>Ogre</ns0:cell><ns0:cell>*.mesh.xml, *.skeleton.xml, *.material</ns0:cell></ns0:row><ns0:row><ns0:cell>Quake I</ns0:cell><ns0:cell>*.mdl</ns0:cell></ns0:row><ns0:row><ns0:cell>Quake II</ns0:cell><ns0:cell>*.md2</ns0:cell></ns0:row><ns0:row><ns0:cell>Quake III</ns0:cell><ns0:cell>*.md3</ns0:cell></ns0:row><ns0:row><ns0:cell>Quake 3 BSP</ns0:cell><ns0:cell>*.pk3</ns0:cell></ns0:row><ns0:row><ns0:cell>Quick3D</ns0:cell><ns0:cell>*.q3o; *.q3s</ns0:cell></ns0:row><ns0:row><ns0:cell>RtCW</ns0:cell><ns0:cell>*.mdc</ns0:cell></ns0:row><ns0:row><ns0:cell>Sense8 WorldToolkit</ns0:cell><ns0:cell>*.nff</ns0:cell></ns0:row><ns0:row><ns0:cell>Terragen Terrain</ns0:cell><ns0:cell>*.ter</ns0:cell></ns0:row><ns0:row><ns0:cell>TrueSpace</ns0:cell><ns0:cell>*.cob, *.scn</ns0:cell></ns0:row><ns0:row><ns0:cell>Valve Model</ns0:cell><ns0:cell>*.smd, *.vta</ns0:cell></ns0:row><ns0:row><ns0:cell>XGL</ns0:cell><ns0:cell>*.xgl, *.zgl</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>List of features.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Category</ns0:cell><ns0:cell>Features</ns0:cell></ns0:row><ns0:row><ns0:cell>Data Import</ns0:cell><ns0:cell>Import external data, import MeVisLab data, import point clouds,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>import line sets, import meshes from 37 file formats, adjust mesh</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>density, preview import</ns0:cell></ns0:row><ns0:row><ns0:cell>Point Cloud Editing</ns0:cell><ns0:cell>Specify point cloud name, specify position in model tree, preview</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>settings</ns0:cell></ns0:row><ns0:row><ns0:cell>Line Set Editing</ns0:cell><ns0:cell>Specify line set name, specify position in model tree, specify colour,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>preview settings</ns0:cell></ns0:row><ns0:row><ns0:cell>Mesh Editing</ns0:cell><ns0:cell>Specify mesh name, specify position in model tree, specify colour,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>specify opacity, preview settings</ns0:cell></ns0:row><ns0:row><ns0:cell>View Specification</ns0:cell><ns0:cell>Specify view name, specify background colour, specify lighting</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>scheme, specify render mode, preview settings, specify multiple</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>views</ns0:cell></ns0:row><ns0:row><ns0:cell>U3D Creation</ns0:cell><ns0:cell>Store model in U3D format, preview scene</ns0:cell></ns0:row><ns0:row><ns0:cell>Poster Image Creation</ns0:cell><ns0:cell>Store poster in PNG format, preview scene, specify superimposed</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>text</ns0:cell></ns0:row><ns0:row><ns0:cell>PDF Creation</ns0:cell><ns0:cell>Store document in PDF (v1.7) format, specify header citation text,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>specify header headline text, specify U3D file, specify poster file,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>specify model activation mode, specify model deactivation mode,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>specify toolbar enabling, specify navigation bar enabling, specify</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>animation start mode, specify caption, specify description text</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "#CS-2016:02:9236:0:0:REVIEW
Revised submission: Enriching scientific publications with interactive 3D PDF figures: A complete toolbox
Response to editor & reviewers
Dear Editor, dear Reviewers,
I am pleased to submit the revised version of #CS-2016:02:9236:0:0:REVIEW “Enriching scientific publications with
interactive 3D PDF figures: A complete toolbox” for publication in PeerJ Computer Science.
First of all, I’d like to address a general comment to the editor:
Two of the reviewers voted positively (#2 and #4), two voted more or less negatively (#1 and #3). Although I’d truly
like to satisfy everybody, the negative reviews seem to be largely unfounded to me. At this point, I’d like to justify
this claim in general, before I respond in detail.
It is my impression that reviewer #3 has not conceived the intention of the article. The manuscript presents a new
method to generate 3D PDF figures, but as far as I understood the comments, reviewer #3 criticizes mainly that
other articles about the application/usage of 3D PDF have not been considered and cited. I addressed each of his
concerns below, but I did not make any changes to the manuscript since the comments do not appear to be valid to
me.
The main criticism of reviewer #1 seems to be his disbelief in 3D PDF technology in general. The suitability and value
of 3D PDF for scholarly publishing has been proven by many other authors and download counts of the preprint
demonstrate a significant interest in the topic. Furthermore, I submitted this article to PeerJ CS since “PeerJ
Computer Science evaluates articles based only on an objective determination of scientific and methodological
soundness, not on subjective determinations of 'impact' or 'readership' for example.”
(https://peerj.com/about/aims-and-scope/cs). Therefore, I have applied only some minor changes where reviewer
#1 pointed out some weak points (e.g., an improved example file).
Furthermore, a new version of the installers was created in order to reflect the latest version upgrade of MeVisLab
(2.7.1 -> 2.8).
However, I appreciated all criticisms and I carefully considered revisions. The concerns have been addressed as
outlined below. The mentioned line numbers refer to the manuscript with tracked changes.
Reviewer #1 comment:
“Basic reporting
Article generally well written. I suggest table 2 could become main text and some of the reasons behind those
desirable features discussed/justified.“
Thank you very much for the compliment.
For this first revision, I decided not to convert the table into main text since the target audience is researchers
that want to apply the presented software. A more detailed description of the requirements engineering process
could distract the reader and could possibly annoy them. Therefore, I originally decided to create a table: on the
one hand it demonstrates the methodological soundness; on the other hand it does not distract the reader. If the
reviewer insists on a conversion into main text, I will be happy to comply.
Reviewer #1 comment:
“Agree with the author that 3D pdf has been a difficult to author and is thus an impediment to uptake.”
Thank you for sharing my estimation of the general importance of the topic.
“The problem is that 3D PDF is not new and things have not significantly improved over the last 5 years. I struggle
to agree that the proposed solution will significantly improve the situation.”
I agree that 3D PDF is not new – in fact, this technology has been available for more than ten years now. However,
possible reasons for the shadowy existence are discussed and the submitted article aims at improving this
situation. It is a fact that the vast majority of scientists does not even know that this technology exists and even if
they know it, it has been hard to apply it so far. The submitted article presents an easy-to-use tool for the creation
of ready-to-publish interactive PDFs. This hopefully removes the main impediment and at the same time spreads
the word. The feedback from the other three reviewers supports this belief.
“To the question of 3D PDF being a suitable format. The author says “… images and multimedia elements like
audio, movies or 3D models”. Lots of 3D data structures are not supported and are key for visualisation, e.g.:
volumetric. But even for geometry, basic primates like spheres and cylinders are not defined except as very inefficient
“triangle soup” mesh representations.”
I agree that PDF is not the perfect format for all possible use cases and for all desired applications. But it is a fact
that PDF de facto is the only commonly accepted and used format for the exchange of electronic documents.
Furthermore, (to my knowledge) it is used by all scholarly journals.
“The author says in reference to model sizes “In most cases, this size could (and should) be reduced significantly,
because the density of polygon meshes does usually not need to be very high for illustrative purposes.” If one can
zoom into models then why might a high resolution mesh not be desirable? Data is becoming increasingly detailed
and high resolution so I do not accept that model sizes are necessarily low. I would further argue that simple 3D
models are more likely to be fully conveyed in 2D, it is exactly the complicated models that require a interactive
navigable experience to assist with understanding.”
From my experience, most models that have been published so far can be reduced by 70% to 90% without any
loss of information. A completely flat surface which could be modeled by 3 or 4 triangles does not need dozens or
hundreds of triangles (see Danz, 2011 for an example – I abstained from pointing out this more clearly in the
manuscript, since it was not my intention to criticize only the work of these colleagues while others missed to
apply the same optimization). On the other hand, a simple model is not always conveyed best in 2D – even rather
simple models of (e.g.) anatomic structures benefit strongly from interactive 3D (see Newe, 2014). An author has
to make a trade-off: high level of details vs. file size.
“It should also be noted that the 3D PDF does not support what one might be considered standard features like
progressive mesh rendering or even multiple levels of detail.”
I am sorry to disagree. PDF (or to be more exact: U3D) supports progressive meshes (“CLOD meshes” =
“Continuous Level of Detail meshes”, see chapter 8.8 of ECMA, 2007). Since the relevant sections of U3D standard
are not noted to be unsupported by Adobe Reader (see Adobe, 2007), it must be assumed that multiple levels of
detail are supported.
Reviewer #1 comment:
“With respect to the import format list it is indeed impressive, but sorry but I do not accept these formats in their full
specification are supported. To test I wasn't unduly negative I tried to import a textured 3D mesh as an OBJ file and
it failed, didn't have time to identify why but suspect it was due to multiple large texture files. It was a very standard
OBJ file that I can open in every other software package I have tried.“
I am sorry that the reviewer experienced problems with the import of 3D files. The object import is based on the
Open Asset Import Library (Assimp) which obviously does not support all features of all 3D formats. This is now
mentioned in the Discussion section (lines 321-326).
The software was tested with models for all formats, but not with all possible features of all formats. The most
commonly used formats (e.g., OBJ) were tested with multiple models of higher complexity. However, that does
not ensure a totally flawless import of every file. I would be happy to investigate examples that fail to import and
to improve the software accordingly. But this requires access to an example that fails to import. On the other
hand, the software is constantly being improved and updated, so there is a good chance that future version will
work with the example that now failed to import.
“Comments for the author
I don't wish to sound too harsh, I like many were initially intrigued by the possibilities 3D PDF may offer. The issues
I have with 3D PDF are
1. Lack of viewer support outside one companies product.
2. Lack of diversity of representation within the format.”
As mentioned above, I agree that PDF is not the perfect format, but it is the best format that is currently available.
There is simply no alternative today. This is now discussed in the manuscript (lines 308-313).
“3. Lack of tools for creating 3D PDF.”
Exactly this issue is addressed by the submitted article. In my humble opinion, it is not valid to argue against an
article with the rationale that the problem which has been solved by this article has not been solved before. On
the contrary: it even emphasizes the significance.
“There seems to be a lack of support for any pdf viewer other than Adobe product. Where is the “export to pdf” in
3D modelling/editing packages? No support from the leading work processing solution. These tell me that
widespread support from the industry is not growing.”
It is a fact that there is only the Adobe Reader available nowadays, but I do not consider this to be a major
drawback since Adobe Reader is available for free and for all major operating systems. This is discussed in the
submitted article. Support by other products is far out of scope of the submitted article, but I’d like to pick it up
here briefly: A direct export to PDF is not supported by 3D modelling packages, but the export to U3D is. And
actually, exporting PDF directly does not make sense in most cases (this is discussed in Newe, 2013 as well as in
the submitted article (lines 107-109)).
Reviewer #1 comment:
See two attachments, 'PoorShading' shows the coarse model supplied as an example, aren't vertex normals
supported? 'CorruptDrawing', Adobe Acrobat viewer didn't seem to handle OS level screen zooming. Not a
complaint of this paper but further evidence of an immaturity of support. I also struggled to find support on mobile
devices.
Unfortunately, I only received one 120kb PDF attached to this review. It shows a zoomed screenshot of the
example. It is not labeled “PoorShading” or “CorruptDrawing”. Therefore, I cannot respond directly to these
comments. However, the features of Adobe Reader (which might differ between the versions of the different
operating systems) are out of scope of this article. The support of the 3D features might be immature, but it is still
the best solution that is available (see response above).
Unfortunately the article has not allayed my concerns.
I might be wrong, but my impression is that reviewer #1 mainly criticizes the 3D feature of PDF in general (and the
support by other software) and not the content of the manuscript. I agree that PDF is not the optimum solution,
but again – it is the best of what’s currently available.
The solution presented is semi-commercial and a 4+GB download, seems like overkill just to create 3D PDFs.
I need to disagree here. The solution is not semi-commercial – it is (permanently) available for free and for all
major operating systems. This was one of the key requirements for the development. The download of the
MeVisLab framework is ~1GB and thus far away from 4GB (the PDF add-on is only 7 to 16 MB). This might be a large
package to download, but I do not consider 1 GB to be too much in 2016. There are research articles with
embedded 3D (e.g., Krause, 2014) that have a size of nearly 100MB – for the article only.
Other platforms than MeVisLab (e.g., MeshLab) would have restricted the model output to limited geometries
(e.g., meshes only). A complete stand-alone software would have yielded other issues (e.g., visualization).
Therefore, MeVisLab seems to be the best compromise to me. This is discussed now in the manuscript (lines 314-320).
I might agree with the aims of this paper say 5 years ago, as a proposed future opportunity to publish 3D data with
documents, but today it would seem the promise as a suitable medium has not been realised. Similar to the push for
VRML in the late 80’s, the reality has been quite different. There are many reasons for this, one is the complexity and
diversity needed for 3D data representation requirements, another key factor is the lack of cross platform and
software support. In summary, I don’t believe the case has been made that 3D PDF has a future for scientific data
visualisation and that it supports the type of 3D data representations researchers require.
The feedback of the other reviewers supports my belief in 3D PDF. Especially reviewer #3 pointed out, that it is
very useful. The number of publications (in the biomedical field) that use this technology has risen almost every
year since 2005, with a significant ramp in 2014 (Table 1). Download counts of the preprint of this article (nearly
150) demonstrate a strong interest on the part of the community.
Final comment, if the author resubmits then I strongly suggest creating an example that makes a strong case. “
Example figure S1 now visualizes a complex vessel tree with multiple branches and predefined views.
Reviewer #2 comment:
“Basic reporting
The paper is well written and merits publication in PeerJ.
Validity of the findings
I think that the software toolbox for generating 3D PDF documents could become a very useful format in scientific
publications.“
Thank you very much for the compliment and for your recommendation.
Reviewer #2 comment:
“Comments for the author
It would be useful to comment on how this toolbox could be used to visualize 3D structures of molecules and proteins.
“
This has been addressed in the discussion section now (lines 293-294).
Reviewer #4 comment:
“Basic reporting
English is good.
Introduction and background are well written and clearly position the paper in the relevant field.
Figure are relevant, looks good.
Experimental design
This is a paper about software.
I am not sure how it fits to the requirements.
However I presume the design of the program is OK - I have no way to assess it. “
Thank you for this positive assessment.
Reviewer #4 comment:
“Validity of the findings
The example in the Supplementary materials works really well on my computer: Dell notebook about 4 years old.
I think it demonstrates the validity of the project.
Comments for the author
I did not install the program and my review is really deficient due to this.
There are two reasons why I did not install the program:
1.
The download page for MeVisLab
http://www.mevislab.de/download/
indicates the size of the installation
MeVisLabSDK2.8_vc12-64.exe (1154 MB)
This is more than 1 Gigabytes.
It is really too large for me.
2.
I was not sure that the above download is what I need.
The download page talks about Microsoft compilers.
Do I need to compile the installation with them?
The size if the installation is too large to just try it.“
I am sorry that the reviewer estimates a 1 GB download to be too large. I would not consider this a real problem in
2016, but the general concern has been added to the Discussion section now (lines 314-316).
In fact, any of the Windows version would have been suitable – the compiler version is only relevant if the user
really wants to compile own modules. I see that this concern could come up for other users as well and I have
therefore added a hint to the Results section (lines 243-245 and 262-263).
Reviewer #3 comment:
“Basic reporting
[…]
This manuscript presents a developed tool box for 3D-pdf file presentation of scientific visualization through the
Adobe software. Development of such interactive 3D visualization with easy access is useful for readers to gain a
thorough understanding of the problems presented. It is particularly useful for medicinal and biological applications
such as protein docking etc. From this point of view, this article addresses a useful and interesting issue.”
Thank you for sharing my estimation of the importance of the topic.
“However, I do not believe that this manuscript is publishable at its present form as the 3D-pdf based on in complete
literature review and therefore, this manuscript missed significant milestone work in this direction, which may shake
the foundation of this study and make it redundant.
First of all, the manuscript missed the significant development in interactive 3d-pdf based on the Adobe software/tool
box in 2009, in which the authors detailed the development of the 3D-pdf to show the 3D structures of drugs using
embedded 3d-pdf technique they developed.
1. Lalitah Selvam, Vladislav Vasilyev and Feng Wang, Methylation of zebularine: a quantum mechanical study
incorporating interactive 3D PDF graphs, Journal of Physical Chemistry B 113, 11496-11504(2009).
2. L. Selvam, F. F. Chen and F. Wang, Methylation of zebularine investigated using density functional theory
calculations, Journal of Computational Chemistry, 32(10)(2011)2077–2083.
3. A. P. Wickrama Arachchilage, F. Wang, V. Feyer, O. Plekan, and K. C. Prince, “Photoelectron spectra and
structures of three cyclic dipeptides: PhePhe, TyrPro and HisGly”, Journal of Chemical Physics, 136, 124301 (2012)
The JCP article was in fact selected as the cover page for its 3D structure of the issue. None of the articles are
referred in this manuscript. As the author missed these articles using very similar technique, unless such related
articles are properly referred and the technique of the manuscript is justified, I do not think this article of a very
similar technique is worthy publishing.“
I regret to have the impression that the reviewer seems to have misunderstood the intention of the submitted
article. It does not aim at demonstrating the possibilities and features of 3D PDF or the benefits for scholarly
publishing (i.e., it does not aim at the “development of such interactive 3D visualization”). It also is not intended
to be a review or a history of 3D PDF in science that covers all recent developments and advances (although, I
agree of course, that a thorough literature review is the basis of good scientific practice). The value of 3D PDF for
scientific applications has been proven by many authors in many disciplines. I regret that I have no overview over
all publications that use 3D PDF as a means of visualization, especially not as regards publications outside of my
main field of research (which is in the biomedical domain). However, it is a fact that I have not “missed” the
publications listed by the reviewer (the first one (Selvam, 2009) has been cited in an earlier publication of mine
(Newe, 2013)). On the contrary: these articles have not been cited in the submitted article, since they are not
directly relevant. All three articles “only” apply 3D PDF as a means of visualization. In contrast, the submitted
article presents a new way for generating the necessary intermediate files (U3D) as well as final & complete
figures (which to my best knowledge has never been published before). The main benefit of the presented
software is that complex protocols, long toolchains and expensive commercial software (as needed for the
process described in Selvam, 2009) are no longer needed for the generation of the 3D models or of the interactive
3D figures. Therefore, the listed articles are only loosely associated and not directly related to the submitted
work.
For this first revision, I decided not to cite any of the articles mentioned by the reviewer, because I consider none
of them to be a “significant milestone work” (although (Selvam, 2011) was the first to use animations and
although JCP decided to select it as cover page). In my humble opinion, all three articles are “just” applications of
an existing technology in a new field of research, based on an isolated method for the production of the 3D
models. And even this production method is not described with very much detail, nor is any ready-to-use
software (or just a simple protocol) provided. In fact, (Kumar, 2008) already demonstrated the visualization of
molecules in 2008 (and has been cited in the submitted article). However, if the reviewer insists on a citation, I
will comply in a further revision.
One general comment of mine, since I perpetually stumble across a general misunderstanding if I talk to
colleagues about 3D PDF: the 3D PDF technology has not been invented (nor “developed”) by me or by Selvam or
by any other of the authors that used it in their articles. 3D PDF is an integral part of the PDF specification
(chapter 13.6 of ISO 32000:1-2008). All occurrences in scientific articles are “only” applications of this previously
existing technology. The generation of these interactive figures, however, has been much more cumbersome than
embedding a simple bitmap image – so far.
Reviewer #3 comment:
“Comments for the author
The author need a thorough literature review and developing something existing is extremely time consuming.“
See my response above. The articles listed by the reviewer are not directly related to the submitted work, since
they “only” demonstrate the application of 3D PDF, but not the generation. Selvam and the other authors use 3D
models to enrich their articles – the submitted manuscript in contrast provides a tool that actually creates such
documents.
The only comparable developments have been published by (Barnes et.al., 2013), but their library requires
programming individual solutions. The submitted article presents a universal, ready-to-use solution that produces
ready-to-publish PDF documents with embedded interactive 3D figures. Something comparable has never been
published (nor been developed) before.
Kind regards
Axel Newe
Chair of Medical Informatics
University Erlangen-Nuremberg
" | Here is a paper. Please give your review comments after reading it. |
186 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Three-dimensional (3D) data of many kinds is produced at an increasing rate throughout all scientific disciplines. The Portable Document Format (PDF) is the de-facto standard for the exchange of electronic documents and allows for embedding three-dimensional models. Therefore, it is a well suited medium for the visualization and the publication of this kind of data. The generation of the appropriate files has been cumbersome so far. This article presents the first release of a software toolbox which integrates the complete workflow for generating 3D model files and ready-to-publish 3D PDF documents for scholarly publications in a consolidated working environment. It can be used out-of-the-box as a simple working tool or as a basis for specifically tailored solutions. A comprehensive documentation, an example project and a project wizard facilitate the customization. It is available royalty-free and for Windows, MacOS and Linux.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Throughout many scientific disciplines, the availability - and thus the importance - of three-dimensional (3D) data has grown in recent years. Consequently, this data is often the basis for scientific publications, and in order to avoid a loss of information, the visualization of this data should be 3D whenever possible (Tory & Möller, 2004). In contrast to that, almost all contemporary visualization means (paper printouts, computer screens, etc.) only provide a two-dimensional (2D) interface.</ns0:p><ns0:p>The most common workaround for this limitation is to project the 3D data onto the available 2D plane <ns0:ref type='bibr' target='#b31'>(Newe, 2015)</ns0:ref>, which results in the so-called '2.5D visualization' (Tory & Möller, 2004). This projection yields two main problems: limited depth perception and objects that occlude each other. A simple but effective solution of these problems is interaction: by changing the projection angle of a 2.5D visualization (i.e., by changing the point of view), depth perception is improved (Tory & Möller, 2004), and at the same time objects that had previously been occluded (e.g., the backside) can be brought into view.</ns0:p><ns0:p>A means of application of this simple solution has been available for many years: the Portable Document Format (PDF) from Adobe (Adobe, 2014). This file format is the de-facto standard for the exchange of electronic documents, and almost every scientific article that is published nowadays is available as PDF - as are even articles from the middle of the last century (Hugh-Jones, 1955). PDF allows for embedding 3D models and the Adobe Reader (http://get.adobe.com/reader/otherversions/) can be used to display these models interactively.</ns0:p><ns0:p>Nevertheless, this technology seems not to have found broad acceptance among the scientific community until now, although journals encourage authors to use this technology <ns0:ref type='bibr' target='#b27'>(Maunsell, 2010;</ns0:ref><ns0:ref type='bibr'>Elsevier, 2015)</ns0:ref>. One reason might be that the creation of the appropriate model files and of the final PDF documents is still cumbersome. Not everything that is technically possible is accepted by those who are expected to embrace the innovation if the application of this innovation is hampered by inconveniences <ns0:ref type='bibr' target='#b18'>(Hurd, 2000)</ns0:ref>. Generally suitable protocols and procedures have been proposed by a number of authors before, but they all required a toolchain of at least three or even four different software applications and up to 22 single steps until the final PDF was created.</ns0:p><ns0:p>This article presents a comprehensive and highly integrated software tool for the creation of both the 3D model files (which can be embedded into PDF documents) and the final, ready-to-publish PDF documents with embedded interactive 3D figures. The presented solution is based on MeVisLab, is available for all major operating systems (Windows, MacOS and Linux) and requires no commercial license. The source code is available but does not necessarily need to be compiled, since binary add-on installers for all platforms are available. A detailed online documentation, an example project and an integrated wizard facilitate re-use and customization.</ns0:p></ns0:div>
<ns0:div><ns0:head>Background and Related Work The Portable Document Format</ns0:head><ns0:p>The Portable Document Format is a document description standard for the definition of electronic documents independently of the software, the hardware or the operating system that is used for creating or consuming (displaying, printing…) it (Adobe, 2008a). A PDF file can comprise all necessary information and all resources to completely describe the layout and the content of an electronic document, including texts, fonts, images and multimedia elements like audio, movies or 3D models. Therefore, it fulfils all requirements for an interactive publication document as proposed by (Thoma et al., 2010).</ns0:p><ns0:p>Although it is an ISO standard <ns0:ref type='bibr'>(ISO 32000-1:2008</ns0:ref><ns0:ref type='bibr'>(ISO, 2008)</ns0:ref>), the specification is available to the full extent from the original developer Adobe (Adobe, 2015) and can be used royalty-free.</ns0:p></ns0:div>
<ns0:div><ns0:head>Embedding 3D Models into PDF</ns0:head><ns0:p>The fifth edition of the PDF specification (PDF version 1.6 (Adobe, 2004)), published in 2004, was the first to support so-called '3D Artwork' as an embedded multimedia feature. In January 2005, the Acrobat 7 product family provided the first implementation of tools for creating and displaying these 3D models (Adobe, 2005). The latest version (PDF version 1.7 (Adobe, 2008a)) supports three types of geometry (meshes, polylines and point clouds), textures, animations, 15 render modes, 11 lighting schemes and several other features. The only 3D file format that is supported by the ISO standard (ISO, 2008) is Universal 3D (U3D, see section below). Support for another 3D format (Product Representation Compact, PRC) has been added by Adobe (Adobe, 2008b) and has been proposed for integration into the replacement norm ISO 32000-2 (PDF 2.0). However, this new standard is currently only available as a draft version (ISO, 2014) and has not yet been adopted.</ns0:p><ns0:p>Although the first application in a scientific context was proposed in November 2005 (Zlatanova & Verbree, 2005), and thus quite soon after this new technology became available, it took three more years before the first applications were actually demonstrated in scholarly articles (Ruthensteiner & Heß, 2008; Kumar et al., 2008; Barnes & Fluke, 2008). Since then, the number of publications that apply PDF 3D technology either in theory or in practice has increased almost every year (Table 1). The most sophisticated implementation so far is the reporting of planning results for liver surgery, where the PDF roots are hidden behind a user interface which emulates a stand-alone software application (Newe, Becker & Schenk, 2014).</ns0:p></ns0:div>
<ns0:div><ns0:head>The Universal 3D (U3D) file format</ns0:head><ns0:p>As outlined above, the U3D file format is the only 3D format that is supported by the current ISO specification of PDF. Initially designed as an exchange format for Computer Aided Construction (CAD), it was later standardized by Ecma International (formerly known as European Computer Manufacturers Association, ECMA) as ECMA-363 (Universal 3D File Format). The latest version is the 4th edition from June 2007 (ECMA, 2007).</ns0:p><ns0:p>U3D is a binary file format that comprises all information to describe a 3D scene graph. A U3D scene consists of an arbitrary number of objects that can be sorted in an object tree. The geometry of each object can be defined as a triangular mesh, a set of lines or a set of points. A proprietary bit encoding algorithm allows for a highly compressed storage of the geometry data. A number of additional features and entities (textures, lighting, views, animations) can be defined; details are described in previously published articles (Newe & Ganslandt, 2013).</ns0:p><ns0:p>The scholarly publishing company Elsevier invites authors to supplement their articles with 3D models in U3D format (Elsevier, 2015) and many 3D software tools provide the possibility to export in U3D format. However, most of them are commercial software, although open source solutions like MeshLab (http://meshlab.sourceforge.net/) are available as well.</ns0:p></ns0:div>
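As an illustration of the scene-graph structure just described, the following Python sketch models what a U3D scene logically contains: an object tree whose nodes carry mesh, line-set or point-set geometry. It mirrors only the logical structure, not the proprietary binary encoding; all class and field names are hypothetical.

from dataclasses import dataclass, field
from typing import List, Literal, Optional

# Hypothetical, simplified model of what a U3D scene graph encodes:
# an object tree where each node may carry one kind of geometry.

@dataclass
class Geometry:
    kind: Literal["mesh", "lineset", "pointset"]
    vertices: List[tuple]                  # (x, y, z) coordinates
    indices: Optional[List[tuple]] = None  # triangles or segments; unused for point sets

@dataclass
class SceneNode:
    name: str
    geometry: Optional[Geometry] = None
    children: List["SceneNode"] = field(default_factory=list)

# Example: a root node with a mesh child and a point-cloud child.
root = SceneNode("scene")
root.children.append(SceneNode("organ_surface",
    Geometry("mesh", [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])))
root.children.append(SceneNode("landmarks",
    Geometry("pointset", [(0.2, 0.3, 0.1)])))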
<ns0:div><ns0:head>Creating 3D Model Files and PDF Documents</ns0:head><ns0:p>Although many tools and libraries are available that support the creation of 3D model files and of final PDF documents, the whole process is still cumbersome. The problems are manifold: some tools require programming skills; some do not support features that are of interest for scientific 3D data (like polylines (Newe, 2015) and point clouds (Barnes & Fluke, 2008)). Operating system platform support is another issue, as well as royalty-free use.</ns0:p><ns0:p>As regards the creation of the 3D model files, most of these problems have been addressed in a previous article (Newe, 2015). The main problem, however, remains the creation of the final PDFs. Specifying the content and (in particular) the layout of a document can be a complex task and is usually the domain of highly specialized word processor software. Figures and supplements for scholarly publications, on the other hand, usually have a specific layout where only the contents of (a limited number of) pre-defined elements vary. There are at least three common elements for a scientific figure: the figure itself, a short caption text and a longer descriptive text. If the figure is intended to be provided as a supplemental information file instead of being integrated into the main article text, some additional information is necessary as well: at least a general headline and an optional reference to the main article should be provided. If the document content is modularized into these five key elements (Figure 1), the creation of the PDF itself becomes a rather simple task, because the layout can be pre-defined.</ns0:p><ns0:p>One last difficulty arises from a peculiarity of interactive 3D figures in PDF: the number of viewing options (e.g., camera angle, zoom, lighting…) is nearly unlimited. Although such a figure is intended to provide all these options, an author usually wants to define an initial view of the objects, if only to simply ensure that all objects are visible. No freely available tool for PDF creation currently provides a feature to pre-define such a view. The movie15 package for LaTeX (Grahn, 2005) provides a mechanism to determine the view parameters, but it requires the generation of intermediate PDFs.</ns0:p><ns0:p>Finally, it must be mentioned that many previously published 3D models are very large -sometimes up to nearly 100 megabytes (Krause et al., 2014). In most cases, this size could (and should) be reduced significantly, because the density of polygon meshes does usually not need to be very high for illustrative purposes.</ns0:p></ns0:div>
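Since the layout of such a figure document is fixed and only the five key elements described above vary, the document specification reduces to a small record. The following minimal Python sketch illustrates this modularization; the field names are hypothetical and not the actual module interface.

from dataclasses import dataclass

# Hypothetical container for the five key elements of a supplemental
# 3D figure; a PDF generator with a pre-defined layout only needs to
# fill these slots.

@dataclass
class Figure3DSpec:
    headline: str      # general headline of the supplement
    reference: str     # optional reference to the main article
    model_file: str    # path to the U3D model (the "figure itself")
    caption: str       # short caption text
    description: str   # longer descriptive text

spec = Figure3DSpec(
    headline="Supplement S1 to: Example article",
    reference="doi:10.0000/example",
    model_file="model.u3d",
    caption="Interactive 3D model of the example structure.",
    description="Click to activate; drag to rotate, scroll to zoom.",
)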
<ns0:div><ns0:head>MeVisLab</ns0:head><ns0:p>MeVisLab is a framework for image processing and an environment for visual development, published by MeVis Medical Solutions AG and Fraunhofer MEVIS in Bremen, Germany (Koenig et al., 2006; Heckel, Schwier & Peitgen, 2009; Ritter et al., 2011). It is available via download (http://www.mevislab.de/download/) for all major platforms (Microsoft Windows, Mac OS and Linux) and has a licensing option which is free for use in non-commercial organizations and research ('MeVisLab SDK Unregistered' license, http://www.mevislab.de/mevislab/versions-and-licensing/). Besides the development features, MeVisLab can be used as a framework for creating sophisticated applications with graphical user interfaces that hide the underlying platform and that can simply be used without any programming knowledge. MeVisLab has been evaluated as a very good platform for creating application prototypes (Bitter et al., 2007), is very well documented (http://www.mevislab.de/developer/documentation/) and is supported by an active online community (http://www.mevislab.de/developer/community/; https://github.com/MeVisLab/communitymodules/tree/master/Community).</ns0:p><ns0:p>All algorithms and functions included in MeVisLab are represented and accessed by 'modules', which can be arranged and connected to image processing networks or data processing networks on a graphical user interface (GUI) following the visual data-flow development paradigm. By means of so-called 'macro modules', these networks can then be converted with little effort into complete applications with an own GUI.</ns0:p></ns0:div>
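As a conceptual illustration of the visual data-flow paradigm (plain Python, not MeVisLab's actual module API): modules compute their output from connected upstream modules, and a "macro" wraps a whole sub-network behind a single module-like interface.

# Conceptual sketch of a data-flow network (not MeVisLab code):
# each module transforms its inputs; a macro hides a sub-network.

class Module:
    def __init__(self, func):
        self.func = func
        self.inputs = []
    def connect(self, upstream):
        self.inputs.append(upstream)
        return self
    def output(self):
        return self.func(*[m.output() for m in self.inputs])

class Macro(Module):
    """Wraps an internal network behind one module-like interface."""
    def __init__(self, terminal_module):
        super().__init__(lambda: terminal_module.output())

# Example network: load -> smooth, wrapped as an "app" macro.
load   = Module(lambda: [3, 1, 4, 1, 5])
smooth = Module(lambda xs: [round(sum(xs) / len(xs), 2)] * len(xs)).connect(load)
app    = Macro(smooth)   # the "app" exposes only the final result
print(app.output())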
<ns0:div><ns0:head>Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Elicitation of Requirements</ns0:head><ns0:p>As described above, the generation of the necessary 3D model data and particularly of the final PDF is still subject to a number of difficulties. Therefore, the first step was the creation of a list of requirements specifications with the aim to create a tool that overcomes these known drawbacks. </ns0:p></ns0:div>
<ns0:div><ns0:head>ID Requirement Specification R1</ns0:head><ns0:p>The software shall create ready-to-publish PDF documents with embedded 3D models. R1.1 The software shall offer an option to specify the activation mode and the deactivation mode for the 3D models.</ns0:p></ns0:div>
<ns0:div><ns0:head>R2</ns0:head><ns0:p>The software shall provide an integrated, single-window user interface that comprises all necessary steps.</ns0:p></ns0:div>
<ns0:div><ns0:head>R3</ns0:head><ns0:p>The software shall be executable under Windows, MacOS and at least one Linux distribution.</ns0:p></ns0:div>
<ns0:div><ns0:head>R4</ns0:head><ns0:p>The software shall be executable without the need to purchase a commercial license.</ns0:p></ns0:div>
<ns0:div><ns0:head>R5</ns0:head><ns0:p>The software shall create 3D model files in U3D format. R5.1 The software shall create view definitions for the 3D model. R5.2 The software shall create poster images for the PDF document.</ns0:p></ns0:div>
<ns0:div><ns0:head>R6</ns0:head><ns0:p>The software shall import mesh geometry from files in OBJ, STL and PLY format. R6.1 The software should import mesh geometry from other file formats as well. R6.2 The software shall offer an option to reduce the number of triangles of imported meshes. R6.3 The software shall offer an option to specify the U3D object name and the color of imported meshes.</ns0:p></ns0:div>
<ns0:div><ns0:head>R7</ns0:head><ns0:p>The software shall import line set geometry from files in text format. R7.1 The software shall offer an option to specify the U3D object name and the color of imported line sets.</ns0:p></ns0:div>
<ns0:div><ns0:head>R8</ns0:head><ns0:p>The software shall import point set geometry from files in text format. R8.1 The software shall offer an option to specify the U3D object name of imported point sets.</ns0:p><ns0:p>Two requirements have been identified to be the most important ones: 1) the demand for a tool that creates 'ready-to-publish' PDF documents without the need for commercial software and 2) the integration of all necessary steps into a single and easy-to-use interface. Besides these two main requirements, a number of additional requirements have then been identified as well. See Table 2 for a full list of all requirements that were the basis for the following development.</ns0:p></ns0:div>
<ns0:div><ns0:head>Creation of an 'App' for MeVisLab</ns0:head><ns0:p>MeVisLab-based solutions presented in previous work (Newe & Ganslandt, 2013; Newe, 2015) already provide the possibility to create U3D files without requiring programming skills and without the need for an intensive training. However, they still needed some basic training as regards assembling the necessary processing chains in MeVisLab. Furthermore, the creation of the final PDF was not possible so far.</ns0:p><ns0:p>Therefore, a new macro module was created for MeVisLab. A macro module encapsulates complex processing networks and can provide an integrated user interface. In this way, the internal processes can be hidden away from the user, who can focus on a streamlined workflow instead. Designed in an appropriate way, a macro module can also be considered as an 'app' inside of MeVisLab.</ns0:p><ns0:p>In order to provide the necessary functionality, some auxiliary tool modules (e.g., for the creation of the actual PDF file) needed to be developed as well. Along with the modules for U3D export mentioned above, these auxiliary tool modules were integrated into the internal processing network of the main 'app' macro. The technical details of these internal modules are not within the scope of this article. However, the source code is available and interested readers are free to explore the code and to use it for own projects.</ns0:p><ns0:p>The user interface of the app was designed in a way that it guides novice users step-by-step without treating experienced users too condescendingly, though. Finally, a comprehensive documentation including an example project, a wizard for creating tailored PDF modules and a verbose help text was set up.</ns0:p></ns0:div>
<ns0:div><ns0:head>Deployment of Core Functionality</ns0:head><ns0:p>For the creation of the actual PDF files, version 2.2.0 of the cross-platform, open source library libHaru (http://libharu.org/) was selected, slightly modified and integrated as a third-party contribution into MeVisLab.</ns0:p><ns0:p>Next, the application programming interface (API) of libHaru was wrapped into an abstract base module for MeVisLab in order to provide easy access to all functions of the library and in order to hide away standard tasks like creating a document or releasing memory. A large number of convenience functions were added to this base module, and an exemplary MeVisLab project was set up in order to demonstrate how to use the base module for tailored applications. This base module also served as the basis for the PDF creation of the app macro described above. Finally, a project wizard was integrated into the MeVisLab GUI.</ns0:p></ns0:div>
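To illustrate the wrapping approach in spirit, here is a minimal Python sketch with hypothetical names (the actual base module is implemented in C++ around the C library libHaru): a thin class owns the low-level handle, hides setup, and condenses multi-call sequences into single convenience methods.

# Minimal sketch of an API-wrapping base class (hypothetical names).

class LowLevelPdfLib:
    """Stand-in for a low-level PDF API that needs several calls per task."""
    def __init__(self):          self.ops = []
    def new_page(self):          self.ops.append("new_page")
    def begin_text(self):        self.ops.append("begin_text")
    def set_font(self, f, s):    self.ops.append(f"set_font({f},{s})")
    def text_out(self, x, y, t): self.ops.append(f"text_out({x},{y},{t!r})")
    def end_text(self):          self.ops.append("end_text")
    def save(self, path):        self.ops.append(f"save({path})")

class PdfGeneratorBase:
    """Hides standard tasks (document setup, teardown) and offers
    convenience functions that bundle routine call sequences."""
    def __init__(self):
        self._lib = LowLevelPdfLib()
        self._lib.new_page()
    def write_text_at(self, x, y, text, font="Helvetica", size=11):
        self._lib.begin_text()
        self._lib.set_font(font, size)
        self._lib.text_out(x, y, text)
        self._lib.end_text()
    def save(self, path):
        self._lib.save(path)

doc = PdfGeneratorBase()
doc.write_text_at(72, 720, "Hello, 3D PDF!")  # one call instead of four
doc.save("out.pdf")

A derived module would add its own layout logic on top of such convenience methods instead of talking to the raw library.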
<ns0:div><ns0:head>Results</ns0:head><ns0:p>The 'Scientific3DFigurePDFApp' module</ns0:p><ns0:p>The new macro module 'Scientific3DFigurePDFApp' for MeVisLab provides an integrated user interface for all steps that are necessary for the creation of U3D model files and for the creation of the final PDF documents with embedded 3D models. The model editor part produces U3D model files of geometry data that are compatible with version 4 of the ECMA-363 standard, and poster images in Portable Network Graphics (PNG) format. The PDF editor part produces PDF documents that are compliant with PDF version 1.7 (ISO 32000-1:2008). An example PDF is available as Supplemental File S1.</ns0:p><ns0:p>The user interface is arranged in tabs, whereas each tab comprises all functions for one step of the workflow. By processing the tabs consecutively, the user can assemble and modify 3D models, save them in U3D format, create views and poster images for the PDF document, and finally create the PDF itself step by step (Figure 2).</ns0:p><ns0:p>The raw model data can be collected in two ways: either by feeding it to the input connectors or by assembling it by means of the built-in assistant. The former option is intended for experienced MeVisLab users that want to attach the module at the end of a processing chain. The latter option addresses users that simply want to apply the app for converting existing 3D models.</ns0:p><ns0:p>The software allows for importing the geometry data of 39 different 3D formats, including point clouds and line sets from files in character-separated value (CSV) format (see Table 3 for a full list). The import of textures and animations is not supported. Objects from different sources can be combined and their U3D properties (colour, name, position in the object tree) can be specified. The density of imported meshes can be adjusted interactively, and multiple views (i.e., the specification of camera, lighting and render mode) can be pre-defined interactively as well. Finally, it is also possible to create a poster image which can replace an inactive 3D model in the PDF document if the model itself is disabled or if it cannot be displayed for some reason (e.g., because the reading software does not provide the necessary features).</ns0:p><ns0:p>All functions are explained in detail in a comprehensive documentation which can be accessed directly inside MeVisLab. A stand-alone copy of the documentation is available as Supplementary File S2. In order to use the app, it simply needs to be instantiated (via the MeVisLab menu: Modules → PDF → Apps → Scientific3DFigurePDFApp). A full feature list is available in Table 4.</ns0:p></ns0:div>
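To illustrate what a character-separated geometry file can look like, the following sketch writes and re-reads a small point cloud with one x;y;z triple per line. The exact column layout expected by the app is described in its documentation; the layout used here is an assumption for illustration only.

import csv

# Write a tiny point cloud: one "x;y;z" triple per line (assumed layout).
points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.5, 1.0, 0.2)]
with open("points.csv", "w", newline="") as f:
    writer = csv.writer(f, delimiter=";")
    writer.writerows(points)

# Read it back into a list of float triples.
with open("points.csv", newline="") as f:
    cloud = [tuple(map(float, row)) for row in csv.reader(f, delimiter=";")]
print(cloud)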
<ns0:div><ns0:head>Additional Features for Tailored PDF Creation</ns0:head><ns0:p>The abstract module which wraps the API of the PDF library libHaru into a MeVisLab module was made public ('PDFGenerator' module) and can be used for the development of tailored MeVisLab modules. In order to facilitate the re-use of this abstract base module, an exemplary project was set up (/Projects/PDFExamples/SavePDFTemplate). This project demonstrates how to derive a customized module from the PDFGenerator base module and how to specify the content of the PDF file that will be created by means of the new module. The template code is verbosely annotated and includes examples for setting PDF properties (e.g., meta data, page size, encryption) as well as the document content (including text, images, graphics and 3D models). The output of the SavePDFTemplate module is illustrated in Figure 3.</ns0:p></ns0:div>
<ns0:div><ns0:head>Availability</ns0:head><ns0:p>The whole PDF project for MeVisLab (which includes the Scientific3DFigurePDFApp, the PDFGenerator base module, the example project and the wizard) is available as pre-built binary add-on installers and as source code via the MeVisLab community repository (https://github.com/MeVisLab/communitymodules/tree/master/Community). The latter approach, however, requires compiling the source code and is intended only for experienced users or for users that are willing to become acquainted with MeVisLab. Note that there are multiple versions available for Windows, depending on the compiler that is intended to be used.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head></ns0:div>
<ns0:div><ns0:head>A Toolbox for the Creation of 3D PDFs</ns0:head><ns0:p>The utilization of 3D PDF technology for scholarly publishing has been demonstrated and proven both useful and necessary by several authors in the past years. The mainstream application of 3D PDF in science, however, is yet to come.</ns0:p><ns0:p>One reason might be the difficult process that has so far been necessary to create the appropriate data and the relevant electronic documents. This article presents an all-in-one solution for the creation of such files which requires no extraordinary skills. It can be used by novice users as an out-of-the-box tool as well as a basis for sophisticated tailored solutions for high-end users.</ns0:p><ns0:p>Many typical problems as regards the creation of 3D model files have been addressed and solved. All steps of the workflow are integrated seamlessly. The software is available for all OS platforms and can import and process objects from many popular 3D formats, including polylines and point clouds (Table 3). The density of imported meshes can be adjusted interactively, which enables the user to find the best balance between the desired level of detail and the file size.</ns0:p><ns0:p>The main contribution, however, is the possibility to create ready-to-publish PDF documents with a minimum of steps. This approach was proposed to be the ideal solution by Kumar et al. (2010). To the best of our knowledge, this is the first time that such an integrated and comprehensive solution is made available for the scientific community.</ns0:p></ns0:div>
<ns0:div><ns0:head>Applications</ns0:head><ns0:p>The areas of application (see an example in Supplemental File S1) are manifold and not limited to a specific scientific discipline. On the contrary: every field of research that produces three-dimensional data can and should harness this technology in order to get the best out of that data.</ns0:p><ns0:p>One (arbitrary) example for the possible use of mesh models from the recent literature is 3D ultrasound. Dahdouh et al. recently published the results of segmentation of obstetric 3D ultrasound images (Dahdouh et al., 2015). That article contains several figures that project three-dimensional models onto the available two-dimensional surface. A presentation in native 3D would have enabled the reader to interactively explore the whole models instead of just one pre-defined snapshot. Another example is the visualization of molecular structures as demonstrated by Kumar et al. (2008).</ns0:p><ns0:p>Polylines can be used to illustrate nervous fibre tracking. Mitter et al. (2015) used 2D projections of association fibres in the foetal brain to visualize their results. A real 3D visualization would have been very helpful in this case as well: while some basic knowledge about a depicted object helps to understand 2D projections of 3D structures, the possibility to preserve at least a little depth perception decreases with an increasing level of abstraction (mesh objects vs. polylines).</ns0:p><ns0:p>This particularly applies to point clouds, which can be observed, for example, in an article by Qin et al. (2015): although these authors added three-dimensional axes to their figure (no. 6), it is still hard to get an impression of depth and therefore of the real position of the points in 3D space.</ns0:p></ns0:div>
<ns0:div><ns0:head>Limitations</ns0:head><ns0:p>Although the presented software pulls down the major thresholds that impede the creation of interactive figures for scholarly publishing, some limitations still need to be considered.</ns0:p><ns0:p>A general concern is the suitability of PDF as a means to visualize and to exchange 3D models. PDF and U3D (or PRC) do not support all features that other modern 3D formats provide and that would be of interest for the scientific community (e.g., volumetric models). On the other hand, PDF is commonly accepted and de-facto the only file format that is used for the electronic exchange of scholarly articles. Therefore, PDF may not be the perfect solution, but it is the best solution that is currently available.</ns0:p><ns0:p>The presented software requires MeVisLab as background framework, and the installation of MeVisLab requires a medium-sized download of about 1 GB (depending on the operating system), which could be considered rather large for a PDF creator. On the other hand, MeVisLab integrates a large library for the processing and the visualization of (biomedical) image data. Furthermore, other frameworks (like MeshLab) do not provide all necessary features (e.g., polylines or point clouds) and were therefore not considered to meet the basic requirements for the development of the software tool.</ns0:p><ns0:p>The import of 3D models is based on the Open Asset Import Library (http://www.assimp.org/), which does not support all features of all 3D formats. For example, textures and animations cannot be imported and should thus not be embedded into a model file that is intended to be imported. However, although the model-editor part of the presented software does not support textures (or animations), the PDF-creator part can still be used to produce scientific PDFs with textured or animated models, if the necessary U3D files have been created with external and more specialized software. In this use case, the Scientific3DFigurePDFApp does not integrate all necessary steps, but it still remains a 'few-clicks' alternative for the creation of interactive PDF supplements for scientific publications and it still obviates the need for a commercial solution.</ns0:p><ns0:p>Finally, very large model files should be avoided. If a large model fails to import, it should be separated into several sub-models. A mesh reduction can be applied after the import, but a previously reduced mesh speeds up the import process.</ns0:p></ns0:div>
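For pre-reducing a mesh before import, any external decimation tool can be used. Below is a minimal sketch with the Open3D Python library (one possible choice, not part of the presented toolbox; the file names are placeholders):

import open3d as o3d

# Load a dense surface mesh (placeholder file name).
mesh = o3d.io.read_triangle_mesh("dense_model.ply")

# Quadric-decimate to roughly 20,000 triangles; for illustrative
# figures this is usually more than enough detail.
reduced = mesh.simplify_quadric_decimation(target_number_of_triangles=20000)
reduced.compute_vertex_normals()

o3d.io.write_triangle_mesh("reduced_model.ply", reduced)
print(len(mesh.triangles), "->", len(reduced.triangles), "triangles")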
<ns0:div><ns0:head>Suitable Reading Software</ns0:head><ns0:p>The Adobe Reader (http://get.adobe.com/reader/otherversions/) is available free of charge for all major operating systems (MS Windows, Mac OS, Linux). It is currently the only software that can be used to display embedded 3D models and to let the user interact with them (zooming, panning, rotating, selection of components). However, even the Adobe Reader does not support all U3D features (Adobe, 2007), e.g., Glyphs and View Nodes.</ns0:p><ns0:p>Furthermore, a rendering flaw has been observed on low-end graphic boards in MacOS hardware (Figure 5): Adobe Reader for MacOS does not render transparent surfaces superimposed upon each other correctly; instead, a strong tessellation effect is visible. This may also occur on other platforms but has not been reported yet. Since this is an issue with the rendering engine of Adobe Reader, there is currently no other solution than using a different render mode (e.g., one of the wireframe modes) or different hardware.</ns0:p><ns0:p>Experience shows that many users do not expect a PDF document to be interactive. Therefore, possible consumers should be notified that it is possible to interact with the document, and they should also be notified that the original Adobe Reader is required for this. Although poster images are a workaround to avoid free areas in PDF readers that are not capable of rendering 3D scenes, missing 3D features of a certain reader could be confusing for a user.</ns0:p></ns0:div>
<ns0:div><ns0:head>A Basis for Own Modules</ns0:head><ns0:p>As pointed out in previous work (Newe, 2015), the authoring of a PDF document is usually a complex task, and thus in most cases it cannot be avoided to separate the generation of 3D model data from the actual PDF authoring. Although the software tool presented in this article mitigates this general problem by integrating model generation and PDF creation, it is still limited to a certain use case and a pre-defined PDF layout.</ns0:p><ns0:p>However, the API of the core PDF functionality is public and designed in a way that facilitates the creation of own PDF export modules. The large number of convenience functions of the abstract base module (PDFGenerator) facilitates the creation of derived modules. These functions massively lighten the programmer's workload by providing simple access to routine tasks like writing text at a defined position or embedding a 3D model, which would normally require a whole series of API calls. Finally, the built-in wizard generates all necessary project files and source code files to create a fully functional module barebone which only needs to be outfitted with the desired functionality.</ns0:p></ns0:div><ns0:div><ns0:head>Outlook</ns0:head><ns0:p>Although this article represents an important milestone, the development of the PDF project for MeVisLab is ongoing. Future goals are the integration of virtual volume rendering (Barnes et al., 2013), animations (van de Kamp et al., 2014) and the parsing of U3D files that have been created with external software. The progress can be tracked via GitHub (https://github.com/MeVisLab/communitymodules/tree/master/Community) and updates to the binary files will be published regularly.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusion</ns0:head><ns0:p>Three-dimensional data is produced at an increasing rate throughout all scientific disciplines. The Portable Document Format is a well suited medium for the visualization and the publication of this kind of data. With the software presented in this article, the complete workflow for generating 3D model files and 3D PDF documents for scholarly publications can be processed in a consolidated working environment, free of license costs and with all major operating systems.</ns0:p><ns0:p>The software addresses novices as well as experienced users: on the one hand, it provides an out-of-the-box solution that can be used like a stand-alone application, and on the other hand, all sources and APIs are freely available for specifically tailored extensions.</ns0:p></ns0:div>
<ns0:div><ns0:head>List of abbreviations</ns0:head></ns0:div>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. General layout of a scholarly figure if provided as supplemental material.</ns0:figDesc><ns0:graphic coords='7,72.00,181.86,448.50,472.50' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. User interface of the app. The user interface comprises all necessary steps for the creation of 3D model files and PDF files. It is arranged in tabs for each step.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Output of the SavePDFTemplate module.</ns0:figDesc><ns0:graphic coords='15,72.00,72.00,309.77,436.46' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Project wizard for creating customized PDF modules.</ns0:figDesc><ns0:graphic coords='16,72.00,72.00,468.00,140.25' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Rendering artifacts. These tessellation artifacts have been observed on MacOS systems with low-end graphic hardware.</ns0:figDesc><ns0:graphic coords='20,84.91,93.97,318.15,216.00' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1. Number of publications related to 3D PDFs in biomedical sciences since 2005 (not comprehensive).</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Year</ns0:cell><ns0:cell>Number of publications with embedded/supplemental 3D PDF</ns0:cell><ns0:cell>Number of publications dealing with/mentioning 3D PDF</ns0:cell></ns0:row><ns0:row><ns0:cell>2005</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>2008</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>2009</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>2010</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>7</ns0:cell></ns0:row><ns0:row><ns0:cell>2011</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>6</ns0:cell></ns0:row><ns0:row><ns0:cell>2012</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell>2013</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>2014</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>7</ns0:cell></ns0:row><ns0:row><ns0:cell>2015</ns0:cell><ns0:cell>31</ns0:cell><ns0:cell>2</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Requirements for the development of the software tool. The two main requirements are highlighted in bold font.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>List of supported 3D formats for importing geometry data (textures, animations and other features are not supported).</ns0:figDesc><ns0:table><ns0:row><ns0:cell>File format</ns0:cell><ns0:cell>Typical File Extension(s)</ns0:cell></ns0:row><ns0:row><ns0:cell>Stereolithography</ns0:cell><ns0:cell>*.stl</ns0:cell></ns0:row><ns0:row><ns0:cell>Stanford Polygon Library</ns0:cell><ns0:cell>*.ply</ns0:cell></ns0:row><ns0:row><ns0:cell>Wavefront Object</ns0:cell><ns0:cell>*.obj</ns0:cell></ns0:row><ns0:row><ns0:cell>Object File Format</ns0:cell><ns0:cell>*.off</ns0:cell></ns0:row><ns0:row><ns0:cell>Blender</ns0:cell><ns0:cell>*.blend</ns0:cell></ns0:row><ns0:row><ns0:cell>Raw Triangles</ns0:cell><ns0:cell>*.raw</ns0:cell></ns0:row><ns0:row><ns0:cell>Raw Point Clouds</ns0:cell><ns0:cell>*.csv; *.txt</ns0:cell></ns0:row><ns0:row><ns0:cell>Raw Line Sets</ns0:cell><ns0:cell>*.csv; *.txt</ns0:cell></ns0:row><ns0:row><ns0:cell>3D GameStudio</ns0:cell><ns0:cell>*.mdl; *.hmp</ns0:cell></ns0:row><ns0:row><ns0:cell>3D Studio Max</ns0:cell><ns0:cell>*.3ds; *.ase</ns0:cell></ns0:row><ns0:row><ns0:cell>AC3D</ns0:cell><ns0:cell>*.ac</ns0:cell></ns0:row><ns0:row><ns0:cell>AutoCAD/Autodesk</ns0:cell><ns0:cell>*.dxf</ns0:cell></ns0:row><ns0:row><ns0:cell>Biovision BVH</ns0:cell><ns0:cell>*.bvh</ns0:cell></ns0:row><ns0:row><ns0:cell>CharacterStudio Motion</ns0:cell><ns0:cell>*.csm</ns0:cell></ns0:row><ns0:row><ns0:cell>Collada</ns0:cell><ns0:cell>*.dae; *.xml</ns0:cell></ns0:row><ns0:row><ns0:cell>DirectX X</ns0:cell><ns0:cell>*.x</ns0:cell></ns0:row><ns0:row><ns0:cell>Doom 3</ns0:cell><ns0:cell>*.md5mesh; *.md5anim; *.md5camera</ns0:cell></ns0:row><ns0:row><ns0:cell>Irrlicht</ns0:cell><ns0:cell>*.irrmesh; *.irr; *.xml</ns0:cell></ns0:row><ns0:row><ns0:cell>LightWave</ns0:cell><ns0:cell>*.lwo; *.lws</ns0:cell></ns0:row><ns0:row><ns0:cell>Milkshape 3D</ns0:cell><ns0:cell>*.ms3d</ns0:cell></ns0:row><ns0:row><ns0:cell>Modo Model</ns0:cell><ns0:cell>*.lxo</ns0:cell></ns0:row><ns0:row><ns0:cell>Neutral File Format</ns0:cell><ns0:cell>*.nff</ns0:cell></ns0:row><ns0:row><ns0:cell>Ogre</ns0:cell><ns0:cell>*.mesh.xml; *.skeleton.xml; *.material</ns0:cell></ns0:row><ns0:row><ns0:cell>Quake I, Quake II, Quake III</ns0:cell><ns0:cell>*.mdl; *.md2; *.md3; *.pk3</ns0:cell></ns0:row><ns0:row><ns0:cell>Quick3D</ns0:cell><ns0:cell>*.q3o; *.q3s</ns0:cell></ns0:row><ns0:row><ns0:cell>RtCW</ns0:cell><ns0:cell>*.mdc</ns0:cell></ns0:row><ns0:row><ns0:cell>Sense8 WorldToolkit</ns0:cell><ns0:cell>*.nff</ns0:cell></ns0:row><ns0:row><ns0:cell>Terragen Terrain</ns0:cell><ns0:cell>*.ter</ns0:cell></ns0:row><ns0:row><ns0:cell>TrueSpace</ns0:cell><ns0:cell>*.cob, *.scn</ns0:cell></ns0:row><ns0:row><ns0:cell>Valve Model</ns0:cell><ns0:cell>*.smd, *.vta</ns0:cell></ns0:row><ns0:row><ns0:cell>XGL</ns0:cell><ns0:cell>*.xgl, *.zgl</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>List of features.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Category</ns0:cell><ns0:cell>Features</ns0:cell></ns0:row><ns0:row><ns0:cell>Data Import</ns0:cell><ns0:cell>Import external data, import MeVisLab data, import point clouds, import line sets, import meshes from 37 file formats, adjust mesh density, preview import</ns0:cell></ns0:row><ns0:row><ns0:cell>Point Cloud Editing</ns0:cell><ns0:cell>Specify point cloud name, specify position in model tree, preview settings</ns0:cell></ns0:row><ns0:row><ns0:cell>Line Set Editing</ns0:cell><ns0:cell>Specify line set name, specify position in model tree, specify colour, preview settings</ns0:cell></ns0:row><ns0:row><ns0:cell>Mesh Editing</ns0:cell><ns0:cell>Specify mesh name, specify position in model tree, specify colour, specify opacity, preview settings</ns0:cell></ns0:row><ns0:row><ns0:cell>View Specification</ns0:cell><ns0:cell>Specify view name, specify background colour, specify lighting scheme, specify render mode, preview settings, specify multiple views</ns0:cell></ns0:row><ns0:row><ns0:cell>U3D Creation</ns0:cell><ns0:cell>Store model in U3D format, preview scene</ns0:cell></ns0:row><ns0:row><ns0:cell>Poster Image Creation</ns0:cell><ns0:cell>Store poster in PNG format, preview scene, specify superimposed text</ns0:cell></ns0:row><ns0:row><ns0:cell>PDF Creation</ns0:cell><ns0:cell>Store document in PDF (v1.7) format, specify header citation text, specify header headline text, specify U3D file, specify poster file, specify model activation mode, specify model deactivation mode, specify toolbar enabling, specify navigation bar enabling, specify animation start mode, specify caption, specify description text</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "#CS-2016:02:9236:1:0:REVIEW
Enriching scientific publications with interactive 3D PDF: An integrated toolbox for creating ready-to-publish
figures
Response to editor & reviewers
Dear Editor, dear Reviewers,
I am pleased to submit the revised version of # CS-2016:02:9236:1:0:REVIEW “Enriching scientific publications with
interactive 3D PDF: An integrated toolbox for creating ready-to-publish figures” for publication in PeerJ Computer
Science. Please note that the title has been changed based on the concerns of reviewer #1.
I appreciated the criticism of reviewer #1 and carefully considered revisions. All concerns have been addressed as
outlined below. The mentioned line numbers refer to the manuscript with tracked changes.
Editor comment:
“We have received two further reviews from our advisors. Based on all four reports received, your manuscript could
be reconsidered for publication should you be able to incorporate (or adequately respond to) the concerns of
Reviewer 1. In particular, that reviewer is requesting that your manuscript only list those formats which are fully
supported. In that same comment, the reviewer notes that textures are not supported and that this is a fatal
limitation - please respond to this concern which, if correct, could preclude acceptance. That same reviewer notes
that for your article to be accurate, you should acknowledge that this is not a ‘complete toolbox’ (but instead limited
to a specific list of data visualizations) - again you should respond to this concern.“
The title has been changed. The intention for choosing the initial title was to express that the toolbox includes all
necessary tools for the creation of a ready-to-publish PDF document. I agree that it does not contain every
conceivable tool for every feature of PDF, but I did not expect that a reader would assume this either.
That’s the domain of Adobe, a commercial software company that has been working on PDF tools for decades.
However, this misapprehension might be caused by English being not my native language and I apologize for this.
The new title is much less handy, but more precise, though.
As regards the other comments of reviewer #1, see my responses below.
Reviewer #1 comment:
“The input format list. The author admits that not all features of all formats are supported. I accept that, it is
notoriously problematic. I suggest that if a format is not fully supported then it should not be in the supported list. I
would rather see a short list (even if it is just 1) of formats that are known for certain to work irrespective of what
features of the format employed. Less is more.”
The current release focuses on the geometry data, which has now been pointed out more clearly several times in
the main text (lines 211, 227, 229). Rather than reducing the list to formats that are fully supported (which would
actually be only STL, since this is the only format which is by definition limited to geometry data), the table header
was modified to point out once more that generally only geometry data can be imported (lines 237-238). (In fact
there are many other features unsupported: cameras, lighting…)
“For example I installed MeVisLab 4.8 for Mac, then installed 3D PDF support from here (3rd April)
https://zenodo.org/record/48758 Opened an obj file in the app but it appeared without the associated textures. The
revised paper acknowledges that textures are not supported by the tool, this in my opinion is a fatal limitation, noting
that it is not a limitation of u3d which does support at least diffuse and transparency textures.”
I agree with reviewer #1 that textures are an important feature, and in fact, support for textures (as well as
animations) is planned for future extensions. However, I do not agree that the lack of texture support for the
model assembling is a fatal limitation. I can support this with two substantiations:
1. Out of the 80 publications (published 2005-2015 in the biomedical domain) that included interactive 3D PDF
figures, only 8 made use of textures (Ruthensteiner, 2010; Ziegler, 2011; Barnes, 2013; Lautenschlager, 2013;
Farke, 2014; Schulz-Mirbach, 2014; de Notaris, 2014; Piras, 2015). I am aware that I certainly did not notice all
relevant publications, but I have observed this field quite intensively and in my humble opinion these 10% give
a quite good idea of the overall ratio (at least regarding biomedical publications). For many applications (e.g.,
all (bio-)chemical use cases like the visualization of molecular structures), textures are absolutely unnecessary.
Especially the published biomedical articles with the most often found use case of anatomical illustrations
relied all on coloring of the structures of interest rather than using textures (coloring is supported by the
presented software).
2. However – the main thing to consider is that although the model-editor part of the presented software itself
does not support textures, the PDF-creator part can still be used to create scientific PDFs with textured (and/or
animated) models, if the necessary U3D files have been created with external software. Therefore, it still fulfills
its main purpose: providing a “few-clicks” solution for creating interactive PDF figures for scholarly
publications. This is now discussed in the manuscript (lines 353-359).
o The very first version of the app did not integrate the creation/modification of U3D models at all. It focused
purely on the creation of the PDF files (which had been identified as the major problem and which after all
was the reason to start this project). Import/modification/export of U3D files is an additional feature and
covers the (in my view) most important use cases (~90% considering the biomedical publications of the last
10 years). Had the model creation not been possible at all, the whole reasoning of the
reviewer would become obsolete. I partially agree with the concerns of the reviewer, but with the
aforementioned background in mind, I also feel that it would be a kind of “punishment” if the publication
would not be accepted, just because I integrated an additional feature. On the other hand, I would not be
happy with removing the whole U3D editing part just for invalidating the reviewer’s reasoning.
o Supporting all features of U3D models (textures, animations) requires very complex software and this is far
out of scope of the presented toolbox. It rather concentrates on the core feature of creating PDFs and
comes with some additional basic tools for model editing as a bonus.
Reviewer #1 comment:
“The title refers to “a complete toolbox”, I think the author agrees that the proposed solution is not a complete
toolbox. The types of data visualisation that is supported by the intersecting feature set of this tool, u3D and 3D PDF
is quite narrow..“
I agree, that – seen literally – the toolbox is not “complete”. Most probably, no toolbox will ever be complete… so
I changed the title to be more precise (see first response).
Reviewer #1 comment:
“Regarding file size, I agree that 1GB may not be a significant download these days, but I still maintain that the
solution is a big hammer to crack a small nut. Note I made a mistake in my earlier review, it is not the download that
is 4GB but the final program (actually 5.3GB) on a Mac. That is, it is a lot to download, install and learn for what to
me should be a simple task of creating a 3D PDF. In deed the app itself is all that is needed, not clear why it is based
upon MeVisLab. I note that 2 of the 3 other reviewers, and perhaps all 3, admit they didn’t try the software. I suggest
the hurdle was too high.”
I agree that the solution seems to be a “large hammer to crack a small nut”. However, it is the only solution which
is available for free and for all operating systems. Furthermore, it is based on a validated and previously published
solution and the download/installation sizes have never been reported to be an issue so far. This was one of the
reasons to use MeVisLab (along with the free visualization features, the option to integrate the whole workflow,
the possibility to create customized PDF modules with a few clicks by means of the Module Wizard and some
others more…). And besides this - Zuse’s Z1 or IBM’s Mark I were also rather big hammers, but they both made
the best out of the available technology. I do not want to compare my humble work directly with these
milestones of computer technology and I am aware that the presented solution is not the perfect solution in all
regards – but it works and it is a major step towards a simple creation of 3D supplements for scientific
publications (which was the aim of this article).
Furthermore (and as already pointed out in my last rebuttal letter), I submitted this article to PeerJ CS since “PeerJ
CS evaluates articles based only on an objective determination of scientific and methodological soundness, not on
subjective determination of ‘impact’ or ‘readership’ for example.” Maybe this is caused by a language barrier, but to
me this conveys the impression that the readers should decide whether they are willing to download 1G (and to
install 4-5G). Although I partially agree with some of the reviewer’s concerns, I cannot identify any more
comments that point out major scientific or methodological flaws. The article presents a solution which has some
limitations – but all limitations that have been pointed out by the reviewers are discussed in the manuscript. To
use another analogy: would an article that presents a new methodology for automatically creating code out of
UML class diagrams be rejected because it cannot create code from UML use case diagrams with the reasoning
that they are both UML diagrams?
Reviewer #1 comment:
“The author has provided a slightly more interesting model as an example. There is only one viewer, from Adobe.
See supplied image that shows shading errors with the example provided. Is that from the viewer or software
proposed in the paper? If one is using such a tool to present scientific data it should be a huge concern that there are
these representation errors.“
The rendering error is a known issue of Adobe Reader for Mac. It occurs if transparent surfaces are rendered on
top of each other and – to the best of my knowledge – only on lower-end hardware. This issue is now discussed in the
article (lines 369-377 & new Figure 5). On Windows machines (and Linux), transparent surfaces are rendered
correctly (see screenshot below).
Kind regards
Axel Newe
Chair of Medical Informatics
University Erlangen-Nuremberg
" | Here is a paper. Please give your review comments after reading it. |
187 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. Preferences for music can be represented through music features. The widespread prevalence of music streaming has allowed for music feature information to be consolidated by service providers like Spotify. In this paper, we demonstrate that machine learning classification on cultural market membership (Taiwanese, Japanese, American) by music features reveals variations in popular music across these markets.</ns0:p><ns0:p>Methods. We present an exploratory analysis of 1.08 million songs centred on Taiwanese, Japanese and American markets. We use both multiclass classification models (Gradient Boosted Decision Trees (GBDT) and Multilayer Perceptron (MLP)), and binary classification models, and interpret their results using variable importance measures and Partial Dependence Plots. To ensure the reliability of our interpretations, we conducted a follow-up study comparing Top-50 playlists from Taiwan, Japan, and the US on identified variables of importance.</ns0:p><ns0:p>Results. The multiclass models achieved moderate classification accuracy (GBDT = 0.69, MLP = 0.66). Accuracy scores for binary classification models ranged between 0.71 to 0.81. Model interpretation revealed music features of greatest importance: Overall, popular music in Taiwan was characterised by high acousticness, American music was characterized high speechiness, and Japanese music was characterized by high energy features. A follow-up study using Top-50 charts found similarly significant differences between cultures for these three features.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusion.</ns0:head><ns0:p>We demonstrate that machine learning can reveal both the magnitude of differences in music preference across Taiwanese, Japanese, and American markets, and where these preferences are different. While this paper is limited to Spotify data, it underscores the potential contribution of machine learning in exploratory approaches to research on cultural differences.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>With 219 million active listeners a month, and a presence in over 60 countries, Spotify is one of the largest music streaming service providers in the world (as of 2020, Schwind et al., 2020). To facilitate such a service, they maintain a database of music features for all songs in their service, which is made publicly accessible through the Spotify API (Application Programme Interface). This database contains a wealth of meta and music feature data that researchers have been using to research human behaviour and engagement with music. For example, Park and colleagues (Park et al., 2019) analysed data from 1 million individuals across 51 countries and uncovered consistent patterns of music preferences across day-night cycles: relaxing music was more commonly played at night, and energetic music during the day. Pérez-Verdejo and colleagues (2020) found that popular hit songs in Mexico shared many similarities with global hit songs. Spotify's music features have also been used to examine songs in clinical and therapeutic settings, with Howlin and Rooney (2020) finding that songs used in previous pain management research, if chosen by the patient, tended to have higher energy and danceability, and lower instrumentalness features, than experimenter-chosen songs.</ns0:p><ns0:p>In this paper, we use Spotify features to examine how cultures differ in music preferences, through a bottom-up, data-driven analysis of music features across cultures. Here, we quantify music preference through music features, following past research (e.g., Fricke et al., 2019). Music plays a huge role in human society, be it in emotion regulation or for social displays of identity and social bonding (Groarke & Hogan, 2018; Dunbar, 2012). These are often embedded in the cultural norms and traditions of the listener. In other words, how music differs across cultures may reflect corresponding cultural differences in the sociocultural context that shapes our individual preferences towards certain types of music over others. Thus, understanding how music differs between these cultural markets, e.g., by examining their features, may shed light on possible cultural differences, which can guide follow-up research by generating novel hypotheses, or by supporting various theories on cultural differences aside from music (see below).</ns0:p><ns0:p>To achieve this aim, we rely on machine learning classification of songs based on cultural markets (i.e., culture of origin), as interpretation of these models may reveal insight into how (and which) features differ between cultures. We utilize data seeded originally on Taiwanese, American and Japanese Top-50 lists. This arguably aligns more towards a sociolingual distinction of cultural membership, rather than the country-based sampling commonly used in culture research. Our reasoning was that this more closely resembles the differentiated cultural markets that artists of a certain language operate in, which often transcend country boundaries.
For example, popular Chinese music meant for a Chinese cultural market often subsumes artists from wider Chinese cultural origins (in countries and territories like Taiwan, Hong <ns0:ref type='bibr'>Kong, Singapore, and Malaysia;</ns0:ref><ns0:ref type='bibr'>Fung, 2013;</ns0:ref><ns0:ref type='bibr'>Moskowitz, 2009)</ns0:ref>. As such, in this paper, these were considered to belong to a 'Chinese' cultural market (including music from language subtypes and dialects). Japanese music was similarly treated as belonging to a 'Japanese' market, and music from Western (Anglo-European) cultural origins (e.g., the US, UK, Canada, and Australia) were considered as belonging to a 'Western' cultural market.</ns0:p><ns0:p>Past approaches towards music preference through Spotify have largely focused on curated lists of <ns0:ref type='bibr'>Top-50 or Top-200 popular songs (e.g., Pérez-Verdejo et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b11'>Febirautami, Surjandari & Laoh, 2019)</ns0:ref>. While such lists are often diluted by the inclusion of 'global' hit songs, they nevertheless provide a window to examine culturally based music preferences. Accordingly, we also conduct a follow-up study using these Top-50 lists from Taiwan, Japan, and the US to ensure the reliability of our interpretations (from the classification model).</ns0:p></ns0:div>
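As a sketch of what such a classification-and-interpretation pipeline can look like (plain scikit-learn; the feature names follow Spotify's audio-features vocabulary, but the file name, feature subset and pipeline details are an illustration, not the exact setup used in this paper):

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance, PartialDependenceDisplay
from sklearn.model_selection import train_test_split

# Assumed CSV with Spotify audio features and a market label per song.
df = pd.read_csv("songs.csv")
features = ["acousticness", "speechiness", "energy", "danceability", "valence"]
X, y = df[features], df["market"]          # market: Taiwan / Japan / US

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))

# Which features drive the classification?
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")

# How one feature relates to predicting one class (e.g., Taiwan).
PartialDependenceDisplay.from_estimator(clf, X_te, ["acousticness"], target="Taiwan")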
<ns0:div><ns0:head>Cultural differences in music preference</ns0:head><ns0:p>Most research on cultural differences in the psychological literature has come from top-down, theoretical approaches. These have been instrumental in shaping the field, by increasing awareness of the systematic ways in which people from different cultures differ. One of the most influential distinctions is between independence and interdependence of the self <ns0:ref type='bibr' target='#b33'>(Markus & Kitayama, 1991;</ns0:ref><ns0:ref type='bibr'>2010)</ns0:ref>. Westerners generally tend to be independent, in that they prioritise the autonomy and uniqueness of internal (self) attributes. In contrast, East Asians generally tend to be interdependent, in that their concept of self is intricately linked to close social relationships. This has been shown to have implications for music preferences through differences in the desirability of emotions. For example, Westerners generally tend to view happiness as a positive and internal hedonic experience to be maximised where possible <ns0:ref type='bibr' target='#b28'>(Joshanloo & Weijers, 2014)</ns0:ref>. As such, Western music preference tends towards high-arousal music, possibly in search of strong 'happiness' experiences. East Asians, however, view happiness as a positive feeling associated with harmonious social relationships, in contrast to the hedonic, high-arousal definition in Western contexts (citation). Consistently, East Asian music preferences do not show this high-arousal component, tending instead towards calm, subdued, and relaxing music <ns0:ref type='bibr' target='#b51'>(Tsai, 2007;</ns0:ref><ns0:ref type='bibr' target='#b52'>Uchida & Kitayama, 2009;</ns0:ref><ns0:ref type='bibr' target='#b43'>Park et al., 2019)</ns0:ref>. However, such theoretically based analyses may overrepresent Western cultures in research and literature. Consequently, cultural differences within similar, non-Westernised spheres are not well understood, due to the lack of pre-existing theory. For example, Chinese and Japanese cultures are often grouped together in cross-cultural research as a representation of East Asian collectivism, one that functions as a comparative antithesis to Western findings (e.g., <ns0:ref type='bibr' target='#b23'>Heine & Hamamura, 2007)</ns0:ref>. Yet, research has also uncovered differences between China and Japan that cannot be explained by these theories <ns0:ref type='bibr' target='#b41'>(Muthukrishna et al., 2020)</ns0:ref>. As such, cultural differences within East Asia are not well understood in the psychological literature, and few theories exist to offer predictions on differences in music preference between these cultures.</ns0:p><ns0:p>Our solution was to examine music as cultural products from the bottom up. Doing so reduces the effect of experimenter bias in guiding theory formation and interpretation when examining a wide database of music features. Cultural products are behavioural manifestations of culture that embody the shared values and collective aesthetics of a society <ns0:ref type='bibr' target='#b40'>(Morling & Lamoreaux, 2008;</ns0:ref><ns0:ref type='bibr' target='#b31'>Lamoreaux & Morling, 2012;</ns0:ref><ns0:ref type='bibr' target='#b48'>Smith et al., 2013)</ns0:ref>. This implies that music consumption behaviour underscores culturally based attitudes, cognitions and emotions that afford preferences for certain congruent types of music.
For example, in a cross-cultural comparison between Brazil and Japan, de Almeida and <ns0:ref type='bibr' target='#b7'>Uchida (2018)</ns0:ref> found that Brazilian song lyrics contained higher frequencies of positive emotion words and lower frequencies of neutral words than Japanese lyrics. This was consistent with, and reflective of, their respective cultural emphases on emotional expression (see <ns0:ref type='bibr' target='#b50'>Triandis et al., 1984;</ns0:ref><ns0:ref type='bibr' target='#b52'>Uchida & Kitayama, 2009)</ns0:ref>, and showed that comparing music 'products' can elucidate differences between the collective shared values of different cultures. Past research on cultural products has relied both on popularity lists (charts; e.g., <ns0:ref type='bibr' target='#b1'>Askin & Mauskapf, 2017)</ns0:ref> and on artifacts produced by a culture (e.g., Tweets: Golder & Macy, 2011; newspaper articles: <ns0:ref type='bibr' target='#b4'>Bardi, Calogero, & Mullen, 2008)</ns0:ref>. We utilize both methods, and propose that examining cultural differences in 'music' products on a large scale may provide insight into the sociocultural circumstances that give rise to these differences.</ns0:p></ns0:div>
<ns0:div><ns0:head>The Present Research</ns0:head><ns0:p>For this study, we adopted a data-driven, bottom-up approach, using machine learning to explore differences in music preferences between cultures/industries through musical features. That is, we first train a multiclass model to classify songs as belonging to (originating from) Chinese (Taiwanese), Japanese, or Western (American) markets. This establishes the presence and magnitude of discernible cultural differences in music features. Next, we decompose the model by training three binary machine learning classifiers to classify songs as belonging to one culture or another. By applying model interpretation techniques to these models (such as Partial Dependence Plots [PDPs]), we aim to discover the specific differences in preferred musical features between Chinese (Taiwanese)-Japanese markets, Chinese (Taiwanese)-Western (American) markets, and Japanese-Western (American) markets. We aimed to include as many songs as possible produced by these respective culture-based music industries, in order to observe systematic trends and differences across as wide a range of musical styles and genres as possible. Finally, we examine the generalizability of these interpretations by conducting a follow-up study on Top-50 songs from Taiwan, Japan, and the US. If the identified features of difference present in songs produced by a cultural market were indeed representative of cultural differences in music preferences, we would expect these features to also show consistent differences in the respective Top-50 (popular) songs.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Overview</ns0:head><ns0:p>To explore differences between cultures, we used machine learning to classify a database of Chinese, Japanese, and English songs into their respective cultural (linguistic) markets. As this was a multiclass classification problem, we conducted the analysis twice, using gradient boosted decision trees (GBDTs) and artificial neural networks (multi-layer perceptrons, MLPs), both of which are inherently capable of multiclass classification. This also allowed us to examine the consistency of results between two differing methods of analysis and to strengthen the reliability of the analyses. To infer the features that accounted for cultural differences, we use model interpretation techniques, namely relative feature importance (RFI; <ns0:ref type='bibr' target='#b13'>Friedman, 2001)</ns0:ref>, permutational feature importance (PFI; <ns0:ref type='bibr'>Fisher, Rudin, & Dominici, 2018)</ns0:ref>, and partial dependence plots (PDPs; <ns0:ref type='bibr' target='#b13'>Friedman, 2001)</ns0:ref>, to examine and visualise each feature's influence on the probability of classification.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data mining</ns0:head><ns0:p>We accessed the Spotify Application Programme Interface (API) through the 'spotifyr' wrapper <ns0:ref type='bibr' target='#b49'>(Thompson, Parry, & Wolff, 2019)</ns0:ref> in R, to obtain song-level music feature information for Chinese, Japanese and English artists from the Spotify database. This was done through a pseudo-snowball sampling method: we relied on Spotify's recommendation system (the 'get_related_artists' function) to recommend artists related to those in the official Spotify Top-50 chart playlists for Taiwan, Japan, and the US respectively, creating a list of artists per country. We then used the same method to obtain further artists related to these respective lists, for up to six iterations, in order to obtain comparable sample sizes between the three markets. We also excluded all non-Chinese, non-Japanese, and non-English (language) artists from the respective lists. This was done through an examination of the associated genres for each artist, which often contained hints to their cultural origins (e.g., J-pop, J-rock, Mandopop). Artists that did not have listed genres were checked manually by the researchers. This resulted in a final N(artists) = 10259 (Japanese = 2587; English = 2466; Chinese = 5206). All song-level feature information for these artists was then obtained from the Spotify database. Duplicates (such as the same song being rereleased in compilation albums) were removed, for a total of N(songs) = 1810210 (Japanese = 646440; Chinese = 360101; English = 803669). To ensure class balance, we randomly downsampled the Japanese and English samples to match the Chinese sample, resulting in a final N(songs) = 1080303.</ns0:p></ns0:div>
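For illustration, a minimal sketch of this pseudo-snowball sampling is given below. The seed IDs and helper function name are hypothetical; 'get_related_artists' is the spotifyr function named above, and a valid Spotify API token is assumed.

```r
library(spotifyr)
library(purrr)

seed_ids <- c("artist_id_1", "artist_id_2")  # hypothetical Top-50 artist IDs

snowball_artists <- function(seed_ids, iterations = 6) {
  all_ids <- seed_ids
  frontier <- seed_ids
  for (i in seq_len(iterations)) {
    # Ask Spotify for artists related to everyone on the current frontier.
    related <- map_dfr(frontier, get_related_artists)
    # Keep only previously unseen artists for the next iteration.
    frontier <- setdiff(unique(related$id), all_ids)
    all_ids <- union(all_ids, frontier)
  }
  all_ids
}

artist_ids <- snowball_artists(seed_ids)
```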
<ns0:div><ns0:head>Data handling and analysis</ns0:head><ns0:p>Except for 'key' and 'time signature', all Spotify features were entered as features in the classification models. These were: 'danceability', 'energy', 'loudness', 'speechiness', 'acousticness', 'instrumentalness', 'liveness', 'valence', 'tempo', 'duration' (ms), and 'mode'. A list of definitions for these features is available in Table <ns0:ref type='table'>1</ns0:ref>. The models classified songs according to their cultural membership (Chinese, English, or Japanese) as the outcome variable. The data was split into a training and a testing set along a 3:1 ratio. Parameters for the GBDT model and weights for the MLP model were tuned through 5-fold cross validation on the training set. We also examined RFI scores for each model. For GBDT, this was a measure of the proportion of times a feature was selected for splitting in each iterative tree; for MLP, this was based on PFI, which measures the resultant error of a model when each feature is iteratively shuffled: the greater the error, the larger the influence a feature exerts on the outcome variable <ns0:ref type='bibr'>(Fisher, Rudin, & Dominici, 2018;</ns0:ref><ns0:ref type='bibr' target='#b37'>Molnar, 2019)</ns0:ref>. We then simplified the classification problem by splitting it into three separate binary classifications: Japanese-Chinese, Japanese-English, and Chinese-English. GBDT and MLP models were fitted for these three comparisons, and in addition to RFI measures, we visualised the effect of each variable using PDPs. These show the averaged marginal effect of a feature on the outcome variable in a machine learning model, and are useful for understanding the nature of the relationship between these variables. PDPs were produced through the 'pdp' package <ns0:ref type='bibr' target='#b17'>(Greenwell, 2017)</ns0:ref>, and PFIs through the 'iml' package <ns0:ref type='bibr'>(Molnar, Bischl, & Casalicchi, 2019)</ns0:ref>. Machine learning was conducted through the 'gbm' package <ns0:ref type='bibr'>(Greenwell et al., 2019)</ns0:ref> for GBDTs, and the 'nnet' package <ns0:ref type='bibr' target='#b53'>(Venables & Ripley, 2002)</ns0:ref> for MLPs, via the 'caret' wrapper <ns0:ref type='bibr' target='#b30'>(Kuhn, 2019)</ns0:ref> in R (R Core Team, 2019). All R scripts used for data mining and analysis are available in our OSF repository (Note to reviewers: anonymous review link: https://osf.io/d3cky/?view_only=cc89c024cedb4bfd8401544032e505a1).</ns0:p></ns0:div>
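A minimal sketch of this training pipeline is shown below. It assumes a data frame 'songs' containing the eleven features above plus a 'culture' factor as the outcome; the calls are from the caret, gbm, and nnet packages cited above, and the object names are hypothetical.

```r
library(caret)

set.seed(2019)
# 3:1 train/test split, stratified on the outcome.
in_train  <- createDataPartition(songs$culture, p = 0.75, list = FALSE)
train_set <- songs[in_train, ]
test_set  <- songs[-in_train, ]

# Parameters tuned through 5-fold cross validation on the training set.
ctrl <- trainControl(method = "cv", number = 5)

gbdt_fit <- train(culture ~ ., data = train_set, method = "gbm",
                  trControl = ctrl, verbose = FALSE)
mlp_fit  <- train(culture ~ ., data = train_set, method = "nnet",
                  trControl = ctrl, trace = FALSE)
```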
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Descriptives</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_0'>2</ns0:ref> reports the descriptive medians, lower/upper quantiles, and missing data for each feature per culture. The full list of artists, genres, and songs is available in our OSF repository (Note to reviewers: anonymous review link: https://osf.io/d3cky/?view_only=cc89c024cedb4bfd8401544032e505a1). Additionally, we note that while our database includes songs from as early as the 1950s, most of the songs were released between the mid-2000s and 2020 (see Figure <ns0:ref type='figure'>1</ns0:ref>).</ns0:p></ns0:div>
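For reference, the Table 2 descriptives could be reproduced along the following lines. This is an illustrative sketch, assuming the L/U quantiles are the 25th and 75th percentiles and that the feature columns follow the Spotify API naming (e.g., 'duration_ms').

```r
library(dplyr)

songs %>%
  group_by(culture) %>%
  summarise(across(c(danceability, energy, loudness, speechiness,
                     acousticness, instrumentalness, liveness,
                     valence, tempo, duration_ms),
                   list(median = ~ median(.x, na.rm = TRUE),
                        q25    = ~ quantile(.x, 0.25, na.rm = TRUE),
                        q75    = ~ quantile(.x, 0.75, na.rm = TRUE))))
```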
<ns0:div><ns0:head>Multiclass classification (Chinese-Japanese-English)</ns0:head><ns0:p>For the GBDT model, parameter tuning resulted in N(trees) = 150 and interaction depth = 3, alongside default parameters of shrinkage = 0.1 and a minimum of 10 observations per node. The GBDT achieved a classification accuracy of 0.682, 95%CI (0.680, 0.683), significantly above the no information rate (NIR) of 0.333, p < .0001. Aside from the input and output layers, we used an MLP model consisting of one hidden layer with five nodes. The MLP model achieved a slightly lower accuracy score of 0.660, 95%CI (0.659, 0.662), but was still significantly above the NIR of 0.333, p < .0001. RFIs for both models are reported in Table <ns0:ref type='table'>3</ns0:ref>.</ns0:p></ns0:div>
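The accuracy statistics above follow the output of caret's confusionMatrix(), which reports the accuracy, its 95% CI, the no information rate (NIR), and a one-sided test of accuracy against the NIR; a sketch using the hypothetical fitted object from the earlier example:

```r
# Evaluate on the held-out test set.
preds <- predict(gbdt_fit, newdata = test_set)
confusionMatrix(preds, test_set$culture)
```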
<ns0:div><ns0:head>Binary classifications</ns0:head><ns0:p>We first unpack the Chinese-Japanese model. For the GBDT, parameter tuning resulted in N(trees) = 150 and interaction depth = 3, with no changes to the other default parameters (as above). The GBDT achieved a classification accuracy of 0.784, 95%CI (0.783, 0.786), AUC = 0.865, significantly above the no information rate (NIR) of 0.500, p < .0001. The MLP model achieved a comparable accuracy score of 0.766, 95%CI (0.764, 0.768), AUC = 0.844, significantly above the NIR of 0.500, p < .0001. Next, the Chinese-English model: for the GBDT, parameter tuning again resulted in N(trees) = 150 and interaction depth = 3, with no changes to the other default parameters. The GBDT achieved a classification accuracy of 0.807, 95%CI (0.805, 0.809), AUC = 0.885, significantly above the NIR of 0.500, p < .0001. The MLP model achieved a comparable accuracy score of 0.803, 95%CI (0.801, 0.805), AUC = 0.880, significantly above the NIR of 0.500, p < .0001. Finally, the Japanese-English model: for the GBDT, parameter tuning resulted in N(trees) = 150 and interaction depth = 3, with no changes to the other default parameters. The GBDT achieved a classification accuracy of 0.713, 95%CI (0.711, 0.715), AUC = 0.797, significantly above the NIR of 0.500, p < .0001. The MLP model achieved a comparable accuracy score of 0.709, 95%CI (0.707, 0.711), AUC = 0.791, significantly above the NIR of 0.500, p < .0001. All RFIs and PFIs are reported in Table <ns0:ref type='table'>4</ns0:ref>. Additionally, the two most important features are visualised by PDPs in Figure <ns0:ref type='figure'>2</ns0:ref>. A visual inspection of the PDPs suggests that English music is higher than both Japanese and Chinese music in speechiness, Chinese music is higher than both Japanese and English music in acousticness, and Japanese music is higher than English and Chinese music in energy. In comparing Japanese and Chinese music, we note that acousticness and energy also emerged as important, but only in the GBDT model. In contrast, the MLP model identified loudness and instrumentalness as higher in Japanese music than Chinese music.</ns0:p><ns0:p>Overall, this suggests that, unlike the English-Japanese or English-Chinese comparisons, which differed markedly on a few main features, the differences between Chinese and Japanese music were spread widely across the various features. Consequently, despite relying on different 'important variables', both the MLP and GBDT achieved comparably high classification accuracy, with the GBDT outperforming the MLP on all classification tasks (results from DeLong's tests are available in our OSF repository).</ns0:p></ns0:div>
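For a binary model, the reported AUCs can be obtained from the predicted class probabilities, e.g., via the pROC package; the fitted-model and test-set names below are hypothetical (here, for the Chinese-Japanese task):

```r
library(pROC)

# Class probabilities from the caret model, then the ROC curve and AUC.
probs    <- predict(gbdt_fit_cj, newdata = test_cj, type = "prob")
roc_gbdt <- roc(response = test_cj$culture, predictor = probs[["Chinese"]])
auc(roc_gbdt)
```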
<ns0:div><ns0:head>Additional Analyses</ns0:head><ns0:p>We also visualised the changes in features over time for speechiness, acousticness, energy, instrumentalness, and loudness, from 2000 to 2020. Feature information for songs before 2000 was excluded due to the markedly smaller sample. Other than instrumentalness, which showed a notable decrease over time in Japanese songs, the remaining four features were stable over time. This suggests that the differences in preference highlighted by the RFIs, PFIs and PDPs could indicate long-term cultural preferences for music.</ns0:p></ns0:div>
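A sketch of this over-time visualisation for one feature, assuming a 'release_year' column derived from the album release date (hypothetical column name):

```r
library(dplyr)
library(ggplot2)

songs %>%
  filter(release_year >= 2000) %>%
  group_by(culture, release_year) %>%
  summarise(acousticness = median(acousticness, na.rm = TRUE),
            .groups = "drop") %>%
  ggplot(aes(release_year, acousticness, colour = culture)) +
  geom_line()
```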
<ns0:div><ns0:head>Follow-up Study</ns0:head><ns0:p>We obtained a second round of data (approximately one year later) from Top-50 lists for Japan, Taiwan and the USA. Focusing on the features identified in the previous study (speechiness, acousticness, energy, instrumentalness, and loudness), Kruskal-Wallis tests revealed a significant effect of culture on speechiness (χ²(2) = 30.5, p < .001), acousticness (χ²(2) = 24.5, p < .001), energy (χ²(2) = 21.0, p < .001), and loudness (χ²(2) = 33.5, p < .001), but no significant effect was observed for instrumentalness (χ²(2) = 1.4, p = .49). For speechiness, post-hoc Dwass-Steel-Critchlow-Fligner pairwise comparisons revealed that the USA was significantly higher than Taiwan (W = 7.34, p < .001) and Japan (W = 5.91, p < .001), but no significant difference was observed between Taiwan and Japan (W = -1.48, p = .55). For acousticness, Taiwan was significantly higher than Japan (W = 5.97, p < .001) and the USA (W = 6.11, p < .001), but no significant difference was observed between the USA and Japan (W = 0.53, p = .93). For energy, Japan was significantly higher than Taiwan (W = 6.01, p < .001) and the USA (W = 4.35, p = .006), but no significant difference was observed between Taiwan and the USA (W = 2.88, p = .103). For loudness, Japan was significantly higher than Taiwan (W = 7.44, p < .001) and the USA (W = 6.36, p < .001), but no significant difference was observed between Taiwan and the USA (W = 2.20, p = .27). Finally, for instrumentalness, no significant difference was observed between Japan and Taiwan (W = 0.61, p = .90), Japan and the USA (W = 1.67, p = .48), or Taiwan and the USA (W = 1.01, p = .75).</ns0:p><ns0:p>In short, with the exception of instrumentalness, Top-50 playlists obtained one year later nevertheless demonstrated strong consistency with the earlier results. American Top-50 songs were higher than both Japanese and Taiwanese Top-50 songs in speechiness, Taiwanese Top-50 songs were higher than both Japanese and American Top-50 songs in acousticness, and Japanese Top-50 songs were higher than American and Taiwanese songs in energy and loudness. However, instrumentalness, which was originally identified as a variable of importance for the MLP model, did not consistently differ between cultures. Indeed, Figure <ns0:ref type='figure'>3</ns0:ref> shows that instrumentalness in Chinese music is inconsistent, with strong fluctuations depending on year. More research is needed to determine whether instrumentalness is indeed a preferred feature in Taiwanese markets or merely a passing trend.</ns0:p></ns0:div>
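For one feature, this analysis can be sketched as below; 'top50' is a hypothetical data frame of Top-50 songs with a 'country' factor, and we assume the Dwass-Steel-Critchlow-Fligner implementation provided by the PMCMRplus package.

```r
library(PMCMRplus)

kruskal.test(speechiness ~ country, data = top50)      # omnibus test
dscfAllPairsTest(speechiness ~ country, data = top50)  # DSCF pairwise comparisons
```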
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Across the multiclass and subsequent binary classification tasks, both the GBDT and MLP models consistently classified songs by cultural market with moderately high accuracy, and the GBDT was often marginally better than the MLP for this purpose. This suggests that the patterns of difference between cultural markets were robust enough to be detected by two different algorithms. A comparison of accuracy scores suggested that the differences between Chinese and Japanese music afforded higher accuracy to the models than the differences between Japanese and English music. While this could be for several reasons, we speculate that music preferences may differ more between Japanese and Chinese cultures than between Japanese and English-speaking cultures. Such reasoning would support growing calls for the decentralisation and internationalisation of psychological research <ns0:ref type='bibr' target='#b24'>(Heine & Ruby, 2010;</ns0:ref><ns0:ref type='bibr'>Henrich, Heine, & Norenzayan, 2010;</ns0:ref><ns0:ref type='bibr' target='#b6'>Cheung, 2012)</ns0:ref>, in showing that Japanese and Chinese-speaking cultures, sometimes thought to be homogeneous in cross-cultural research, may actually be more different than previously assumed.</ns0:p><ns0:p>A visual inspection of the PDPs and feature importance scores provides some indication of where these differences lie. Apart from instrumentalness, all identified features of importance differed across (geographical) cultures in similar directions in the follow-up study. This implies that cultural music preferences are reflected both in the music produced by a culture for its respective industry or market, and in the overall music preferences of geographically bound members of that culture. While this paper does not empirically explore the underlying cultural mechanisms that may account for these differences, it is nevertheless a starting point for future research, and we speculate on some interpretations of these results. Western music differed from Chinese and Japanese music through higher speechiness. One explanation could be prosodic bias, in that normal spoken Mandarin Chinese inherently contains more pitch movements than English <ns0:ref type='bibr' target='#b26'>(Hirst, 2013)</ns0:ref>, and consequently, what may be perceived as 'speech-like' by Chinese listeners may not correspond to high speechiness scores. However, this can also be explained through previous research on emotion-arousal preferences in Western and East Asian contexts. The high speechiness score in English music could indicate a larger preference for hip-hop and rap music. Rap music has seen a dramatic increase in popularity in Western markets since the 1980s <ns0:ref type='bibr' target='#b36'>(Mauch et al., 2015)</ns0:ref>, and has been shown to express and embody high-arousal emotions like anger (e.g., <ns0:ref type='bibr' target='#b21'>Hakvoort, 2015)</ns0:ref>; its relative popularity in Anglo-American cultures could be representative of cultural preferences towards the high-arousal emotions described earlier <ns0:ref type='bibr' target='#b51'>(Tsai, 2007)</ns0:ref>, compared to Japanese and Chinese cultures.</ns0:p><ns0:p>One feature that differentiated Chinese from English and Japanese music was high acousticness. This points to lower use of electronic instruments in the production process, and may suggest a preference for more organic, natural sounds in Chinese music.
Energy appeared to be more important in Japanese than in English or Chinese music. We posit that energy preferences in Japan (energy being defined by Spotify as a combination of loudness, complexity, timbre, dynamic range, and noise) could be due to remnants of traditional music aesthetics that overlap considerably with this definition (e.g., beauty in noise/simplicity in complexity: sawari, wabi-sabi; see <ns0:ref type='bibr' target='#b9'>Deva, 1999;</ns0:ref><ns0:ref type='bibr' target='#b0'>Anderson, 2014;</ns0:ref><ns0:ref type='bibr' target='#b42'>Okuno, 2015)</ns0:ref>. On the surface, a similar conclusion could be drawn from the higher loudness of Japanese over Chinese music (from the follow-up study), but the U-shaped relationship between energy and Japanese/Chinese music classification seen in Figure <ns0:ref type='figure'>2</ns0:ref> suggests a deeper nuance that requires further research.</ns0:p><ns0:p>Finally, we consider the strengths and limitations of our exploratory approach. Comparing music features offers greater insight into behavioural and consumption patterns of music preference across cultural spheres. In doing so, we uncover systematic differences between groups that, while consistent with previous literature, also offer new insight into how cultures differ, which future research can build on in understanding societies. Unfortunately, we were unable to eliminate certain sample biases from our dataset: we assumed our Chinese data to be representative of Chinese music in general, but Spotify is not (as of 2021) active in China, despite the inclusion of several mainland Chinese artists in the database. Instead, our findings represent Chinese-speaking listeners in Taiwan and Hong Kong, along with possibly Malaysia and Singapore, who may have differing values and preferences from mainland Chinese listeners, particularly given differences in user demographics and variation in dialect. Moreover, using the Spotify API limited our selection of music features to those available in the API. Future studies could examine music features through publicly available software (e.g., MIRtoolbox; <ns0:ref type='bibr' target='#b32'>Lartillot, Toiviainen, & Eerola, 2008)</ns0:ref>, which offers both a greater number of features and more transparent documentation.</ns0:p><ns0:p>On the other hand, our strengths include our comparison of features, as opposed to genres, which allowed for valid cross-cultural comparison given the universality of the perceptual properties of music <ns0:ref type='bibr' target='#b46'>(Savage et al., 2015)</ns0:ref>. This enabled us to conclude that any differences in music features would be due to preference for those features. By contrast, comparing preferences by genre differences across cultures could have introduced confounds to the investigation, as genre is not homogeneous across cultures (see <ns0:ref type='bibr' target='#b5'>Bennett, 1999)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In sum, we demonstrated the variability of music preferences across Chinese (Taiwan), Japanese, and Western (American) cultural markets, and identified the features that best account for these differences. In particular, Chinese music was marked by high acousticness, Anglo-American Western music was marked by high speechiness, and Japanese music appeared to be marked by high energy. While we speculated on some reasons why this might be so, future research is needed to validate these speculations and develop a holistic understanding of popular music preferences in Chinese and Japanese cultures. As music is an integral part of human society and culture, understanding the mechanisms by which we prefer different types of music may also shed light on the aspects of human society and experience that correspond to these differences.</ns0:p><ns0:p>Our paper also demonstrates the potential uses of machine learning and other computer science methods in cross-cultural research. Given the advent of digital and online media, these repositories of cultural products may hold valuable insight into the diversity of humanity. While not necessarily salient to most social scientists, machine learning as demonstrated in this paper may offer an efficient and viable technique for the analysis of such data, with less bias than commonly used methods like self-reports or machine translations of multilingual text. Future research could also apply this methodology to identify new avenues of cultural differences in other media, as an additional analysis tool.</ns0:p></ns0:div>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 2:</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Medians (L/U quantiles) and missing data for musical features (excluding mode), and release year.</ns0:figDesc>
Feature          | Chinese                | Japanese               | Western
Danceability     | 0.56 (0.46/0.66)       | 0.56 (0.45/0.67)       | 0.66 (0.55/0.76)
Energy           | 0.46 (0.34/0.63)       | 0.76 (0.51/0.90)       | 0.69 (0.54/0.82)
Loudness         | -8.9 (-11.1/-6.9)      | -6.2 (-9.3/-4.3)       | -6.8 (-8.9/-5.2)
Speechiness      | 0.04 (0.03/0.05)       | 0.05 (0.04/0.09)       | 0.08 (0.04/0.24)
Acousticness     | 0.54 (0.21/0.76)       | 0.11 (0.01/0.49)       | 0.09 (0.02/0.29)
Instrumentalness | 1.4E-6 (0.00/1.1E-4)   | 1.0E-4 (0.00/0.36)     | 0.00 (0.00/0.0006)
Liveness         | 0.13 (0.10/0.21)       | 0.14 (0.10/0.29)       | 0.14 (0.10/0.29)
Valence          | 0.37 (0.24/0.57)       | 0.52 (0.31/0.71)       | 0.51 (0.33/0.69)
Tempo            | 122.7 (100.0/138.1)    | 123.7 (99.0/144.0)     | 119.7 (96.6/136.2)
Duration (ms)    | 240213 (205586/272586) | 236840 (187266/281573) | 215640 (184727/250693)
Release year     | 2011 (2005/2015)       | 2015 (2010/2018)       | 2016 (2012/2018)
Missing          | 3 (148 for release yr) | 4 (100 for release yr) |
</ns0:figure>
</ns0:body>
" | "
Attn. Dr. Jason Jung,
Associate Editor,
PeerJ Computer Science
On behalf of the authorship team, I would like to thank you and the anonymous reviewers for taking the time to edit and comment on the manuscript. The comments have contributed greatly to the revisions, and we sincerely appreciate their efforts. In this revision, we have changed the focus of the manuscript in line with the reviewers’ suggestions, and our responses to their comments are included with this letter (in blue).
Warmest regards,
Kongmeng Liew (Ph.D.),
Nara Institute of Science and Technology
Reviewer comments:
The title is too ambiguous and grammatically wrong.
Thank you for your comments. Considering our change in framing for the paper (see reply below), we have changed the title to ‘Cultural differences in music features across Taiwanese, Japanese and American markets’.
The authors have to check term usages on the overall manuscript, e.g., boom in music streaming.
We have proofread and checked the manuscript.
Motivation is not reasonable: 'Preferences for music can be represented through music features.'
User preferences are combinations of item features and users' tastes.
Please, give some better explanations without logical leaps.
Thank you for the comment. In recent years, music features have been used as a strategy to quantify music preference at the song level, rather than at the self-report-based individual participant level (e.g., Fricke et al., 2019). That is, rather than music preferences being a combination of item features and individual tastes, features present a method for quantifying the music that an individual may prefer to listen to.
Fricke, K. R., Greenberg, D. M., Rentfrow, P. J., & Herzberg, P. Y. (2019). Measuring musical preferences from listening behavior: Data from one million people and 200,000 songs. Psychology of Music. https://doi.org/10.1177/0305735619868280
I cannot find the originality of this study.
The authors merely classified musics into market regions by using the conventional ML-based classifiers.
Although the authors said that contributions of musical features to the classification accuracy can show cultural differences between the market regions, they did not provide adequate discussions for this point (just a feature has higher contribution to this market region than the other regions).
Thank you for the comments. We have changed the framing of the revision to reflect our proposed contribution, that is, highlighting cultural differences in music preference by means of music features. However, to expound on how and why these differences arise would require more complex modelling and additional experiments, which would drastically change the direction of the paper. As such, we envisage this paper's contribution to be a preliminary exploratory step in the long process of understanding cultural differences in music preference, as an interdisciplinary endeavor that we are still in the process of studying.
There have been numerous studies for classifying musics according to their physical features.
Therefore, the following sentence cannot be the contribution of this study.
'We demonstrate that machine learning can reveal both the magnitude of differences
38 in music preference across Taiwanese, Japanese, and American markets, and where these preferences are different.'
Average readers have already known that we can do these things with the conventional ML models.
Thank you for your comments. To our knowledge, few studies have examined differences in features of popular music across cultures. Most have focused on within-culture preferences, on cultural change, or on lyrics. Nevertheless, we agree that it is not informative for the reader to show that classification is technically possible through machine learning, since there is no novelty in our method.
Rather, as Reviewer 1 suggested, we changed our focus to show that cultural differences exist between music markets, that these can be identified through machine learning, and that the results are consistent with other forms of analysis. As such, we introduced a follow-up study that tested for differences in the identified features (from the earlier machine learning model interpretations) in Top-50 charts, and showed that they were generalizable and stable. We modified the structure of our paper to follow this new framing.
Experimental design
The experimental subjects and procedures are adequate.
Also, they have been described well.
Validity of the findings
The findings of this paper should be cultural differences between music markets.
However, the authors have concentrated on the effectiveness of the machine learning on classifying musics.
According to this point, the abstract and introduction should be modified.
See reply above.
Also, to reveal the cultural differences, contributions of musical features to the classification accuracy are not enough.
Please, add more in-depth analysis to ensure the originality of this study.
Thank you for your comments. We recognize that classification accuracy may not be enough to definitively argue where and how music differs cross-culturally with respect to music features. As such, the newly-added follow-up study may help to show that even through a different lens (popular music consumed across cultures), the results converge with the initial findings (popular music produced by a cultural market) to show consistent differences in certain features of popular music between cultures.
The paper is, generally, well-described and clear.
However, Figure 2 and 3 need to be redrawn since they are not clear.
Thank you for your comments. Figures 2 and 3 have been redrawn and re-rendered.
Overall, this research paper is well structured. Although it is a minor thing, authors are suggested to prove read the article to the professional reader to improve the quality of the word choice and the paper's story flow.
Thank you for your comments. We have since proofread the manuscript.
Experimental design
The authors define the experiment steps in a good way. However, several things still need to be clarified:
Thank you for the comments. Indeed, some of them (such as the train-test split) were determined due to heuristics, so this was a good opportunity to rethink some of our decisions. We address each of the points below.
1. The sufficiency of the sample size.
We were reaching a saturation point with the data for the Taiwanese market, in that each additional iteration of ‘related artists’ did not yield many new songs to the list. As the Taiwanese sample was also the smallest of the 3 cultures, and classes needed to be balanced (down sampling of US and Japanese music), this limited the sample size used for our study.
2. The justification of the features that the authors used for classification.
As we relied primarily on Spotify data, we used all the available features on the public API. We think this strategy is appropriate given that we use a bottom-up approach. Furthermore, the robustness of these findings is also demonstrated in Study 2. However, the use of additional features not included in the Spotify API may also be interesting and informative, and we intend to explore this area in subsequent research.
3. The training-testing split ratio whether it is optimum.
While we initially relied primarily on heuristics to determine a 75/25 split, we note that it is not too far off from Guyon's (1997) recommended scaling law, which would give an approximate (suggested) train-test split of 70/30 for our data.
4. The tuned cross-validation parameter if it is enough.
Hyperparameterization was conducted using 5-fold cross-validation across hyperparameter values predetermined by the default settings in caret. While this is not an exhaustive method, we observe similar training and testing error rates, suggesting that the models are not overfitting.
Validity of the findings
1. The authors are strongly encouraged to use a statistical measure to compare the model's quality rather than just comparing the accuracy and AUC score. For instance, using Delong's method to compare the model performance.
Thank you for your suggestion. We used DeLong's method to compare the models' ROC curves, and the results are in our OSF repository.
2. It will be better for the authors to explicitly restate this study's novelty, on how this study differs from other studies and its contribution, in the conclusion part. It could link the finding and the research goal stated initially and make the importance of this study more sound.
Thank you for the comment. This issue was somewhat raised by Reviewer 1 as well, so we strengthened the overall framing on our novelty in identifying cross-cultural differences in popular music features, particularly in the introduction, with an added note on implications in the conclusion section.
" | Here is a paper. Please give your review comments after reading it. |
188 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. Preferences for music can be represented through music features. The widespread prevalence of music streaming has allowed music feature information to be consolidated by service providers like Spotify. In this paper, we demonstrate that machine learning classification of cultural market membership (Taiwanese, Japanese, American) by music features reveals variations in popular music across these markets.</ns0:p><ns0:p>Methods. We present an exploratory analysis of 1.08 million songs centred on the Taiwanese, Japanese and American markets. We use both multiclass classification models (Gradient Boosted Decision Trees (GBDT) and Multilayer Perceptron (MLP)) and binary classification models, and interpret their results using variable importance measures and Partial Dependence Plots. To ensure the reliability of our interpretations, we conducted a follow-up study comparing Top-50 playlists from Taiwan, Japan, and the US on identified variables of importance.</ns0:p><ns0:p>Results. The multiclass models achieved moderate classification accuracy (GBDT = 0.69, MLP = 0.66). Accuracy scores for the binary classification models ranged between 0.71 and 0.81. Model interpretation revealed the music features of greatest importance: overall, popular music in Taiwan was characterised by high acousticness, American music was characterised by high speechiness, and Japanese music was characterised by high energy features. A follow-up study using Top-50 charts found similarly significant differences between cultures for these three features.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusion.</ns0:head><ns0:p>We demonstrate that machine learning can reveal both the magnitude of differences in music preference across Taiwanese, Japanese, and American markets, and where these preferences differ. While this paper is limited to Spotify data, it underscores the potential contribution of machine learning to exploratory approaches in research on cultural differences.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>With 219 million active listeners a month and a presence in over 60 countries, Spotify is one of the largest music streaming service providers in the world (as of 2020, <ns0:ref type='bibr'>Schwind et al., 2020)</ns0:ref>. To facilitate such a service, Spotify maintains a database of music features for all songs in its catalogue, which is made publicly accessible through the Spotify API (Application Programme Interface). This database contains a wealth of metadata and music feature data that researchers have used to study human behaviour and engagement with music. For example, Park and colleagues <ns0:ref type='bibr' target='#b42'>(Park et al., 2019)</ns0:ref> analysed data from 1 million individuals across 51 countries and uncovered consistent patterns of music preferences across day-night cycles: relaxing music was more commonly played at night, and energetic music during the day. Pérez-Verdejo and colleagues (2020) found that popular hit songs in Mexico shared many similarities with global hit songs. Spotify's music features have also been used to examine songs in clinical and therapeutic settings, with <ns0:ref type='bibr' target='#b26'>Howlin and Rooney (2020)</ns0:ref> finding that songs used in previous pain management research, if chosen by the patient, tended to have higher energy and danceability, and lower instrumentalness, than experimenter-chosen songs.</ns0:p><ns0:p>In this paper, we use Spotify features to examine how cultures differ in music preferences, through a bottom-up, data-driven analysis of music features across cultures. Here, we quantify music preference through music features, following past research (e.g., <ns0:ref type='bibr' target='#b13'>Fricke et al., 2019)</ns0:ref>. Music plays a huge role in human society, be it in emotion regulation or in social displays of identity and social bonding <ns0:ref type='bibr' target='#b19'>(Groarke & Hogan, 2018;</ns0:ref><ns0:ref type='bibr' target='#b9'>Dunbar, 2012)</ns0:ref>. These functions are often embedded in the cultural norms and traditions of the listener. In other words, how music differs across cultures may reflect corresponding differences in the sociocultural contexts that shape our individual preferences for certain types of music over others. Thus, understanding how music differs between cultural markets, e.g., by examining music features, may shed light on possible cultural differences, which can in turn guide follow-up research by generating novel hypotheses or by supporting various theories of cultural difference beyond music (see below).</ns0:p><ns0:p>To achieve this aim, we rely on machine learning classification of songs based on cultural markets (i.e., culture of origin), as interpretation of these models may reveal insight into how (and which) features differ between cultures. We utilize data seeded originally on Taiwanese, American and Japanese Top-50 lists. This arguably aligns more with a sociolinguistic distinction of cultural membership than with the country-based sampling commonly used in culture research. Our reasoning was that this more closely resembles the differentiated cultural markets in which artists of a given language operate, markets that often transcend country boundaries.
For example, popular Chinese music meant for a Chinese cultural market often subsumes artists from wider Chinese cultural origins (in countries and territories like Taiwan, Hong <ns0:ref type='bibr'>Kong, Singapore, and Malaysia;</ns0:ref><ns0:ref type='bibr'>Fung, 2013;</ns0:ref><ns0:ref type='bibr'>Moskowitz, 2009)</ns0:ref>. As such, in this paper, these were considered to belong to a 'Chinese' cultural market (including music from language subtypes and dialects). Japanese music was similarly treated as belonging to a 'Japanese' market, and music from Western (Anglo-European) cultural origins (e.g., the US, UK, Canada, and Australia) was considered to belong to a 'Western' cultural market.</ns0:p><ns0:p>Past approaches towards music preference through Spotify have largely focused on curated lists of <ns0:ref type='bibr'>Top-50 or Top-200 popular songs (e.g., Pérez-Verdejo et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b10'>Febirautami, Surjandari & Laoh, 2019)</ns0:ref>. While such lists are often diluted by the inclusion of 'global' hit songs, they nevertheless provide a window through which to examine culturally based music preferences. Accordingly, we also conduct a follow-up study using Top-50 lists from Taiwan, Japan, and the US to ensure the reliability of our interpretations (from the classification model).</ns0:p></ns0:div>
<ns0:div><ns0:head>Cultural differences in music preference</ns0:head><ns0:p>Most research on cultural differences in the psychological literature has come from top-down, theoretical approaches. These have been instrumental in shaping the field, by increasing awareness of the systematic ways in which people from different cultures differ. One of the most influential distinctions is between independence and interdependence of the self <ns0:ref type='bibr' target='#b32'>(Markus & Kitayama, 1991;</ns0:ref><ns0:ref type='bibr'>2010)</ns0:ref>. Westerners generally tend to be independent, in that they prioritise the autonomy and uniqueness of internal (self) attributes. In contrast, East Asians generally tend to be interdependent, in that their concept of self is intricately linked to close social relationships. This has been shown to have implications for music preferences through differences in the desirability of emotions. For example, Westerners generally tend to view happiness as a positive and internal hedonic experience to be maximised where possible <ns0:ref type='bibr' target='#b27'>(Joshanloo & Weijers, 2014)</ns0:ref>. As such, Western music preference tends towards high-arousal music, possibly in search of strong 'happiness' experiences. East Asians, however, view happiness as a positive feeling associated with harmonious social relationships, in contrast to the hedonic, high-arousal definition in Western contexts (citation). Consistently, East Asian music preferences do not show this high-arousal component, tending instead towards calm, subdued, and relaxing music <ns0:ref type='bibr' target='#b50'>(Tsai, 2007;</ns0:ref><ns0:ref type='bibr' target='#b51'>Uchida & Kitayama, 2009;</ns0:ref><ns0:ref type='bibr' target='#b42'>Park et al., 2019)</ns0:ref>. However, such theoretically based analyses may overrepresent Western cultures in research and literature. Consequently, cultural differences within similar, non-Westernised spheres are not well understood, due to the lack of pre-existing theory. For example, Chinese and Japanese cultures are often grouped together in cross-cultural research as a representation of East Asian collectivism, one that functions as a comparative antithesis to Western findings (e.g., <ns0:ref type='bibr' target='#b22'>Heine & Hamamura, 2007)</ns0:ref>. Yet, research has also uncovered differences between China and Japan that cannot be explained by these theories <ns0:ref type='bibr' target='#b40'>(Muthukrishna et al., 2020)</ns0:ref>. As such, cultural differences within East Asia are not well understood in the psychological literature, and few theories exist to offer predictions on differences in music preference between these cultures.</ns0:p><ns0:p>Our solution was to examine music as cultural products from the bottom up. Doing so reduces the effect of experimenter bias in guiding theory formation and interpretation when examining a wide database of music features. Cultural products are behavioural manifestations of culture that embody the shared values and collective aesthetics of a society <ns0:ref type='bibr' target='#b39'>(Morling & Lamoreaux, 2008;</ns0:ref><ns0:ref type='bibr' target='#b30'>Lamoreaux & Morling, 2012;</ns0:ref><ns0:ref type='bibr' target='#b47'>Smith et al., 2013)</ns0:ref>. This implies that music consumption behaviour underscores culturally based attitudes, cognitions and emotions that afford preferences for certain congruent types of music.
For example, in a cross-cultural comparison between Brazil and Japan, de Almeida and <ns0:ref type='bibr' target='#b6'>Uchida (2018)</ns0:ref> found that Brazilian song lyrics contained higher frequencies of positive emotion words and lower frequencies of neutral words than Japanese lyrics. This was consistent with, and reflective of, their respective cultural emphases on emotional expression (see <ns0:ref type='bibr' target='#b49'>Triandis et al., 1984;</ns0:ref><ns0:ref type='bibr' target='#b51'>Uchida & Kitayama, 2009)</ns0:ref>, and showed that comparing music 'products' can elucidate differences between the collective shared values of different cultures. Past research on cultural products has relied both on popularity lists (charts; e.g., <ns0:ref type='bibr' target='#b1'>Askin & Mauskapf, 2017)</ns0:ref> and on artifacts produced by a culture (e.g., Tweets: Golder & Macy, 2011; newspaper articles: <ns0:ref type='bibr' target='#b3'>Bardi, Calogero, & Mullen, 2008)</ns0:ref>. We utilize both methods, and propose that examining cultural differences in 'music' products on a large scale may provide insight into the sociocultural circumstances that give rise to these differences.</ns0:p></ns0:div>
<ns0:div><ns0:head>The Present Research</ns0:head><ns0:p>For this study, we adopted a data-driven, bottom-up approach, using machine learning to explore differences in music preferences between cultures/industries through musical features. That is, we first train a multiclass model to classify songs as belonging to (originating from) Chinese (Taiwanese), Japanese, or Western (American) markets. This establishes the presence and magnitude of discernible cultural differences in music features. Next, we decompose the model by training three binary machine learning classifiers to classify songs as belonging to one culture or another. By applying model interpretation techniques to these models (such as Partial Dependence Plots [PDPs]), we aim to discover the specific differences in preferred musical features between Chinese (Taiwanese)-Japanese markets, Chinese (Taiwanese)-Western (American) markets, and Japanese-Western (American) markets. We aimed to include as many songs as possible produced by these respective culture-based music industries, in order to observe systematic trends and differences across as wide a range of musical styles and genres as possible. Finally, we examine the generalizability of these interpretations by conducting a follow-up study on Top-50 songs from Taiwan, Japan, and the US. If the identified features of difference present in songs produced by a cultural market were indeed representative of cultural differences in music preferences, we would expect these features to also show consistent differences in the respective Top-50 (popular) songs.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Overview</ns0:head><ns0:p>To explore differences between cultures, we used machine learning to classify a database of Chinese, Japanese, and English songs into their respective cultural (linguistic) markets. As this was a multiclass classification problem, we conducted the analysis twice, using gradient boosted decision trees (GBDTs) and artificial neural networks (multi-layer perceptrons, MLPs), both of which are inherently capable of multiclass classification. This also allowed us to examine the consistency of results between two differing methods of analysis and to strengthen the reliability of the analyses. To infer the features that accounted for cultural differences, we use model interpretation techniques, namely relative feature importance (RFI; <ns0:ref type='bibr' target='#b12'>Friedman, 2001)</ns0:ref>, permutational feature importance (PFI; <ns0:ref type='bibr'>Fisher, Rudin, & Dominici, 2018)</ns0:ref>, and partial dependence plots (PDPs; <ns0:ref type='bibr' target='#b12'>Friedman, 2001)</ns0:ref>, to examine and visualise each feature's influence on the probability of classification.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data mining</ns0:head><ns0:p>We accessed the Spotify Application Programme Interface (API) through the 'spotifyr' wrapper <ns0:ref type='bibr' target='#b48'>(Thompson, Parry, & Wolff, 2019)</ns0:ref> in R, to obtain song-level music feature information for Chinese, Japanese and English artists from the Spotify database. This was done through a pseudo-snowball sampling method: we relied on Spotify's recommendation system (the 'get_related_artists' function) to recommend artists related to those in the official Spotify Top-50 chart playlists for Taiwan, Japan, and the US respectively, creating a list of artists per country. We then used the same method to obtain further artists related to these respective lists, for up to six iterations, in order to obtain comparable sample sizes between the three markets. We also excluded all non-Chinese, non-Japanese, and non-English (language) artists from the respective lists. This was done through an examination of the associated genres for each artist, which often contained hints to their cultural origins (e.g., J-pop, J-rock, Mandopop). Artists that did not have listed genres were checked manually by the researchers. This resulted in a final N(artists) = 10259 (Japanese = 2587; English = 2466; Chinese = 5206). All song-level feature information for these artists was then obtained from the Spotify database. Duplicates (such as the same song being rereleased in compilation albums) were removed, for a total of N(songs) = 1810210 (Japanese = 646440; Chinese = 360101; English = 803669). To ensure class balance, we randomly downsampled the Japanese and English samples to match the Chinese sample, resulting in a final N(songs) = 1080303.</ns0:p></ns0:div>
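A minimal sketch of the deduplication and downsampling steps, assuming spotifyr-style 'track_name' and 'artist_name' columns (hypothetical keys; duplicates may also be matched on other metadata):

```r
library(dplyr)

set.seed(2019)
# Drop re-releases of the same song (e.g., on compilation albums).
songs <- distinct(songs, track_name, artist_name, .keep_all = TRUE)

# Downsample the larger classes to the size of the smallest class.
n_min <- min(table(songs$culture))
songs_balanced <- songs %>%
  group_by(culture) %>%
  slice_sample(n = n_min) %>%
  ungroup()
```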
<ns0:div><ns0:head>Data handling and analysis</ns0:head><ns0:p>Except for 'key' and 'time signature', all Spotify features were entered as features in the classification models. These were: 'danceability', 'energy', 'loudness', 'speechiness', 'acousticness', 'instrumentalness', 'liveness', 'valence', 'tempo', 'duration' (ms), and 'mode'. A list of definitions for these features is available in Table <ns0:ref type='table'>1</ns0:ref>. The models classified songs according to their cultural membership (Chinese, English, or Japanese) as the outcome variable. The data was split into a training and a testing set along a 3:1 ratio. Parameters for the GBDT model and weights for the MLP model were tuned through 5-fold cross validation on the training set. We also examined RFI scores for each model. For GBDT, this was a measure of the proportion of times a feature was selected for splitting in each iterative tree; for MLP, this was based on PFI, which measures the resultant error of a model when each feature is iteratively shuffled: the greater the error, the larger the influence a feature exerts on the outcome variable <ns0:ref type='bibr'>(Fisher, Rudin, & Dominici, 2018;</ns0:ref><ns0:ref type='bibr' target='#b37'>Molnar, 2019)</ns0:ref>. We then simplified the classification problem by splitting it into three separate binary classifications: Japanese-Chinese, Japanese-English, and Chinese-English. GBDT and MLP models were fitted for these three comparisons, and in addition to RFI measures, we visualised the effect of each variable using PDPs. These show the averaged marginal effect of a feature on the outcome variable in a machine learning model, and are useful for understanding the nature of the relationship between these variables. PDPs were produced through the 'pdp' package <ns0:ref type='bibr' target='#b17'>(Greenwell, 2017)</ns0:ref>, and PFIs through the 'iml' package <ns0:ref type='bibr'>(Molnar, Bischl, & Casalicchi, 2019)</ns0:ref>. Machine learning was conducted through the 'gbm' package <ns0:ref type='bibr'>(Greenwell et al., 2019)</ns0:ref> for GBDTs, and the 'nnet' package <ns0:ref type='bibr'>(Venables & Ripley, 2002)</ns0:ref> for MLPs, via the 'caret' wrapper <ns0:ref type='bibr' target='#b29'>(Kuhn, 2019)</ns0:ref> in R (R Core Team, 2019). All R scripts used for data mining and analysis are available in our OSF repository (Note to reviewers: anonymous review link: https://osf.io/d3cky/?view_only=cc89c024cedb4bfd8401544032e505a1).</ns0:p></ns0:div>
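Sketches of the two interpretation steps, using hypothetical fitted objects and data frames from a caret pipeline; 'partial' and 'plotPartial' are from the pdp package, and 'Predictor'/'FeatureImp' are from the iml package cited above.

```r
library(pdp)
library(iml)

# Partial dependence of the classification probability on one feature.
pd <- partial(gbdt_fit, pred.var = "acousticness",
              which.class = "Chinese", prob = TRUE)
plotPartial(pd)

# Permutational feature importance: error increase when a feature is shuffled.
features  <- train_set[, setdiff(names(train_set), "culture")]
predictor <- Predictor$new(mlp_fit, data = features, y = train_set$culture)
FeatureImp$new(predictor, loss = "ce")
```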
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Descriptives</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_0'>2</ns0:ref> reports the descriptive medians, lower/upper quantiles, and missing data for each feature per culture. The full list of artists, genres, and songs are available in our OSF repository (Note to reviewers: anonymous review link: https://osf.io/d3cky/?view_only=cc89c024cedb4bfd8401544032e505a1). Additionally, we note that while our database of songs spans as early as the 1950s, most of the songs in our database were from the mid-2000s to 2020 (see Figure <ns0:ref type='figure'>1</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Multiclass classification (Chinese-Japanese-English)</ns0:head><ns0:p>For the GBDT model, the parameter tuning resulted in N(trees) = 150, interaction depth = 3, alongside default parameters of shrinkage = 0.1, and number of minimum observations per node = 10. The GBDT achieved a classification accuracy of 0.682, 95%CI (0.680, 0.683), significantly above the no information rate (NIR) of 0.333, p < .0001. Aside from the input and output layers, we used a MLP model consisting of 1 hidden layer with 5 nodes. The MLP model achieved a slightly lower accuracy score of 0.660, 95%CI (0.659,0.662), but was still significantly above the NIR of 0.333, p < .0001. RFIs for both models are reported in Table <ns0:ref type='table'>3</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Binary classifications</ns0:head><ns0:p>We first unpack the Chinese-Japanese model: For the GBDT model, the parameter tuning resulted in N(trees) = 150, interaction depth = 3, and no changes were made to the other default parameters (as above). The GBDT achieved a classification accuracy of 0.784, 95%CI (0.783, 0.786), AUC = 0.865, significantly above the no information rate (NIR) of 0.500, p < .0001. The MLP model achieved a comparable accuracy score of 0.766, 95%CI (0.764,0.768), AUC = 0.844, significantly above the NIR of 0.500, p < .0001. Next, the Chinese-English model: For the GBDT model, the parameter tuning resulted in N(trees) = 150, interaction depth = 3, and no changes were made to the other default parameters. The GBDT achieved a classification accuracy of 0.807, 95%CI (0.805, 0.809), AUC = 0.885, significantly above the no information rate (NIR) of 0.500, p < .0001. The MLP model achieved a comparable accuracy score of 0.803, 95%CI (0.801,0.805), AUC = 0.880, significantly above the NIR of 0.500, p < .0001. Finally, the Japanese-English model: For the GBDT model, the parameter tuning resulted in N(trees) = 150, interaction depth = 3, and no changes were made to the other default parameters. The GBDT achieved a classification accuracy of 0.713, 95%CI (0.711, 0.715), AUC = 0.797, significantly above the no information rate (NIR) of 0.500, p < .0001. The MLP model achieved a comparable accuracy score of 0.709, 95%CI (0.707,0.711), AUC = 0.791, significantly above the NIR of 0.500, p < .0001. All RFIs and PFIs are reported in Table <ns0:ref type='table'>4</ns0:ref>. Additionally, the 2 most important features are visualised by PDPs in Figure <ns0:ref type='figure'>2</ns0:ref>. A visual inspection of the PDPs suggests that English music is higher than both Japanese and Chinese music in speechiness, Chinese music is higher than both Japanese and English music in acousticness, and Japanese music is higher than English and Chinese music in energy. In comparing Japanese and Chinese music, we note that acousticness and energy were also present, but were identified only in the GBDT model. In contrast, the MLP model identified loudness and instrumentalness as higher in Japanese music than Chinese music.</ns0:p><ns0:p>Overall, this suggests that, unlike English-Japanese or English-Chinese comparisons which were markedly different on a few main features, the differences between Chinese and Japanese music were spread widely across the various features. Consequently, despite relying on different 'important variables', both the MLP and GBDT managed to achieve a comparably high classification accuracy, with the GBDT outperforming the MLP for all classification tasks (results from DeLong's tests are available on our OSF repository).</ns0:p></ns0:div>
<ns0:div><ns0:head>Additional Analyses</ns0:head><ns0:p>We also visualised the changes in features over time for speechiness, acousticness, energy, instrumentalness, and loudness, from 2000 to 2020. Feature information for songs before 2000 were excluded due to the markedly smaller sample. Other than instrumentalness, which showed a notable decrease over time in Japanese songs, the remaining four features showed stability over time. This suggests that the differences in preference highlighted by the RFIs, PFIs and PDPs could indicate long term cultural preferences for music.</ns0:p></ns0:div>
<ns0:div><ns0:head>Follow-up Study</ns0:head><ns0:p>We obtained a second round of data (approximately one year later) from Top-50 lists for Japan, Taiwan and the USA. Focusing on the identified features of speechiness, acousticness, energy, instrumentalness, and loudness from the previous study, Kruskal-Wallis tests revealed a significant effect of energy on speechiness (χ 2 (2) = 30.5, p < .001), acousticness (χ 2 (2) = 24.5, p < .001), energy (χ 2 (2) = 21.0, p < .001), and loudness (χ 2 (2) = 33.5, p < .001), but no significant effect was observed for instrumentalness (χ 2 (2) = 1.4, p = .49). For speechiness, post-hoc Dwass-Steel-Critchlow-Fligner pairwise comparisons revealed that USA was significantly higher than Taiwan (W = 7.34, p < .001) and Japan (W = 5.91, p < .001), but no significant difference was observed between Taiwan and Japan (W = -1.48, p = .55). For acousticness, Taiwan was significantly higher than Japan (W = 5.97, p < .001) and the USA (W = 6.11, p < .001), but no significant difference was observed between the USA and Japan (W = 0.53, p = .93). For energy, Japan was significantly higher than Taiwan (W = 6.01, p < .001), and the USA (W = 4.35, p = .006), but no significant difference was observed between Taiwan and the USA (W = 2.88, p = .103). For loudness, Japan was significantly higher than Taiwan (W = 7.44, p < .001) and the USA (W = 6.36, p < .001), but no significant difference was observed between Taiwan and the USA (W = 2.20, p = .27). Finally, for instrumentalness, no significant difference was observed between Japan and Taiwan (W = 0.61, p = .90), Japan and the USA (W = 1.67, p = .48), or Taiwan and the USA (W = 1.01, p = .75).</ns0:p><ns0:p>In short, with the exception of instrumentalness, Top-50 playlists obtained one year later nevertheless demonstrate strong consistency with the earlier results. American Top-50 songs are higher than both Japanese and Taiwanese Top-50 songs in speechiness, Taiwanese Top-50 songs are higher than both Japanese and American Top-50 songs in acousticness, and Japanese Top-50 songs are higher than American and Taiwanese music in energy and loudness. However, instrumentalness, that was originally identified as a variable of importance for the MLP model, did not consistently differ between cultures. Indeed, Figure <ns0:ref type='figure'>3</ns0:ref> shows that instrumentalness in Chinese music is inconsistent, with strong fluctuations depending on year. More research is needed to determine if instrumentalness is indeed a preferred feature in Taiwanese markets or merely a passing trend.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Across the multiclass and subsequent binary classification tasks, both the GBDT and MLP models were able to consistently classify songs by cultural market with moderately high accuracy, and the GBDT was often marginally better than the MLP for this purpose. This suggests that the patterns of difference between cultural markets were robust enough to be detected by two different algorithms. A comparison of accuracy scores suggested that the difference between Chinese and Japanese music afforded higher accuracy to the models, than Japanese and English differences. While this could be for several reasons, we speculate a possibility in that music preferences between Japanese and Chinese cultures differed greater that Japanese and English differences. Such a reasoning would support growing calls for decentralisation and internationalisation of psychological research <ns0:ref type='bibr' target='#b23'>(Heine & Ruby, 2010;</ns0:ref><ns0:ref type='bibr'>Heinrich, Heine, & Norenzayan, 2010;</ns0:ref><ns0:ref type='bibr' target='#b5'>Cheung, 2012)</ns0:ref>, in showing that Japanese and Chinese speaking cultures, sometimes thought to be homogenous in cross-cultural research, may actually be more different that previously assumed.</ns0:p><ns0:p>A visual inspection of the PDPs and feature importance scores provide some indicator of where these differences lie. Apart from instrumentalness, all identified feature importance proved to be different across (geographical) cultures in similar directions in the follow-up study. This implies that cultural music preferences are reflected both in the music produced by a culture for their respective industry or market, as well as the overall music preferences by geographically bound members of that culture. While this paper does not empirically explore the underlying cultural mechanisms that may account for these differences, this is nevertheless a starting point for future research to continue from, and we speculate on some interpretations for these results. Western music similarly differed from Chinese and Japanese music through higher speechiness. One explanation could be prosodic bias, in that normal spoken Mandarin Chinese inherently contains more pitch movements than English <ns0:ref type='bibr' target='#b25'>(Hirst, 2013)</ns0:ref>, and consequently, what may be perceived as 'speech-like' by Chinese listeners may not correspond to high speechiness scores. However, this can also be explained through previous research on emotion-arousal preferences in the Western and East-Asian contexts. The high speechiness score in English music could indicate larger preferences for hip-hop and rap music. Rap-music has seen a dramatic increase in popularity in Western markets from the 1980s <ns0:ref type='bibr' target='#b36'>(Mauch et al., 2015)</ns0:ref>, and has been shown to express and embody high-arousal emotions like anger (e.g., <ns0:ref type='bibr' target='#b20'>Hakvoort, 2015)</ns0:ref>, and its relative popularity in Anglo-American cultures could be representative on cultural preferences towards these high arousal emotions described earlier <ns0:ref type='bibr' target='#b50'>(Tsai, 2007)</ns0:ref>, compared to Japanese and Chinese cultures.</ns0:p><ns0:p>One feature that differentiated Chinese from English and Japanese music was high acousticness. This points to lower use of electronic instruments in the production process, and may suggest a preference for more organic, natural sounds in Chinese music. 
Energy appeared to be more important in Japanese than English or Chinese music. We posit that energy preferences in Japan (defined by Spotify as a combination of loudness, complexity, timbre, dynamic range, and noise) could be due to remnants of traditional music aesthetics, that overlap considerably with energy definitions (e.g., beauty in noise/simplicity in complexity: sawari, wabi-sabi, see <ns0:ref type='bibr' target='#b8'>Deva, 1999;</ns0:ref><ns0:ref type='bibr' target='#b0'>Anderson, 2014;</ns0:ref><ns0:ref type='bibr' target='#b41'>Okuno, 2015)</ns0:ref>. On the surface, this could be similarly concluded from increased loudness features in Japanese over Chinese music (from the follow-up study), but the U-shaped relationship between energy and Japanese/Chinese music classification seen in Figure <ns0:ref type='figure'>2</ns0:ref> suggests a deeper nuance that requires further research.</ns0:p><ns0:p>Finally, we consider the strengths and limitations our exploratory approach. Comparing music features offer a greater insight into behavioural and consumption patterns of music preference across cultural spheres. In doing so, we uncover systematic differences between groups that, while being consistent with previous literature, also offer new insight into how cultures differ, that future research can build from in understanding societies. Unfortunately, we were unable to eliminate certain sample biases from our dataset: we assumed our Chinese data to be representative of Chinese music in general, but Spotify is not (as of 2021) active in China despite the inclusion of several mainland Chinese artists in the database. Instead, our findings represented Chinese-speaking listeners in Taiwan and Hong Kong, along with possibly Malaysia and Singapore, who may have differing values and preferences from mainland Chinese listeners, particularly given differences in demographics of users and variation in dialect. Moreover, using the Spotify API limited our selection of music features to those available in the API. Future studies could examine music features through publicly available software (e.g., MIRtoolbox; <ns0:ref type='bibr' target='#b31'>Lartillot, Toiviainen, & Eerola, 2008)</ns0:ref> that have both greater amounts of features and more transparent documentation.</ns0:p><ns0:p>On the other hand, our strengths include our comparisons of features, as opposed to genre, that allowed for validity in comparing cultures because of universality in the perceptual properties of music <ns0:ref type='bibr' target='#b45'>(Savage et al., 2015)</ns0:ref>. This enabled us to conclude that any differences in music features would be due to preference for those features. By contrast, comparing preferences by genre differences across cultures could have introduced confounds to the investigation, as genre is not homogenous across cultures (see <ns0:ref type='bibr' target='#b4'>Bennett, 1999)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In sum, we demonstrated the variability of music preferences across Chinese (Taiwan), Japanese, and Western (American) cultural markets, and identified the features that best account for these differences. In particular, Chinese music was marked by high acousticness, Anglo-American Western music was marked by high speechiness, and Japanese music appeared to be marked by high energy. While we speculated on some reasons why this would be so, future research is needed to validate these theories to develop a holistic understanding of popular music preferences in Chinese and Japanese cultures. As music is an integral part of human society and culture, understanding the mechanisms by which we prefer different types of music may also shed light on the aspects of human society and experience that correspond to these differences.</ns0:p><ns0:p>Our paper also demonstrates the potential uses of machine learning and other computer science methods in cross-cultural research. Given the advent of digital and online media, these repositories of cultural products may hold valuable insight into the diversity of humanity. Computer science as a field has utilised these kinds of data to great effect, be it in developing recommendation systems, or in predicting consumer behaviour, and we hope to demonstrate that these same data and methods can also contribute towards research on society and culture. While a common argument has been that machine learning emphasises prediction, whereas social scientific research prefers interpretation, we show that the two goals are not mutually exclusive. As demonstrated through the use of model interpretation techniques like RFIs and PDPs, supplementing commonly-used prediction focused models in computer science with explanation and model interpretation techniques enables big data and machine learning to offer an efficient and viable means for the empirical analysis of sociocultural phenomenon. At the same time, these methods are more objective, and hold less bias than commonly used methods like selfreports. Particularly for cross-cultural research, future directions could also apply this methodology to identify new avenues of cultural differences in other mediums, as an additional analysis tool. Additionally, computer scientists and engineers could also benefit from this knowledge, as such analyses of sociocultural phenomenon could also aid with the fine-tuning of weights in more opaque deep learning models, such as in recommendation systems used by streaming companies when targeting users from different cultures. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:58632:2:0:NEW 7 Jun 2021) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,199.12,525.00,393.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,280.87,525.00,403.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,204.37,525.00,401.25' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Medians (L/U quantiles) and missing data for musical features (excluding mode), and release year.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Feature</ns0:cell><ns0:cell cols='2'>Chinese</ns0:cell><ns0:cell cols='2'>Japanese</ns0:cell><ns0:cell cols='2'>Western</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Median (L/U)</ns0:cell><ns0:cell>Missing</ns0:cell><ns0:cell>Median (L/U)</ns0:cell><ns0:cell>Missing</ns0:cell><ns0:cell>Median (L/U)</ns0:cell><ns0:cell>Missing</ns0:cell></ns0:row><ns0:row><ns0:cell>Danceability</ns0:cell><ns0:cell>0.56 (0.46/0.66)</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>0.56 (0.45/0.67)</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0.66 (0.55/0.76)</ns0:cell></ns0:row><ns0:row><ns0:cell>Energy</ns0:cell><ns0:cell>0.46 (0.34/0.63)</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>0.76 (0.51/0.90)</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0.69 (0.54/0.82)</ns0:cell></ns0:row><ns0:row><ns0:cell>Loudness</ns0:cell><ns0:cell>-8.9 (-11.1/-6.9)</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>-6.2 (-9.3/-4.3)</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>-6.8 (-8.9/-5.2)</ns0:cell></ns0:row><ns0:row><ns0:cell>Speechiness</ns0:cell><ns0:cell>0.04 (0.03/0.05)</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>0.05 (0.04/0.09)</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0.08 (0.04/0.24)</ns0:cell></ns0:row><ns0:row><ns0:cell>Acousticness</ns0:cell><ns0:cell>0.54 (0.21/0.76)</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>0.11 (0.01/0.49)</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0.09 (0.02/0.29)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Instrumentalness 1.4E-6 (0.00/1.1E-</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell cols='2'>1.0E-4 (0.00/0.36) 4</ns0:cell><ns0:cell>0.00 (0.00/0.0006)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>4)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Liveness</ns0:cell><ns0:cell>0.13 (0.10/0.21)</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>0.14 (0.10/0.29)</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0.14 (0.10/0.29)</ns0:cell></ns0:row><ns0:row><ns0:cell>Valence</ns0:cell><ns0:cell>0.37 (0.24/0.57)</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>0.52 (0.31, 0.71)</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0.51 (0.33/0.69)</ns0:cell></ns0:row><ns0:row><ns0:cell>Tempo</ns0:cell><ns0:cell>122.7</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell cols='2'>123.7 (99.0/144.0) 4</ns0:cell><ns0:cell>119.7 (96.6.136.2)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(100.0/138.1)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Duration (ms)</ns0:cell><ns0:cell>240213</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>236840</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>215640</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(205586/272586)</ns0:cell><ns0:cell /><ns0:cell>(187266/281573)</ns0:cell><ns0:cell /><ns0:cell>(184727/250693)</ns0:cell></ns0:row><ns0:row><ns0:cell>Release year</ns0:cell><ns0:cell>2011 (2005/2015)</ns0:cell><ns0:cell>148</ns0:cell><ns0:cell>2015 (2010/2018)</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>2016 (2012/2018)</ns0:cell></ns0:row></ns0:table><ns0:note>1 PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:58632:2:0:NEW 7 Jun 2021)</ns0:note></ns0:figure>
<ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:58632:2:0:NEW 7 Jun 2021)Manuscript to be reviewed</ns0:note>
</ns0:body>
" | "Dear Professor Jung,
On behalf of the authorship team, I would like to thank you and the anonymous reviewers for the feedback towards the manuscript. We welcome all the comments, and address the points copied in below (reply in grey). As your comment was fairly similar to Reviewer 2’s comment, we consolidated our responses into one main reply. Please do not hesitate to contact me if you need any additional clarifications or to discuss the points below.
Warmest regards,
Kongmeng Liew (Ph.D.)
Editor comment:
Please revise your manuscript based on the review comments. Importantly, authors need to consider how to emphasize the relevance of the work to the topical coverage of this journal.
Reviewer 2 comment:
The authors have significantly revised their manuscript according to review comments. The authors have changed their main contribution from classifying popular music according to cultural backgrounds to analyzing popular music. However, I still do not agree that the analysis methods used in this study have novelty. Also, the authors have not shown applications or potential applications of the analysis methods. Therefore, the current version of the manuscript is not suitable for journals in the computer science area. I want to recommend the authors search for journals dealing with data analysis results, not analysis methodologies.
Thank you for your comments. As you and the reviewers may have pointed out, our work is situated at the intersection of the computer sciences and social sciences, in that we use methods frequently employed in the computer sciences (such as music information retrieval and explainable machine learning), applied to the study of sociocultural phenomenon. We think that our novelty lies in the application of computer science methodology to cultural background classification as a way to uncover cultural differences. In the manuscript, we elaborate on these ideas in the ‘conclusion’ section, on how the ‘Computer Sciences’ component of this paper is central to the research. For example, in lines 388-392, “as demonstrated through the use of model interpretation techniques like RFIs and PDPs, supplementing commonly-used prediction focused models in computer science with explanation and model interpretation techniques enables big data and machine learning to offer an efficient and viable means for the scientific analysis of sociocultural phenomenon.”
We think this manuscript is relevant for publication in PeerJ Computer Science for three additional reasons.
1) We respectfully disagree with Reviewer 2 that PeerJ Computer Science is an analysis methodology focused journal. In Point 1 of the ‘Aims and Scope’ (https://peerj.com/about/aims-and-scope/cs) of the journal, it is stated that the journal is open to “articles across the whole of Computer Science”. While our paper may be different from conventional articles that focus primarily on systems and processes, we think that we nevertheless employ sufficient usage of computer science processes and techniques (particular in the social computing field) that may be difficult for readers with backgrounds outside the Computer Sciences to comprehend fully. This was also one of our main considerations in deciding to submit to this journal.
2) In the light of recent discussions on sociotechnical issues in computer science (e.g., Ehsan & Riedl, 2020: https://link.springer.com/chapter/10.1007/978-3-030-60117-1_33), that call for increased explanations and considerations on human factors, our results and interpretation-focused discussion may also be relevant to this category of computer science research.
3) In essence, our paper falls under the category of computational social science (see Mason et al., 2013: https://link.springer.com/article/10.1007/s10994-013-5426-8, paragraphs 1-2), that is an interdisciplinary approach bringing together social scientists and computer scientists. As such, we think this discipline is suitable for a broad Computer Science journal like PeerJ Computer Science. Moreover, our approach of using computational methods to uncover cross-cultural differences is of highly novel, and may be of interest to researchers in this field.
" | Here is a paper. Please give your review comments after reading it. |
189 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Framing is a process of emphasizing a certain aspect of an issue over others, nudging readers or listeners towards different positions on the issue even without making a biased argument. Here, we propose FrameAxis, a method for characterizing documents by identifying the most relevant semantic axes ('microframes') that are overrepresented in the text using word embedding. Our unsupervised approach can be readily applied to large datasets because it does not require manual annotations. It can also provide nuanced insights by considering a rich set of semantic axes. FrameAxis is designed to quantitatively tease out two important dimensions of how microframes are used in the text. Microframe bias captures how biased the text is on a certain microframe, and microframe intensity shows how actively a certain microframe is used. Together, they offer a detailed characterization of the text. We demonstrate that microframes with the highest bias and intensity align well with sentiment, topic, and the partisan spectrum by applying FrameAxis to multiple datasets from restaurant reviews to political news. Existing domain knowledge can be incorporated into FrameAxis by using custom microframes and by using FrameAxis as an iterative exploratory analysis instrument. Additionally, we propose methods for explaining the results of FrameAxis at the level of individual words and documents. Our method may accelerate scalable and sophisticated computational analyses of framing across disciplines.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Framing is a process of highlighting a certain aspect of an issue to make it salient <ns0:ref type='bibr' target='#b12'>(Entman, 1993;</ns0:ref><ns0:ref type='bibr' target='#b9'>Chong and Druckman, 2007)</ns0:ref>. By focusing on one aspect over another, a communicator can induce a biased understanding in listeners even without making any biased argument <ns0:ref type='bibr' target='#b26'>(Kahneman and Tversky, 1979;</ns0:ref><ns0:ref type='bibr' target='#b12'>Entman, 1993;</ns0:ref><ns0:ref type='bibr' target='#b18'>Goffman, 1974)</ns0:ref>. For example, when reporting on the issue of poverty, one news outlet may put an emphasis on how successful individuals succeeded through hard work, while another may emphasize the failure of national policies. It is known that these two framings can induce contrasting understandings of and attitudes about poverty <ns0:ref type='bibr' target='#b24'>(Iyengar, 1994)</ns0:ref>. Readers exposed to the former framing became more likely to blame individual failings, whereas those exposed to the latter tended to criticize the government or other systematic factors rather than individuals. Framing has been actively studied, particularly in political discourse and news media, because it is considered a potent tool for political persuasion <ns0:ref type='bibr' target='#b47'>(Scheufele and Tewksbury, 2007)</ns0:ref>. It has been argued that the frames used by politicians and media shape the public understanding of issue salience <ns0:ref type='bibr' target='#b9'>(Chong and Druckman, 2007;</ns0:ref><ns0:ref type='bibr' target='#b27'>Kinder, 1998;</ns0:ref><ns0:ref type='bibr' target='#b31'>Lakoff, 2014;</ns0:ref><ns0:ref type='bibr' target='#b54'>Zaller, 1992)</ns0:ref>, and politicians strive to make their framing more prominent among the public <ns0:ref type='bibr' target='#b11'>(Druckman and Nelson, 2003)</ns0:ref>.</ns0:p><ns0:p>Framing is not confined to politics. It has been considered crucial in marketing <ns0:ref type='bibr' target='#b33'>(Maheswaran and Meyers-Levy, 1990;</ns0:ref><ns0:ref type='bibr' target='#b22'>Homer and Yoon, 1992;</ns0:ref><ns0:ref type='bibr' target='#b20'>Grewal et al., 1994)</ns0:ref>, public health campaigns <ns0:ref type='bibr' target='#b45'>(Rothman and Salovey, 1997;</ns0:ref><ns0:ref type='bibr' target='#b14'>Gallagher and Updegraff, 2011)</ns0:ref>, and other domains <ns0:ref type='bibr' target='#b39'>(Pelletier and Sharp, 2008;</ns0:ref><ns0:ref type='bibr' target='#b23'>Huang et al., 2015)</ns0:ref>. Yet, the operationalization of framing is inherently vague <ns0:ref type='bibr' target='#b46'>(Scheufele, 1999;</ns0:ref><ns0:ref type='bibr' target='#b49'>Sniderman and Theriault, 2004)</ns0:ref> and remains a challenging open question. Because framing research relies heavily on manual effort, from choosing an issue to isolating specific attitudes, identifying a set of frames for the issue, and analyzing the content based on a developed codebook <ns0:ref type='bibr' target='#b9'>(Chong and Druckman, 2007)</ns0:ref>, it is not only difficult to avoid subjectivity but also challenging to conduct large-scale, systematic studies that leverage huge online datasets. Several computational approaches have been proposed to address these issues.
They aim to characterize political discourse, for instance, by recognizing political ideology <ns0:ref type='bibr' target='#b48'>(Sim et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b2'>Bamman and Smith, 2015)</ns0:ref> and sentiment <ns0:ref type='bibr' target='#b41'>(Pla and Hurtado, 2014)</ns0:ref>, or by leveraging established ideas such as the moral foundation theory (MFT) <ns0:ref type='bibr' target='#b25'>(Johnson and Goldwasser, 2018;</ns0:ref><ns0:ref type='bibr' target='#b13'>Fulgoni et al., 2016)</ns0:ref>, general media frames <ns0:ref type='bibr' target='#b6'>(Card et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b29'>Kwak et al., 2020)</ns0:ref>, and frame-related language <ns0:ref type='bibr' target='#b3'>(Baumer et al., 2015)</ns0:ref>. Yet, most studies still rely on small sets of predefined ideas and annotated datasets.</ns0:p><ns0:p>To overcome these limitations, we propose FrameAxis, an unsupervised method for characterizing texts with respect to a variety of microframes. Each microframe is operationalized by an antonym pair, such as legal -illegal, clean -dirty, or fair -unfair. The value of antonym pairs in characterizing text has been repeatedly demonstrated <ns0:ref type='bibr' target='#b21'>(Haidt and Graham, 2007;</ns0:ref><ns0:ref type='bibr' target='#b25'>Johnson and Goldwasser, 2018;</ns0:ref><ns0:ref type='bibr' target='#b13'>Fulgoni et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b1'>An et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b28'>Kozlowski et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b34'>Mathew et al., 2020)</ns0:ref>. For example, MFT identifies five basic moral 'axes' using antonyms, namely 'Care/Harm', 'Fairness/Cheating', 'Loyalty/Betrayal', 'Authority/Subversion', and 'Purity/Degradation', as the critical elements for individual judgment <ns0:ref type='bibr' target='#b21'>(Haidt and Graham, 2007)</ns0:ref>. MFT has been applied to discover politicians' stances on issues <ns0:ref type='bibr' target='#b25'>(Johnson and Goldwasser, 2018)</ns0:ref> and political leaning in partisan news <ns0:ref type='bibr' target='#b13'>(Fulgoni et al., 2016)</ns0:ref>, demonstrating the flexibility and interpretability of antonymous semantic axes in characterizing text. On the other hand, SemAxis <ns0:ref type='bibr' target='#b1'>(An et al., 2018)</ns0:ref> and subsequent studies <ns0:ref type='bibr' target='#b28'>(Kozlowski et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b34'>Mathew et al., 2020)</ns0:ref> leverage word embeddings to characterize the semantics of a word in different communities or domains (e.g., the different meanings of 'soft' in the context of sports vs. toys) by computing the similarities between the word and a set of predefined antonymous axes ('semantic axes'). As in SemAxis, FrameAxis leverages the power of word embedding, which allows us to capture similarities between a word and a semantic axis.</ns0:p><ns0:p>For each microframe defined by an antonym pair, FrameAxis is designed to quantitatively tease out two important dimensions of how it is used in the text. Microframe bias captures how biased the text is on a certain microframe, and microframe intensity shows how actively a certain microframe is used. Both dimensions together offer a nuanced characterization of the text. For example, consider the framing bias and intensity of a text about an immigration issue on the illegal -legal microframe.
The framing bias then measures how much the text focuses on the 'illegal' perspective of the immigration issue rather than the 'legal' perspective (and vice versa); the framing intensity captures how much the text focuses on the illegal or legal perspective of the issue at all, rather than on other perspectives, such as segregation (i.e., the segregated -desegregated microframe).</ns0:p><ns0:p>While FrameAxis works in an unsupervised manner, it can also benefit from manually curated microframes. When domain experts are already aware of important candidate frames in the text, those frames can be directly formulated as microframes. For the case when FrameAxis works in an unsupervised manner, which we expect to be much more common, we propose methods to identify the most relevant semantic axes based on the values of microframe bias and intensity. Moreover, we suggest document- and word-level analysis methods that explain, at different levels of granularity, how and why the resulting microframe bias and intensity arise.</ns0:p><ns0:p>We emphasize that FrameAxis cannot replace conventional framing research methods, which involve sophisticated close reading of the text. Also, we do not expect the microframes to map directly onto the frames identified by domain experts. FrameAxis can thus be considered a computational aid that facilitates systematic exploration of texts and subsequent in-depth analysis.</ns0:p></ns0:div>
<ns0:div><ns0:head>METHODS</ns0:head><ns0:p>FrameAxis involves four steps: (i) compiling a set of microframes, (ii) computing word contributions to each microframe, (iii) calculating microframe bias and intensity by aggregating the word contributions, and finally (iv) identifying significant microframes by comparing with a null model. We then present how to compute the relevance of microframes to a given corpus.</ns0:p></ns0:div>
<ns0:div><ns0:head>Building a Set of Predefined Microframes</ns0:head><ns0:p>FrameAxis defines a microframe as a 'semantic axis' <ns0:ref type='bibr' target='#b1'>(An et al., 2018)</ns0:ref> in a word vector space: a vector from one word to its antonym. Given a pair of antonyms (pole words), w^+ (e.g., 'happy') and w^- (e.g., 'sad'), the semantic axis vector is v_f = v_{w^+} - v_{w^-}, where f is a microframe or a semantic axis (e.g., happy -sad), and v_{w^+} and v_{w^-} are the corresponding word vectors. To capture nuanced framing, it is crucial to cover a variety of antonym pairs. We extract 1,828 adjective antonym pairs from WordNet <ns0:ref type='bibr' target='#b35'>(Miller, 1995)</ns0:ref> and remove 207 that are not present in the GloVe embeddings (840B tokens, 2.2M vocab, 300d vectors) <ns0:ref type='bibr' target='#b40'>(Pennington et al., 2014)</ns0:ref>. As a result, we use 1,621 antonym pairs as the predefined microframes.</ns0:p><ns0:p>As we explained earlier, when potential microframes of the text are known, using only those microframes is also possible.</ns0:p></ns0:div>
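<ns0:p>As a rough illustration of this step, the following Python sketch (not the authors' code) collects adjective antonym pairs from WordNet via NLTK and keeps those whose pole words are in the embedding vocabulary; the GloVe file path and the orientation of the poles are illustrative assumptions.</ns0:p>
```python
# A sketch of microframe construction, not the authors' exact code.
# Assumes NLTK's WordNet (nltk.download('wordnet')) and GloVe vectors
# converted to word2vec format beforehand (the path is illustrative).
from nltk.corpus import wordnet as wn
from gensim.models import KeyedVectors

glove = KeyedVectors.load_word2vec_format("glove.840B.300d.w2v.txt")

def wordnet_adjective_antonym_pairs():
    """Collect unordered adjective antonym pairs from WordNet."""
    pairs = set()
    for synset in wn.all_synsets(pos=wn.ADJ):
        for lemma in synset.lemmas():
            for antonym in lemma.antonyms():
                # sort only to deduplicate; which pole is w+ is a labeling choice
                pairs.add(tuple(sorted((lemma.name(), antonym.name()))))
    return pairs

# Keep pairs whose pole words are both in the embedding vocabulary and
# represent each microframe f as the axis vector v_f = v_{w+} - v_{w-}.
microframes = {
    (w_plus, w_minus): glove[w_plus] - glove[w_minus]
    for w_minus, w_plus in wordnet_adjective_antonym_pairs()
    if w_plus in glove.key_to_index and w_minus in glove.key_to_index
}
```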
<ns0:div><ns0:head>Computation of Microframe Bias and Intensity</ns0:head><ns0:p>A microframe f (or semantic axis in <ns0:ref type='bibr' target='#b1'>(An et al., 2018)</ns0:ref>) is defined by a pair of antonyms w^+ and w^-. Microframe bias and intensity computation are based on the contribution of each word to a microframe. Formally, we define the contribution of a word w to a microframe f as the similarity between the word vector v_w and the microframe vector v_f (= v_{w^+} - v_{w^-}). While any similarity measure between two vectors can be used here, for simplicity, we use cosine similarity:</ns0:p><ns0:formula xml:id='formula_0'>c_{wf} = \frac{v_w \cdot v_f}{\lVert v_w \rVert \, \lVert v_f \rVert} \tag{1}</ns0:formula><ns0:p>We then define the microframe bias of a given corpus t on a microframe f as the weighted average of each word's contribution c_{wf} to the microframe f over all the words in t. This aggregation-based approach shares conceptual roots with the traditional expectancy value model <ns0:ref type='bibr' target='#b38'>(Nelson et al., 1997b)</ns0:ref>, which explains an individual's attitude toward an object or an issue. In the model, the individual's attitude is calculated as the weighted sum of the evaluations of each attribute a_i of the object, where the weight is the salience of a_i. In FrameAxis, a corpus is represented as a bag of words, and each word is considered an attribute of the corpus. A word's contribution to a microframe can then be considered the evaluation of an attribute, and the frequency of the word its salience. Accordingly, the weighted average of the words' contributions to the microframe f over all the words in t can be mapped onto the individual's attitude toward an object, that is, the microframe bias. An analogous framework using a weighted average of each word's score has also been proposed for computing the overall valence score of a document <ns0:ref type='bibr' target='#b10'>(Dodds and Danforth, 2010)</ns0:ref>. Formally, we calculate the microframe bias, B_{tf}, of a text corpus t on a microframe f as follows:</ns0:p><ns0:formula xml:id='formula_1'>B_{tf} = \frac{\sum_{w \in t} n_w \, c_{wf}}{\sum_{w \in t} n_w} \tag{2}</ns0:formula><ns0:p>where n_w is the number of occurrences of word w in t.</ns0:p><ns0:p>Microframe intensity captures how strongly a given microframe is used in the document. Namely, for a given corpus t and a microframe f, we measure the second moment of the word contributions c_{wf} on the microframe f over all the words in t. For instance, if a given document is emotionally charged with many words that strongly express either happiness or sadness, we can say that the happy -sad microframe is heavily used in the document regardless of the microframe bias on that axis. Formally, the microframe intensity, I_{tf}, of a text corpus t on a microframe f is calculated as follows:</ns0:p><ns0:formula xml:id='formula_2'>I_{tf} = \frac{\sum_{w \in t} n_w \, (c_{wf} - B_{Tf})^2}{\sum_{w \in t} n_w} \tag{3}</ns0:formula><ns0:p>where B_{Tf} is the baseline microframe bias of the entire text corpus T on the microframe f, used for computing the second moment.
As the squared term is included in the equation, words that are far from the baseline microframe bias, and thus close to either of the poles, contribute strongly to the microframe intensity.</ns0:p><ns0:p>We present an illustration of microframe intensity and bias in Figure <ns0:ref type='figure' target='#fig_15'>1(A)</ns0:ref>, where arrows represent the vectors of words appearing in a corpus, and blue and orange circles represent the two pole word vectors, which define the w^+ -w^- microframe. If words that are semantically closer to one pole are frequently used in a corpus, the corpus has a high microframe bias toward that pole and a high microframe intensity on the w^+ -w^- microframe (top right). By contrast, if words that are semantically close to both poles are frequently used, the overall microframe bias becomes low because the biases toward the two poles average out, but the microframe intensity stays high because the w^+ -w^- microframe is still actively used (bottom right).</ns0:p></ns0:div>
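<ns0:p>A minimal sketch of the bias and intensity computation in Equations (1)-(3); it assumes a gensim KeyedVectors object such as the GloVe vectors above, and that the baseline bias B_{Tf} has been obtained by applying the same bias formula to the entire background corpus T.</ns0:p>
```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors (Eq. 1 when v is an axis vector)."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def bias_and_intensity(word_counts, axis_vec, embeddings, baseline_bias):
    """Microframe bias (Eq. 2) and intensity (Eq. 3) for a bag of words.

    word_counts:   dict mapping word w -> count n_w in the target corpus t
    axis_vec:      v_{w+} - v_{w-} for the microframe f
    baseline_bias: B_{Tf}, the bias of the whole background corpus T,
                   obtained by applying the same bias formula to T
    """
    num_bias = num_intensity = denom = 0.0
    for w, n_w in word_counts.items():
        if w not in embeddings.key_to_index:
            continue
        c_wf = cosine(embeddings[w], axis_vec)
        num_bias += n_w * c_wf
        num_intensity += n_w * (c_wf - baseline_bias) ** 2
        denom += n_w
    return num_bias / denom, num_intensity / denom
```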
<ns0:div><ns0:head>Handling Non-informative Topic Words</ns0:head><ns0:p>It is known that pretrained word embeddings encode multiple biases <ns0:ref type='bibr' target='#b5'>(Bolukbasi et al., 2016)</ns0:ref>. Although de-biasing techniques have been proposed <ns0:ref type='bibr' target='#b55'>(Zhao et al., 2018)</ns0:ref>, these biases are not completely eliminated <ns0:ref type='bibr' target='#b19'>(Gonen and Goldberg, 2019)</ns0:ref>. For example, the word 'food' in the GloVe pretrained embedding space is much closer to 'savory' (cosine similarity: 0.4321) than to 'unsavory' (cosine similarity: 0.1561). Because such a bias could distort the framing bias and intensity through the word's high frequency in reviews about food, we remove 'food' from the analysis of those reviews.</ns0:p><ns0:p>FrameAxis computes the word-level framing bias (intensity) shift to help this process, which we explain in the 'Explainability' section. Through the word-level shift, FrameAxis users can easily check whether words that should be neutral on a certain semantic axis are indeed located as neutral within a given embedding space.</ns0:p><ns0:p>While this requires manual effort, one shortcut is to check the topic word first. For example, when FrameAxis is applied to reviews on movies, the word 'movie' should be examined first because 'movie' should be neutral and non-informative. Also, as reviews on movies are likely to contain the word 'movie' multiple times, even a small contribution of 'movie' to a given microframe could be amplified by its high frequency of occurrence, n_w in Equations (2) and (3). After the manual confirmation, such words are replaced with <UNK> tokens and are not considered in the computation of framing bias and intensity.</ns0:p><ns0:p>In this work, we removed topic words as follows: in the restaurant review dataset, the word indicating the aspect (i.e., ambience, food, price, or service) is replaced with an <UNK> token. In the AllSides political news dataset, we consider the issues defined by AllSides, such as abortion, immigration, elections, education, and polarization, as topic words.</ns0:p></ns0:div>
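<ns0:p>A minimal sketch of this manual check and masking step, reusing the cosine helper and the glove object from the sketches above; the helper names are hypothetical.</ns0:p>
```python
def pole_similarities(word, pos_pole, neg_pole, emb):
    """Check whether a supposedly neutral word leans toward one pole."""
    return cosine(emb[word], emb[pos_pole]), cosine(emb[word], emb[neg_pole])

# e.g., pole_similarities('food', 'savory', 'unsavory', glove) exposes the
# embedding bias discussed above, so 'food' is masked before aggregation.
def mask_topic_words(tokens, topic_words):
    """Replace confirmed topic words with <UNK> so they are ignored later."""
    return [t if t not in topic_words else "<UNK>" for t in tokens]
```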
<ns0:div><ns0:head>Identifying Statistically Significant Microframes</ns0:head><ns0:p>The microframe bias and intensity of a target corpus can be interpreted with respect to a background distribution to assess statistical significance. We compute the microframe bias and intensity on the microframe f from a bootstrapped sample s of the entire corpus T, denoted by B^{NULL}_{sf} and I^{NULL}_{sf}, respectively. We set the size of the sample s to be equal to that of the target corpus t.</ns0:p><ns0:p>Then, the difference between B^{NULL}_{sf} and B_{tf} and that between I^{NULL}_{sf} and I_{tf} show how likely the microframe bias and intensity in the target corpus could be obtained by chance. The statistical significance of the observation is calculated with two-tailed tests on the N bootstrap samples. By setting a threshold p-value, we identify the significant microframes. In this work, we use N = 1,000 and p = 0.05.</ns0:p><ns0:p>We can also compute the effect size (|η|, the difference between the observed value and the sample mean) for microframe f:</ns0:p><ns0:formula xml:id='formula_3'>\eta^{B}_{f} = B_{tf} - \overline{B^{NULL}_{f}} = B_{tf} - \frac{\sum_{i=1}^{N} B^{NULL}_{s_i f}}{N} \tag{4} \qquad \eta^{I}_{f} = I_{tf} - \overline{I^{NULL}_{f}} = I_{tf} - \frac{\sum_{i=1}^{N} I^{NULL}_{s_i f}}{N} \tag{5}</ns0:formula><ns0:p>We identify the top M significant microframes in terms of microframe bias (intensity) as the M significant microframes with the largest |η^B| (|η^I|).</ns0:p></ns0:div>
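<ns0:p>The bootstrap test can be sketched as follows, reusing bias_and_intensity from above; bag_of_words is a hypothetical tokenizer-and-counter, and the empirical two-tailed p-value is one common way to operationalize the test described here.</ns0:p>
```python
import random
from itertools import chain

def bootstrap_null(all_docs, k, axis_vec, emb, baseline_bias, n_boot=1000):
    """Null distributions of bias and intensity from resamples of corpus T."""
    null_bias, null_intensity = [], []
    for _ in range(n_boot):
        sample = random.choices(all_docs, k=k)              # sample s with |s| = |t|
        counts = bag_of_words(chain.from_iterable(sample))  # hypothetical counter
        b, i = bias_and_intensity(counts, axis_vec, emb, baseline_bias)
        null_bias.append(b)
        null_intensity.append(i)
    return null_bias, null_intensity

def effect_size_and_p(observed, null_values):
    """Eq. (4)/(5) effect size plus an empirical two-tailed p-value."""
    mean_null = sum(null_values) / len(null_values)
    eta = observed - mean_null
    p = sum(abs(x - mean_null) >= abs(eta) for x in null_values) / len(null_values)
    return eta, p
```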
<ns0:div><ns0:head>Microframe Bias and Intensity Shift per Word</ns0:head><ns0:p>We define the word-level microframe bias and intensity shifts in a given corpus t as follows:</ns0:p><ns0:formula xml:id='formula_4'>S^{t}_{w}(B_f) = \frac{n_w \, c_{wf}}{\sum_{w \in t} n_w} \tag{6} \qquad S^{t}_{w}(I_f) = \frac{n_w \, (c_{wf} - B_{Tf})^2}{\sum_{w \in t} n_w} \tag{7}</ns0:formula><ns0:p>which show how a given word w shifts the microframe bias and intensity by considering both the word's contribution to the microframe (c_{wf}) and its number of appearances in the target corpus t (n_w). In this work, both shifts are compared with those from the background corpus.</ns0:p></ns0:div>
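<ns0:p>A sketch of the per-word shift computation in Equations (6) and (7), under the same assumptions as the sketches above; comparing the shifts of the same word in the target and background corpora yields the word-level diagrams discussed in the Results.</ns0:p>
```python
def word_shifts(word_counts, axis_vec, emb, baseline_bias):
    """Per-word bias shift (Eq. 6) and intensity shift (Eq. 7) in corpus t."""
    total = sum(n for w, n in word_counts.items() if w in emb.key_to_index)
    shifts = {}
    for w, n_w in word_counts.items():
        if w not in emb.key_to_index:
            continue
        c_wf = cosine(emb[w], axis_vec)
        shifts[w] = (n_w * c_wf / total,                          # bias shift
                     n_w * (c_wf - baseline_bias) ** 2 / total)   # intensity shift
    return shifts
```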
<ns0:div><ns0:head>Contextual Relevance of Microframes to a Given Corpus</ns0:head><ns0:p>Not all predefined microframes are necessarily meaningful for a given corpus. While we provide a method to compute the statistical significance of each microframe for a given corpus, filtering out irrelevant microframes in advance can reduce computation cost. We propose two methods to compute the relevance of microframes to a given corpus: an embedding-based and a language model-based approach.</ns0:p><ns0:p>First, the embedding-based approach calculates the relevance of a microframe as the cosine similarity between the microframe and the primary topic of the corpus within a word vector space. A topic can be defined as a set of words related to the primary topic of the corpus. We use τ = {w_{t1}, w_{t2}, w_{t3}, ..., w_{tn}} to represent the set of topic words. The relevance between a microframe f, defined by two pole words w^+ and w^-, and a set of topic words τ can be represented as the average cosine similarity between the pole word vectors (v_{w^+} and v_{w^-}) and the topic word vectors (v_{w_{ti}}):</ns0:p><ns0:formula xml:id='formula_5'>r_{tf} = \frac{1}{|\tau|} \sum_{w_{ti} \in \tau} \frac{(\text{relevance of } w^+ \text{ to } w_{ti}) + (\text{relevance of } w^- \text{ to } w_{ti})}{2} = \frac{1}{|\tau|} \sum_{w_{ti} \in \tau} \frac{1}{2} \left( \frac{v_{w_{ti}} \cdot v_{w^+}}{\lVert v_{w_{ti}} \rVert \lVert v_{w^+} \rVert} + \frac{v_{w_{ti}} \cdot v_{w^-}}{\lVert v_{w_{ti}} \rVert \lVert v_{w^-} \rVert} \right) \tag{8}</ns0:formula><ns0:p>Second, the language model-based approach calculates the relevance of a microframe as the perplexity of template-filled sentences. For example, consider the two templates:</ns0:p><ns0:p>• T1(topic word, pole word): {topic word} is {pole word}.</ns0:p><ns0:p>• T2(topic word, pole word): {topic word} are {pole word}.</ns0:p><ns0:p>If a topic word is 'healthcare' and the microframe is essential -inessential, four sentences (two for each pole word) can be generated. Following a previous method <ns0:ref type='bibr' target='#b50'>(Wang and Cho, 2019)</ns0:ref>, we use a pre-trained OpenAI GPT model to compute the perplexity score.</ns0:p><ns0:p>We take the lower perplexity score for each pole word because the lower score should come from the sentence with the correct subject-verb agreement (i.e., singular-singular or plural-plural). In this example, we take 'healthcare is essential' and 'healthcare is inessential'. Then, we sum the two selected perplexity scores, one for each pole word, and call the result the frame relevance of the corresponding microframe to the topic.</ns0:p><ns0:p>Depending on the corpus and topic, a more complex template, such as 'A(n) {topic word} issue has a {pole word} perspective.', might work better. Building appropriate template sentences requires a good understanding of the corpus and topic.</ns0:p></ns0:div>
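<ns0:p>Both relevance scores can be sketched as follows; the embedding-based score follows Equation (8), and the perplexity helper uses the Hugging Face 'openai-gpt' checkpoint as one plausible stand-in for the pre-trained OpenAI GPT model mentioned above.</ns0:p>
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def embedding_relevance(topic_words, pos_pole, neg_pole, emb):
    """Embedding-based relevance of a microframe to topic words (Eq. 8)."""
    per_topic = [(cosine(emb[w], emb[pos_pole]) + cosine(emb[w], emb[neg_pole])) / 2
                 for w in topic_words]
    return sum(per_topic) / len(per_topic)

# 'openai-gpt' is assumed here as the pre-trained GPT checkpoint.
tok = AutoTokenizer.from_pretrained("openai-gpt")
lm = AutoModelForCausalLM.from_pretrained("openai-gpt").eval()

def perplexity(sentence):
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss  # mean per-token cross-entropy
    return torch.exp(loss).item()

def lm_relevance(topic, pos_pole, neg_pole):
    """Sum of per-pole perplexities; lower values mean more natural pairings."""
    def pole_score(pole):
        # take the lower perplexity across the T1/T2 templates ('is' vs. 'are')
        return min(perplexity(f"{topic} is {pole}"),
                   perplexity(f"{topic} are {pole}"))
    return pole_score(pos_pole) + pole_score(neg_pole)
```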
<ns0:div><ns0:head>Human Evaluation</ns0:head><ns0:p>We perform human evaluations through Amazon Mechanical Turk (MTurk). For microframe bias, we prepare the top 10 significant microframes ranked by the effect size (i.e., the answer set) and 10 randomly selected microframes with arbitrary microframe bias (i.e., the random set) for each pair of aspect and sentiment (e.g., positive reviews about ambience). As it is hard to perceive subtle differences in the magnitude of microframe bias and intensity through crowdsourcing, we mark the microframe bias of each microframe in boldface instead of showing its numeric value.</ns0:p><ns0:p>As a unit of question-and-answer tasks in MTurk (Human Intelligence Task [HIT]), we ask 'Which set of antonym pairs do better characterize a positive restaurant review on ambience? (A word on the right side of each pair (in bold) is associated with a positive restaurant review on ambience.)' The italicized text changes according to the aspect and sentiment. We note that, for every HIT, the order of microframes in both sets is shuffled. The location (i.e., top or bottom) of the answer set is also randomly chosen to avoid unexpected biases of respondents.</ns0:p><ns0:p>For microframe intensity, we prepare the top 10 significant microframes (i.e., the answer set) and 10 randomly selected microframes (i.e., the random set) for each pair of aspect and sentiment. The top 10 microframes are chosen by the effect size, computed by Equations (4) and (5), among the significant microframes. We then ask 'Which set of antonym pairs do better characterize a positive restaurant review on service?' The rest of the procedure is the same as in the microframe bias experiment.</ns0:p><ns0:p>For quality control of the crowd-sourced answers, we recruit workers who (1) live in the U.S., (2) have more than 1,000 approved HITs, and (3) have an approval rate of at least 95%. We allow a worker to answer up to 10 HITs, and we recruit 15 workers for each (aspect, sentiment) pair. We pay 0.02 USD for each HIT.</ns0:p></ns0:div>
<ns0:div><ns0:head>RESULTS</ns0:head></ns0:div>
<ns0:div><ns0:head>Microframes in Restaurant Reviews</ns0:head><ns0:p>To validate the concepts of microframe bias and intensity, we examine the SemEval 2014 task 4 dataset <ns0:ref type='bibr' target='#b42'>(Pontiki et al., 2014)</ns0:ref>, a restaurant review dataset where reviews are grouped by aspect (food, ambience, service, and price) and sentiment (positive and negative). This dataset provides an ideal playground because i) restaurant reviews tend to have a clear bias (whether the experience was good or bad), which can be used as a benchmark for framing bias, and ii) the aspect labels help us perform fine-grained analysis and compare the microframes used for different aspects of restaurant reviews.</ns0:p><ns0:p>We compute microframe bias and intensity for the 1,621 predefined microframes, which are compiled from WordNet <ns0:ref type='bibr' target='#b35'>(Miller, 1995)</ns0:ref> (see Methods for details), for every review group divided by aspect and sentiment. The top two microframes with the highest microframe intensity are shown in Figure <ns0:ref type='figure' target='#fig_11'>1</ns0:ref>. For each highest-intensity microframe, we display the microframe bias that is computed through a comparison between the positive (negative) reviews and the null model, i.e., bootstrapped samples from the whole corpus (see Methods). The highest-intensity microframes are indeed relevant to the corresponding aspect: hospitable -inhospitable and best -worst for service, cheap -expensive and pointless -pointed for price, savory -unsavory and appealing -unappealing for food, and active -quiet and loud -soft for ambience. At the same time, it is clear that positive and negative reviews tend to focus on distinct perspectives of the experience. Furthermore, the observed microframe biases are consistent with the sentiment labels; microframe biases in positive reviews lean toward the positive side of the microframes, and those in negative reviews toward the negative side.</ns0:p><ns0:p>In other words, FrameAxis is able to automatically discover, in an unsupervised manner, that positive reviews tend to characterize service as hospitable, price as cheap, food as savory, and ambience as tasteful, while negative reviews describe service as worst, price as pointless, food as unappealing, and ambience as loud. How and why do these microframes obtain such bias and intensity values? In the next section, we propose two tools that explain microframe bias and intensity at different levels of granularity.</ns0:p></ns0:div>
<ns0:div><ns0:head>Explainability</ns0:head><ns0:p>To understand the computed microframe bias and intensity better, we propose two methods: i) the word-level microframe bias (intensity) shift, and ii) the document-level microframe bias (intensity) spectrum.</ns0:p><ns0:p>Word-level impact analysis has been widely used for explaining the results of text analysis <ns0:ref type='bibr' target='#b10'>(Dodds and Danforth, 2010;</ns0:ref><ns0:ref type='bibr' target='#b44'>Ribeiro et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b32'>Lundberg and Lee, 2017;</ns0:ref><ns0:ref type='bibr' target='#b15'>Gallagher et al., 2020)</ns0:ref>. Similarly, we can compute the word-level microframe shift that captures how each word in a target corpus t influences the resulting microframe bias (intensity) by aggregating the contributions of the word w to the microframe bias (intensity) on microframe f (see Methods). It is computed by comparison with the word's contribution to a background corpus. For instance, even if w is a word that conveys positive connotations, its contribution to a target corpus can become negative if it appears less often in t than in the background corpus. Figure <ns0:ref type='figure' target='#fig_0'>2</ns0:ref> illustrates the effect of each word on the microframe bias on the savory -unsavory microframe in positive reviews. The same word's total contribution differs across corpora only through its frequency, because its contribution on the axis, c_{wf}, stays the same. For instance, the word 'delicious' appears 80 times in positive reviews, and the normalized term frequency is 0.0123. By contrast, 'delicious' appears only three times in non-positive reviews, and the normalized term frequency is 0.0010. In short, the normalized frequency of the word 'delicious' is an order of magnitude higher in positive reviews than in non-positive reviews, and this difference strongly shifts the microframe bias toward 'savory' on the savory -unsavory microframe. A series of words describing positive perspectives of food, such as delicious, fresh, tasty, great, good, yummy, and excellent, appears among the top words with the highest microframe bias shifts toward savory on the savory -unsavory microframe.</ns0:p><ns0:p>Similarly, the right side of Figure <ns0:ref type='figure' target='#fig_0'>2</ns0:ref> shows the word-level microframe bias shifts in the negative reviews and in the background corpus on the appealing -
<ns0:div><ns0:p>The orange bar shows the difference between the two shifts. The word 'great' in the negative reviews shifts the microframe bias toward appealing less than it does in the background corpus; the word 'great' appears less frequently in negative reviews (0.0029) than in the background corpus (0.0193). Consequently, the resulting microframe bias attributed to the word 'great' in the negative reviews is toward 'unappealing' on the appealing – unappealing microframe. In addition, words describing negative perspectives of food, such as soggy, bland, tasteless, horrible, and inedible, show orange bars heading toward the unappealing rather than the appealing side of the appealing – unappealing microframe. Note that the pole words for these microframes do not appear in the top word lists. These microframes are found because, according to the word embedding, they best capture these words collectively.</ns0:p><ns0:p>On the left, top words that shift the microframe bias toward 'hospitable', such as friendly, attentive, great, excellent, nice, helpful, accommodating, wonderful, and prompt, are captured from the positive reviews. On the right, top words that shift the microframe bias toward 'worst', such as rude, horrible, terrible, awful, bad, wrong, and pathetic, are found. Similar to the top words in negative reviews about food, some words that shift the microframe bias toward 'best' appear less frequently in the negative reviews than in the background corpus, pushing their impact on the microframe bias farther from 'best' on the best – worst microframe.</ns0:p><ns0:p>As the word-level microframe shift diagram captures, what FrameAxis detects is closely linked to the abundance of certain words. Does this mean that our results merely reproduce what simpler methods for detecting overrepresented words produce? To answer this question, we compare with the log odds ratio with informative Dirichlet prior <ns0:ref type='bibr' target='#b36'>(Monroe et al., 2017)</ns0:ref>. With the log odds ratio, service, friendly, staff, attentive, prompt, fast, helpful, owner, and always are found to be overrepresented words. This list of overrepresented words is always the same when comparing given corpora because the method considers only word frequencies in the corpora. By contrast, FrameAxis identifies the most relevant words for each microframe by considering both their appearances and their contributions to the microframe, providing richer interpretability. Even though a word appears many times in a given corpus, it does not shift the microframe bias or intensity if the word is irrelevant to the microframe.</ns0:p><ns0:p>The second way to provide explainability is to compute a document-level microframe bias (intensity) and visualize it as a microframe bias (intensity) spectrum. Figure <ns0:ref type='figure' target='#fig_0'>2</ns0:ref>(E) shows microframe bias spectra of positive and negative reviews. Each blue and red line corresponds to an individual positive and negative review, respectively. Here we choose microframes that show large differences in microframe bias between the positive and negative reviews as well as high intensities, using the following procedure: we rank microframes based on average microframe intensity across the reviews and on the absolute difference in microframe bias between the positive and negative reviews, sum both ranks, and pick the microframes with the lowest rank-sum for each aspect.</ns0:p>
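For reference, a minimal sketch of the log odds ratio with informative Dirichlet prior used as the comparison baseline above, following the standard formulation of Monroe et al.; all names are illustrative. Unlike FrameAxis, it ignores word semantics, so it returns the same ranking regardless of microframe.

```python
from collections import Counter
import math

def log_odds_dirichlet(tokens_i, tokens_j, prior_tokens, alpha0=500.0):
    """Rank words overrepresented in corpus i relative to corpus j."""
    y_i, y_j, prior = Counter(tokens_i), Counter(tokens_j), Counter(prior_tokens)
    n_i, n_j, n_p = sum(y_i.values()), sum(y_j.values()), sum(prior.values())
    z = {}
    for w in set(y_i) | set(y_j):
        a_w = alpha0 * prior[w] / n_p  # informative prior mass for word w
        if a_w == 0:
            continue
        delta = (math.log((y_i[w] + a_w) / (n_i + alpha0 - y_i[w] - a_w))
                 - math.log((y_j[w] + a_w) / (n_j + alpha0 - y_j[w] - a_w)))
        var = 1.0 / (y_i[w] + a_w) + 1.0 / (y_j[w] + a_w)
        z[w] = delta / math.sqrt(var)  # z-score of the log odds ratio
    return sorted(z.items(), key=lambda kv: -kv[1])
```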
<ns0:p>In contrast to the corpus-level microframe analysis or the word-level microframe shift, this document-level microframe analysis provides a mesoscale view, showing where each document lies on the microframe bias spectrum.</ns0:p></ns0:div>
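The document-level spectrum just described amounts to applying the bias computation per document rather than per corpus and drawing one tick per document on the axis. A hedged sketch follows, reusing the illustrative `microframe_bias` helper from earlier; all plotting choices are purely illustrative, not the paper's figure code.

```python
import matplotlib.pyplot as plt

def bias_spectrum(docs_pos, docs_neg, vectors, pos_pole, neg_pole):
    """Draw per-document bias ticks: blue for positive, red for negative."""
    for docs, color in ((docs_pos, 'tab:blue'), (docs_neg, 'tab:red')):
        for tokens in docs:  # each document is a list of tokens
            b = microframe_bias(tokens, vectors, pos_pole, neg_pole)
            plt.axvline(b, color=color, alpha=0.4)
    plt.xlabel(f'{neg_pole}  <->  {pos_pole}')
    plt.yticks([])
    plt.show()
```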
<ns0:div><ns0:head>Microframe Bias and Intensity Separation</ns0:head><ns0:p>Having shown that FrameAxis reasonably captures relevant microframe bias and intensity from positive and negative reviews, we now focus on how the most important dimension of positive and negative reviews, namely sentiment, is captured by FrameAxis. Consider a microframe that can be mapped onto the sentiment of reviews. Then, if FrameAxis works correctly, the microframe biases on that microframe captured from positive reviews and from negative reviews should be significantly different. Formally, we define the microframe bias separation as the difference between the microframe bias of positive reviews and that of negative reviews on microframe f: ∆^{pos−neg} B_f = (microframe bias on f of positive reviews) − (microframe bias on f of negative reviews) = B^{pos}_f − B^{neg}_f. Similarly, the microframe intensity separation is defined as ∆^{pos−neg} I_f = I^{pos}_f − I^{neg}_f.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref> shows the cumulative distribution function (CDF) of the magnitude of the microframe bias separation, |∆^{pos−neg} B_f|, for the 1,621 microframes for each aspect. Given that the bad – good axis is a good proxy for sentiment <ns0:ref type='bibr' target='#b1'>(An et al., 2018)</ns0:ref>, the bad – good microframe should have a large bias separation if the microframe bias on that microframe captures sentiment correctly. Indeed, the bad – good microframe shows a large separation, larger than the 99.91st percentile across all aspects (1.5th rank on average). For comparison, the irreligious – religious microframe does not separate positive and negative restaurant reviews well (19.88th percentile, 1,298.8th rank on average). The large separation between the microframe bias of positive reviews and that of negative reviews supports that the bad – good microframe, and thus FrameAxis, captures the most salient dimension of the text.</ns0:p><ns0:p>Using the two separation measures, we can compare two corpora with respect to both microframe intensity and bias. We find that the absolute values of both separations, |∆^{pos−neg} I_f| and |∆^{pos−neg} B_f|, are positively correlated across the four aspects (Spearman's correlation ρ = 0.379 (ambience), 0.471 (food), 0.228 (price), and 0.304 (service)), indicating that when a certain microframe is more heavily used, it also tends to be more strongly biased.</ns0:p><ns0:p>To illustrate a detailed picture, we show the microframe intensity and bias separation of each microframe in Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>. This characterization provides a comprehensive view of how microframes are employed in positive and negative reviews to highlight different perspectives. For instance, when people write reviews about the price of a restaurant, incredible, nice, good, cheap, incomparable, best, and pleasant perspectives are highlighted in positive reviews, whereas judgmental, unoriginal, pointless, and unnecessary perspectives are highlighted in negative reviews. From the document-level microframe spectrum analysis, the strongest 'judgmental' and 'unnecessary' microframe biases are found in reviews about the reasoning behind pricing, such as 'Somewhat pricey but what the heck.'</ns0:p><ns0:p>While some generic microframes, such as incredible – credible or worst – best, are commonly found across different aspects, aspect-specific microframes, such as uncrowded – crowded or inhospitable – hospitable, are found in the reviews about the corresponding aspect. Most of the microframe biases in the positive reviews convey positive connotations, and those in the negative reviews convey negative connotations.</ns0:p></ns0:div>
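As defined above, the separations are simple differences of corpus-level quantities. A small illustrative sketch of the separation computation and the correlation check reported here follows; `bias_pos`, `bias_neg`, `int_pos`, and `int_neg` are assumed to be dicts mapping each microframe (an antonym pair) to its corpus-level bias or intensity, and all names are illustrative.

```python
from scipy.stats import spearmanr

def separations(bias_pos, bias_neg, int_pos, int_neg):
    """Per-microframe bias and intensity separations between two corpora."""
    d_bias = {f: bias_pos[f] - bias_neg[f] for f in bias_pos}
    d_int = {f: int_pos[f] - int_neg[f] for f in int_pos}
    return d_bias, d_int

def abs_separation_correlation(d_bias, d_int):
    """Spearman correlation between |bias separation| and |intensity separation|."""
    frames = sorted(d_bias)
    rho, _ = spearmanr([abs(d_bias[f]) for f in frames],
                       [abs(d_int[f]) for f in frames])
    return rho
```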
<ns0:div><ns0:head>Human Evaluation</ns0:head><ns0:p>We perform human evaluations through Amazon Mechanical Turk (MTurk). Similar to the word intrusion test used in evaluating topic models <ns0:ref type='bibr' target='#b8'>(Chang et al., 2009)</ns0:ref>, we have human raters assess the quality of the identified microframe bias and intensity.</ns0:p><ns0:p>For microframe bias, we prepare the top 10 significant microframes with the highest microframe bias (i.e., the answer set) and 10 randomly selected microframes with arbitrary bias (i.e., the random set) for each pair of aspect and sentiment (e.g., positive reviews about ambience). As it is hard to catch subtle differences in the magnitude of microframe bias and intensity through crowdsourcing, we highlight the microframe bias on each microframe in boldface, as in Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>, instead of showing its numeric value.</ns0:p><ns0:p>We then ask which set of microframes with highlighted biases better characterizes a given corpus, such as 'positive' reviews on 'ambience'. See Methods for details.</ns0:p></ns0:div>
<ns0:div><ns0:p>For microframe intensity, we prepare the top 10 significant microframes (i.e., the answer set) and 10 randomly selected microframes (i.e., the random set) for each pair of aspect and sentiment. We then ask which set of microframes better characterizes a given corpus, such as 'positive' reviews on 'ambience'.</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref> shows the fraction of correct choices by workers (i.e., choosing the answer set). The overall average accuracy is 87.5% and 75.0% for the significant microframes with the highest microframe bias and intensity, respectively. For microframe bias, in (+) Service and (+) Ambience, human raters chose the answer sets correctly without errors. By contrast, for microframe intensity, some sets show relatively lower performance. We manually checked these for error analysis and found that workers tended to choose the random set when generic microframes, such as positive – negative, appeared in the random set, owing to their ease of interpretation.</ns0:p></ns0:div>
<ns0:div><ns0:head>Contextually Relevant Microframes</ns0:head><ns0:p>As mentioned in Methods, in addition to automatically identifying microframes that are strongly expressed in a corpus, we can discover microframes that are relevant to a given topic even without examining the corpus. We use each aspect (food, price, ambience, and service) as a topic word. Using the embedding-based approach, we find healthy – unhealthy for food, cheap – expensive for price, noisy – quiet for ambience, and private – public for service as the most relevant microframes. It is also possible to use different words as topic words for identifying relevant microframes. For example, one might be curious about how people think about waiters specifically among the reviews on service. In this case, using 'waiter' as a topic word, the most relevant microframes become impolite – polite and attentive – inattentive. The computed microframe bias and intensity then show how these microframes are used in a given corpus.</ns0:p></ns0:div>
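The embedding-based relevance computation is specified in Methods, which is not reproduced here, so the following is one plausible formulation offered only as an assumption: rank each candidate microframe by how similar its pole words are to the topic word. `cosine` is the helper from the earlier sketch, and all names are illustrative.

```python
def relevant_microframes(topic, axes, vectors, top_k=4):
    """axes: list of (neg_pole, pos_pole) pairs; return the top-k most
    relevant microframes for the given topic word."""
    t = vectors[topic]
    def relevance(axis):
        neg, pos = axis
        # Average similarity of the topic word to both poles of the axis.
        return (cosine(t, vectors[neg]) + cosine(t, vectors[pos])) / 2.0
    return sorted(axes, key=relevance, reverse=True)[:top_k]
```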
<ns0:div><ns0:head>Microframe in Political News</ns0:head><ns0:p>As a demonstration of another practical application of FrameAxis, we examine news media. The crucial role of media framing in public discourse on social issues has been widely recognized <ns0:ref type='bibr' target='#b37'>(Nelson et al., 1997a)</ns0:ref>. We show that FrameAxis can be used as an effective tool to characterize news on different issues through microframe bias and intensity. We collect 50,073 news headlines from 572 liberal and conservative media outlets from AllSides <ns0:ref type='bibr' target='#b0'>(AllSides, 2012)</ns0:ref>. These headlines fall into one of the predefined issues defined by AllSides, such as abortion, immigration, elections, education, and polarization. We examine microframe bias and intensity in the headlines for a specific issue, considering all three aforementioned scenarios.</ns0:p><ns0:p>The first scenario is when domain experts already know which microframes are worth examining. For example, news about immigration can be approached through an illegal – legal framing <ns0:ref type='bibr' target='#b51'>(Wright et al., 2016)</ns0:ref>. In this case, FrameAxis can reveal how strong the 'illegal vs. legal' media framing is and which position a media outlet takes through the microframe bias and intensity on the illegal – legal microframe. Figure <ns0:ref type='figure' target='#fig_7'>5</ns0:ref>(A)-(C) show how FrameAxis captures the microframes used in news reporting at different granularities: (A) a media-level microframe bias-intensity map, (B) a word-level microframe bias shift, and (C) a document (news headline)-level microframe bias spectrum.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_7'>5</ns0:ref>(A) exhibits the average microframe intensity and bias of individual media outlets, which we call a microframe bias-intensity map. To reveal the general tendency of conservative and liberal media's microframes, we also plot their means on the map. For clarity, we filter out media outlets that have fewer than 20 news headlines about immigration. Conservative media have higher microframe intensity than liberal media, meaning that they report on the illegal – legal microframe of the immigration issue more frequently than liberal media do. In addition, conservative media have a microframe bias that is closer to 'illegal' than 'legal' compared to liberal media, meaning that they report more on the illegality of the immigration issue. In summary, the media-level microframe bias-intensity map presents the framing patterns of news media on the immigration issue: conservative media report illegal perspectives more than legal perspectives of the issue, and do so more frequently than liberal media.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_7'>5</ns0:ref>(D)-(F) show the same three views for the gun control and rights issue on the relaxed – tense microframe captured by FrameAxis. The average microframe intensity on the relaxed – tense microframe is higher in liberal media than in conservative media, and the microframe bias of liberal media is toward 'tense' compared to conservative media. Word-level microframe shift diagrams clearly show that liberal media focus much more on the devastating aspects of gun control, whereas conservative media do not evoke those strong images but focus more on owners' rights. Figure <ns0:ref type='figure' target='#fig_7'>5</ns0:ref>(F) shows the news headlines that are closest to 'tense.' The key advantage of employing word embeddings in FrameAxis is demonstrated here again; neither of the two headlines in Figure <ns0:ref type='figure' target='#fig_7'>5</ns0:ref>(F) contains the word 'tense,' but other words, such as violence or gunfight, deliver the microframe bias toward 'tense.'</ns0:p>
<ns0:p>Although relaxed – tense may not be the kind of microframe considered in traditional framing analysis, it aptly captures the distinct depictions of the issue in the media, opening the door to further analysis. The microframe bias-intensity map also correctly captures the political leaning of news media, as shown in Figure <ns0:ref type='figure'>6</ns0:ref>. As mentioned earlier, we can also discover relevant microframes given a topic (see Methods). As an example, we compute the most relevant microframes given 'abortion' as a topic in Figure <ns0:ref type='figure'>7</ns0:ref>. The most relevant microframes to news about 'abortion' indeed capture key dimensions of the abortion debate <ns0:ref type='bibr' target='#b17'>(Ginsburg, 1998)</ns0:ref>. Of course, it is not guaranteed that conservative and liberal media use those microframes differently. The average microframe biases of conservative and liberal media on the four microframes are indeed not statistically different (p > 0.1). However, modeling contextual relevance provides the capability to easily discover microframes relevant to a given corpus even before examining the actual data.</ns0:p></ns0:div>
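A hedged sketch of how the media-level bias-intensity map described in this section could be produced, reusing the earlier illustrative helpers (`microframe_bias`, `microframe_intensity`); the data structure and all plotting choices are assumptions for illustration, not the paper's actual figure code.

```python
import matplotlib.pyplot as plt

def bias_intensity_map(headlines, vectors, pos_pole, neg_pole,
                       baseline_bias, min_docs=20):
    """headlines: dict mapping outlet name -> list of tokenized headlines.
    Outlets with fewer than min_docs headlines are dropped, as in the text."""
    for outlet, docs in headlines.items():
        if len(docs) < min_docs:
            continue
        b = [microframe_bias(d, vectors, pos_pole, neg_pole) for d in docs]
        i = [microframe_intensity(d, vectors, pos_pole, neg_pole,
                                  baseline_bias) for d in docs]
        x, y = sum(b) / len(b), sum(i) / len(i)  # per-outlet averages
        plt.scatter(x, y)
        plt.annotate(outlet, (x, y))
    plt.xlabel(f'bias: {neg_pole} <-> {pos_pole}')
    plt.ylabel('intensity')
    plt.show()
```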
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>In this work, we proposed an unsupervised method for characterizing text using word embeddings. We demonstrated that FrameAxis can successfully characterize text through microframe bias and intensity. How biased the text is on a certain microframe (microframe bias) and how actively a certain microframe is used (microframe intensity) together provide a nuanced characterization of the text. In particular, we showed that FrameAxis can support different scenarios: when an important microframe is known (e.g., the illegal vs. legal microframe on the immigration issue), when exploration of potential microframes is needed, and when contextually relevant microframes are automatically discovered. The explainability offered by the document-level microframe spectrum and the word-level microframe shift diagram is useful for understanding how and why the resulting microframe bias and intensity are captured. These tools make FrameAxis transparent and help minimize the risk of spurious correlations that might be embedded in pretrained word embeddings.</ns0:p><ns0:p>We applied FrameAxis to casual texts (i.e., restaurant reviews) and political texts (i.e., political news). In addition to a rich set of predefined microframes, FrameAxis can compute microframe bias and intensity on an arbitrary microframe, so long as it is defined by two (antonymous) words. This flexibility provides a great opportunity to study microframes in diverse domains.</ns0:p></ns0:div>
<ns0:div><ns0:p>FrameAxis also has limitations. First, it may inherit unwanted biases from pretrained word embeddings. For example, the word 'immigrant' is closer to 'illegal' than to 'legal' (0.463 vs. 0.362) in the GloVe word embedding. Indeed, multiple biases, such as gender or racial bias, in pretrained word embeddings have been documented <ns0:ref type='bibr' target='#b5'>(Bolukbasi et al., 2016)</ns0:ref>. While those biases provide an opportunity to study prejudices and stereotypes in our society over time <ns0:ref type='bibr' target='#b16'>(Garg et al., 2018)</ns0:ref>, it is also possible to capture incorrect microframe bias due to the bias in the word embeddings (or language models). While several approaches have been proposed to debias word embeddings <ns0:ref type='bibr' target='#b55'>(Zhao et al., 2018)</ns0:ref>, they have failed to remove those biases completely <ns0:ref type='bibr' target='#b19'>(Gonen and Goldberg, 2019)</ns0:ref>. Nevertheless, since FrameAxis does not depend on a specific pretrained word embedding model, it can fully benefit from newly developed word embeddings that minimize unexpected biases. When using word embeddings with known biases, it may be possible to minimize the effects of such biases through an iterative process as follows: (1) compute microframe bias and intensity; (2) find the top N words that shift the microframe bias and intensity; (3) identify words that reflect stereotypic biases; and (4) replace those words with an <UNK> token, which is out of vocabulary and thus not included in the microframe bias and intensity computation, then repeat this process of refinement. The iteration ends when no stereotypical words are found in step (3). Although some stereotypic biases may remain beyond the top N words, their contribution to the microframe shift may be sufficiently suppressed, depending on N.</ns0:p><ns0:p>Second, the microframe bias and intensity computation has the inherent limitations of a dictionary-based approach, as Figure <ns0:ref type='figure' target='#fig_0'>2</ns0:ref>(E) reveals. While 'There was no ambience' conveys a negative connotation, its microframe bias is computed as closer to 'beautiful' than 'ugly'. This error can potentially be addressed by sophisticated end-to-end approaches that model representations of whole sentences, such as Sentence Transformers <ns0:ref type='bibr' target='#b43'>(Reimers and Gurevych, 2019)</ns0:ref>. While we use a dictionary-based approach for its simplicity and interpretability in this work, FrameAxis can support other methods, including Sentence Transformers, in computing microframe bias and intensity as well. As a proof of concept, we use Sentence Transformers to handle the case in Figure <ns0:ref type='figure' target='#fig_0'>2</ns0:ref>(E), in which 'there was no ambience' has a microframe bias closer to 'beautiful' than 'ugly'. We compute the representations of three sentences: 'there was no ambience', 'ambience is beautiful', and 'ambience is ugly'. We then find that the similarity between 'there was no ambience' and 'ambience is beautiful' (0.3209) is less than that between 'there was no ambience' and 'ambience is ugly' (0.6237). This result indicates that Sentence Transformers correctly capture the meaning of the sentences. As the dictionary-based approach has its own strengths in simplicity and interpretability, future work may seek a way to blend the strengths of different approaches.</ns0:p>
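A minimal sketch of the Sentence Transformers proof of concept just described, using the sentence-transformers library. The model name here is an assumption, since the text does not specify which model produced the reported 0.3209 and 0.6237 similarities, so the exact numbers will differ; only the ordering of the similarities is the point.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('all-MiniLM-L6-v2')  # assumed model choice
emb = model.encode(['there was no ambience',
                    'ambience is beautiful',
                    'ambience is ugly'], convert_to_tensor=True)
sim_beautiful = util.cos_sim(emb[0], emb[1]).item()
sim_ugly = util.cos_sim(emb[0], emb[2]).item()
# Expect sim_ugly > sim_beautiful, matching the qualitative result above.
print(sim_beautiful, sim_ugly)
```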
<ns0:p>Even with these limitations, we argue that our approach can greatly help researchers across fields to harness the power of neural embedding methods for text analysis and to systematically scale up framing analysis to internet-scale corpora.</ns0:p><ns0:p>We release the source code of FrameAxis, and we will develop it into an easy-to-use library with supporting visualization tools for analyzing microframe bias and intensity for a broader audience. We believe that such efforts will facilitate computational analyses of microframes across disciplines.</ns0:p></ns0:div><ns0:figure><ns0:head>Figure 3.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>CDF of the magnitude of the microframe bias separation, which is the difference of average microframe biases between positive and negative reviews. As expected from the nature of the dataset (positive and negative reviews), the good – bad microframe, which maps onto a sentiment axis, exhibits a large microframe bias separation. By contrast, the irreligious – religious microframe, which is rather irrelevant in restaurant reviews, shows a small microframe bias separation, as expected.</ns0:figDesc></ns0:figure><ns0:figure xml:id='fig_0'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2(A) shows the top 10 words with the highest microframe bias shift for the two high-intensity microframes from the 'food' aspect. 
On the left, the green bars show how each word in the positive reviews shifts the microframe bias toward either savory or unsavory on the savory – unsavory microframe, and the gray bars show how the same word in the background corpus (non-positive reviews) shifts the microframe bias. The difference between the two shifts, represented by the orange bars, shows the effect of each word on the microframe bias.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>On the right in Figure 2(A), the green and the gray bars show how each word in the negative reviews and in the background corpus (non-negative reviews) shifts the microframe bias on the appealing – unappealing microframe.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2(C)</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2(C) shows the top 10 words with the highest microframe intensity shift for the two high-intensity microframes from the 'food' aspect. On the right, compared to Figure 2(A), more words reflecting the nature of the unappealing – appealing microframe, such as tasteless, horrible, inedible, oily, undercooked, disgusting, flavorless, and watery, are shown as top words in terms of the microframe intensity shift. As we mentioned earlier, we confirm that the words that are far from the baseline microframe bias, and close to either of the poles, contribute strongly to the microframe intensity.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2(B)</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2(B) shows another example of the word-level microframe bias shift, in the reviews about service.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 2(D)</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2(D) shows the top 10 words with the highest microframe intensity shift for the two high-intensity microframes from the 'service' aspect. On the left, compared to Figure 2(B), more words describing the inhospitable – hospitable microframe, such as wonderful and gracious, are included. On the right, compared to Figure 2(B), more words reflecting the nature of the worst – best microframe, such as bad, pathetic, lousy, wrong, horrendous, and poor, are shown as top words in terms of the microframe intensity shift.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 4.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Microframes above the gray horizontal line have higher microframe intensity in positive reviews than in negative reviews. We indicate the microframe bias in bold face. word+ indicates that positive reviews are biased toward that pole, and word− means the opposite (negative reviews). For instance, at the top, the label 'sour – sweet+' indicates that the 'sweetness' of the ambience is highlighted in positive reviews, and the label 'loud− – soft' indicates that the 'loudness' of the ambience frequently appears in negative reviews. For clarity, the labels are written only for the top 3 and bottom 3 microframes of ∆^{pos−neg} I_f and ∆^{pos−neg} B_f each.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 5(A)-(C)</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5(A)-(C) show how FrameAxis can capture microframes used in news reporting at different granularities: (A) media-level microframe bias-intensity map, (B) word-level microframe bias shift, and (C) document (news headline)-level microframe bias spectrum.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 5(B)</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5(B) shows the top words that contribute the most to the microframe bias on the illegal – legal microframe in conservative and liberal media. Conservative media use the word 'illegal' much more than the background corpus (i.e., liberal and center media). They also use the word 'amnesty' more frequently, for example within the context of 'Another Court Strikes Down Obama's Executive Amnesty (Townhall)', and 'patrol' within the context of 'Border Patrol surge as illegal immigrants get more violent (Washington Times)'. Liberal media mention the word 'illegal' much less than the background, and the words 'reform', 'opinion', and 'legal' more.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 5(C)</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5(C) shows a document (news headline)-level microframe bias spectrum on the illegal – legal microframe. The news headline with the highest microframe bias toward 'illegal' is 'U.S. to …'.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 6.</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6(A) and (C) are the microframe bias-intensity maps of news on the Democratic party, and Figure 6(B) and (D) are those of news on the Republican party. We test the bad – good microframe for Figure 6(A) and (B) and the irrational – rational microframe for Figure 6(C) and (D). The captured bias fits intuition: liberal news media show microframe bias toward 'good' and 'rational' when they report on the Democratic party, and conservative news media show the same bias when they report on the Republican party. Interestingly, their microframe intensity becomes higher when they highlight the negative perspectives of those microframes, such as 'bad' and 'irrational.'</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 1.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>A, Illustrations of microframe intensity and bias. Blue and orange circles represent the two pole word vectors, which define the w+ – w− microframe, and gray arrows represent the vectors of words appearing in a given corpus. The width of an arrow indicates the weight (i.e., frequency of appearance) of the corresponding word. The figure shows when microframe intensity and bias can be high or low. B, Microframe bias with respect to the top two microframes with the highest microframe intensity for each aspect in restaurant reviews. The high-intensity microframes are indeed relevant to the corresponding aspect, and the microframe biases on these microframes are consistent with the sentiment labels.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 2.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Word-level microframe shift diagram and document-level microframe spectrum for understanding microframe intensity and bias. A-D, Word-level contributions to selected microframes in restaurant reviews show which words contribute the most to the resulting microframe bias and intensity. E, Document-level microframe bias spectra of positive (blue) and negative (red) reviews about different aspects. For example, two reviews about service, 'An excellent service' and 'The service is fantastic', have microframe bias closer to 'good' on the bad – good microframe, and two others, 'The service is awful' and 'Horrible food and Horrible service', have microframe bias closer to 'bad'. These spectra clearly show the microframe bias that each document has on a given microframe.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 4.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Microframe intensity and bias separation of each microframe between positive and negative reviews. We highlight the microframe bias of positive reviews when ∆^{pos−neg} I_f > 0 (f is more highlighted in positive reviews), and the microframe bias of negative reviews when ∆^{pos−neg} I_f < 0 (f is more highlighted in negative reviews). For clarity, we add a subscript to represent the microframe bias of the positive reviews (+) or negative reviews (−). Microframe biases in the positive reviews generally convey positive connotations, and those in the negative reviews convey negative connotations.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 5.</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Three different views characterizing news headlines on the immigration issue on the illegal – legal microframe (A-C) and on the gun control and rights issue on the relaxed – tense microframe (D-F). A and D, Media-level microframe bias-intensity map. Each media outlet is characterized by microframe intensity and bias. Conservative media have microframe bias toward 'illegal' while liberal media have microframe bias toward 'legal' in news on the immigration issue; conservative media also use the illegal – legal microframe more intensively. B and E, Word-level microframe bias and intensity shifts, showing the words that contribute the most to the resulting microframe bias and intensity. Conservative media use the word 'illegal' much more than the background corpus (i.e., liberal and center media), and this influences the resulting microframe bias. C and F, Document (news headline)-level microframe bias spectrum. Red lines indicate headlines from right-wing media, blue lines indicate headlines from left-wing media, and purple lines indicate headlines from center media. The three thick bars in red, blue, and purple show the average bias of the conservative, liberal, and center media, respectively. The microframe spectrum effectively shows the microframe bias of news headlines.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>Figure 6. Microframe bias-intensity map of news on the Democratic party (A, C) and news on the Republican party (B, D). Liberal news media show microframe bias toward 'good' and 'rational' in news on the Democratic party, and conservative news media show microframe bias toward 'good' and 'rational' in news on the Republican party. The microframe intensity of liberal and conservative media is higher when they highlight negative perspectives (i.e., 'bad' and 'irrational'). Figure 7. A, Microframe bias spectra and B, microframe intensity spectra of the top four most relevant microframes to news about 'Abortion', found by the embedding-based approach. The most relevant microframes indeed capture key dimensions of the abortion debate (Ginsburg, 1998).</ns0:figDesc></ns0:figure>
</ns0:body>
RE: PeerJ submission - FrameAxis: Characterizing framing
bias and intensity with word embedding
Dear Dr. Arkaitz Zubiaga,
Thank you very much for your efforts on our manuscript. We thank the reviewers for their valuable
comments and feedback.
The primary concerns in the reviews were the following: First, the concept of “framing” in classical
framing analysis is much more nuanced and complex than what is presented here, and thus it is
misleading to present our method as a replacement for the classical framing analysis or use the
term “framing” to refer to a narrower operationalization in our paper. Second, it was suggested
that there should be a stronger motivation as well as a set of concrete research questions. Third,
there was a concern about the robustness and usefulness of our method.
Addressing these comments, we have revised our manuscript extensively to clarify our motivation, research questions, and the operationalization of microframes. We now clearly state that our
method is not a replacement of classical framing research methods, but a computational aid that can
systematically discover patterns that can help further in-depth analysis of texts. Our introduction
and discussion have also been revised to clarify the motivation of the study and research questions.
We have also added detailed explanations about topic words and the perplexity computation using language models. Regarding the concerns about utility, we would like to highlight that our method has
already been adopted by multiple independent studies, based on our arXiv preprint. We would also
like to ask whether our understanding of PeerJ’s publication criteria, with its strong emphasis
on the validity of the work rather than its impact, is correct. Please refer to our point-by-point
responses for more details.
By addressing these excellent comments, we believe that our manuscript has been improved significantly. We hope that our manuscript is now ready for publication, but we will also be looking
forward to receiving other comments and suggestions that can further improve our manuscript.
Thank you again for your constructive comments, and we look forward to hearing from
you.
Sincerely Yours,
Haewoon Kwak
Contents

1 Response to Referee 1
    Comment 1.1
    Comment 1.2
    Comment 1.3
    Comment 1.4
    Comment 1.5
2 Response to Referee 2
    Comment 2.1
    Comment 2.2
    Comment 2.3
    Comment 2.4
    Comment 2.5
    Comment 2.6
3 Response to Referee 3
    Comment 3.1
    Comment 3.2
    Comment 3.3
    Comment 3.4
    Comment 3.5
    Comment 3.6
    Comment 3.7
    Comment 3.8
    Comment 3.9
    Comment 3.10
    Comment 3.11
    Comment 3.12
    Comment 3.13
    Comment 3.14
1 Response to Referee 1
Comment 1.1
My review here is based on the conceptualization and operationalization of framing. While
the effort and intention of designing a new framing method are impressive, I don’t think that
the approach suggested by the authors is methodologically sound. Framing is a complicated
process and inductively identifying a frame using manual coding is in itself a sophisticated
endeavor. Gamson and Modigliani defined frame as a “central organizing idea” which “suggests what the controversy is about, the essence of an issue” (1987). Also, Van Gorp (2010,
pp. 91-92) mentions the need to identify reasoning devices like appeals to principles and
causal analysis. By relying on microframes, we are ignoring the larger picture which can
reduce the meaning of the text by relying on words alone. The authors claim that each
microframe is operationalized by an antonym pair, but what about neutral frames? What
about generic frames like human interest or economic frames? In the literature, we also
have macro frames that are supposed to be the broader overreaching ideas that can be very
difficult to automatedly identify.
Response:
Thank you for your helpful comments! As researchers without strong expertise in framing research,
we found this comment very helpful for appropriately orienting our work. We agree
that we have implicitly understated the complexity and sophistication of framing analysis and,
at the same time, overstated the contribution of our research, although it was not our original
intention to claim that our research can replace existing framing analysis methods.
We completely agree that FrameAxis cannot replace existing methods. We also do not expect that
the “microframes” found by FrameAxis can be directly mapped to the general “frames” that are
often studied. To clarify this, we have revised our introduction. We now state:
“We emphasize that FrameAxis cannot replace conventional framing research methods, which involves sophisticated close reading of the text. Also, we do not expect
that the microframes can be directly mapped to the frames identified by domain experts. FrameAxis can thus be considered as a computational aid that can facilitate
systematic exploration of texts and subsequent in-depth analysis.” (L90-93)
Also, we have revised our statements throughout the manuscript to avoid any hyperbole and misinterpretation of FrameAxis.
Comment 1.2
This approach, unfortunately, will overlook so much meaning. The same argument applies
to capturing framing bias. This is again one of the most complicated intellectual tasks for
many reasons including the fact that bias is a relative term and operationalizing it using
automated methods is almost impossible.
Response:
Again, we appreciate your valuable comments. We recognize that an important part of this issue stems from our (mis)use of the term “framing”, conflating its meaning in the context of
FrameAxis with that of classical framing analysis. Therefore, to avoid the conflation of concepts,
we have replaced “framing bias” and “framing intensity” with “microframe bias” and “microframe
intensity”. We have also revised our manuscript and figures to clearly distinguish them from framing bias in the conventional sense.
Comment 1.3
What I think the authors are attempting to measure is the sentiments more than the cognitive
concepts as they mention the following: “Framing bias in FrameAxis is analogous to positive
or negative sentiment in the sentiment analysis, and framing intensity is to polarity, which
is the strength of the expressed sentiment”.
Response:
Thank you for your comment. The sentiment axis is indeed one of the most important semantic
axes. However, FrameAxis aims to capture much more general semantics beyond sentiment.
Sentiment analysis can be considered the microframe bias on the sentiment axis; considering a
variety of semantic axes allows us to capture other, more nuanced semantics. However, this may not
be immediately clear to readers before the detailed explanation of the method. Thus, we have
added an example of framing bias and intensity:
“For example, let us explain the framing bias and intensity of the text about an immigration issue on the illegal – legal microframe. Then, the framing bias measures
how much the text focuses on an ‘illegal’ perspective of the immigration issue rather
than a ‘legal’ perspective (and vice versa); the framing intensity captures how much
the text focuses on an illegal or legal perspective of the immigration issue rather than
other perspectives, such as segregation (i.e., segregated – desegregated microframe).”
(L77-82)
In this example, although legal – illegal may carry some sentiment (that is, ‘legal’ is more positive
than ‘illegal’), the primary semantics that this axis captures would be about legality rather than
sentiment. We can also find numerous axes that are sentiment-neutral, such as quantitative – qualitative, endogenous – exogenous, functional – organic, prospective – retrospective, active – quiet,
natural – artificial, singular – plural, defensive – offensive, formal – informal, far – near, young
– old, maternal – paternal, theoretical – empirical, homosexual – heterosexual, mental – physical,
etc. Microframe bias measured on these axes would not carry a strong sentiment connotation while
providing useful information about the text.
Comment 1.4
In terms of operationalization of the predefined microframes, the authors mentioned that
they extracted 1,828 adjective antonym pairs from WordNet Miller. However, inductive
framing analysis is far much complicated than looking at pairs of adjectives. Tankard, for
example, referred to 11 framing elements including: 1. Photos, 2. photo captions, 3. Leads,
4. source selection, 5. quotes selection, 6. pull quotes, 7. Logo, 8. statistics and charts, 9.
concluding statements and paragraphs, 10. Headlines, 11. Subheads (2001, p. 101). Also,
Van Gorp (2010) offers a longer list of framing devices that are similar to the one above, and
adjectives are not even amongst them.
Response:
Thank you again for providing a useful reference. Again, we do not argue that FrameAxis replaces
conventional framing research, especially those that involve non-text data such as photos and other
elements. FrameAxis, in its current form, would be exclusively useful for text data, serving as a
systematic and supportive exploratory analysis tool. Thus, photos or charts would be outside of
FrameAxis’ scope.
We would also like to illustrate the usefulness of FrameAxis with an example from the classical
framing literature. Consider the entry [Emotional appeal: sadness, compassion] in Table 1 from
Van Gorp (2010): this emotional appeal can be captured by FrameAxis on the sad – happy or other
relevant microframes. Also, in [Description of visual scene with contrast: idyllic scenery vs. misery;
metaphor ‘rising tide’ that refers to an unstoppable overwhelming force], the description of a visual
scene with “contrast” is a perfect example of the value of antonym axes. Even in Table 2, we can
see that adjectives or semantic axes can capture some of the semantics: “good governance” can be
captured by the good – bad microframe, “sacredness of life” by the holy – unholy microframe, and so on.
Comment 1.5
To sum up, I think the approach suggested by the authors is interesting because it mixes
between sentiment analysis and in a way topic modelling, but it is certainly not related to
identifying frames.
Response:
Thank you for your comments. We hope that we have clarified our method’s contribution and
provided convincing examples that demonstrate that our method can provide useful insights beyond
what sentiment analysis or topic modeling can offer. Although we strongly agree that our method
is not a replacement for traditional framing analysis, we believe that it can facilitate various forms
of text analysis, including framing analysis.
We would also like to mention that there are already multiple papers that have adopted our method.
First, in “Studying Moral-based Differences in the Framing of Political Tweets” (accepted in the
15th International AAAI Conference on Web and Social Media (ICWSM) 2021, arXiv:2103.11853),
the authors used FrameAxis to investigate the moral framing of German political tweets on Twitter. They defined their axes based on a German dictionary to reflect the Moral Foundation Theory. Using multiple datasets of tweets, the authors confirmed that a moral framing identified by
FrameAxis is “congruent with the public perception of the political parties.” Second, in “Mapping
Moral Valence of Tweets Following the Killing of George Floyd” (arXiv:2104.09578), the authors
used FrameAxis to analyze tweets regarding the Black Lives Matter movement in Los Angeles after
the killing of George Floyd. By using the microframes defined by the Moral Foundation Theory,
the authors found different activation of each moral dimension according to a topic (e.g., police
force, community of colors, etc.). These two studies demonstrate the flexibility and utility of
FrameAxis. In particular, the first study shows that our method can be successfully applied to a non-English corpus once a word embedding is available. Considering the availability of non-English
word embeddings these days, it is promising that the applicability of FrameAxis can be
extended to non-English documents. Finally, in “Characterizing Partisan Political Narratives about
COVID-19 on Twitter” (arXiv:2103.06960; we note that this paper is written by some of us),
FrameAxis is used to differentiate the distinct focus of arguments by Democrats and Republicans.
FrameAxis reveals how the two parties’ narratives focus on different aspects of the pandemic
response (e.g., the government’s role vs. individuals and small businesses), which were not readily apparent
from other analyses.
2 Response to Referee 2
Comment 2.1
I would request the authors to explain framing in a bit more detail. To someone coming from
Computer science background like me, it was initially a bit difficult to understand what it is.
Explaining it with a real world example (may be even from the datasets that you have used)
would be helpful. This would certainly improve the readability of the paper.
Response:
We added an example to show how distinct frames can bring about different understandings of an
issue:
For example, when reporting on the issue of poverty, a news media may put an emphasis on how successful individuals succeeded through hard work. By contrast, another
media may emphasize the failure of national policies. It is known that these two different framings can induce contrasting understanding and attitudes about poverty (Iyengar, 1994). While readers who are exposed to the former framing became more likely
to blame individual failings, those who are exposed to the latter framing tended to
criticize the government or other systematic factors rather than individuals. (L33-38)
Comment 2.2
You mention that while computing framing bias and intensity, you do not consider the topical
words. How do you decide which are the topical words? Is it done manually? How many
such words did you consider?
Response:
Thank you for raising an important question. The decision to remove the topical word comes
from the unwanted word embedding bias (L464-L480). For example, when we analyze reviews
about food, the word ‘food’ already has some bias, being closer to ‘savory’ (cosine similarity: 0.4321) than ‘unsavory’ (cosine similarity: 0.1561), within the GloVe embedding space. We
confirmed the impact of ‘food’ on framing bias and intensity through word-level contribution graphs. As the word ‘food’ itself is supposed to be neutral, we remove it from the analysis.
Similarly, we remove ‘price’ from the analysis of reviews on price, ‘service’ from the analysis
of reviews on service, and ‘ambience’ from the analysis of reviews on ambience. Similarly, for political
news datasets, we remove the issue keyword given by Allsides (https://www.allsides.com/topicsissues). We remove only one word for each, as described in L127-146:
“It is known that pretrained word embeddings have multiple biases (Bolukbasi et al.,
2016). Although some debiasing techniques are proposed (Zhao et al., 2018), those
biases are not completely eliminated (Gonen and Goldberg, 2019). For example,
the word ‘food’ within a GloVe pretrained embedding space is much closer to ‘savory’
(cosine similarity: 0.4321) than ‘unsavory’ (cosine similarity: 0.1561). As those
biases could influence the framing bias and intensity due to its high frequency in the
text of reviews on food, we remove it from the analysis of the reviews on food.
FrameAxis computes the word-level framing bias (intensity) shift to help this process, which we will explain in the ‘Explainability’ Section. Through the word-level shift,
FrameAxis users can easily check whether some words that should be neutral on a
certain semantic axis are located as neutral within a given embedding space.
While this requires manual efforts, one shortcut is to check the topic word first. For
example, when FrameAxis is applied to reviews on movies, the word ‘movie’ could
be considered first because ‘movie’ should be neutral and non-informative. Also, as
reviews on movies are likely to contain the word ‘movie’ multiple times, even smaller
contribution of ‘movie’ to a given microframe could be amplified by its high frequency
of occurrences, 𝑛𝑤 in Equations (2) and (3). After the manual confirmation, those
words are replaced with <UNK> tokens and are not considered in the computation of
framing bias and intensity.
In this work, we also removed topic words as follows: in the restaurant review dataset,
the word indicating aspect (i.e., ambience, food, price, and service) is replaced with
<UNK> tokens. In the AllSides’ political news dataset, we consider the issue defined by AllSides as topic words, such as abortion, immigration, elections, education,
polarization, and so on.”
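For the reader's convenience, the neutrality check described above can be scripted in a few lines. The following is a minimal illustrative sketch (not taken from our released code) using GloVe vectors loaded through gensim; the embedding file name and the topic-word set are placeholder assumptions:

    # Minimal sketch: vet whether a supposedly neutral topic word leans
    # toward one pole of a microframe, then mask it with <UNK> tokens
    # before computing framing bias and intensity.
    import numpy as np
    from gensim.models import KeyedVectors

    kv = KeyedVectors.load_word2vec_format("glove.w2v.txt")  # placeholder file

    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Word-to-pole similarities, as in the 'food' example above.
    print(cos(kv["food"], kv["savory"]), cos(kv["food"], kv["unsavory"]))

    def mask_topic_words(tokens, topic_words=frozenset({"food"})):
        # Masked tokens are excluded from the bias/intensity computation.
        return [t if t not in topic_words else "<UNK>" for t in tokens]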
Comment 2.3
The authors also calculate perplexity scores to determine the relevance of a microframe to
a given topic. I was not sure how the language model was created. Did you just use a
statistical language model and calculate bi-gram probabilities? I would like the authors to
elaborate on this a bit.
Response:
We expand the explanations:
“Following a previous method (Wang and Cho, 2019), we use a pre-trained OpenAI
GPT model to compute the perplexity score.” (L173-174)
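For concreteness, the computation can be sketched as follows with the HuggingFace implementation of the OpenAI GPT model; this is an illustrative snippet written for this letter, not necessarily the exact script used for the paper:

    # Minimal sketch: perplexity of a template sentence under a pre-trained
    # OpenAI GPT model (perplexity = exp of the mean cross-entropy).
    import torch
    from transformers import OpenAIGPTLMHeadModel, OpenAIGPTTokenizer

    tok = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
    model = OpenAIGPTLMHeadModel.from_pretrained("openai-gpt").eval()

    def perplexity(sentence):
        ids = tok(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean cross-entropy
        return torch.exp(loss).item()

    # Lower is better: a relevant pole word yields a more natural sentence.
    print(perplexity("food is delicious"))
    print(perplexity("food is quantitative"))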
Comment 2.4
The captions could be made a bit more informative by adding what one should conclude
from them. Since they are all put together at the end of the manuscript, it would be helpful
to the reader. The quality of the figures could be improved as well. Caption of figure 4
contains latex symbols.
Response:
We updated the captions. The quality of the figures was degraded when the manuscript and figures
were merged by the PeerJ submission system. High-quality figures can be seen when you download
the figure files separately. We will combine the manuscript and figures by ourselves and submit
the integrated manuscript.
Comment 2.5
The references to the figures in discussion section are missing.
Response:
Sorry for the inconvenience. We have updated the reference; it refers to Figure 2.
Comment 2.6
Overall, I think it is a nicely written paper and I would be willing to accept it given the above
mentioned modifications are made.
Response:
Thank you for your encouraging comments!
3 Response to Referee 3
Comment 3.1
This paper proposes a simple approach to detecting prominent aspects of a text corpus,
which the authors refer to as “microframes”. The method is as follows: i) get a set of
antonyms pairs and pretrained word vectors; ii) compute the difference between the word
vectors corresponding to each word in a pair, and call this difference a microframe; iii)
compute the cosine similarity between each word in the corpus and the microframe, and report the mean and
variance of the cosine similarities for a microframe over all tokens in the corpus. This is
essentially the same idea as one of the earliest papers in bias in word embeddings (Bolukbasi
et al., 2016), except that difference vectors are being compared to individual words, rather
than other differences, aggregated over tokens.
This is a valid way to characterize a corpus, but it is unclear what the ultimate purpose of this
sort of approach is. In addition, some of the choices made seem puzzling, and there isn’t
a very convincing effort to show that this method has advantages over others. Overall, I
worry about the usefulness and the robustness of this approach, and exactly what claims the
authors are trying to make.
Response:
Thank you for recognizing our approach as a “valid way to characterize a corpus”. The method
was born from a very common need in computational social science—a need for large-scale comparative text analysis tools, especially ones that go beyond sentiment analysis, topic modeling, and
word-frequency comparison by revealing which aspects of the story are emphasized (framing). In
particular, many of our own research projects have been dealing with political biases and polarization, where framing (and framing analysis) plays a crucial role in the discourse. We developed
this method because we were not aware of computational tools that can characterize frames by
leveraging word embedding. We would be happy to learn about methods that we were not aware
of and to perform comparative analysis on any methods that can provide insights into the framing
of a document in an unsupervised manner.
We also argue that, although many word embedding-based studies leverage similar operations,
our method is quite distinct from Bolukbasi et al. (2016). The way we estimate framing bias is
more similar to our previous work, SemAxis (An et al., 2018), than to Bolukbasi et al. (2016), and the
framing intensity (microframe intensity) is a new measure for estimating how strongly a frame is
used in the document, about which existing studies like Bolukbasi et al. (2016) cannot say much.
It is distinct from sentiment analysis because we can examine many microframes other
than sentiment; it is distinct from topic modeling or word-frequency analysis because we leverage
the ‘softness’ of word embedding to capture the usage of words that are close to the poles of our
microframes and because we employ ‘microframes’ to ground and interpret words onto meaningful
axes. As demonstrated in the paper, our method discovers insights that are not accessible with other
word-based analysis (e.g., see Fig. 2(B) and L292-301 for related discussion).
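To make the contrast concrete for a computationally minded reader, the two quantities can be sketched as below for a single microframe. This assumes the frequency-weighted mean and weighted second-moment forms of Equations (2) and (3) in the manuscript, with the background bias computed over the whole corpus; vec stands for any word-to-vector mapping (e.g., GloVe):

    # Minimal sketch: framing bias and framing (microframe) intensity of a
    # document on one microframe, under the weighted forms of Eq. (2)-(3).
    from collections import Counter
    import numpy as np

    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def bias_intensity(doc_tokens, corpus_tokens, vec, pole_neg, pole_pos):
        axis = vec[pole_pos] - vec[pole_neg]      # the microframe vector
        def weighted_bias(tokens):
            counts = Counter(t for t in tokens if t in vec)
            total = sum(counts.values())
            return sum(n * cos(vec[w], axis) for w, n in counts.items()) / total
        baseline = weighted_bias(corpus_tokens)   # corpus-wide background bias
        counts = Counter(t for t in doc_tokens if t in vec)
        total = sum(counts.values())
        bias = sum(n * cos(vec[w], axis) for w, n in counts.items()) / total
        intensity = sum(n * (cos(vec[w], axis) - baseline) ** 2
                        for w, n in counts.items()) / total
        return bias, intensity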
Comment 3.2
The authors position their approach in comparison to analyses of “framing” in the social
sciences. The literature of framing of course is not a monolith. Researchers are often interested in getting deep insight into the arguments that are made in a set of documents, which
is why the effort is devoted to developing a codebook and carefully reading the text. It is
unclear that the outputs of this system would be of any value to such researchers. Perhaps
the authors have a different set of users in mind, but if not social scientists, then whom?
(And if social scientists, then more extensive use cases with examples of how this might
work in practice would be very helpful.) Stating how users would benefit from this system,
and demonstrating that it achieves that goal in comparison to alternative would make this
paper more suitable for scholarly publication.
Response:
First of all, we do not think that FrameAxis can replace sophisticated framing research. We also do
not expect that FrameAxis can always map an existing frame, such as the generic frames you mentioned,
into a corresponding microframe. To clarify this, we added
“We emphasize that FrameAxis cannot replace conventional framing research methods, which involves sophisticated close reading of the text. Also, we do not expect
that the microframes can be directly mapped to the frames identified by domain experts. FrameAxis can thus be considered as a computational aid that can facilitate
systematic exploration of texts and subsequent in-depth analysis.” (L90-93)
Also, we have toned down the entire manuscript to avoid misconceptions about the goal of FrameAxis.
Perhaps a good way to demonstrate its utility is to point out that our work has already been cited
and adopted multiple times. First, in “Studying Moral-based Differences in the Framing of Political Tweets” (accepted in the 15th International AAAI Conference on Web and Social Media
(ICWSM) 2021, arXiv:2103.11853), the authors used FrameAxis to investigate the moral framing
of German political tweets on Twitter. They defined their axes based on the German dictionary to
reflect the Moral Foundation Theory. Using multiple datasets of tweets, the authors confirmed that
a moral framing identified by FrameAxis is “congruent with the public perception of the political
parties.” Second, in “Mapping Moral Valence of Tweets Following the Killing of George Floyd”
(arXiv:2104.09578), the authors used FrameAxis to analyze tweets regarding the Black Lives Matter
movement in Los Angeles after the killing of George Floyd. By using the microframes defined
by the Moral Foundation Theory, the authors found different activation of each moral dimension
according to a topic (e.g., police force, communities of color, etc.). These two studies demonstrate
the flexibility and utility of FrameAxis. Particularly, the first study shows that our method can be
successfully applied to a non-English corpus once the word embedding is available. Considering
the availability of non-English word embeddings these days, the applicability of FrameAxis to non-English documents seems promising. Finally, in “Characterizing
Partisan Political Narratives about COVID-19 on Twitter” (arXiv:2103.06960; we note that this paper is written by some of us), FrameAxis is used to differentiate the distinct focus of arguments by
Democrats and Republicans. FrameAxis reveals how the two parties’ narratives focus on different aspects of the pandemic response (e.g., the government’s role vs. individuals and small businesses),
which were not readily available from other analyses.
We believe that the ultimate test of a method’s utility is its usage in actual research. The fact that
our method, even before its formal publication, has been rapidly cited and employed in multiple
computational social science studies is, in our opinion, a strong sign of its utility.
Comment 3.3
In terms of the reliability of the method, the fact that word vector similarity captures semantic similarity is now well established. The novelty here seems to be partly the use of
antonyms from WordNet for coming up with around 1600 potential “microframes”. However, I am concerned that many of these will be highly questionable and not informative.
I replicated the authors’ proposed way of generating such a list, and generated a set of five
random pairs, which were: (quantitative, qualitative) (incongruent, congruent) (inessential,
essential) (crosswise, lengthwise) (agitated, unagitated)
Some of these might be relevant to framing, but it seems like many will not. Moreover,
because the authors propose using only two words to establish the direction corresponding
to the “microframe”, some of these will likely be quite unstable. For example, using the
same word vectors as the authors, the most similar words to the qualitative - quantitative
direction appear to be: quantitative, Quantitative, QE, QE3, Fed, Bernanke, Treasury.
Presumably texts having to do with “quantitative easing” have given rise to the similarity to
(Ben) Bernanke and the Fed, but this is not related to the qualitative–quantitative distinction.
More thorough investigation of the potential problems or limitations of this approach would
be useful.
Response:
Thank you for raising an interesting concern.
We certainly are not arguing that all of our potential microframes found from WordNet are meaningful for every document. This is why we employ a procedure to identify the most strongly expressed microframes and why we provide the word-shift tool to examine and interpret the results.
As a computational method with minimal manual tuning, it will certainly have false positives.
However, at the same time, such false positives can still provide insights, and the method can be
iteratively tuned to delve into the results and discover new insights.
In your example, the discovery of the quantitative – qualitative microframe would lead to, via the
word-shift analysis, the discovery that it is driven by the frequent mention of “quantitative easing,”
which will then help researchers investigate other, more precisely targeted microframes. Furthermore, the fact that Fed, QE, and Bernanke are heavily mentioned in a given document in comparison to others can already be a useful insight for further investigation. For instance, if we compare
two sets of documents, where one focuses on monetary policies’ role in inflation whereas the other
focuses on the trading tension and supply shock, picking up these microframes, even if they are not
“correct” in a strict sense, can lead to key insights that one set of documents frames the discussion
by focusing on the monetary policies while the other overlooks them. Or, perhaps we can also
imagine two sets of documents both of which talk about the quantitative easing, one focusing on
the macro economic implications, and the other focusing on the fairness (asset price vs. wage).
In this case, we’d expect to see microframes that signify economy and fairness overrepresented
respectively in each set of documents. We want to also reiterate that the primary strength of our
method is its ability to facilitate an exploratory analysis that goes beyond existing comparative
tools.
Furthermore, as demonstrated by the two aforementioned studies, which provide further validation
of our method and used their custom microframes driven by the Moral Foundation Theory, FrameAxis
can support any microframes that are compiled by experts who already have prior knowledge of
the domain, corpus, and context. In those cases, FrameAxis can be more efficient and accurate.
As FrameAxis takes the dictionary-based approach for better interpretability, the inherent limitation of the dictionary-based approach is a valid criticism. Given the proven potential of sentence
representations, recent advances in NLP might offer a potent but less interpretable solution.
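As a practical aside, the vetting step the reviewer performed can be reproduced in a few lines; the following minimal sketch (embedding file name is a placeholder) lists the vocabulary words most aligned with a microframe axis so that corpus-driven artifacts such as 'QE' or 'Bernanke' are easy to spot:

    # Minimal sketch: words most aligned with the qualitative-quantitative
    # axis, used to vet a microframe before trusting it.
    from gensim.models import KeyedVectors

    kv = KeyedVectors.load_word2vec_format("glove.w2v.txt")  # placeholder file
    axis = kv["quantitative"] - kv["qualitative"]
    for word, score in kv.similar_by_vector(axis, topn=10):
        print(word, round(score, 3))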
Comment 3.4
A few issues related to the presentation should also be mentioned. There are a few disfluencies (e.g., “the common procedure of the framing research”, “corpora” instead of “corpus”
when referring to a single corpus, “microaframes” on page 5, “are worth to examine” on
page 8, “leveraging word embedding”). There are also improperly compiled references to
Figures, such as “Figure ??(E)” on page 10. One extremely annoying problem is that the inline citations are not formatted properly. They are missing brackets, making them difficult
to distinguish from the text.
Response:
We are sorry about this issue. We have carefully edited the manuscript.
Comment 3.5
I am also concerned that Figure A is somewhat misleading. They show cases of high framing
intensity as being those when words are similar to the individual poles of the antonym pair.
However, because the cosine similarity is being computed with respect to the difference
between the pair of vectors, the words that represent the highest intensities should be those
that are parallel to the difference between the two. (i.e., starting at the origin, but pointing in
the same direction as the difference of vectors). In some cases, the words close to the poles
will tend to have high cosine similarity to this vector, but that is more of a property of the
word vector space.
As an example, taking a pair the authors highlight – the (clean, dirty) pair – the highest
intensity words appear to be clean, Clean, ensure, maintain, Excellent, ensure, CLEAN.
Clearly some of these are closely aligned with clean (e.g., clean, Clean, CLEAN), but others
are not aligned with either clean or dirty individually (ensure, maintain, etc.)
Response:
We have updated the figure.
Comment 3.6
The raw data for one experiment is shared, but not for all. In particular, the full raw results from the human evaluation should also be made available, complete with anonymized
worker IDs.
Response:
We have uploaded the workers’ responses with their anonymized IDs and the Python script to
process them.
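As an aside for readers who wish to reproduce this step, anonymization of this kind can be done with a salted hash. The following is a hypothetical sketch only (the salt, file names, and the WorkerId column are assumptions, not necessarily what our released script does):

    # Minimal sketch: replace MTurk worker IDs with stable anonymous IDs.
    import csv
    import hashlib

    SALT = "a-private-random-salt"  # hypothetical

    def anon_id(worker_id):
        return hashlib.sha256((SALT + worker_id).encode()).hexdigest()[:12]

    with open("responses.csv") as f, open("responses_anon.csv", "w", newline="") as g:
        reader = csv.DictReader(f)
        writer = csv.DictWriter(g, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            row["WorkerId"] = anon_id(row["WorkerId"])
            writer.writerow(row)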
Comment 3.7
Some terminology is not defined. On page 2 the authors define 𝑐𝑤𝑓 in terms of similarity, but
this should be “cosine similarity” to be specific. More importantly, the authors introduce
“topic” words on page 3, but do not define what a topic word is, or how they are selected.
Beyond that, it is unclear why a topic word would not be relevant to framing, and why they
are removed.
Response:
Thank you for your valuable comments. We clarified the similarity measure as follows:
“While any similarity measure between two vectors can be used here, for simplicity,
we use cosine similarity.” (L108)
The decision to remove the topical word comes from the unwanted word embedding bias (L464-L480). For example, when we analyze reviews about food, the word ‘food’ already has some
bias: it is closer to ‘savory’ (cosine similarity: 0.4321) than to ‘unsavory’ (cosine similarity:
0.1561) within the GloVe embedding space. We confirmed the impact of ‘food’ on framing
bias and intensity through word-level contribution graphs. As the word ‘food’ itself is supposed
to be neutral, we remove it from the analysis. Similarly, we remove ‘price’ from the analysis of
reviews on price, ‘service’ from the analysis of reviews on service, and ‘ambience’ from the analysis of
reviews on ambience. Likewise, for the political news dataset, we remove the issue keyword given by
AllSides (https://www.allsides.com/topics-issues). We remove only one word for each. We clearly
describe this in L118-138:
“It is known that pretrained word embeddings have multiple biases (Bolukbasi et al.,
2016). Although some debiasing techniques are proposed (Zhao et al., 2018), those
biases are not completely eliminated (Gonen and Goldberg, 2019). For example, the
word ‘food’ within a GloVe pretrained embedding space is much closer to ‘savory’
(cosine similarity: 0.4321) than ‘unsavory’ (cosine similarity: 0.1561). As those
biases could influence the framing bias and intensity, they are carefully considered in
the computation.
FrameAxis computes the word-level framing bias (intensity) shift to help this process, which we will explain in the ‘Explainability’ Section. Through the word-level shift,
FrameAxis users can easily check whether some words that should be neutral on a
certain semantic axis are located as neutral within a given embedding space.
While this requires manual efforts, one shortcut is to check a topic word first. For
example, when FrameAxis is applied to reviews on movies, the word ‘movie’ could
be considered first because ‘movie’ should be neutral and non-informative. Also, as
reviews on movies are likely to contain the word ‘movie’ multiple times, even smaller
contribution of ‘movie’ to a given microframe could be amplified by its high frequency
of occurrences, 𝑛𝑤 in Equation (2) and (3). After the manual confirmation, those
words are replaced with <UNK> tokens and are not considered in the computation of
framing bias and intensity.
In this work, we also removed topic words as follows: in the restaurant review dataset,
the word indicating the aspect (i.e., ambience, food, price, and service) is replaced with
<UNK> tokens. In the AllSides’ political news dataset, we consider the issues defined by
AllSides as topic words, such as abortion, immigration, elections, education, polarization, and so on.”
Comment 3.8
Experimental design
The primary experiments in this paper include a Mechanical Turk study, and several visualizations of examples. Although the visualizations are useful for illustrating the kinds of
things the authors see this as being used for, they do not do a good job of convincing the
reader that this method is useful.
Response:
Thank you for your comment, but we are not sure which visualizations are referred to here and why,
given the vagueness of the comment. We are also not sure whether and how we can improve the
visualization. We believe that the multiple examples of adoption of our method that happened immediately after we posted the paper on the preprint server strongly suggest that many researchers
are already convinced of the utility of the method.
We also believe that PeerJ’s review criteria specifically focus on validity rather than the presumed
importance of the study, and thus we would like to request to be judged according to the journal’s
criteria. We have conducted both quantitative and human evaluation of the method and shown that
it can reveal insights that are not readily available from other methods. We also hope that the already
active usage of our method in other studies alleviates the concern about usefulness.
Comment 3.9
On the human evaluation, it seems the purpose is to see if the top microframes are more
relevant than random ones to the aspect of a restaurant review, but this seems like a very
low bar, given that so many of the microframes appear to be highly niche. (Indeed, I am
surprised the results are not stronger here, which makes me wonder about the quality of the
annotations). In terms of selecting the significant frames to present, the use of a significance
threshold of 0.05, without any correction for multiple comparisons, is surprising, given that
there are 1000s of pairs being tested.
Response:
Thank you for your valuable comments.
As we explained in the text, when general microframes, such as good – bad, positive – negative,
or best – worst, are included in the random set, workers tend to choose them because of their broad
applicability and ease of interpretation. While those microframes are not the most specific to a
given document or topic, they can be applied to most of the documents or topics. In other words,
the task is sometimes not trivial because even the most appropriate and specific microframes may
not be chosen due to the presence of much more general and universally applicable microframes.
Regarding the comment on the statistical significance, we would like to reiterate that our method
is intended as an exploratory and iterative analysis tool. The goal is not running a rigorous statistical test but filtering out microframes that can be safely removed. In that sense, the significance
threshold is tunable and arbitrary. If we apply the multiple comparison corrections (e.g., Bonferroni’s), it may well throw out the baby with the bathwater and lose its utility as an exploratory tool.
Because we assume that discovered microframes will be analyzed further (e.g., by using the word
shift diagram), we believe that minimizing false positives is not the most useful objective.
We also made it clear that we ranked the significant microframes by the effect size:
“The top 10 microframes are chosen by the effect size, computed by Equations (4) and
(5), among the significant microframes.” (L198-199)
Comment 3.10
The idea of testing the contextual relevance of microframes to a given corpus seems like a
poor fit to the rest of the paper, and is not well developed. It is not clear that there is any
experimental evaluation of this, and many details are absent. Saying “a language-model
based approach” is almost completely uninformative and could refer to a huge range of
possibilities. The choice to use particular constructions (A is B) is not well motivated or
tested.
Response:
Thank you for your valuable comments. While the significant microframes are automatically found
from the corpus, filtering out irrelevant microframes in advance can save the computation cost. We
clarified the motivation of relevance computation and its process:
“While we provide a method to compute the statistical significance of each microframe
for a given corpus, filtering out irrelevant microframes in advance can save the computation cost.” (L164-166)
“Following a previous method (Wang and Cho, 2019), we use a pre-trained OpenAI
GPT model to compute the perplexity score.” (L173-L174)
Neural machine translation literature provides ample evidence for the utility and validity of the
perplexity score of generated sentences computed by language models. Also, since FrameAxis
can find the strongly expressed microframes automatically, this part of the method is to reduce
computation cost by filtering out irrelevant microframes in advance. Thus, we do not believe that
an additional evaluation for our usage of perplexity score is critical (e.g., perplexity score of ‘food
is delicious’ (76.03) vs. ‘food is quantitative’ (282.64); as the perplexity score is the exponential
of the cross-entropy, lower is better.). We made it clear that the template sentences used in this
work are one of the examples and could be tuned according to a corpus and topic:
“For example, consider two templates as follows:” (L169)
“According to the corpus and topic, a more complex template, such as “A (an) {topic
word} issue has a {pole word} perspective.” might work better. More appropriate
template sentences can be built with a good understanding of the corpus and topic.”
(L180-L182)
Comment 3.11
Mostly the results consist of displaying lists of microframes that seem to have face
validity (e.g., we assume that hospitable – inhospitable is relevant to “service”), but there
is no real evaluation of whether or not this is the “correct” inference for this data.
Response:
Thank you for your comments. Although we are proposing an exploratory text analysis method—
which is trickier to evaluate—we have conducted three-fold quantitative evaluations. First, we
introduced statistically significant microframes for SemEval-14, which are well categorized by
sentiment and aspect. For every aspect and sentiment, we found that the identified microframes are
well aligned. The probability that two microframes selected among 1,621 are well aligned with their
corresponding aspect and sentiment should be extremely low. We thus argue that this is more than
cherry-picked face validity. Second, we defined the concept of microframe bias separation
and demonstrated that the bad – good microframe successfully separated the positive and negative
reviews. Since the bad – good microframe maps well onto a sentiment axis, the fact that the largest separation between the positive and negative reviews occurs on the bad – good microframe can be taken as evidence
that the microframe bias on this axis effectively captures sentiment. Third,
we conducted a manual evaluation by using Amazon Mechanical Turk. To demonstrate its utility
as an exploratory analysis tool, we also use case studies. We hope the reviewer would clarify which
parts of these multiple evaluations are not “real” and why. We would also like to mention that the
papers that adopted our method have demonstrated its validity and utility as well.
Comment 3.12
Validity of the findings As noted above, the usefulness of this method has not been rigorously
evaluated. All findings reported seem valid and legitimate, but it is not clear they answer
the questions of what is this method useful for, and is it better than the alternatives. Giving
more thought to the specific scientific claims that the authors want to make would help to
determine what experiments are relevant to evaluating those claims. Presenting examples is
useful, but not enough on their own.
Response:
Please refer to our answers to 3.8−3.11. We again would like to request that the manuscript should
be judged by PeerJ’s publication criteria, and emphasize that the method is already being used
in multiple, independent research projects even before its formal publication.
Comment 3.13
As also noted above, some raw data have been shared, but not all.
Response:
As we explained in our response to Comment 3.6, we uploaded the workers’ responses with their
anonymized IDs and the Python script to process them.
Comment 3.14
It is unclear what the conclusions of this paper are, in part due to the lack of a clear research
question. Although I commend the authors for developing this idea and releasing a system
for using it, a research paper based on this idea should have a stronger set of claims being
made and well-designed experiments to evaluate those claims.
Response:
This manuscript proposes “an unsupervised method for characterizing texts with respect to a variety of microframes” (L58-59). Our method aims to facilitate framing analysis, which is an integral
way to understand and compare texts. We demonstrate how it works with multiple datasets and
conduct a human evaluation. Then, we propose tools for explaining results, which will help further
analysis. As a method paper, we do not intend to produce strong claims about any particular systems or datasets, which we believe is a perfectly fine position to take for publication in PeerJ.
Our primary claim is that our method can help framing analysis and other text analysis by automatically discovering overrepresented microframes and by providing interpretable instruments.
We have demonstrated this point through examples and quantitative evaluations. Our claim has
been independently corroborated by other research projects.
" | Here is a paper. Please give your review comments after reading it. |
192 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The inevitable evolution of information technology has led to the creation of IoT-Fog-Cloud systems, which combine the Internet of Things (IoT), Cloud Computing and Fog Computing.</ns0:p><ns0:p>IoT systems are composed of possibly up to billions of smart devices, sensors and actuators connected through the Internet, and these components continuously generate large amounts of data. Cloud and fog services assist the data processing and storage needs of IoT devices. The behaviour of these devices can change dynamically (e.g. properties of data generation or device states). We refer to systems allowing behavioural changes in physical position (i.e. geolocation) as the Internet of Mobile Things (IoMT). The investigation and detailed analysis of such complex systems can be fostered by simulation solutions. The currently available, related simulation tools lack a generic actuator model and mobility management. In this paper, we present an extension of the DISSECT-CF-Fog simulator to support the analysis of arbitrary actuator events and mobility capabilities of IoT devices in IoT-Fog-Cloud systems. The main contributions of our work are: (i) a generic actuator model and its implementation in DISSECT-CF-Fog, and (ii) the evaluation of its use through logistics and healthcare scenarios. Our results show that we can successfully model IoMT systems and behavioural changes of actuators in IoT-Fog-Cloud systems in general, and analyse their management issues in terms of usage cost and execution time.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>The Internet of Things (IoT) is estimated to reach over 75 billion smart devices around the world by 2025 <ns0:ref type='bibr' target='#b24'>(Taylor et al., 2015)</ns0:ref>, which will dramatically increase the network traffic and the amount of data generated by them. IoT systems often rely on Cloud Computing solutions because of their ubiquitous and theoretically infinite, elastic computing and storage resources. Fog Computing is derived from Cloud Computing to resolve the problems of increased latency, high density of smart devices and overloaded communication channels, which is also known as the bottleneck effect.</ns0:p><ns0:p>Real-time IoT applications <ns0:ref type='bibr' target='#b20'>(Ranjan et al., 2020)</ns0:ref> require faster and more reliable data storage and processing than general ones, especially when data privacy is also a concern. The proximity of Fog Computing nodes to end users usually ensures short latency values; however, these nodes are resource-constrained as well. Fog Computing can aid cloud nodes by introducing additional layers between the cloud and the IoT devices, where a certain part of the generated data can be processed faster <ns0:ref type='bibr' target='#b8'>(Mahmud et al., 2018)</ns0:ref>.</ns0:p><ns0:p>A typical fog topology is shown in Figure <ns0:ref type='figure'>1</ns0:ref>, where the sensors and actuators of IoT devices are located at the lowest layer. Based on their configuration and type, things produce raw sensor data. These are then stored and processed on cloud and fog nodes (this data flow is denoted by red dotted arrows). Sensors are mostly resource-constrained and passive entities with restricted network connection; actuators, on the other hand, ensure broad functionality with Internet connection and enhanced resource capacity <ns0:ref type='bibr' target='#b16'>(Ngai et al., 2006)</ns0:ref>.</ns0:p><ns0:p>Actuators aim to make various types of decisions by assessing the processed data retrieved from the nodes. This data retrieval is marked by solid orange arrows in Figure <ns0:ref type='figure'>1</ns0:ref>. These actions can affect the physical environment or refine the configuration of the sensors; such a modification can be increasing or decreasing the data sampling frequency or extending the sampling period, the latter resulting in different amounts of generated data. Furthermore, the embedded actuators can manipulate the behaviour of smart devices, for instance, restarting or shutting down a device, and motion-related responses can also be expected.</ns0:p><ns0:p>These kinds of actions are noted by grey dashed arrows in Figure <ns0:ref type='figure'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 1. The connections and layers of a typical fog topology</ns0:head><ns0:p>The location of IoT devices is often fixed; however, the Quality of Service (QoS) of these systems should be provided at the same level in the case of dynamic, moving devices as well. Systems composed of IoT devices supporting mobility features are also known as the Internet of Mobile Things (IoMT) <ns0:ref type='bibr' target='#b14'>(Nahrstedt et al., 2020)</ns0:ref>. Mobility can have a negative effect on the QoS to be ensured by fog systems; for instance, it could increase the delay between the device and the actual node it is connected to. Furthermore, using purely cloud services can limit the support for mobility <ns0:ref type='bibr' target='#b17'>(Pisani et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Wireless Sensor Networks (WSN) are considered as predecessors of the Internet of Things. In a WSN, the naming convention of sensor and actuator components follows publisher/subscriber or producer/consumer notions <ns0:ref type='bibr' target='#b21'>(Sheltami et al., 2016)</ns0:ref>; however, IoT sensor and actuator appellations are commonly accepted by the IoT simulation community as well. Publishers (i.e. sensors or producers) share the data sensed in the environment, while subscribers (i.e. actuators or consumers) react to the sensor data (or to an incoming message) with an appropriate action. In certain situations, actuators can have both of these roles and behave as a publisher, especially when the result of a command executed by an actuator needs to be sent and further processed.</ns0:p><ns0:p>Investigating IoT-Fog-Cloud topologies and systems in the real world is rarely feasible on the necessary scales, thus different simulation environments are utilised by researchers and system architects for this purpose. It can be observed that only a few of the currently available simulation tools offer even a minimal ability to model actuator and/or mobility events, which strongly restricts their usability. This implies that a comprehensive simulation solution, with an extendable, well-detailed mobility and actuator model, is missing for fog-enhanced IoT systems.</ns0:p></ns0:div>
<ns0:div><ns0:p>To address this open issue, we propose a generic actuator model for IoT-Fog-Cloud simulators and implement it by extending the DISSECT-CF-Fog <ns0:ref type='bibr'>(Markus et al., 2020)</ns0:ref> open-source simulator, to be able to model actuator components and the mobility behaviour of IoT devices. As the main contributions of our work, our proposal enables: (i) more realistic and dynamic IoT behaviour modelling, which can be configured by using the actuator interface of IoT devices, (ii) the ability to represent and manage IoT device movement (IoMT), and (iii) the analysis of different types of IoT applications having actuator components in IoT-Fog-Cloud environments. Finally, the modelling of such complex systems is demonstrated through a logistics and a healthcare scenario.</ns0:p><ns0:p>The rest of the paper is structured as follows: Section 2 introduces and compares the related works; Section 3 presents our proposed actuator model and simulator extension; Section 4 presents our evaluation scenarios; and finally, Section 5 concludes our work.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>RELATED WORK</ns0:head><ns0:p>According to the definition by <ns0:ref type='bibr' target='#b0'>(Bonomi et al., 2012)</ns0:ref>, an actuator is a less limited entity than a sensor in terms of its network connectivity and computation power, since it is responsible for controlling or taking actions in an IoT environment. Usually actuators are identified as linear, motors, relays or solenoids to induce motion of a corresponding entity. The work in <ns0:ref type='bibr' target='#b13'>Motlagh et al. (2020)</ns0:ref> categorises actuators based on their energy source as following: (i) pneumatic, (ii) hydraulic, (iii) electric and finally (iv) thermal actuator, however this kind of classification might restrict the usability of actuators to the energy sector.</ns0:p><ns0:p>The presence of actuators plays a vital role in higher level software tools for IoT as well, for instance in FogFlow <ns0:ref type='bibr' target='#b3'>(Cheng et al., 2018)</ns0:ref>. It is an execution framework dedicated for service orchestrations over cloud and fog systems. This tool helps infrastructure operators to handle dynamic workloads of real IoT services enabling low latency on distributed resources. According to their definition, actuators perform actions (e.g. turning on/off the light) in an IoT environment, which can be coordinated by an external application.</ns0:p><ns0:p>The already existing, realised actuator solutions are well-known and commonly used in Technical Informatics, however the modelling of an actuator entity in simulation environments is not straightforward, and most of the simulation tools simply omit or simplify it, nevertheless actuators are considered as essential components of the IoT world.</ns0:p><ns0:p>Concerning IoT and fog simulation, a survey paper by <ns0:ref type='bibr' target='#b23'>Svorobej et al. (2019)</ns0:ref> compares seven simulation tools supporting infrastructure and network modelling, mobility, scalability, resource and application management. Unfortunately, in some cases the comparison is restricted to a binary decision, for instance if the simulator has a mobility component or not. Another survey by <ns0:ref type='bibr'>Markus and Kertesz (2020)</ns0:ref> examined 44 IoT-Fog-Cloud simulators, in order to determine the characteristics of these tools. 11 parameters were used for the comparison, such as type of the simulator, the core simulator, publication date, architecture, sensor, cost, energy and network model, geolocation, VM management and lastly, source code metrics.</ns0:p><ns0:p>These survey papers represent the starting point for our further investigations in the direction of geolocation and actuator modelling.</ns0:p><ns0:p>FogTorchPI <ns0:ref type='bibr' target='#b1'>(Brogi et al., 2018</ns0:ref>) is a widely used simulator, which focuses on application deployment in fog systems, but it limits the possibilities of actuator interactions. <ns0:ref type='bibr' target='#b25'>Tychalas and Karatza (2018)</ns0:ref> proposed a simulation approach focusing on the cooperation of smartphones and fog, however the actuator component was not considered for the evaluation.</ns0:p><ns0:p>The CloudSim-based iFogSim simulator <ns0:ref type='bibr' target='#b4'>(Gupta et al., 2016)</ns0:ref> is one of the leading fog simulators within the research community, which follows the sense-process-actuate model. 
The actuator is declared as the entity responsible for a system or a mechanism, and the actualisation event is triggered when a task, known as a Tuple and defined by a certain number of instructions and a size in bytes, is received by the actuator. In the current implementation of iFogSim, this action has no significant effect; however, custom events can also be defined by overriding the corresponding method, although no such events are created by default. The actuator component is characterised by its connection and network latency. The original version of iFogSim does not support mobility; however, the static, geographical location of a node is stored.</ns0:p><ns0:p>Another CloudSim extension is EdgeCloudSim <ns0:ref type='bibr' target='#b22'>(Sonmez et al., 2018)</ns0:ref>, which supports a nomadic mobility model, where a device moves from one place to
<ns0:div><ns0:p>another. This work also takes into account the attractiveness of a position to define the duration of stay at some place. Further mobility models can be created by extending the default class for mobility, but there is no actuator entity implemented in this approach.</ns0:p><ns0:p>FogNetSim++ <ns0:ref type='bibr' target='#b19'>(Qayyum et al., 2018)</ns0:ref> can be used to model fog networks supporting heterogeneous devices, resource scheduling and mobility. In that paper, six mobility strategies were proposed, and new mobility policies can also be added. This simulator supports entity mobility models, which handle the nodes independently and take into account parameters such as speed, acceleration and direction in a three-dimensional coordinate system. Unfortunately, the source code of the simulator presents examples of the linear and circular mobility behaviours only. This simulation tool used no actuator model. YAFS <ns0:ref type='bibr' target='#b7'>(Lera et al., 2019)</ns0:ref> is a simulator to analyse IoT application deployments and mobile IoT scenarios. The actuator in this realisation is defined as an entity which receives messages with a given number of instructions and bytes, similarly to the iFogSim solution. The paper also mentioned dynamic user mobility, which takes into account different routes using the GPX format (used by applications to depict data on a map), but this behaviour was not explained or experimented with.</ns0:p></ns0:div>
<ns0:div><ns0:p>In the case of IoTSim-Edge, the representative class of an IoT device has a method for actuator events, which can also be overridden. There is only one predefined actuator event, affecting the battery of an IoT device; however, it was not considered during the evaluation phase by the authors. This simulation tool also takes into consideration the mobility of smart devices. The location of a device is represented by a three-dimensional coordinate system. Motion is influenced by a given velocity and a range in which the corresponding device can move, and only horizontal movements within the range are considered by the default moving policy.</ns0:p><ns0:p>MobFogSim <ns0:ref type='bibr' target='#b18'>(Puliafito et al., 2020)</ns0:ref> aims to model user mobility and service migration, and it is one of the latest extensions of iFogSim, where actuators are supported by default. Furthermore, the actuator model was revised and improved to handle migration decisions, because migration is often affected by end user motion. To represent mobility, it uses a two-dimensional coordinate system and the users' direction and velocity. The authors considered real datasets as mobility patterns, which describe buses and routes of public transportation.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. Comparison of the related simulation-based approaches:
Simulator | Actuator | Mobility | Core simulator | Prog. language | Year
DISSECT-CF-Fog (this work) | X | X | DISSECT-CF | Java | 2020
iFogSim | X | - | CloudSim | Java | 2017
EdgeCloudSim | - | X | CloudSim | Java | 2017
FogNetSim++ | - | X | OMNet++ | C++ | 2018
IoTSim-Edge | X | X | CloudSim | Java | 2019
YAFS | X | X | - | Python | 2019
MobFogSim | X | X | iFogSim | Java | 2020</ns0:p><ns0:p>The comparison of the related simulation-based approaches is shown in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. It highlights the existence of actuator and mobility interfaces, the base simulator of the approach and the programming language in which the actual tool was written. We also denoted the year when the simulation solution was released or published. It also reveals the leading trends for fog simulation. Based on <ns0:ref type='bibr'>Markus and Kertesz (2020)</ns0:ref>, more than 70% of the simulators are written in the Java programming language and only 20% of them are developed using Python or C++. The rest of them are more complex applications (i.e. Android-based software). This survey also points out that network-oriented simulators are mostly written in C++ and focus on a fine-grained network model; however, these tools typically do not have predefined models and components for representing cloud and fog nodes, and VM management operations. The event-driven general purpose simulators are usually implemented in Java.</ns0:p><ns0:p>The actuator and mobility abilities of these simulators are further detailed in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>. The detailed characteristics of the related simulation tools follow similar logic in all cases. The third column highlights actuator events that can be triggered in a simulator. The fourth column shows the supported mobility options (we only listed the ones offered in their source code), and finally, we denoted the position representation manner in the last column.</ns0:p><ns0:p>One can observe that there is a significant connection between mobility support and actuator functions, but only half of the investigated simulators applied both of them.</ns0:p>
Since the actuator has no commonly used software model within the latest simulation tools, developers omit it, or it is left to the users to implement it, which can be time-consuming (considering the need for additional validation). In a few cases, both the actuator and mobility models are simplified or handled in a rudimentary way, thus realistic simulations cannot be performed.</ns0:p><ns0:p>In this paper, we introduce an actuator interface and mobility functionality for the DISSECT-CF-Fog simulator. We define numerous actuator events and mobility patterns to enhance and refine the actuator model of a simulated IoT system. To the best of our knowledge, no other simulation solution offers such enriched ways to model actuator components.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>THE ACTUATOR AND MOBILITY MODELS OF DISSECT-CF-FOG</ns0:head><ns0:p>The heterogeneity of interconnected IoT devices often raises difficulties in simulator solutions, as the creation of a model that comprehensively depicts the behaviour of these diverse components is challenging. In a simulation environment, a concrete type of any device is described by its characteristics. For instance, it does not really matter if a physical machine utilises an AMD or an Intel processor, because the behaviour of the processor is modelled by the number of CPU cores and the processing power of one core, which should be defined in a realistic way. Following this logic, the actual realisation of an actuator entity (which follows the traditional subscriber model) can be any type of actuator (e.g. motors or relays), if its effects are appropriately and realistically modelled. This means that in our actuator implementation, a command received by an actuator must affect the network load considering bandwidth and latency; moreover, based on certain decisions, the actuator should indicate changes in the behaviour of the IoT device or sensor (e.g. increasing the data sensing frequency or changing the actual position). In the case of IoMT, the traditional WSN model cannot be followed, hence moving devices can act as a publisher (monitoring) and a subscriber as well (receiving commands related to movements).</ns0:p><ns0:p>Our proposed actuator interface of the DISSECT-CF-Fog simulator aims to provide a generic, unified, compact and platform-independent representation of IoT actuator components. DISSECT-CF-Fog is based on DISSECT-CF <ns0:ref type='bibr' target='#b6'>(Kecskemeti, 2015)</ns0:ref>, which was proposed as a general purpose simulator to investigate the energy consumption of cloud infrastructures. The evolution phases of DISSECT-CF-Fog can be seen in Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>, where each background colour represents a milestone of the development, and it also depicts the layers of the simulation tool.</ns0:p><ns0:p>Typical event-driven simulators are lacking predefined models for complex behaviours (e.g. con-
<ns0:div><ns0:p>prevalence in the field of IoT. The DISSECT-CF-Fog actuator model is fairly abstract, hence it mainly focuses on the actuators' core functionality and its effect on the simulation results, but it does not go deep into specific actuator-device attributes.</ns0:p><ns0:p>The actuator interface should facilitate a more dynamic device layer and a volatile environment in a simulation. Therefore, it is preferred to be able to implement actuator components in any kind of simulation scenario, if needed. In our model, one actuator is connected to one IoT device for two reasons in particular: (i) it is observing the environment of the smart device and can act based on previously specified conditions, or (ii) it can influence some low-level sensor behaviour, for instance, it changes the sampling interval of a sensor, or resets or completely stops the smart device.</ns0:p><ns0:p>The latter indirectly conveys the conception of a reinterpreted actuator functionality for simulator solutions. The DISSECT-CF-Fog actuator can also behave as a low-level software component for sensor devices, which makes the model compound.</ns0:p><ns0:p>The actuator model of DISSECT-CF-Fog can only operate with compact, well-defined events that specify the exact influence on the environment or the sensor. The set of predefined events during a simulation provides a restriction to the capability of the actuator and limits its scope to certain actions that are created by the user or already exist in the simulator. A brief illustration of sensor-based events is shown in Figure <ns0:ref type='figure'>3</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 3. Low-level sensor events</ns0:head><ns0:p>The determination of the exact event executed by the actuator happens in a separate, reusable and extendable logic component. This logic component can serve as an actual actuator configuration, but can also be used as a descriptor for environmental changes and their relations to specific actuator events. This characteristic makes the actuator interface thoroughly flexible and adds some more realistic factors to certain simulation scenarios.</ns0:p><ns0:p>With the help of the logic component, the actuator interface works in an automatic manner. After a cloud or fog node has processed the data generated by the sensors, it sends a response message back to the actuator, which chooses an action to be executed. This models the typical sensor-service-actuator communication direction.</ns0:p><ns0:p>Unexpected actions may occur in real-life environments, which are hard to define by algorithms, and the execution of some events may not require cloud or fog processes, e.g. when a sensor fails. To be able to handle such issues, the actuator component is capable of executing events apart from its predefined configuration. This feature facilitates immediate and direct communication between sensors and actuators.</ns0:p><ns0:p>For the proper behaviour of the actuator, the data representation in the simulator needs to be more detailed. If no actuation is required, the simulation can be run without the actuator component. This might significantly decrease the actual runtime of the simulation, as there could potentially be some computation-heavy side effects when applying actuator functionalities.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Requirements for modelling the Internet of Mobile Things</ns0:head><ns0:p>The proximity of computing nodes is the main principle of Fog Computing and it has numerous benefits, but mobile IoT devices may violate this criterion. These devices can move further away from their processing units, causing higher and unpredictable latency. When a mobile device moves out of range of the currently connected fog node, a new, suitable fog node must be provided. Otherwise, the quality of service would drastically deteriorate and, due to the increased latencies, the fog and cloud nodes would hardly be distinguishable in this regard, resulting in the loss of the benefits of Fog Computing.</ns0:p><ns0:p>Another possible problem that comes with mobile devices is service migration. The service migration problem can be considered as when, where and how (W2H) questions. Service migration usually happens between two computing nodes, but if there is no fog node in an acceptable range, the service could be migrated to the smart device itself, causing lower performance and shorter battery time. However, service migration only makes sense when there are stateful services; furthermore, as it is beyond the topic of this paper, we consider only stateless services and decisions about their transfer among the nodes.</ns0:p><ns0:p>The physical location of fog nodes in a mobile environment is a major concern. Placing Fog Computing nodes too far from each other will result in higher latency or connection problems. In this case, IoT devices are unable to forward their data, hence the data are never processed. Some devices may store their data temporarily until they connect to a fog node, but this contradicts the real-time data processing promises of fogs.</ns0:p><ns0:p>A slightly better approach would be to install fog nodes fairly densely in space to avoid the problem discussed above. However, there might be some unnecessary nodes in the system, causing a surplus in the infrastructure, which results in resource wastage.</ns0:p><ns0:p>Different mobility models for mobile networks in simulation environments have been researched for a while. The survey by <ns0:ref type='bibr' target='#b2'>Camp et al. (2002)</ns0:ref> presents 7 entity and 6 group mobility models in order to replace trace files, which can be considered as the footprints of movements in the real world.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3'>Actuator implementation in DISSECT-CF-Fog</ns0:head><ns0:p>DISSECT-CF-Fog is a discrete event simulator, which means there are dedicated moments, when system variables can be accessible and modifiable. The extended classes of the timing events, which can be recurring and deferred, ensure to create the dedicated time-dependent entities in the system.</ns0:p><ns0:p>As mentioned in Section 3.1, a complex, detailed data representation in the simulator is mandatory in order to provide sufficient information for the actuator component. Data fragments are represented by DataCapsule objects in the system of DISSECT-CF-Fog. The sensor-generated data is wrapped in a well-parameterized DataCapsule object, and forwarded to an IoT Application located in a fog or cloud node to be processed. A DataCapsule object uses the following attributes:</ns0:p><ns0:p>• source: Holds a reference to the IoT device generating sensor data, so the system keeps track of the data source.</ns0:p><ns0:p>• destination: Holds a reference to the Application of a fog node where the data has been originally forwarded to.</ns0:p><ns0:p>• dataFlowPath: In some cases fog nodes cannot process the current data fragment, therefore they might send it to another one. This parameter keeps track of the visited fog nodes by the data before it has been processed.</ns0:p><ns0:p>• bulkStorageObject: Contains one or more sensor-generated data that has been wrapped into one</ns0:p><ns0:p>DataCapsule.</ns0:p><ns0:p>• evenSize: The size of the response message sent from a fog node to the actuator component (in bytes). This helps to simulate network usage while sending information back to the actuator.</ns0:p><ns0:p>• actuationNeeded: Not every message from the IoT device requires an actuator response event. This logical value (true -false) holds true, if the actuator should take action after the data has been processed, otherwise it is false.</ns0:p><ns0:p>• fogProcess: A logical value (true -false), that is true, if the data must be processed in a fog node, and should not be sent to the cloud. It is generally set to true, when real-time response is needed from the fog node.</ns0:p><ns0:p>• startTime: The exact time in the simulator, when the data was generated.</ns0:p><ns0:p>• processTime: The exact time in the simulator, when the data was processed.</ns0:p><ns0:p>• endTime: The exact time in the simulator, when the response has been received by the actuator.</ns0:p><ns0:p>• maxToleratedDelay and priorityLevel: These two attributes define the maximum delay tolerated by the smart device and the priority of the data. Both of them could play a major role in task-scheduling algorithms (e.g., priority task scheduling), but they have no significant role in the current extension.</ns0:p><ns0:p>• actuatorEvent: This is the specific event type that is sent back to the actuator for execution.</ns0:p><ns0:p>To set these values accurately, some sensor-specific and environment-specific properties are required.</ns0:p><ns0:p>The SensorCharacteristics class integrates these properties and helps to create more realistic simulations.</ns0:p><ns0:p>The following attributes can be set:</ns0:p><ns0:p>• sensorNum: The number of applied sensors in a device. It is directly proportional to the size of the generated data.</ns0:p><ns0:p>• mttf : The mean time until the sensor fails. This attribute is essential to calculate the sensor's average life expectancy, which helps in modelling sensor failure events. 
If the simulation time exceeds the mttf value, the sensor has a higher chance of failing. If a sensor fails, the actuator forces it to stop.</ns0:p><ns0:p>• minFreq and maxFreq: These two numbers represent the minimum and maximum sampling rate of the sensor. If a sensor does not have a predefined sampling rate but rather senses changes in the environment, then these are environment-specific attributes and their values can be defined by estimating the minimum and maximum time interval between state changes in the environment. These attributes are necessary to limit the possible frequency value of a sensor when the actuator imposes an event that affects the frequency.</ns0:p></ns0:div>
<ns0:div><ns0:p>• fogDataRatio: An estimate of how often the sensor generates data that requires fog processing. This value is usually higher in the case of sensors that generate sensitive data, or of applications that require a real-time response.</ns0:p><ns0:p>• actuatorRatio: An estimate of how often the sensor generates data that requires actuator action. This is typically an environment-specific attribute. The more inconsistent and variable the environment is, the higher the chance of triggering the actuator, thus the value of this attribute should be set higher. This attribute has an impact on the DataCapsule's actuationNeeded value: if the actuatorRatio is higher, then the actuationNeeded attribute is more likely to be set to true.</ns0:p><ns0:p>• maxLatency: Its value determines the maximum latency tolerated by the device when communicating with a computing node. For instance, in the case of medical devices, this value is generally lower than in the case of agricultural sensors. Mobile devices may move away from fog nodes, inducing latency fluctuations, and this attribute helps to determine whether a computing node is suitable for the device or whether the expected latency exceeds this maxLatency limitation, in which case the device should look for a new computing node. This attribute plays a major role in triggering fog-selection actuator events when the IoT device is moving between fog nodes.</ns0:p><ns0:p>• delay: The delay of the data generating mechanism of the sensor.</ns0:p></ns0:div>
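<ns0:div><ns0:p>To make the above configuration tangible, the sketch below groups the described sensor-specific properties in code. The field names mirror the attributes listed in this section, but the class shape and the field types are illustrative assumptions, not the simulator's actual API.</ns0:p><ns0:p>// Illustrative holder for the sensor properties described above; the real
// SensorCharacteristics class of the simulator may use different types.
public class SensorCharacteristicsSketch {
    int sensorNum;          // number of sensors attached to the device
    long mttf;              // mean time to failure, used as a failure threshold
    long minFreq, maxFreq;  // bounds for the sampling rate changed by events
    double fogDataRatio;    // share of data that must be processed in the fog
    double actuatorRatio;   // share of data expected to trigger actuation
    long maxLatency;        // maximum tolerated latency towards a computing node
    long delay;             // delay of the data generating mechanism

    public SensorCharacteristicsSketch(int sensorNum, long mttf, long minFreq,
            long maxFreq, double fogDataRatio, double actuatorRatio,
            long maxLatency, long delay) {
        this.sensorNum = sensorNum;
        this.mttf = mttf;
        this.minFreq = minFreq;
        this.maxFreq = maxFreq;
        this.fogDataRatio = fogDataRatio;
        this.actuatorRatio = actuatorRatio;
        this.maxLatency = maxLatency;
        this.delay = delay;
    }
}</ns0:p></ns0:div>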
<ns0:div><ns0:p>The life-cycle of a DataCapsule object ends when the actuator interface receives a notification indirectly, by triggering a consumption event to which the IoT device has subscribed. By definition of the DataCapsule, its life-cycle can also end without actualisation events, if actuationNeeded is set to false. A simplified demonstration of the DataCapsule's path in the system (i.e., the data flow) can be seen in Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref>.</ns0:p><ns0:p>The actuator model in DISSECT-CF-Fog is represented as the composition of three entities that highly depend on each other: the Actuator, the ActuatorStrategy and the ActuatorEvent. These entities serve as input for each other, directly or indirectly, as shown in Figure <ns0:ref type='figure'>6</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 6. Operation of the actuator model</ns0:head><ns0:p>As mentioned in Section 3.1, the actuator model must only operate with predefined events to limit its scope to certain actions. These events are represented by the ActuatorEvent component, which is the core element of this model. By itself, the ActuatorEvent is only an interface and should be implemented in order to specify an exact action. There are some predefined events in the system: five of them are low-level, sensor-related events (as discussed in Section 3.1), and the other five are related to the mobile functionality of the devices; these can be extended with different types of behaviour.</ns0:p><ns0:p>Since the actuator has the ability to control the sensing process itself <ns0:ref type='bibr' target='#b17'>(Pisani et al., 2020)</ns0:ref>, half of the predefined actuator events foster low-level sensor interactions. The Change filesize event can modify the size of the data to be generated by the sensor. Such behaviour reflects use cases where more or less detailed data are required by the corresponding IoT application, or where the data should be encrypted or compressed for some reason. The Increase frequency event might be useful when the IoT application requires an increased time interval between the measurements of a sensor; a typical use case is when the smart traffic control system of a smart city monitors the traffic at night, when usually fewer inhabitants are outside. The maximum value of the frequency is regulated by the corresponding SensorCharacteristics object. The Decrease frequency event is the opposite of the previous one; a typical procedure may appear in IoT healthcare, for instance when the blood pressure sensor of a patient measures continuously increasing values, thus more frequent perceptions are required. The minimum value of the frequency is regulated by the corresponding SensorCharacteristics. The Stop device event imposes a fatal error of a device, typically occurring randomly, and it is strongly related to the mttf of the SensorCharacteristics. The mttf is considered as a threshold: before reaching it, there is only a small chance of failure; after exceeding it, the chance of a failure increases exponentially. Finally, the Restart device event reboots the given device to simulate software errors or updates.</ns0:p><ns0:p>Customised events can be added to the simulation by defining the actuate() method of the ActuatorEvent class, which describes the series of actions to occur upon executing the event. The event is selected by the ActuatorStrategy, which is a separate and reusable logic component, indispensable according to Section 3.1. It is also an interface, and should be implemented to define scenario-specific behaviour. Despite its name, the ActuatorStrategy is capable of more than just simulating the configuration of an actuator and its event selection mechanism. This logic component can also be used to model environmental changes and their side effects.</ns0:p><ns0:p>DISSECT-CF-Fog is a general fog simulator that is capable of simulating a broad spectrum of scenarios only by defining the key features and functionalities of each element of a fog and cloud infrastructure.</ns0:p><ns0:p>The ActuatorStrategy makes it possible to represent an environment around an IoT device, and make the actuator component reactive to its changes.</ns0:p></ns0:div>
<ns0:div><ns0:p>For instance, let us consider a humidity sensor and a possible implementation of the actuator component. We can then mimic an agricultural environment in the ActuatorStrategy with the help of some well-defined conditions to react to changes in humidity values, and select the appropriate customised actuator events (e.g. opening windows, or watering) accordingly. This characteristic enables DISSECT-CF-Fog to simulate environment-specific scenarios, while maintaining its extensive and generic feature set.</ns0:p><ns0:p>Finally, the Actuator component executes the implemented actions and events. There are two possible event executions offered by this object:</ns0:p><ns0:p>1. It can execute an event selected by the strategy. This is the typical usage, and it is performed automatically for devices needing actualisation, every time after the data have been processed by a computing unit and a notification is sent back to the device.</ns0:p><ns0:p>2. Single events can also be fired by the actuator itself. If there is no need for an intermediate computing unit (i.e. data processing and reaction to the result), the actuator can act immediately, wherever it is needed, as we mentioned in Section 3.1.</ns0:p><ns0:p>There might be a delay between receiving an ActuatorEvent and actually executing it, especially when the execution of the event is a time-consuming procedure. This possible delay can be set by the latency attribute of the Actuator. By default, a device has no inherent actuator component, but it can be explicitly set by the setActuator() method in order to fulfil the optional presence of the actuator mentioned in Section 3.1.</ns0:p></ns0:div>
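<ns0:div><ns0:p>To make the customisation concrete, the fragment below sketches a hypothetical watering event for the agricultural example above. Only the actuate() method name is taken from this section; the interface shape, the class name and the body are illustrative assumptions rather than the simulator's actual code.</ns0:p><ns0:p>// Minimal stand-in for the ActuatorEvent interface (assumed shape).
interface ActuatorEvent {
    void actuate();
}

// Hypothetical customised event for the agricultural example.
class WateringEvent implements ActuatorEvent {
    private final double targetHumidity;

    WateringEvent(double targetHumidity) {
        this.targetHumidity = targetHumidity;
    }

    @Override
    public void actuate() {
        // The series of actions to occur upon executing the event, e.g.
        // raising the modelled humidity towards the target value.
        System.out.println("Watering until humidity reaches " + targetHumidity + "%");
    }
}</ns0:p></ns0:div>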
<ns0:div><ns0:head n='3.4'>Representing IoMT environments in DISSECT-CF-Fog</ns0:head><ns0:p>The mobility implementations in the competing tools usually represent the position of users or devices as two- or three-dimensional coordinate points, and the distance between any two points is calculated as the Euclidean distance, whereby the results can be slightly inaccurate. To overcome this issue and have a precise model, as we stated in Section 3.2, we take into account the physical position of the end users, IoT devices and data centres (fog, cloud) by longitude and latitude values. The representative class called GeoLocation calculates distance using the Haversine formula (a sketch of this calculation follows at the end of this subsection). Furthermore, applying a geographical location with a coordinate system often results in a restricted map where the entities are able to move; in our case, worldwide use cases can thus be implemented and modelled. The actual position only matters and is evaluated before decisions are made by a computing appliance or a device, for instance when the sensed data is ready to be forwarded.</ns0:p><ns0:p>As we stated in Section 3.2, the mobile device movements are based on certain strategies. Currently, two mobility strategies are implemented. We decided to implement one entity and one group mobility model according to <ns0:ref type='bibr' target='#b2'>Camp et al. (2002)</ns0:ref>, but since we provide a mobility interface, the collection of usable mobility models can be easily extended.</ns0:p><ns0:p>The goal of the (i) Nomadic mobility model is that entities move together from one location to another; in our realisation, multiple locations (i.e. targets) are available. It is very similar to the public transport of a city, where the route can be described by predefined points (or bus stops), and the dedicated points (P_i) are defined as entities of the GeoLocation class. An entity reaching the final point of the route will no longer move, but may function afterwards. Between the locations, a constant speed v is considered, and there is a fixed order of the stops as follows:</ns0:p><ns0:formula xml:id='formula_1'>P_1^{(lat,long)} \xrightarrow{v} P_2^{(lat,long)} \xrightarrow{v} \dots \xrightarrow{v} P_n^{(lat,long)}</ns0:formula><ns0:p>The (ii) Random Walk mobility model takes into consideration entities with unexpected and unforeseen movements, for instance an observed entity walking around the city unpredictably. The aim of this policy is to avoid moving in straight lines with a constant speed during the simulation, because such movements are unrealistic. In this policy, a range r of the entity is fixed, within which it can move with a random speed v. From time to time, or if the entity reaches the border of the range, the direction and the speed of the movement change dynamically (P_i). This kind of movement is illustrated in Figure <ns0:ref type='figure' target='#fig_9'>7</ns0:ref>.</ns0:p><ns0:p>The MobilityDecisionMaker class is responsible for monitoring the positions of the fog nodes and IoT devices, and for making decisions based on these properties. This class has two main methods. The (i) handleDisconnectFromNode() method closes the connection with the corresponding node in case the latency exceeds the maximum tolerable limit of the device, or the IoT device is located outside of the range of the node. The (ii) handleConnectToNode() method is used when a device finds a better fog node than the current one, or when the IoT device runs without a connection to any node and finds an appropriate one. These methods directly use the actuator interface to execute the corresponding mobility-based actuator events.</ns0:p><ns0:p>As we mentioned earlier, actuation and mobility are interlinked, thus we introduce five actuator events related to mobility, according to Section 3.2. Position changes are done by the Change position event of the actuator. The connection and disconnection of a device are handled by the Connect to node and Disconnect from node events, respectively.
When a more suitable node is available for a device than the one already connected, the Change node actuator event is called. Finally, in some cases a device may be left without any connection options due to its position, or because only overloaded or badly equipped fog nodes are located in its neighbourhood. The Timeout event is used to measure the unprocessed data accumulating under these conditions, and to empty the device's local repository if data forwarding is not possible.</ns0:p></ns0:div>
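<ns0:div><ns0:p>The Haversine calculation mentioned above can be sketched as follows. The class and method names are illustrative stand-ins (the simulator's GeoLocation class is only described, not reproduced, here), while the formula itself is the standard great-circle distance.</ns0:p><ns0:p>// Illustrative GeoLocation-style Haversine distance; the class shape is an
// assumption, the formula is the standard great-circle distance.
public class GeoLocationSketch {
    private static final double EARTH_RADIUS_KM = 6371.0;
    final double latitude;  // degrees
    final double longitude; // degrees

    public GeoLocationSketch(double latitude, double longitude) {
        this.latitude = latitude;
        this.longitude = longitude;
    }

    // Great-circle distance to another position, in kilometres.
    public double distanceTo(GeoLocationSketch other) {
        double dLat = Math.toRadians(other.latitude - latitude);
        double dLon = Math.toRadians(other.longitude - longitude);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(latitude))
                 * Math.cos(Math.toRadians(other.latitude))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
    }
}</ns0:p></ns0:div>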
<ns0:div><ns0:head n='4'>EVALUATION</ns0:head><ns0:p>We evaluated the proposed actuator and mobility extensions of the DISSECT-CF-Fog simulator with two different scenarios, which belong to the main open research challenges in the IoT field <ns0:ref type='bibr' target='#b9'>(Marjani et al., 2017)</ns0:ref>. The goal of these scenarios is to present the usability and broad applicability of our proposed simulation extension. We also extended one of the scenarios with larger scale experiments, in order to determine the limitations of DISSECT-CF-Fog (e.g. determining the possible maximum number of simulated entities).</ns0:p><ns0:p>Our first scenario is IoT-assisted logistics, where more precise location tracking of products and trucks can be realised than with traditional methods. It can be useful for route planning (e.g. for avoiding traffic jams or reducing fuel consumption), or for better coping with different environmental conditions (e.g. for making weather-specific decisions).</ns0:p><ns0:p>Our second scenario is IoT-assisted (or smart) healthcare, where both the monitoring and reporting abilities of smart systems are heavily relied on. Sensors worn by patients continuously monitor the health state of the observed people, and in case of data spikes they can immediately alert the corresponding nurses or doctors.</ns0:p><ns0:p>During the evaluation of our simulator extension we envisaged a distributed computing infrastructure composed of a certain number of fog nodes (hired from local fog providers) to serve the computational needs of our IoT applications. Beside these fog resources, additional cloud resources can be hired from a public cloud provider. For each of the experiments, we used the cloud schema of the LPDS Cloud of MTA SZTAKI 1 to determine realistic CPU processing power and memory usage for the physical machines.</ns0:p><ns0:p>Based on this schema we attached 24 CPU cores and 112 GB of memory to a fog node, and set at most 48 CPU cores and 196 GB of memory to be hired from a cloud provider to start virtual machines (VMs) for additional data processing.</ns0:p><ns0:p>The simulator can also calculate resource usage costs, so we set VM prices according to the Amazon Web Services 2 (AWS) public cloud pricing scheme. For a cloud VM having 8 CPU cores and 16 GB of RAM we set a 0.204$ hourly price (a1.2xlarge), while for a fog VM having 4 CPU cores and 8 GB of RAM we set a 0.102$ hourly price (a1.xlarge). This means that the same amount of data is processed twice as fast on the stronger cloud VM; however, the cloud provider also charges twice as much money for it. In our experiments, we proportionally scale the processing time of data: for every 50 kBytes, we model one minute of processing time on the cloud VM.</ns0:p><ns0:p>For both scenarios, we used a PC with an Intel Core i5-4460 3.2GHz CPU, 8GB of RAM and a 64-bit Windows 10 operating system to run the simulations. Since our simulations take into account random factors, each experiment was executed ten times, and the average values are presented below.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>The Logistics IoT Scenario</ns0:head><ns0:p>In the first scenario, we simulated a one year long operation of a smart transport route across cities located in Hungary. This track is exactly 875 kilometers long, and it takes slightly more than 12 hours to drive through it by car according to Google Maps, which means the average speed of a vehicle is about 73 km/h. We placed fog nodes in 9 different cities maintained by a domestic company, and we used a single cloud node of a cloud provider located in Frankfurt. Each fog node has a direct connection with the cloud node; the latency between them is set based on the values provided by the WonderNetwork service 3 .</ns0:p><ns0:p>A fog node forms a cluster with the subsequent and the previous fog node on the route, as depicted in Figure <ns0:ref type='figure' target='#fig_11'>8</ns0:ref>. This figure also presents the first test case (a), where the range of a fog node is considered as a 25 kilometers radius (similar to a LoRa network). For the second test case (b), we doubled the range to a 50 kilometers radius. The IoT devices (placed in the vehicles to be monitored) were modelled with 4G network options with an average latency of 50 ms.</ns0:p><ns0:p>All vehicles were equipped with three sensors (asset tracking sensor, AIDC (automatic identification and data capture) and RFID (radio-frequency identification)) generating 150 bytes 4 of data per sensor. A daemon service on the computational node checks the local storage for unprocessed data every five minutes, and allocates them to a VM for processing. Each simulation run deals with an increasing number of IoT entities: we initialise 2, 20 and 200 vehicles every twelve hours, which go around the route. Half of the created objects are intended to start their movements in the opposite direction (selected randomly).</ns0:p><ns0:p>During our experiments, we considered two different actuator strategies. The (i) RandomEvent models a chaotic system behaviour, where both mobility and randomly appearing actualisation events of a sensor can happen. The failure rate of the IoT components (mttf) was set to 90% of a year, and to avoid unrealistically low or high data generation frequencies, we limited them to a range of one to 15 minutes (minFreq, maxFreq). Finally, we enhanced the unpredictability of the system by setting the actuatorRatio to 50%. The (ii) TransportEvent actuator policy defines a more realistic strategy to model asset tracking, which aims to follow objects based on a broadcasting technology (e.g. GPS). A typical use case of this is when a warehouse can prepare for receiving supplies according to the actual location of the truck. In our evaluation, if the asset was located closer than five kilometers, it would send position data every two minutes.
In the case of five to 10 kilometers, the data frequency is five minutes; from 10 to 30 kilometers, the data generation is set to 10 minutes; lastly, if it is farther than 30 kilometers, it reports changes every 15 minutes.</ns0:p><ns0:p>The results are shown in Tables <ns0:ref type='table' target='#tab_3'>3 and 4</ns0:ref>. Interpreting the results, we can observe that in the case of the 25 kilometers range, the RandomEvent drops more than half (around 56.19%) of the unprocessed data, losing information, whilst the same average is about 23.4% for the TransportEvent. In the case of the 50 kilometers range, no data is dropped, because the nodes roughly cover the route and the size of the gaps cannot trigger the Timeout event. On the contrary, the ranges do not overlap in the case of the 25 kilometers range, which results in zero Change node events.</ns0:p><ns0:p>Based on the Fog+Cloud cost metric, one can observe that the TransportEvent utilises the cloud and fog resources more than the RandomEvent; nevertheless, the average price of a device (applying two vehicles) is about 1197.7$, in the case of 20 assets it decreases to about 206.2$, and lastly operating 200 objects reduces the price to about 50.6$, which means that the continuous load of the vehicles utilises the VMs more effectively. Since the IoT application frequency was set to five minutes, we considered the Delay acceptable when it was equal to or less than five minutes. Based on the results, all test cases fulfilled our expectation. It is worth mentioning that mttf might be effective only in simulating years of operation, thus neither software nor hardware error is triggered (Restart / stop device) in this case. The Runtime metric also points to the usability and reliability of DISSECT-CF-Fog; less than three minutes was required to evaluate a one year long scenario with thousands of entities (i.e. simulated IoT devices and sensors running for a year).</ns0:p><ns0:p>1 The LPDS Cloud of MTA SZTAKI website is available at: https://www.sztaki.hu/en/science/departments/lpds. Accessed in October, 2020.</ns0:p><ns0:p>2 The Amazon Web Service website is available at: https://aws.amazon.com/ec2/pricing/on-demand/. Accessed in October, 2020.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Smart healthcare scenario</ns0:head><ns0:p>In the second scenario, we continued our experiments with a smart healthcare case study. In this scenario, patients wear blood pressure and heart rate monitors. We automatically adjust the data sampling period if the monitors report off-nominal behaviour: (i) in case of blood pressure lower than 90 or higher than 140; (ii) in case of heart rate values lower than 60 or higher than 100.</ns0:p><ns0:p>In this scenario, each patient represents a different data flow (starting from its IoT device), similarly to the previously mentioned way. First, the data is forwarded to the fog layer; if data processing is impossible there due to overloaded resources, then the data is moved to the cloud layer to be allocated to a VM for processing. As IoT healthcare requires as low a latency as possible, the frequency of the daemon services on the computational node was set to one minute. Similarly to the first scenario, one measurement of a sensor creates (a message of) 150 bytes.</ns0:p><ns0:p>We focus on the maximum number of IoT devices that can be served with minimal latency by the available fog nodes, and we are also interested in the maximum tolerable delay if the raw data is processed in the cloud. We applied the same VM parameters as in the previous scenario, and the simulation period took one day. We did not implement mobility in this scenario; nevertheless, actualisation events were still required in case of health emergencies, to see how the system adapts to unforeseen data.</ns0:p><ns0:p>Similarly to the first scenario, the hospital was assumed to use a public cloud node in Frankfurt, but it was also assumed to maintain three fog nodes on the premises of the hospital. During our experiments, we considered various numbers of patients (100, 1000 and 10 000), and we investigated how the operating costs and delay change and adapt to the different numbers of fog VMs and actualisation events.</ns0:p><ns0:p>Since each fog node is available in the local region, the communication latency was set randomly between 10 and 20 ms (according to AWS 5 ); furthermore, the actuatorRatio was set to 100% because of the vital information carried by the sensed data, thus each measurement required some kind of actuation. The rest of the parameters were the same as those used in the logistics scenario.</ns0:p></ns0:div>
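<ns0:div><ns0:p>As a hedged illustration of how the off-nominal thresholds above could drive event selection, the sketch below encodes them as a simple strategy. The interface shapes, method name and event classes are illustrative assumptions for this example only, not the simulator's actual API.</ns0:p><ns0:p>// Hypothetical strategy sketch for the healthcare thresholds above.
interface ActuatorEventSketch { void actuate(); }

interface ActuatorStrategySketch {
    ActuatorEventSketch selectEvent(double bloodPressure, double heartRate);
}

class HealthcareStrategy implements ActuatorStrategySketch {
    @Override
    public ActuatorEventSketch selectEvent(double bloodPressure, double heartRate) {
        boolean offNominal = bloodPressure < 90 || bloodPressure > 140
                          || heartRate < 60 || heartRate > 100;
        // Off-nominal readings shorten the sampling period (Decrease frequency),
        // nominal readings lengthen it again (Increase frequency).
        return offNominal ? new DecreaseFrequencyEvent() : new IncreaseFrequencyEvent();
    }
}

class DecreaseFrequencyEvent implements ActuatorEventSketch {
    public void actuate() { /* shorten the sampling period */ }
}

class IncreaseFrequencyEvent implements ActuatorEventSketch {
    public void actuate() { /* lengthen the sampling period */ }
}</ns0:p></ns0:div>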
<ns0:div><ns0:head n='4.2.1'>Large-scale experiments of the smart healthcare scenario</ns0:head><ns0:p>In this section, our goal was to point out the possible limitations of DISSECT-CF-Fog using the previously detailed smart healthcare scenario. The runtime of DISSECT-CF-Fog largely depends on the execution environment used and its actual hardware resources (mostly memory), similar to any other software.</ns0:p><ns0:p>Our findings are presented in Table <ns0:ref type='table' target='#tab_5'>6</ns0:ref>, in which we used the same metrics as before.</ns0:p></ns0:div>
<ns0:div><ns0:p>For this scalability study, we also applied the earlier used topology with three fog nodes and a cloud node. Determining the exact number of IoT devices that can be modelled by the simulator is not possible, because our system takes into account random factors. Nevertheless, we can give an estimate by scaling the number of IoT devices, in our case the number of active devices (i.e. patients).</ns0:p><ns0:p>In this evaluation we increased the number of patients by 10 000 in each test case, and examined the memory usage of the execution environment. The results showed that even for the cases of 170 000 and 180 000 IoT devices, the fog and cloud nodes can process the vast amount of data generated by the modelled IoT sensors; however, the Delay value also increased dramatically, to 6 256 minutes in the first case and 6 796 minutes in the second case. It is worth mentioning that despite such a huge number of active entities, the Runtime values are below five minutes. When we simulated 190 000 IoT devices, the simulator consumed all of the memory of the underlying hardware.</ns0:p><ns0:p>In the fourth test case, we applied seven fog nodes. Our findings showed that the Delay value decreased spectacularly, to 5 886 minutes; however, it is far from what we experienced in the second scenario, therefore our further goal was to define how many computational resources (i.e. fog nodes) are required to decrease the Delay parameter below ten minutes, similarly to what we expected in the second scenario.</ns0:p><ns0:p>It can be clearly seen in the fifth test case that at least 55 fog nodes are required for 190 000 IoT devices to process and store their data. In this case, the Delay value is 9.9 minutes, but because of the higher number of computational nodes, both the number of utilised VMs (336) and their costs (674.9$) increased heavily. The Java representations of the fog and cloud nodes hardly differ, therefore we could reach similar results if we increased the number of cloud nodes as well.</ns0:p><ns0:p>It can be clearly seen that the critical part of DISSECT-CF-Fog is the number of IoT devices utilised in the system; however, if we also increase the number of the simulated computing resources (i.e. fog and cloud nodes), we can reach better scalability (i.e. the delay and simulation runtime would not grow). The reason for this is that the actual Java implementation of DISSECT-CF-Fog stores the references of the model entities of the devices and the unprocessed data. To conclude, the current DISSECT-CF-Fog extension is capable of simulating even up to 200 thousand system entities. Limitations are only imposed by the hardware parameters utilised, and by a wrongly (or extremely) chosen ratio of the number of IoT devices and computing nodes set for the experiments.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>CONCLUSION</ns0:head><ns0:p>In this paper, we introduced the extended version of DISSECT-CF-Fog to support actuators and mobility features. Concerning our main contribution, we designed and developed an actuator model that enables broad configuration possibilities for investigating IoT-Fog-Cloud systems. With our extensions, various IoT device behaviours and management policies can be defined and evaluated with ease in this simulator.</ns0:p><ns0:p>We also evaluated our proposal with two different case studies of frequently used IoT applications, and we extended the smart healthcare scenario with large-scale experiments to determine the limitations of our approach. These IoT scenarios utilise the predefined actuator events of the simulator. We also presented how to use different actuator strategies, in order to define specific application (and sensor/actuator) behaviour.</ns0:p></ns0:div>
<ns0:div><ns0:p>In essence, our solution ensures a compact, generic and extendable interface for actuator events, which is unique among state-of-the-art simulators in the area.</ns0:p><ns0:p>Our future work will address more detailed and extended mobility models for migration and resource scaling decisions. We also plan to extend the actuator strategies to model various types and behaviours of IoT entities.</ns0:p></ns0:div>
<ns0:div><ns0:head>SOFTWARE AVAILABILITY</ns0:head><ns0:p>The source code of the extension can be found on GitHub:</ns0:p><ns0:p>https://github.com/andrasmarkus/dissect-cf/tree/actuator/</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>, which aims to ensure mobility support in simulation environments. It associates the position information of a mobile device to a two-dimensional coordinate point, which can be updated dynamically. This simulation solution considers the nomadic mobility model, by its definition, a group of nodes moves randomly from one position to 3/20 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54946:2:0:NEW 29 May 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>sidering both detailed network and computational resource utilisation), nevertheless DISSECT-CF has such abilities. It utilises its own discrete event simulation (DES) engine, which is responsible to manage the time-dependent entities (Event System) and also considers low-level computing resource sharing, for instance balancing network bandwidth (Unified Resource Sharing) or enabling the measurement of 5/20 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54946:2:0:NEW 29 May 2021) Manuscript to be reviewed Computer Science different energy usage patterns of resources (Energy Modelling). Through the Infrastructure Simulation and Infrastructure Management layers, general IaaS clouds can be modelled with different scheduling policies.The current version of DISSECT-CF-Fog strongly build on the subsystems of the basic simulator, which is proven to be accurate. This system has been leveraged since 2017 to realise different aspects of complex IoT-Fog-Cloud systems. First, we added the typical components of IoT systems (denoted by green in Figure2) like IoT Sensor, IoT Device and IoT Application, to model various IoT use cases with detailed configuration options. The naming DISSECT-CF-Fog was introduced at the end of 2019, after developing the Cost Modelling layer to apply arbitrary IoT and cloud side cost schemes of any providers (shown by blue coloured boxes in Figure2). The tree main components of the fog extension, denoted by yellow (Figure2), are the Fog and Cloud Node, which are responsible for the creation of multi-tier fog topology, the Device Strategy, which chooses the optimal node for a device, and (iii) the Application Strategy, which enables offloading decisions between the entities of the fog topology. The strategies can take into account various parameters of the system, such as network properties (e.g. latency), cost and utilised CPU and memory.The main contribution of this paper are denoted by red in Figure2. To satisfy the increasing need for a well-detailed and versatile simulator, we complete the IoT layer by adding the IoT Actuator component, with its corresponding management elements Actuator Strategy and Device Mobility, to realise the business logic for such related behaviours. In the former versions of DISSECT-CF-Fog, the position of IoT devices were static and fixed, and also the backward communication channels (from the computational nodes to actuators through the IoT devices) did not exist.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The evolution of DISSECT-CF-Fog through its components</ns0:figDesc><ns0:graphic coords='7,193.43,327.08,310.17,310.17' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>detailed and comprehensive. Consequently, this extension of the DISSECT-CF-Fog simulator introduces a new type of data fragment in the system, to store specific details throughout the life-cycle of the sensor-generated data. Finally, the DISSECT-CF-Fog actuator should be optional for simulation scenarios. In consideration of certain scenarios, where the examined results do not depend on the existence of actuator behaviours, 7/20 PeerJ Comput. Sci. reviewing PDF | (CS-2020:10:54946:2:0:NEW 29 May 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Applying mobility models is a reasonable decision, because they mimic the movements of IoT devices in a realistic way. The advent of IoT and the technological revolution of smartphones have brought the need for seamless and real time services, which may require an appropriate simulation tool to develop and test the cooperation of Fog Computing and moving mobile devices. The current extension of the DISSECT-CF-Fog was designed to create a precise geographical position representation of computing nodes (fog, cloud) and mobile devices and simulate the movements of devices based on specified mobility policies. As the continuous movement of these devices could cause connection problems we consider the following events shown in Figure 4.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Actuator events related to mobility behaviour</ns0:figDesc><ns0:graphic coords='9,141.73,512.92,413.55,158.44' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Data flow in the DISSECT-CF-Fog</ns0:figDesc><ns0:graphic coords='11,141.73,299.69,413.58,241.61' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Random Walk mobility model</ns0:figDesc><ns0:graphic coords='13,245.13,456.95,206.78,202.45' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc /></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Applied fog ranges in the first scenario</ns0:figDesc><ns0:graphic coords='15,141.73,269.85,413.58,137.76' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>The comparison is based on the following parameters: (i) VM reflects the number of created VMs during the simulation on the cloud and fog nodes, which process the amount of generated data. As we mentioned earlier, our simulation tool is able to calculate the utilisation cost of the resources based on the predefined pricing schemes (Fog+Cloud cost). Delay reflects the timespan between the time of the last produced data and the last VM operation. Runtime is a metric describing how long the simulation ran on the corresponding PC. The rest of the parameters are previously known; they show the numbers of the defined actuator and mobility events. Nevertheless, Timeout data highlights the amount of data lost, which could not be forwarded to any node because the actual position of a vehicle was too far from all available nodes.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>5</ns0:head><ns0:label /><ns0:figDesc>The AWS Architecture Guidelines and Decisions website is available at: https://aws.amazon.com/blogs/compute/low-latencycomputing-with-aws-local-zones-part-1/. Accessed in May, 2021.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc>) 132 125 14 687 1431 119 954 14 192 1392 98 975 14 127 1468 71 153 13 927 1399 Decrease frequency (pc.) 750 751 80 718 8068 684 829 80 845 8104 563 198 80 295 8023 406 155 81 105 8115Restart / stop device (pc.Results and number of events during the second scenario Our findings are depicted in Table5. One can observe that the increasing number of applied fog nodes reduces the average costs per patient, in case of three fog nodes the mean cost (projected on one patient) is around 83.7$. This amount of money is continuously grows as the fog nodes are omitted one by one, the corresponding average operating costs are about 97.7$, 118.7$ and 124.0$, respectively, which means maintaining fog nodes also might be economically worthy.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Delay values of the second scenario</ns0:figDesc><ns0:graphic coords='18,178.44,306.59,340.16,210.69' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='3,193.43,135.30,310.17,310.52' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Comparison of the related simulation tools</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Jha et al. (2020) proposed the IoTSim-Edge simulation framework by extending the CloudSim to</ns0:cell></ns0:row><ns0:row><ns0:cell>model towards IoT and Edge systems. This simulator focuses on resource provisioning for IoT applications</ns0:cell></ns0:row><ns0:row><ns0:cell>considering the mobility function and battery-usage of IoT devices, and different communication and</ns0:cell></ns0:row><ns0:row><ns0:cell>messaging protocols as well. The IoTSim-Edge contains no dedicated class for the actuator components,</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Results of the Random Actuator strategy and number of events during the first scenario</ns0:figDesc><ns0:table /><ns0:note>3 The WonderNetwork website is available at: https://wondernetwork.com/pings. Accessed in October, 2020. 4 The Ericsson website is available at: https://www.ericsson.com/en/mobility-report/articles/massive-iot-in-the-city. Accessed in May, 2021.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Results of the Transport Actuator strategy and number of events during the first scenario</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Results and number of events in the scalability studies</ns0:figDesc><ns0:table><ns0:row><ns0:cell>/ cloud node ratio</ns0:cell><ns0:cell /><ns0:cell>3 / 1</ns0:cell><ns0:cell /><ns0:cell>7 / 1</ns0:cell><ns0:cell>55 / 1</ns0:cell></ns0:row><ns0:row><ns0:cell>Patient (pc.)</ns0:cell><ns0:cell>170 000</ns0:cell><ns0:cell>180 000</ns0:cell><ns0:cell>190 000</ns0:cell><ns0:cell>190 000</ns0:cell><ns0:cell>190 000</ns0:cell></ns0:row><ns0:row><ns0:cell>VM (pc.)</ns0:cell><ns0:cell>24</ns0:cell><ns0:cell>24</ns0:cell><ns0:cell /><ns0:cell>48</ns0:cell><ns0:cell>336</ns0:cell></ns0:row><ns0:row><ns0:cell>Generated data (MB)</ns0:cell><ns0:cell>1 196</ns0:cell><ns0:cell>1 261</ns0:cell><ns0:cell /><ns0:cell>1 513</ns0:cell><ns0:cell>1 679</ns0:cell></ns0:row><ns0:row><ns0:cell>Fog + cloud cost ($)</ns0:cell><ns0:cell>197.6</ns0:cell><ns0:cell>208.5</ns0:cell><ns0:cell /><ns0:cell>244.9</ns0:cell><ns0:cell>674.9</ns0:cell></ns0:row><ns0:row><ns0:cell>Delay (min.)</ns0:cell><ns0:cell>6256.0</ns0:cell><ns0:cell>6796.0</ns0:cell><ns0:cell>Out of memory</ns0:cell><ns0:cell>5886.0</ns0:cell><ns0:cell>9.9</ns0:cell></ns0:row><ns0:row><ns0:cell>Runtime (sec.)</ns0:cell><ns0:cell>186</ns0:cell><ns0:cell>256</ns0:cell><ns0:cell /><ns0:cell>159</ns0:cell><ns0:cell>163</ns0:cell></ns0:row><ns0:row><ns0:cell>Increase frequency (pc.)</ns0:cell><ns0:cell>624 860</ns0:cell><ns0:cell>657 725</ns0:cell><ns0:cell /><ns0:cell>790 999</ns0:cell><ns0:cell>810 153</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Decrease frequency (pc.) 3 557 783 3 751 754</ns0:cell><ns0:cell /><ns0:cell cols='2'>4 498 906 4 049 325</ns0:cell></ns0:row><ns0:row><ns0:cell>Restart / stop device (pc.)</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Dear Editor and anonymous Reviewers,
Thank you very much for your time and the valuable comments on the
manuscript. We have made a major revision of the paper regarding the
changes requested, and we tried to take all your comments into account. The
detailed response to the comments follows below.
Best regards,
Andras Markus, Mate Biro, Gabor Kecskemeti and Attila Kertesz
--Reviewer 3:
Concern #1
-
“The storyline of the paper requires some improvements, to make the paper more
objective (in the abstract, in introduction).“
-
“The abstract needs to be simplified and present concrete contributions and
achieved results.“
We have modified the abstract and introduction section to respond to this
comment.
Concern #2
-
“A background section introducing DISSECT-CF-Fog should be added, as it
would help to understand whether or not the contributions of this paper against
prior work are relevant.“
We have revised the description of DISSECT-CF-Fog in Section 3, accordingly.
We also added an architectural figure of the simulator to highlight the new
contributions.
Concern #3
-
“The terms used, sensor/actuator is misleading and should be revised. The
authors address IoT environments, where there are, from an abstract
perspective, publishers and subscribers. An actuator is a device that is used to
manipulate the physical environment, e.g., temperature control. While a
consumer can be an actuator but also any other type of cyber-physical system.
Or a software-based agent. If the authors truly want to model actuators, which
type are they addressing? Linear, motors, relays, solenoids?“
We responded to this comment in the paper by strengthening Section 1, 2 and
3 in this regard. We clarified how we consider the actuator entity, and referred
to related definitions of WSN and IoT systems.
Concern #4
-
“Also, the 'mobility' feature should benefit from a more clear description. Which
type of mobility is being covered? consumer? publisher? any node?“
-
“The decision on relying on random mobility (which has been shown not to be
the best model in terms of simulations) needs to be justified.“
-
“Not clear what is the value-add for mobile scenarios, and the introduction
states that the extension has been developed for an Internet of Mobile things…“
We also added a discussion and a mobility citation in the revision on this issue
in Section 3 and 3.2, and we clarified the applied mobility models in Section 3.4
as well.
Concern #5
-
“The experimental aspects described in section 4 needs to be made more
objective, avoiding claims such as relying on two of the most frequently applied
use-cases - do the authors have a proof on this?“
Thank you for your suggestion, we agree that “frequently applied use-cases”
can be misleading in that form. These two use cases (IoT healthcare and
transport) belong to the main open research challenges of IoT (we added a
citation). We modified Section 4 accordingly, and we also added a reference as
a proof.
Concern #6
-
“Several papers debate scenarios for Fog relying on different simulators. One of
the problems with the experimental design are the design choices, such as
selecting latency to be between 10 ms and 20ms due to fog node
locality/co-placement. The authors should justify the choice of parameters, or
propose parameters that can provide a better understanding of the variability
impact, creating scenarios with values for 'low', 'medium', 'high' delay, etc.
Similar remarks go to other parameters, such as node cost.“
-
“the experimental design impacts the overall validity, as the selection of
parameters seems to have been done in an ad-hoc way.“
Thank you for this useful comment. Our goal was to deal with as realistic
parameters as possible during the evaluation of our extension. Unfortunately,
we could not avoid considering random factors in some cases, for instance
the position of the fog nodes was chosen randomly, but the latency between them
was set based on official ping statistics. Similarly, the chosen VM pricing
values follow the official VM instance types of AWS. We also strengthened
Section 4 in this regard with additional references and footnotes to justify our
settings.
Concern #7
-
The implementation section currently provides a high-level specification of the
actuator module. This should be changed to make the flow of the implementation
clearer.
We revised Section 3.1, 3.2, 3.3 and 3.4 to make the flow clearer and more
readable. We tried to separate the high-level model description into Sections 3.1 and
3.2, and the lower-level implementation-related details into Sections 3.3 and 3.4. The
exact, low-level specifications and implementation (e.g. Java code of the
actuator) can be seen on GitHub.
Concern #8
-
“a patient, in the simulations, stands exactly for what? 1 data flow, VBR or
CBR? What is the size of the packets sent? The overall load is given, but what is
the impact in terms of patients served? Similarly for all parameters described in
Table 5.”
-
“for the large-scale experiments, where are the results?”
We responded to this comment in Section 4.2, accordingly. The summarised
results of the large-scale experiments are shown in Table 6, and discussed in
Section 4.2.1. IoT devices in our simulator can generate data in fixed, static
intervals (which can be preset, and it can also be changed during an event in
the simulation). The size of such sensor data is also preset, and can be
changed during the simulation similarly. Tables 5 and 6 detail what happens
when we increase the number of patients served (100 to 100 000, and 170 000
to 190 000).
Concern #9
-
“the value-add of the proposed extension is not clear, as there was no
comparison against a specific benchmark. “
Our main contribution is a modified simulation tool, which enables a more fine-
grained analysis of IoT applications and corresponding IoT-Fog-Cloud
systems for developers and researchers. Similar simulation with such rich
configuration options was not available previously, to the best of our
knowledge. The performance analysis of the software (of the extended
DISSECT-CF-Fog simulator) with certain benchmarks was not the purpose of
this work, though previous works addressed this issue concerning the base
simulator (e.g. https://doi.org/10.1016/j.simpat.2019.102042).
" | Here is a paper. Please give your review comments after reading it. |
193 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The eXtensible Markup Language (XML) files are widely used by the industry due to their flexibility in representing numerous kinds of data. Multiple applications such as financial records, social networks, and mobile networks use complex XML schemas with nested types, contents, and/or extension bases on existing complex elements, or large real-world files. A great number of these files are generated each day, and this has influenced the development of Big Data tools for their parsing and reporting, such as Apache Hive and Apache Spark. For these reasons, multiple studies have proposed new techniques and evaluated the processing of XML files with Big Data systems. However, the usual approach in such works involves the simplest XML schemas, even though real data sets are composed of complex schemas. Therefore, to shed light on complex XML schema processing for real-life applications with Big Data tools, we present an approach that combines three main methods for parsing XML files: cataloging, deserialization, and positional explode. For cataloging, the elements of the XML schema are mapped into root, arrays, structures, values, and attributes. Based on these elements, the deserialization and positional explode are straightforwardly implemented. To demonstrate the validity of our proposal, we develop a case study by implementing a test environment to illustrate the methods, using real data sets provided by the performance management systems of two mobile network vendors. Our main results confirm the validity of the proposed method for different versions of Apache Hive and Apache Spark, report the query execution times for Apache Hive internal and external tables and Apache Spark data frames, and compare the query performance in Apache Hive with that of Apache Spark.</ns0:p><ns0:p>Another contribution is a case study in which a novel solution is proposed for data analysis in the performance management systems of mobile networks.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The eXtensible Markup Language (XML) is now widely used on the Internet for different purposes.</ns0:p></ns0:div>
<ns0:div><ns0:p>There are numerous XML-based applications that utilize tag-based and nested data structures <ns0:ref type='bibr'>(Chituc,</ns0:ref> ...). As explained in the Related Work section, the proposals generally test simple XML schemas, even though real data sets are composed of complex schemas. Therefore, there is no proof they are useful in the real world. Moreover, it is complicated to reproduce the proposals, as no detailed procedure is presented.</ns0:p><ns0:p>To address the lack of methods for processing XML files with complex schemas, we present an approach in this study based on three main methods: (1) cataloging, (2) deserialization, and (3) positional explode. In (1), we identify the main elements within an XML Schema Definition (XSD) and map them into a complete list of items with a systematic order: root, arrays, structures, values, and attributes. In (2), the XML file is converted into a table with rows and columns. Finally, in (3), the elements of the arrays are placed in multiple rows to improve the visualization for the final user.</ns0:p><ns0:p>To demonstrate the validity of our proposal in a Big Data environment, we present a case study that uses 3G and 4G performance management files from two mobile network vendors as real data sets. This is because 3G and 4G are now the most commonly used mobile technologies in the world <ns0:ref type='bibr' target='#b21'>(Jabagi et al., 2020)</ns0:ref>. Furthermore, mobile networks are growing at a rapid pace: according to the Global System for Mobile Communications, there were around 8 billion connections worldwide in the year 2020, and this is expected to reach 8.8 billion connections by 2025 <ns0:ref type='bibr' target='#b12'>(GSM, 2020)</ns0:ref>. To provide mobile services to these users, thousands of network elements have been deployed around the world. These constantly generate performance management data to monitor the network status close to real time. For this reason, this large amount of data must be queried in the shortest possible time to offer an excellent service to the end user and to efficiently prevent or detect outages <ns0:ref type='bibr' target='#b27'>(Martinez-Mosquera et al., 2020)</ns0:ref>. These data sets are XML files composed of complex schemas with nested structures and arrays.</ns0:p><ns0:p>In this paper, we utilize the Apache Hadoop framework for the experiments, as this is an open source solution that has been widely deployed in several projects. Moreover, it provides a distributed file system with the ability to process Big Data with both efficiency and scalability <ns0:ref type='bibr' target='#b24'>(Lin et al., 2013)</ns0:ref>. In addition, our research addresses the evaluation of query execution times for complex XML schemas in different versions of the tools, with the aim of validating the proposal for old and new software developments.</ns0:p><ns0:p>For the test environment, the Hadoop Distributed File System (HDFS) was used as a data lake, as this is a powerful system that stores several types of data <ns0:ref type='bibr' target='#b2'>(Apache, 2021a)</ns0:ref>. Additionally, we selected Apache Hive due to its native support of XML files <ns0:ref type='bibr' target='#b3'>(Apache, 2021b)</ns0:ref>, and the existing parsing serializer/deserializer tool from IBM to create the external and internal tables.
Finally, we evaluate query execution times in Apache Spark <ns0:ref type='bibr' target='#b4'>(Apache, 2021c)</ns0:ref> through the implementation of data frames from XML files with complex schemas.</ns0:p><ns0:p>Using the proposed methods, tables and data frames can be created in a more intuitive form. We present all the processes involved in creating Apache Hive internal and external tables and Apache Spark data frames; the results of the evaluation of query execution times; and, finally, a comparison of the results between Apache Hive and Apache Spark.</ns0:p><ns0:p>Our main research questions are as follows:</ns0:p><ns0:p>1. Using the proposed method, is it possible to automatically create Apache Hive external and internal tables and Apache Spark data frames for complex schemas of XML files?</ns0:p><ns0:p>2. In terms of query execution time, which type of Apache Hive table is more efficient: internal or external?</ns0:p><ns0:p>3. Which system, Apache Hive or Apache Spark, provides the shortest query response times?</ns0:p><ns0:p>The remainder of this paper is organized as follows. The next section, Related Concepts, reviews concepts related to our work to facilitate understanding of this area. The subsequent Related Work section presents existing relevant studies and the main contributions of our research. The Methods section presents the methods proposed to identify the elements of the catalog and the process of applying the deserialization and positional explode methods in Apache Hive and Apache Spark. The Case Study section presents the experimental results of the query execution times for Apache Hive internal and external tables and Apache Spark data frames using 3G and 4G performance management files from two mobile network vendors. Finally, the Conclusions section draws final conclusions, answers the research questions, and discusses possible future work.</ns0:p></ns0:div>
<ns0:div><ns0:head>RELATED CONCEPTS</ns0:head><ns0:p>To facilitate understanding, the following are brief descriptions of the main concepts employed in this work.</ns0:p></ns0:div>
<ns0:div><ns0:head>XML</ns0:head><ns0:p>XML (W3C, 2016) is a flexible text format that was developed by the XML working group of the World Wide Web Consortium (W3C) in 1996. It is based on the Standard Generalized Markup Language (SGML), or ISO 8879. The XML language describes a class of data objects called XML documents that are designed to carry data, with a focus on what the data are and not on how the data look. XML has been widely adopted because the language has no predefined tags. Thus, the author defines both the tags and the document structure. XML stores data in plain text format, making it human-readable and machine-readable. This provides an independent way of storing, transporting, and sharing data.</ns0:p><ns0:p>An XML schema describes the structure of an XML document and is also referred to as an XSD. Within an XSD, the elements of the XML document are defined. Elements can be simple or complex. A simple element can contain only text, but it can be of several different types, such as boolean, string, decimal, integer, date, time, and so on. By contrast, a complex element contains other simple or complex types (W3C, 2016).</ns0:p><ns0:p>Complex XSD is considered in several studies <ns0:ref type='bibr' target='#b22'>(Krishnamurthy et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b32'>Murthy and Banerjee, 2003;</ns0:ref><ns0:ref type='bibr' target='#b33'>Rahm et al., 2004)</ns0:ref>: those that use complex types, complex contents and/or extension bases on existing complex elements, or large real-world XSD. The complex contents specify that the new type will have more than one element. Extension bases refer to the creation of new data types that extend the structure defined by other data types (W3C, 2016). Large XSD is characterized by the number of namespaces, elements, and types in the XML files. Appendix A presents an example of a complex XSD that follows the 3rd Generation Partnership Project (2005) format used for performance management in mobile networks. In this work, the files used for the case study are based on this XSD.</ns0:p><ns0:p>The XSD can also define attributes that contain data related to a specific element. As best practice, attributes are used to store metadata of the elements, and the data itself is stored as the value of the elements (W3C, 2016). The XML syntax below presents an example of the use of attributes and values in the elements: an attribute is used to store the identification number of the element, and a value is used to store the data.</ns0:p></ns0:div>
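<ns0:div><ns0:p>The following minimal fragment, with assumed element names, is a sketch of the convention just described: the id attribute stores metadata, while the data itself is stored as the element value.</ns0:p><ns0:p><measurement id='101'>
    <value>75.3</value>
</measurement></ns0:p></ns0:div>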
<ns0:div><ns0:head>Apache Hive</ns0:head><ns0:p>Apache Hive <ns0:ref type='bibr' target='#b3'>(Apache, 2021b)</ns0:ref> is a data warehouse software that incorporates its own query language based on SQL, named Apache HiveQL, to read and write data sets that reside in distributed storage such as HDFS. Apache Hive also makes use of a Java Database Connectivity (JDBC) driver to allow queries from clients such as the Apache Hive command line interface, Beeline, and Hue. Apache Hive fundamentally works with two different types of tables: internal (managed) and external.</ns0:p><ns0:p>With the use of internal tables, Apache Hive assumes that the data and their properties are owned by Apache Hive and can only be changed via Apache Hive commands; however, the data reside in a normal file system <ns0:ref type='bibr' target='#b11'>(Francke, 2021)</ns0:ref>.</ns0:p><ns0:p>Conversely, Apache Hive external tables are created using external storage of the data; for instance, the HDFS directory where the XML files are stored. The external tables are created in Apache Hive but the data are kept in HDFS. Thus, when an external table is dropped, only the schema in the database is dropped, not the data <ns0:ref type='bibr' target='#b11'>(Francke, 2021)</ns0:ref>.</ns0:p></ns0:div>
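<ns0:div><ns0:p>A minimal HiveQL sketch of this distinction; the table names, columns, and HDFS path below are hypothetical:</ns0:p><ns0:p>-- internal (managed) table: schema and data are owned by Apache Hive
CREATE TABLE pm_internal (ne_id STRING, counter_value INT);

-- external table: only the schema is managed; the data stay in HDFS
CREATE EXTERNAL TABLE pm_external (ne_id STRING, counter_value INT)
LOCATION '/hdfs/pm';</ns0:p><ns0:p>Dropping pm_internal removes both the schema and the data, whereas dropping pm_external removes only the schema.</ns0:p></ns0:div>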
<ns0:div><ns0:head>Apache Spark</ns0:head><ns0:p>Apache Spark <ns0:ref type='bibr' target='#b4'>(Apache, 2021c)</ns0:ref> is an engine for large-scale data processing. It can work with a set of libraries such as SQL, Data Frames, MLlib for machine learning, GraphX, and Apache Spark Streaming. These libraries can be combined in the same application. Apache Spark can be used from the Scala, Python, R, and SQL shells. This tool also runs on HDFS. An Apache Spark data frame is a data set organized into named columns, similar to a table in a relational database. Data frames can be constructed from structured data files, tables in Apache Hive, external databases, or existing Resilient Distributed Datasets (RDDs). An RDD is a collection of elements partitioned across the nodes of the cluster that can be operated on in parallel. In general, an RDD is created by starting with a file in the Hadoop file system <ns0:ref type='bibr' target='#b4'>(Apache, 2021c)</ns0:ref>.</ns0:p></ns0:div>
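<ns0:div><ns0:p>A minimal Scala sketch of both constructs, assuming the Apache Spark 1.x shell; the paths and table names are hypothetical:</ns0:p><ns0:p>// RDD created from a file in the Hadoop file system
val rdd = sc.textFile("hdfs:///pm/raw.xml")

// data frame constructed from an existing Apache Hive table
val df = sqlContext.table("pm_internal")</ns0:p></ns0:div>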
<ns0:div><ns0:head>Performance Management Files in Mobile Networks</ns0:head><ns0:p>The mobile network sector is one of the fastest-growing industries in the world, and its technologies have evolved rapidly over the last few decades <ns0:ref type='bibr' target='#b27'>(Martinez-Mosquera et al., 2020)</ns0:ref>.</ns0:p><ns0:p>A mobile network is composed of Network Elements (NEs) that produce correlated Performance Measurement (PM) data managed by a Network Manager (NM) <ns0:ref type='bibr'>(3rd Generation Partnership Project, 2005)</ns0:ref>. PM data are used to monitor the operator's network, generate alarms in case of failures, and support decision-making in the areas of planning and optimization. In summary, PM data reflect the behavior of network traffic, almost in real time, through the values of the measurements transmitted in every file.</ns0:p><ns0:p>PM files are based on the XSD proposed in the Technical Specification 32.401 version 5.5.0 from the 3rd Generation Partnership <ns0:ref type='bibr'>Project (2005)</ns0:ref>, which is presented in Appendix A. Each vendor then adapts and personalizes them according to their needs.</ns0:p></ns0:div>
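<ns0:div><ns0:p>For orientation, a PM file of this kind has roughly the following shape. This is a hedged sketch assembled from the vendor B tag names listed later in Table 2; the identifiers, counter names, and measurement values are invented:</ns0:p><ns0:p>&lt;measCollecFile&gt;
  &lt;fileHeader fileFormatVersion='32.401 V5.0' vendorName='B'&gt;
    &lt;fileSender elementType='RNC'/&gt;
    &lt;measCollec beginTime='2021-01-01T00:00:00'/&gt;
  &lt;/fileHeader&gt;
  &lt;measData&gt;
    &lt;managedElement userLabel='RNC01'/&gt;
    &lt;measInfo measInfoID='1'&gt;
      &lt;measTypes&gt;attTCHSeizures&lt;/measTypes&gt;
      &lt;measValue measObjLdn='RNC01/Cell1'&gt;
        &lt;measResults&gt;234&lt;/measResults&gt;
        &lt;suspect&gt;false&lt;/suspect&gt;
      &lt;/measValue&gt;
    &lt;/measInfo&gt;
  &lt;/measData&gt;
  &lt;fileFooter&gt;
    &lt;measCollec endTime='2021-01-01T00:15:00'/&gt;
  &lt;/fileFooter&gt;
&lt;/measCollecFile&gt;</ns0:p></ns0:div>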
<ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>A review of the literature identified research related to querying XML documents that involves numerous methods and algorithms. Most of these publications test their approaches using simple XSD; only a few are related to Apache Hive and Apache Spark. In the following section, we review the research most closely related and relevant to our study and explain how these studies differ from our approach. This comparison highlights some of the contributions of our work. <ns0:ref type='bibr' target='#b16'>Hricov et al. (2017)</ns0:ref> evaluate the computation times required to query XML documents stored in a distributed system, using Apache Spark SQL and the XML Path (XPath) query language over three different data sets. The XML files contain only four attributes. They perform SQL queries to evaluate XPath queries and the Apache Spark SQL API. The main difference from our study is that we evaluate the query execution times for XML documents of complex types from real-life mobile networks in Apache Hive and Apache Spark. Furthermore, we explain in detail how to apply our proposed method to facilitate its replication in other studies, whereas <ns0:ref type='bibr' target='#b16'>Hricov et al. (2017)</ns0:ref> only summarize their proposal. <ns0:ref type='bibr' target='#b25'>Luo et al. (2014)</ns0:ref> propose a schema to store XML documents in Apache Hive, named the open Apache Hive schema. The proposal consists of defining three columns: markup, content, and Uniform Resource Identifier (URI). Every tag is stored in the markup column, content refers to the value of the attribute, and URI indicates the data location. In contrast to our research, the cited study presents neither examples for XSD with complex types nor the results of the computation times in Apache Hive and Apache Spark. Moreover, the cited study only presents an approach for external tables, whereas our study includes internal tables and data frames for Apache Spark. <ns0:ref type='bibr' target='#b17'>Hsu et al. (2012)</ns0:ref> propose a system based on the Apache Hadoop cloud computing framework for indexing and querying a large number of XML documents. They test the response times for streaming and batched queries. They state that the XML files need to be parsed and that indexes are then produced to be stored as HDFS files; however, no details are given about the method employed to process the data, nor about the XSD used in the research. They present the execution times obtained for index construction and query evaluation. By contrast, we explain in detail how to implement our approach using real data sets from mobile networks for Apache Hive and Apache Spark. <ns0:ref type='bibr' target='#b15'>Hong and Song (2007)</ns0:ref> propose a method for permanently storing XML files in a relational database, MS SQL Server. For their tests, they employ a web-based virtual collaboration tool called VCEI. For each session, an XML format file is generated with four main entities: identification, opinion, location in the image, and related symbols. In this file, the opinions of the users are associated with digital images. A single table is created with the generated XML document. In contrast to our work, the authors do not focus on Big Data tools such as Apache Hive or Apache Spark, and they only use an XSD with simple types.
<ns0:ref type='bibr' target='#b26'>Madhavrao and Moosakhanian (2018)</ns0:ref> propose a method for combining weather services to provide digital air traffic data in standardized formats, including XML and Network Common Data Form (NetCDF), using a Big Data framework. This work presents an example with an XML document. The reporting tool used is Apache Spark SQL, but no details about its implementation are presented. This approach</ns0:p></ns0:div>
<ns0:div><ns0:p>focuses on providing a query interface for flight and weather data integration but, unlike our study, does not evaluate query execution times in Apache Hive and Apache Spark.</ns0:p><ns0:p>Zhang and Mahadevan (2019) present a deep learning-based model to predict the trajectory of an ongoing flight using massive raw flight tracking messages in XML format. They cite the need to parse the raw flight XML files using the package 'com.databricks.spark.xml' in Apache Spark to extract attributes such as arrival airport, departure airport, timestamp, flight ID, position, altitude, velocity, target position, and so on. However, no detail about the implementation is provided, nor is there any information on the XSD used or on the behavior in Apache Hive and Apache Spark that we consider in our study. <ns0:ref type='bibr' target='#b37'>Vasilenko and Kurapati (2015)</ns0:ref> discuss the use of complex XML in the enterprise and the constraints in Big Data processing with these types of files. They thus propose a detailed procedure to design XML schemas. They state that the Apache Hive XML serializer-deserializer and explode techniques are suitable for dealing with complex XML, and they present an example of the creation of a table with a fragment of a complex XML file. By contrast, in our work, we propose the cataloging procedure and also evaluate our approach for Apache Spark data frames. We additionally present the query execution times obtained after applying our proposal.</ns0:p><ns0:p>Other studies also utilize XML documents to evaluate the processing time with HDFS and the Apache Spark engine; however, the files used for the tests contain simple types with a few attributes, or do not present schemas <ns0:ref type='bibr' target='#b13'>(Hai et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b23'>Kunfang, 2016;</ns0:ref><ns0:ref type='bibr' target='#b40'>Zhang and Lu, 2021)</ns0:ref>.</ns0:p><ns0:p>Finally, our research explains how to use cataloging, deserialization, and positional explode to process complex XSD in Apache Hive internal and external tables and Apache Spark data frames; moreover, we demonstrate the validity of our proposal in a test Big Data environment with real PM XML files from two mobile network vendors.</ns0:p></ns0:div>
<ns0:div><ns0:head>METHODS</ns0:head><ns0:p>The solution we propose for querying complex XML schemas is based on Big Data systems: HDFS for storage, and Apache Hive and Apache Spark for reporting. Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref> sketches the architecture employed to evaluate the query execution times for Apache Hive and Apache Spark. HDFS provides a unified data repository to store raw XML files and external tables. In the reporting layer, Apache Hive and Apache Spark are connected to HDFS to perform the queries, through the Apache Hive Query Language (HQL) <ns0:ref type='bibr' target='#b6'>(Cook, 2018)</ns0:ref> and XPath expressions for Apache Hive <ns0:ref type='bibr' target='#b36'>(Tevosya, 2011)</ns0:ref>, and through the Scala shell for Apache Spark <ns0:ref type='bibr' target='#b4'>(Apache, 2021c)</ns0:ref>.</ns0:p><ns0:p>In the extract phase presented in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>, it is assumed that a cluster-based approach assigns a system to manage the collection of the XML files from the sources. Then, in the load phase, these data are stored in a repository such as HDFS. No changes to the format of the XML files are made in the load process; therefore, no transformation step is involved.</ns0:p></ns0:div>
<ns0:div><ns0:p>Once the data are available on the HDFS, we identify three main vectors: (1) vector A = {A_i}; i = 1, ..., N, where N is the total number of XML files; (2) vector X = {X_i}; i = 1, ..., M, where M is the number of distinct XSD; and (3) vector E = {E_i}; i = 1, ..., P, where P is the number of different element types in the XSD according to the defined catalog (short, int, float, among others). For clarity, we present an example of the identification of the vectors and their respective catalog for the XSD in Figure <ns0:ref type='figure'>2</ns0:ref>. Figure <ns0:ref type='figure'>2</ns0:ref> presents an example of a complex XSD. This contains the following complex types: element1, element11, element111, element112, and element12. Furthermore, element11 extends element111, and element111 extends element112.</ns0:p><ns0:p>For Figure <ns0:ref type='figure'>2</ns0:ref>, there is only one XML file; thus, vector A is identified with one element A_1:</ns0:p><ns0:formula xml:id='formula_0'>A = {A_1}</ns0:formula><ns0:p>A_1 is composed of one schema; thus, vector X is composed of one element X_1:</ns0:p><ns0:formula xml:id='formula_1'>X = {X_1}</ns0:formula><ns0:p>Inside the X_1 schema, the element types from Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref> must be identified; therefore, vector E is composed of five vectors: (1) root E&lt;&gt;, (2) arrays E[], (3) structures E{}, (4) attributes E@, and (5) values E#:</ns0:p><ns0:formula xml:id='formula_2'>E = {E&lt;&gt;, E[], E{}, E@, E#}</ns0:formula><ns0:p>For Figure <ns0:ref type='figure'>2</ns0:ref>, the five vectors contain the following elements:</ns0:p><ns0:p>1. E&lt;&gt; = {root}; //The main tag is the root.</ns0:p><ns0:p>2. E[] = {element1, element11}; //Two arrays.</ns0:p><ns0:p>3. E{} = {element111, element112, element12}; //Three structures.</ns0:p><ns0:p>4. E@ = {attribute111, attribute112, attribute12}; //Three attributes.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.'>E# = {values of element111, values of element112, values of element12}</ns0:head><ns0:p>; //Three elements with simple content.</ns0:p><ns0:p>After the XML element types are mapped with the catalog proposed in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>, tables for Apache Hive or data frames for Apache Spark can be created. Figure <ns0:ref type='figure'>3</ns0:ref> summarizes the workflow for the three main methods: (1) cataloging, (2) deserialization, and (3) positional explode. For (1), the E&lt;&gt; vector identifies the root of the XSD used, while the E[] and E{} vectors allow easy identification of the indexes. An index is identified for each array. It is also important to note that when a structure is placed before an array, this structure also has an index. Without an index, internal queries to arrays load all the rows belonging to the array; with an index, only the specific record in a table is loaded.</ns0:p></ns0:div>
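<ns0:div><ns0:p>For illustration, an XML instance consistent with these vectors might look as follows. This is a hypothetical sketch, since Figure 2 defines only the schema; the attribute values are invented, and the comments indicate the kind of indexed XPath expressions that the mapping later supplies:</ns0:p><ns0:p>&lt;root&gt;
  &lt;element1&gt;                                              &lt;!-- array: element1[1] --&gt;
    &lt;element11&gt;                                           &lt;!-- array: element11[1] --&gt;
      &lt;element111 attribute111='a'&gt;value111&lt;/element111&gt;  &lt;!-- structure --&gt;
      &lt;element112 attribute112='b'&gt;value112&lt;/element112&gt;  &lt;!-- structure --&gt;
    &lt;/element11&gt;
    &lt;element12 attribute12='c'&gt;value12&lt;/element12&gt;        &lt;!-- structure --&gt;
  &lt;/element1&gt;
  &lt;!-- e.g., the value of element111 is reached via root/element1[1]/element11[1]/element111 --&gt;
&lt;/root&gt;</ns0:p></ns0:div>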
<ns0:div><ns0:p>Finally, the XPath queries to the nodes are determined for each element of the E@ and E# vectors. The mapped indexes also supply the expressions of the XPath queries. The number of queries corresponds to the number of columns in an Apache Hive column-separated table or Apache Spark data frame. The detailed procedure to create Apache Hive internal and external tables, as well as Apache Spark data frames, is explained in the following sections.</ns0:p></ns0:div>
<ns0:div><ns0:head>Creation of Apache Hive Tables for Complex XSD</ns0:head><ns0:p>First, the input is the XML file denoted by A_i stored in HDFS. The XSD that belongs to A_i is named vector X_i. Vector E is composed of the root, array, structure, attribute, and value elements, identified after performing the mapping with the catalog proposed in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>. The first phase in the creation of the Apache Hive table relies on XmlInputFormat from the Apache Mahout project <ns0:ref type='bibr' target='#b14'>(Holmes, 2014)</ns0:ref> to shred the input file into XML fragments based on the specific start and end tags determined in the vector E&lt;&gt;. The XML Deserializer then queries the XML fragments with an XPath processor to populate column-separated Apache Hive tables, where the number of columns corresponds to the number of elements in vector E@ plus E#. Method (3) may then be needed to explode the array data into multiple rows with the help of the elements identified in the vectors E[] and E{}.</ns0:p><ns0:p>The entire process is summarized in Algorithm 1. The input is the XML file denoted by A_i, in which the XML schema and the vector E are identified. Create RawTable or Create ExternalRawTable is applied using the deserialization method, the output of which serves as the input to the Create ColumnSeparatedInternalTable or Create ColumnSeparatedExternalTable function, respectively. In these functions, positional explode methods are applied. Once the workflow is completed, the data from the XML files are stored in rows and columns in Apache Hive tables, and queries through HQL can be performed. This process applies to both internal and external tables.</ns0:p></ns0:div><ns0:div><ns0:head>Creation of Apache Spark Data Frames for Complex XSD</ns0:head><ns0:p>As stated in the Related Concepts section, Apache Spark can work with a set of libraries. In this work, we select Data Frames, as this library is similar to the column-separated Apache Hive tables and is independent of other database engines such as Apache Hive.</ns0:p><ns0:p>According to the methods proposed in Figure <ns0:ref type='figure'>3</ns0:ref>, the catalog, deserialization, and explode methods are also suitable for the Apache Spark engine. The difference in Apache Spark is that there are no internal and external table concepts; therefore, the XML is stored in a data frame variable and the queries are performed over these data frames. As in Apache Hive, arrays must be exploded, and values and attributes are the fields to be queried.</ns0:p><ns0:p>The entire process for creating data frames in Apache Spark is summarized in Algorithm 2. The input is the XML file denoted by A_i stored previously in HDFS, and the XSD that belongs to A_i is named vector X_i. Vector E is composed of the root, array, structure, attribute, and value elements, identified after performing the mapping with the catalog proposed in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>.</ns0:p><ns0:p>The function CreateDataFrame is applied to vector X_i using the deserialization method to return the data frame. The output of CreateDataFrame is the input of the CreateColumnSeparatedDataFrame function, where the number of elements in vectors E@ and E# corresponds to the number of columns, and the elements in vectors E[] and E{} drive the positional explode method. Finally, the data frame with column-separated values is returned. Appendices B and C present examples of the Apache Hive tables and Apache Spark data frames created for the example file in Appendix A.</ns0:p></ns0:div>
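<ns0:div><ns0:p>As a concrete sketch of the deserialization step for an external raw table, the following HiveQL is one possible rendering of Algorithm 1. It assumes the hivexmlserde library (Intel, 2013) is available as a JAR (the JAR name and version are assumptions), and it uses the vendor A root tag mdc and the HDFS location from the algorithm; the table and column names are illustrative:</ns0:p><ns0:p>ADD JAR hivexmlserde-1.0.5.3.jar;  -- assumed library JAR and version

CREATE EXTERNAL TABLE raw_xml (xmldata STRING)
ROW FORMAT SERDE 'com.ibm.spss.hive.serde2.xml.XmlSerDe'
WITH SERDEPROPERTIES ('column.xpath.xmldata' = '/mdc')
STORED AS
  INPUTFORMAT 'com.ibm.spss.hive.serde2.xml.XmlInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat'
LOCATION '/hdfs'
TBLPROPERTIES ('xmlinput.start' = '&lt;mdc', 'xmlinput.end' = '&lt;/mdc&gt;');</ns0:p><ns0:p>The internal variant omits the EXTERNAL keyword and the LOCATION clause, so the data are loaded into the Apache Hive directory instead.</ns0:p></ns0:div>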
<ns0:div><ns0:head>CASE STUDY</ns0:head><ns0:p>The implementation of the Big Data framework was performed using a cloud computing solution composed of a virtual machine VM.Standard 2.2 with the following hardware features:</ns0:p><ns0:p>• 200 GB of storage.</ns0:p><ns0:p>• Two VCPU cores and two threads per core, with 4 GHz in total.</ns0:p><ns0:p>• 30 GB of RAM.</ns0:p><ns0:p>At the software level, we tested our proposal in two environments.</ns0:p><ns0:p>Version 1: Apache Hadoop HDFS version 2.6.0, Apache Hive version 1.1.0, Apache Spark version 1.6.0, Java version 1.7.0_67, and Scala version 2.10.5.</ns0:p><ns0:p>Version 3: Apache Hadoop HDFS version 3.2.1, Apache Hive version 3.1.2, Apache Spark version 3.0.1, Java version 1.8.0_271, and Scala version 2.12.10.</ns0:p><ns0:p>It was important to evaluate two versions to verify that the proposal is applicable in any version and that the query times improve in recent versions. Furthermore, not all potential users always have access to the latest versions of the software. For clarity, we present a case study using PM XML files taken from two mobile network vendors. As stated in the Introduction section, these data were selected because mobile networks generate a high volume of these files every second. For instance, in the United States of America in 2019, there were 395,562 cell sites <ns0:ref type='bibr' target='#b34'>(Statista, 2020)</ns0:ref>. Therefore, taking a PM file of 20 KB as reference, the total file size is estimated at approximately 8 GB per second. These samples are based on the XSD presented in Appendix A according to the 3GPP standard. The 3GPP standard maps the tags defined in the file format definition to those used in the XML file <ns0:ref type='bibr'>(3rd Generation Partnership Project, 2005)</ns0:ref>. The data sets used for the tests are the following:</ns0:p><ns0:p>1. A PM XML file from a real 3G mobile network from a vendor named A, with the schema presented below. 2. A PM XML file from a real 4G mobile network from a vendor named B.</ns0:p><ns0:p>In this research, we evaluate the query execution times for Apache Hive and Apache Spark after applying the workflow proposed in Figure <ns0:ref type='figure'>3</ns0:ref>. We utilized the XSD of the two PM XML files from the mobile network vendors named A and B. Table <ns0:ref type='table' target='#tab_7'>2</ns0:ref> presents the catalog mapping of the XSD elements for the A and B mobile network vendors. The Root column identifies the roots of the used XSD, Arrays identifies all the tags with arrays, Structures identifies all the tags with structures, and Attributes and Values presents all the attributes and values of the XML files.</ns0:p><ns0:p>Once the arrays in the XSD are identified in Table <ns0:ref type='table' target='#tab_7'>2</ns0:ref>, this column allows easy identification of the indexes. Thus, an index is identified for each array; however, as mentioned previously, it is important to observe whether a structure is placed before an array, as this structure will also have an index. For vendor B, for instance, the following structures precede attributes: measCollec{} beginTime@, fileSender{} elementType@, measData{} userLabel@, grandPeriod{} duration@, grandPeriod{} endTime@, and repPeriod{} duration@. It is important to highlight that it is necessary to create the raw table to deal only with the parent labels of the XML file, as we are parsing complex schemas <ns0:ref type='bibr' target='#b18'>(Intel, 2013)</ns0:ref>; furthermore, positional explode is only available in SELECT sentences <ns0:ref type='bibr' target='#b29'>(Microsoft, 2027)</ns0:ref> <ns0:ref type='bibr' target='#b7'>(Databricks, 2021)</ns0:ref>.</ns0:p></ns0:div><ns0:div><ns0:head>Results</ns0:head></ns0:div><ns0:div><ns0:head>Apache Hive</ns0:head><ns0:p>XML files with 4,668 rows and 8,461 rows were used for the tests of mobile network vendors A and B, respectively. Preliminary experiments were conducted to determine the average query execution time for a complete table. Versions 1 and 3 were tested in four different scenarios:</ns0:p><ns0:p>1. Creating an Apache Hive external raw table and then an Apache Hive view table with the positional explode method. 2. Creating an Apache Hive internal raw table and then an Apache Hive view table with the positional explode method. 3. Creating an Apache Hive external raw table and then a new Apache Hive table with the positional explode method. 4. Creating an Apache Hive internal raw table and then a new Apache Hive table with the positional explode method.</ns0:p><ns0:p>For these preliminary experiments, we utilized the Data Query Language (DQL) type from HQL for the query statements, and no limits on the rows were expressed.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_10'>5</ns0:ref> presents the average query execution times in the four scenarios and for the two versions of the Big Data software components. As indicated, the first and second scenarios take longer than the third and fourth scenarios. Figure <ns0:ref type='figure' target='#fig_7'>4</ns0:ref> presents the data of Table <ns0:ref type='table' target='#tab_10'>5</ns0:ref> in graphical form, where the difference is remarkable. Therefore, based on the results of these preliminary tests, the first and second scenarios were discarded.</ns0:p></ns0:div>
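<ns0:div><ns0:p>To make the third and fourth scenarios concrete, the following HiveQL is a minimal sketch of building a column-separated table from a raw table with the positional explode method. It relies on Hive's built-in xpath, xpath_string, and posexplode functions; the table and column names are illustrative and follow the vendor A schema:</ns0:p><ns0:p>-- scenario 4: internal column-separated table built from the internal raw table
CREATE TABLE pm_columns AS
SELECT t.md_index,
       xpath_string(r.xmldata,
                    concat('mdc/md[', cast(t.md_index + 1 AS STRING), ']/mi/gp')) AS gp
FROM raw_xml r
LATERAL VIEW posexplode(xpath(r.xmldata, 'mdc/md')) t AS md_index, md_text;

-- scenario 3 is analogous, reading from the external raw table instead</ns0:p></ns0:div>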
<ns0:div><ns0:p>Following the base expression of Appendix B, we created Apache Hive external and internal raw tables and then column-separated tables with the positional explode method. Over these tables, we conducted the queries obtained for each mobile network vendor from Table <ns0:ref type='table' target='#tab_9'>4</ns0:ref>, using sentences of the DQL type with HQL. We limited the queries to 1000 rows in order to perform the tests in a common scenario for all queries and tools.</ns0:p></ns0:div>
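<ns0:div><ns0:p>As an illustration, a query of the kind used in these tests might look as follows; the table and column names are the hypothetical ones from the sketches above:</ns0:p><ns0:p>SELECT md_index, gp
FROM pm_columns
LIMIT 1000;</ns0:p></ns0:div>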
<ns0:div><ns0:p>In this section, we present the results of the evaluation of query execution times for:</ns0:p><ns0:p>1. Apache Hive internal tables, where the XML files and the Apache Hive tables are stored in the same Apache Hive directory and the queries are performed through HQL.</ns0:p><ns0:p>2. Apache Hive external tables, where the XML files are stored in HDFS and the tables are stored in an Apache Hive directory. Queries are also performed through HQL.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_8'>5</ns0:ref> presents the average query execution times for the XML files from vendor A, and Figure <ns0:ref type='figure' target='#fig_9'>6</ns0:ref> those from vendor B. The queries are those identified in Table <ns0:ref type='table' target='#tab_9'>4</ns0:ref>: the third scenario is shown in blue for Apache Hive version 1 and yellow for version 3, and the fourth scenario in red for version 1 and purple for version 3. The x axis presents the query identifications and the y axis the average query execution time in milliseconds. To determine the optimal query execution time, the sample mean, variance, and standard deviation are calculated.</ns0:p></ns0:div>
<ns0:div><ns0:p>To calculate the sample mean, we denote the observations drawn from the Apache Hive external and internal tables by X_Ei and X_Ii respectively, with i = 1, ..., 14 and N = 14 according to the number of queries. Let:</ns0:p><ns0:formula xml:id='formula_3'>\bar{X}_E = \frac{1}{N}\sum_{i=1}^{N} X_{Ei} \qquad \bar{X}_I = \frac{1}{N}\sum_{i=1}^{N} X_{Ii}</ns0:formula><ns0:p>To calculate the variance, we use:</ns0:p><ns0:formula xml:id='formula_4'>\sigma_E^2 = \frac{1}{N-1}\sum_{i=1}^{N} (X_{Ei} - \bar{X}_E)^2 \qquad \sigma_I^2 = \frac{1}{N-1}\sum_{i=1}^{N} (X_{Ii} - \bar{X}_I)^2</ns0:formula><ns0:p>To obtain the standard deviation σ, the square root of the variance is calculated. We performed further tests to determine the behavior of external and internal Apache Hive tables in version 3 with different file sizes. We tested only version 3, as it provides better performance. The results are presented in Figure <ns0:ref type='figure' target='#fig_11'>7</ns0:ref>.</ns0:p><ns0:p>As explained in the Related Concepts section, an internal Apache Hive table stores data in its own directory in HDFS, while an external Apache Hive table uses data outside the Apache Hive directory in HDFS. Therefore, as expected, the query execution times for internal tables are smaller than those for external tables. Moreover, as indicated in Figure <ns0:ref type='figure' target='#fig_11'>7</ns0:ref>, as the number of rows in an XML file increases, internal Apache Hive tables perform increasingly better than external tables. For instance, for 3,000,000 rows the query on an internal table takes approximately 400 milliseconds, while on an external table it takes around 1,800 milliseconds.</ns0:p></ns0:div>
<ns0:div><ns0:head>Apache Spark</ns0:head><ns0:p>We also evaluated the query execution times for the XSD from mobile network vendors A and B in the Apache Spark engine, versions 1 and 3. First, a data frame with a single row of raw data was created, as positional explode is only available in SELECT sentences. As for Apache Hive, XML files with 4,668 rows and 8,461 rows were used for the tests of mobile network vendors A and B, respectively.</ns0:p><ns0:p>The tests were conducted for the same queries employed for Apache Hive from Table <ns0:ref type='table' target='#tab_9'>4</ns0:ref>. Again, we limited the queries to 1000 rows. Each query was performed in the Scala shell and follows the query syntax</ns0:p></ns0:div>
<ns0:div><ns0:p>of Data Retrieval Statements (DRS). For instance, to perform query Q1 for mobile network vendor A:</ns0:p><ns0:p>var dataframe = sqlContext.read.
  format("com.databricks.spark.xml").
  option("rowTag", "mdc").load("/hdfs")
dataframe.selectExpr("explode(md) as _md").
  select($"_md.mi.gp").show(1000)</ns0:p><ns0:p>In this section, we present the results of the evaluation of query execution times for Apache Spark data frames. The XML files are also stored in HDFS. Queries are performed through a domain-specific language for structured data manipulation in the Scala shell.</ns0:p><ns0:p>The attained results for the query execution times in milliseconds for Apache Spark are presented in Table <ns0:ref type='table' target='#tab_13'>7</ns0:ref>.</ns0:p></ns0:div>
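<ns0:div><ns0:p>A corresponding sketch for vendor B, assuming the field names listed in Table 2 (measCollecFile as the row tag, with measData, measInfo, and measTypes); this is an illustrative rendering rather than the exact statement used in the tests:</ns0:p><ns0:p>var dataframe = sqlContext.read.
  format("com.databricks.spark.xml").
  option("rowTag", "measCollecFile").load("/hdfs")
dataframe.selectExpr("explode(measData.measInfo) as _mi").
  select($"_mi.measTypes").show(1000)</ns0:p></ns0:div>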
<ns0:div><ns0:head>Comparison between Apache Hive and Apache Spark</ns0:head><ns0:p>According to the results of the case study, Apache Hive and Apache Spark are both able to process complex XML schemas using our proposed method. Figure <ns0:ref type='figure' target='#fig_12'>8</ns0:ref> presents a comparison of query execution times between the Apache Hive external table and the Apache Spark data frame for a single row of raw data. Version 3 is used for the reasons stated previously. Furthermore, we use the Apache Hive external table because the raw data reside inside HDFS and the queries are performed there directly. From these results, we conclude that the external Apache Hive table is more efficient when queries to a complete data frame are performed, as indicated in Figure <ns0:ref type='figure' target='#fig_12'>8</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:p>Similar results for individual queries can be observed in Table <ns0:ref type='table' target='#tab_11'>6</ns0:ref>, where the query execution time for Apache Hive version 1 deviates from the average by 22.53 milliseconds as a maximum value for vendor A. Conversely, as indicated in Table <ns0:ref type='table' target='#tab_13'>7</ns0:ref>, the query execution time for Apache Spark version 3 deviates from the average by 60.15 milliseconds as a minimum value.</ns0:p><ns0:p>We can therefore conclude that, because Apache Hive is a database engine for data warehousing, where the data are already stored in tables inside HDFS as its default repository, it exhibits better performance than Apache Spark (Apache, 2021b). Moreover, Apache Spark is not a database, even though it can access external distributed data sets from data stores such as HDFS. Apache Spark is able to perform in-memory analytics on large volumes of data in the RDD format; for this reason, an extra process over the data is needed (Apache, 2021c). For queries over XML files with complex schemas, Apache Spark is no more efficient than Apache Hive; however, Apache Spark works better for complex data analytics in terms of memory and data streaming <ns0:ref type='bibr' target='#b20'>(Ivanov and Beer, 2016)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Comparison with other Approaches</ns0:head><ns0:p>As mentioned previously, multiple studies have evaluated the processing of XML files with Big Data systems. However, these approaches involve the simplest XML schemas and are generally not suitable for the complex schemas that are more common in real-life implementations.</ns0:p><ns0:p>The studies by <ns0:ref type='bibr' target='#b13'>Hai et al. (2018</ns0:ref><ns0:ref type='bibr' target='#b16'>), Hricov et al. (2017</ns0:ref><ns0:ref type='bibr' target='#b17'>), and Hsu et al. (2012)</ns0:ref> present the results of their experiments processing XML files in terms of query execution time. However, the features of their Big Data ecosystems differ from ours, and they do not state the software versions used. Therefore, we only take as a reference the query execution time of <ns0:ref type='bibr' target='#b16'>Hricov et al. (2017)</ns0:ref>, which is approximately 7 s for 1,000,000 rows. As a result of our work, to query approximately 3,000,000 rows, Apache Hive external tables take around 2 s, while the Apache Spark data frame queries take around 14.25 s, using the Big Data environment version 3.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSIONS AND FUTURE WORK</ns0:head><ns0:p>Motivated by the need to evaluate queries for the complex XSD that are now used in multiple applications, and by the Big Data solutions available for processing this file format, we proposed three main methods to facilitate the creation of Apache Hive internal and external tables and Apache Spark data frames, and the identification of their respective queries based on the values and attributes of the XML file.</ns0:p><ns0:p>The three proposed methods were (1) cataloging, (2) deserialization, and (3) positional explode. In (1), five element types of an XSD were identified: root, arrays, structures, values, and attributes. The root element identification facilitated the creation of a raw table with the content of the XML file. In (2), identification of the attribute and value elements allowed the raw XML data to be converted into a table with rows and columns. Finally, in (3), the arrays were placed in multiple rows to improve the visualization for the final user.</ns0:p><ns0:p>To validate our proposal, we implemented a Big Data framework with two versions of software components, named version 1 and version 3. As a case study, we used the performance management files of 3G and 4G technologies from two mobile network vendors as real data sets. Using the proposed methodology, internal and external Apache Hive tables and Apache Spark data frames were created in a more intuitive form for both versions. Finally, we presented the execution times of the 14 identified queries for the PM files from mobile network vendors A and B. The query types used in this work can be employed for other data sets, as they are composed only of SELECT statements. The experimental results indicated that Apache Hive internal tables performed better in query execution time than Apache Hive external tables and Apache Spark data frames. Moreover, the Big Data environment implemented with HDFS version 3.2.1, Apache Hive version 3.1.2, Apache Spark version 3.0.1, Java version 1.8.0_271, and Scala version 2.12.10 exhibited better performance than the older versions. Another important point is that direct queries to a data frame with a single row took longer than queries to an Apache Hive external table. Based on these results, our research questions are answered as follows:</ns0:p><ns0:p>1. It is possible to create Apache Hive external and internal tables and Apache Spark data frames using our proposed method. For the cataloging process, the following elements are identified: root, structures, arrays, attributes, and values. For the deserialization process, the values and attributes are populated into column-separated fields, while for positional explode the arrays are decomposed into multiple rows.</ns0:p><ns0:p>2. Apache Hive internal tables generate lower query execution times than external tables with the fourth proposed scenario: creating an Apache Hive internal raw table and then creating a new Apache Hive table with the positional explode method. This result is consistent with the expected behavior, as tables and data are stored in the same directory in HDFS.</ns0:p><ns0:p>3. When comparisons are made between Apache Hive and Apache Spark, the Apache Hive external table allows shorter query execution times when queries to a complete data frame are performed. Moreover, Apache Hive external or internal tables are more efficient than Apache Spark for queries to individual values or attributes. We believe this occurs because Apache Spark requires extra in-memory processing for queries on XML files.</ns0:p><ns0:p>In future work, we plan to explore the behavior of a Big Data cluster with several nodes. Moreover, we plan to include PM files from 5G mobile networks in the tests, and to create a benchmark for different data sets and queries.</ns0:p></ns0:div><ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Hadoop Architecture to Evaluate the Query Execution Times of Complex XSD in Apache Hive and Apache Spark. In general, Big Data architectures use the Extract Load and Transform (ELT) process (Marín-Ortega et al., 2014), which transforms the data into a compatible form at the end of the process.
The ELT process differs from the Extract Transform and Load (ETL) process, used for traditional data warehouse operations, where the transformation of the data is conducted immediately after the extraction (Mukherjee and Kar, 2017).</ns0:figDesc><ns0:graphic coords='6,150.09,450.98,396.89,127.57' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 . Figure 3 .</ns0:head><ns0:label>23</ns0:label><ns0:figDesc>Figure 2. Example of an XSD with Complex Types and Extension Base. Figure 3. Workflow of the Three Main Methods: Cataloging, Deserialization, and Positional Explode.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Average Query Execution Times (seconds) for XML Files from Vendors A and B in the Four Scenarios.</ns0:figDesc><ns0:graphic coords='15,235.13,212.33,226.77,170.08' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Average Query Execution Times (milliseconds) for Apache Hive External and Internal Tables, Versions 1 and 3, from Vendor A.</ns0:figDesc><ns0:graphic coords='16,235.14,63.78,226.77,170.08' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Average Query Execution Times (milliseconds) for Apache Hive External and Internal Tables, Versions 1 and 3, from Vendor B.</ns0:figDesc><ns0:graphic coords='16,235.14,304.12,226.77,170.08' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Average Query Execution Times (milliseconds) for Internal and External Apache Hive Tables with Different XML File Sizes.</ns0:figDesc><ns0:graphic coords='17,235.13,427.68,226.78,170.08' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Average Query Execution Times (milliseconds) Comparison for Apache Hive and Apache Spark with a Single Row of Raw Data.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Average Query Execution Times (milliseconds) for Apache Hive and Apache Spark from Vendor A.</ns0:figDesc><ns0:graphic coords='19,235.13,294.02,226.77,170.08' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. Average Query Execution Times (milliseconds) for Apache Hive and Apache Spark from Vendor B.</ns0:figDesc><ns0:graphic coords='19,235.13,524.27,226.77,170.08' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell cols='2'>Element Types Notation</ns0:cell></ns0:row><ns0:row><ns0:cell>Root</ns0:cell><ns0:cell><></ns0:cell></ns0:row><ns0:row><ns0:cell>Array</ns0:cell><ns0:cell>[]</ns0:cell></ns0:row><ns0:row><ns0:cell>Structure</ns0:cell><ns0:cell>{}</ns0:cell></ns0:row><ns0:row><ns0:cell>Attribute</ns0:cell><ns0:cell>@</ns0:cell></ns0:row><ns0:row><ns0:cell>Value</ns0:cell><ns0:cell>#</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Catalog to Identify and Depict Element Types from Complex XSD. The catalog in Table 1 allows us to map the main element types present in an XML file to a simple notation using the symbols &lt;&gt;, [], {}, @, and #. Our main goal is to facilitate the identification of XML element types in order to create tables and perform queries in both Apache Hive and Apache Spark. These symbols conform to the JavaScript Object Notation (JSON) (W3C, 2020) to avoid new syntax. The Root type corresponds to the main tag used to initiate and terminate the XML file. The Array and Structure types relate to elements, or child elements, of array and structure type, respectively. Attribute refers to the attributes, and Value denotes the values of elements, which can be of different types, such as string and char, among others.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>Algorithm 1: Creation of Apache Hive Tables for Complex XSD Input: XML documents A i Output: Apache Hive Table 1 Create X i ← A i ;</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>2 Create E ← catalog;</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>3 while X do</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>if internal table then</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>Create RawTable;</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>for X i do</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>Deserialization ;</ns0:cell></ns0:row><ns0:row><ns0:cell>11</ns0:cell><ns0:cell>load data into table;</ns0:cell></ns0:row><ns0:row><ns0:cell>12</ns0:cell><ns0:cell>end</ns0:cell></ns0:row><ns0:row><ns0:cell>13</ns0:cell><ns0:cell>return RawTable;</ns0:cell></ns0:row><ns0:row><ns0:cell>14</ns0:cell><ns0:cell>Create ColumnSeparatedInternalTable;</ns0:cell></ns0:row><ns0:row><ns0:cell>15</ns0:cell><ns0:cell>for RawTable do</ns0:cell></ns0:row><ns0:row><ns0:cell>16</ns0:cell><ns0:cell>XPATH strings ← E@, E#;</ns0:cell></ns0:row><ns0:row><ns0:cell>17</ns0:cell><ns0:cell>Positional Explode ← E[], E{};</ns0:cell></ns0:row><ns0:row><ns0:cell>18</ns0:cell><ns0:cell>return ColumnSeparatedInternalTable;</ns0:cell></ns0:row><ns0:row><ns0:cell>19</ns0:cell><ns0:cell>end</ns0:cell></ns0:row><ns0:row><ns0:cell>20</ns0:cell><ns0:cell>else</ns0:cell></ns0:row><ns0:row><ns0:cell>21</ns0:cell><ns0:cell>Create ExternalRawTable;</ns0:cell></ns0:row><ns0:row><ns0:cell>22</ns0:cell><ns0:cell>for X i do</ns0:cell></ns0:row><ns0:row><ns0:cell>23</ns0:cell><ns0:cell>xmlinput.start=<>' ← E <>;</ns0:cell></ns0:row><ns0:row><ns0:cell>24</ns0:cell><ns0:cell>xmlinput.end=< / >' ← E <>;</ns0:cell></ns0:row><ns0:row><ns0:cell>25</ns0:cell><ns0:cell>location=/hdfs;</ns0:cell></ns0:row><ns0:row><ns0:cell>26</ns0:cell><ns0:cell>Deserialization ;</ns0:cell></ns0:row><ns0:row><ns0:cell>27</ns0:cell><ns0:cell>end</ns0:cell></ns0:row><ns0:row><ns0:cell>28</ns0:cell><ns0:cell>return ExternalRawTable;</ns0:cell></ns0:row><ns0:row><ns0:cell>29</ns0:cell><ns0:cell>Create ColumnSeparatedExternalTable;</ns0:cell></ns0:row><ns0:row><ns0:cell>30</ns0:cell><ns0:cell>for ExternalRawTable do</ns0:cell></ns0:row><ns0:row><ns0:cell>31</ns0:cell><ns0:cell>XPATH strings ← E@, E#;</ns0:cell></ns0:row><ns0:row><ns0:cell>32</ns0:cell><ns0:cell>Positional Explode ← E[], E{};</ns0:cell></ns0:row><ns0:row><ns0:cell>33</ns0:cell><ns0:cell>return ColumnSeparatedExternalTable;</ns0:cell></ns0:row></ns0:table><ns0:note>7 xmlinput.start=<>' ← E <>; 8 xmlinput.end=< / >' ← E <> ; 9 location=/hdfs;</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Table 3 presents the identification of indexes for the XSD from mobile network vendors A and B. For the second array identified in Table 3 for vendor A, the mt array belongs to the mi structure; therefore, mt.index and mi.index are mapped. Similarly, for the first array identified in Table 3 for vendor B, the measInfo array</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Vendor</ns0:cell><ns0:cell>Root</ns0:cell><ns0:cell>Arrays</ns0:cell><ns0:cell>Structures</ns0:cell><ns0:cell>Attributes and Values</ns0:cell></ns0:row><ns0:row><ns0:cell>A</ns0:cell><ns0:cell>mdc&lt;&gt;</ns0:cell><ns0:cell>md[]</ns0:cell><ns0:cell>mff{}</ns0:cell><ns0:cell>mi{}gp#</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>md[]mi{}mt[]</ns0:cell><ns0:cell>mfh{}</ns0:cell><ns0:cell>mi{}mts#</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>md[]mi{}mv[]</ns0:cell><ns0:cell>md[]mi{}</ns0:cell><ns0:cell>element{}moid#</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>md[]mi{}mv[]r[]</ns0:cell><ns0:cell>mv[]element{}</ns0:cell><ns0:cell>r[]element#</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>md[]neid{}</ns0:cell><ns0:cell>neid{}nedn#</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>neid{}nesw#</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>neid{}neun#</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>mff{}ts#</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>mfh{}cbt#</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>mfh{}ffv#</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>mfh{}sn#</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>mfh{}st#</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>mfh{}vn#</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>mt[]element@</ns0:cell></ns0:row><ns0:row><ns0:cell>B</ns0:cell><ns0:cell>measCollecFile&lt;&gt;</ns0:cell><ns0:cell>measData{}measInfo[]</ns0:cell><ns0:cell>fileFooter{}</ns0:cell><ns0:cell>fileHeader{} fileFormatVersion#</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>measInfo[]measValue[]</ns0:cell><ns0:cell>fileFooter{}measCollec{}</ns0:cell><ns0:cell>fileHeader{} vendorName#</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>fileHeader{}</ns0:cell><ns0:cell>measInfo[]measTypes#</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>fileHeader{}fileSender{}</ns0:cell><ns0:cell>measInfo[] measInfoID#</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>fileHeader{}measCollec{}</ns0:cell><ns0:cell>measValue[] measObjLdn#</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>measData{}</ns0:cell><ns0:cell>measValue[]measResults#</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>measData{}managedElement{}</ns0:cell><ns0:cell>measValue[]suspect#</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>measInfo[]grandPeriod{}</ns0:cell><ns0:cell>measCollec{} endTime@</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>measInfo[]repPeriod{}</ns0:cell><ns0:cell /></ns0:row></ns0:table><ns0:note>belongs to the measData structure; therefore, measInfo.index and measData.index are mapped. Finally, the XPath queries are determined for each attribute and value identified in Table 2. The indexes mapped in Table 3 also supply the expressions of the XPath queries. For instance, in the first row of Table 4, the XPath expression would be mdc/md/mi/gp; however, to query only the gp, the XPath expression uses the md.index and mi.index indexes.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Cataloging and Deserialization for XSD from Mobile Network Vendors A and B.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Vendor</ns0:cell><ns0:cell>Array</ns0:cell><ns0:cell>Index</ns0:cell></ns0:row><ns0:row><ns0:cell>A</ns0:cell><ns0:cell>md[]</ns0:cell><ns0:cell>md.index</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>md[]mi{}mt[]</ns0:cell><ns0:cell>mi.index</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>mt.index</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>md[]mi{}mv[]</ns0:cell><ns0:cell>mv.index</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>md[]mi{}mv[]r[]</ns0:cell><ns0:cell>r.index</ns0:cell></ns0:row><ns0:row><ns0:cell>B</ns0:cell><ns0:cell cols='2'>measData{}measInfo[] measData.index</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>measInfo.index</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>measInfo[]measValue[] measValue.index</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Identification of Indexes for XSD from Mobile Network Vendors A and B. The number of queries corresponds to the number of columns in a column-separated table or data frame. Table 4 presents the identification of 14 queries for mobile network vendors A and B, where the column-separated table or data frame will contain 14 columns for each vendor. It is a simple coincidence that they both have 14 queries, as this can vary from file to file. The results obtained are presented in the following subsections.</ns0:figDesc><ns0:note>Fragment of Table 4 for vendor B: Q4 measValue[] measObjLdn# /measCollecFile/measData.index/measInfo.index/measValue.index/measObjLdn; Q5 measValue[] measResults# /measCollecFile/measData.index/measInfo.index/measValue.index/measResults; Q6 measValue[] suspect# /measCollecFile/measData.index/measInfo.index/measValue.index/suspect; Q11 grandPeriod{} duration@ /measCollecFile/measData.index/measInfo.index/granPeriod/duration; Q12 grandPeriod{} endTime@ /measCollecFile/measData.index/measInfo.index/granPeriod/endTime; Q13 repPeriod{} duration@ /measCollecFile/measData.index/measInfo.index/repPeriod/duration.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Positional Explode and Query Identification for XSD from Mobile Network Vendors A and B.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Average Query Execution Times (seconds) for XML Files from Vendors A and B in the Four Scenarios.</ns0:figDesc><ns0:note>Columns: Scenario; Vendor A [s] version 1; Vendor B [s] version 1; Vendor A [s] version 3; Vendor B [s] version 3.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Mean, Variance, and Standard Deviation for Query Execution Times (milliseconds) in Apache Hive External and Internal Tables. As Table 6 indicates, for external tables, the query execution time for vendor A deviates from the average by approximately 22.53 milliseconds for Apache Hive version 1 and 8.37 milliseconds for Apache Hive version 3. For internal tables, the standard deviation is equal to 12.75 milliseconds for Apache Hive version 1 and 2.67 milliseconds for Apache Hive version 3. Conversely, for vendor B, the query execution time deviates from the average by approximately 15.69 milliseconds for Apache Hive version 1 and 6.47 milliseconds for Apache Hive version 3 for external tables. The standard deviation for internal tables is 11.42 milliseconds for Apache Hive version 1 and 2.62 milliseconds for Apache Hive version 3. Therefore, the standard deviation obtained from Apache Hive external tables for versions 1 and 3 is greater than that for internal tables; thus, we conclude that the fourth scenario allows lower query execution times and that Apache Hive version 3 is more efficient than version 1, as expected.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Because there are no internal and external table concepts in Apache Spark, only one execution time is obtained for each query. As Table 7 indicates, for vendor A the query execution time deviates from the average by approximately 323.19 milliseconds for Apache Spark version 1 and 60.15 milliseconds for version 3. Conversely, for vendor B the standard deviation is equal to 287.31 milliseconds for version 1 and 71.86 milliseconds for version 3. An important conclusion based on these results is that our proposal can be applied to different versions of Apache Spark, and performance in the more recent versions is improved.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Vendor / Apache Spark Version</ns0:cell><ns0:cell>Q1 Q2 Q3 Q4 Q5 Q6 Q7 Q8 Q9 Q10 Q11 Q12 Q13 Q14</ns0:cell><ns0:cell>X̄</ns0:cell><ns0:cell>σ</ns0:cell></ns0:row><ns0:row><ns0:cell>A/1</ns0:cell><ns0:cell>1850 1840 1770 1360 1880 1030 1130 2050 1390 1850 1380 1300 1260 1600</ns0:cell><ns0:cell>1549.29</ns0:cell><ns0:cell>323.19</ns0:cell></ns0:row><ns0:row><ns0:cell>A/3</ns0:cell><ns0:cell>193 217 307 219 143 158 118 193 166 86 106 103 108 177</ns0:cell><ns0:cell>163.86</ns0:cell><ns0:cell>60.15</ns0:cell></ns0:row><ns0:row><ns0:cell>B/1</ns0:cell><ns0:cell>1880 1550 1980 1770 1800 1240 1100 1840 1920 1340 1480 1560 1290 1850</ns0:cell><ns0:cell>1614.29</ns0:cell><ns0:cell>287.31</ns0:cell></ns0:row><ns0:row><ns0:cell>B/3</ns0:cell><ns0:cell>243 168 423 253 239 186 249 187 133 139 206 216 157 204</ns0:cell><ns0:cell>214.50</ns0:cell><ns0:cell>71.86</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_14'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Query Execution Times, Mean and Standard Deviation (milliseconds) in Apache Spark Version 1 and 3.</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "Department of Informatics and Computer Science
Quito - Ecuador
Ladrón de Guevara E11·253
Zip Code 170525
Tel. (+593) 2 2976 300
Department of Software and Computing Systems
Alicante - Spain
Carretera San Vicente del Raspeig s/n
Zip Code 03690
Tel. 96 590 3400 - Fax 96 590 3464
June 16th, 2021
Dear Editors,
Thank you for allowing the resubmission of our manuscript, which has been improved by addressing the
reviewers’ comments.
We are uploading: (a) our point-by-point response to the comments (below), including an English proofreading
certificate; (b) an updated manuscript with highlighting indicating the changes; (c) a clean updated manuscript
without highlights; and (d) supplementary files.
Best regards,
Diana Martinez Mosquera
On behalf of all authors.
Reviewer 1
The referenced line numbers correspond to the original submitted file, as requested by the Editors.
Concern 1.
The problem statement that the authors are trying to solve is not clearly stated.
Are they trying to compare big data frameworks on how fast they can process complex XSD, or are they trying
to prove that their approach based on cataloging, deserialization and positional explode is superior to
other approaches in the related work?
Author response: Thank you very much for your comment. We regret not being explicit enough in our
writing.
We noted the lack of descriptive or sufficiently detailed methods for processing complex XSD in the
related works reviewed. Therefore, our first contribution was to propose a method that uses three
techniques to process real XML files with complex schemas. To validate the proposed method, we
tested it in Apache Hive and Apache Spark. The query execution times were presented as a performance
measure for the reader's appreciation, not as a benchmark against other studies.
Another contribution is the case study where, by using the proposed method, we presented a novel solution
to process XML files from mobile networks named vendor A and vendor B. The real files were attached as
supplementary files.
We realized the lack of an appropriate method for processing complex XML during our research project,
where we developed a network management system with a Big Data architecture. This was the motivation
for this work.
Author action: We have restructured the title, the abstract, the introduction sections and some parts of the
article to explain the problem. We updated the manuscript by changing the title from:
“Evaluation of query execution times for complex XSD in Hive and Spark: A case study for performance management files
in mobile networks”
To
“Efficient Processing of Complex XSD using Hive and Spark.”
We also improved the following sentences:
22-23 Therefore, to shed light on complex XML schema processing for real-life applications with Big Data tools, we present
an approach that combines three techniques.
46-48 However, a more common approach in such works involves the simplest examples of XML documents, with few
attributes or with fragments of the complete XML file, even though real data sets are composed of complex schemas that
include nested arrays and structures.
71-73 In addition, our research addresses the evaluation of query execution times for complex XML schemas in different
versions, with the aim of validating the proposal for old and new software developments.
85-86 Using the proposed method, is it possible to automatically create Apache Hive external and internal tables and
Apache Spark data frames for complex schemas of XML files?
180-181 Furthermore, we explain in detail how to apply our proposed method to facilitate its replication in other studies,
whereas Hricov et al. (2017) only summarize their proposal.
194-195 However, we explain in detail how to implement our approach using real data sets from mobile networks for
Apache Hive and Apache Spark.
217-219 Finally, our research explains how to use cataloging, deserialization and positional explode to process complex
XSD in Apache Hive internal and external tables and Apache Spark data frames; moreover, we demonstrate the validity
of our proposal in a test Big Data environment with real PM XML files from two mobile network vendors.
324 At the software level, we tested our proposal in two environments. It was important to evaluate two versions to verify
that the proposal is applicable to any version and that the query times improve in recent versions. Furthermore, not all
potential users have access to the latest versions of the software.
499 According to the results of the case study, Apache Hive and Apache Spark are useful for processing complex XML
schemas using our proposed method.
532 Our main results state the validity of the proposed method for different versions of Apache Hive and Apache Spark,
obtain the query execution times for Apache Hive internal and external tables and Apache Spark data frames, and compare
the query performance in Apache Hive with that of Apache Spark.
Concern 2.
If they are trying to do the former, they need to consider several performance characteristics. Big data
frameworks usually run on clusters, not on a single nodes.
Author response: Thank you very much for your valuable observation; however, our aim here was to validate
that our method functions correctly and meets its purpose. To this end, we tested its behavior under Apache
Spark and Apache Hive. Nevertheless, your observation is an important recommendation for our future work.
Author action: We addressed this doubt in the actions regarding concern 1, and added:
541 In future work, we plan to explore the behavior of a Big Data cluster with several nodes.
Concern 3.
Apache spark version used (1.6.0) is outdated and retired, I would recommend the authors to use Apache
Spark 3.0 since its performance is almost that in 1.6.0
Author response: Thank you very much for this valuable suggestion. We have included the results of the tests
with Apache Spark 3.0, showing that our method also works for this version and that the query times improved.
Author action: Table 5, Table 7, Figure 8, Figure 9, and Figure 10 include the new results with version 3.0. In
addition, we reached a new conclusion:
526 Moreover, the Big Data environment implemented with HDFS version 3.2.1, Apache Hive version 3.1.2, Apache
Spark version 3.0.1, Java version 1.8.0271, and Scala version 2.12.10 exhibited better performance than with older
versions.
Concern 4.
The same goes for Apache Hive. For end customers, the reason to do internal tables and external tables
are very different, however, that choice does affect performance. The authors have not disclosed whether
while using internal tables, their results are affected by caching in Hive.
Author response: Thank you for your timely observation. We included the results of the tests with Apache
Hive 3.0, showing that our method also works for this version and that the query times improved.
Author action: Table 5, Table 6, Figure 4, Figure 5, Figure 6, Figure 7, and Figure 8 include the new results
with version 3.0. In addition, we added the next sentences:
526 Moreover, the Big Data environment implemented with HDFS version 3.2.1, Apache Hive version 3.1.2, Apache Spark
version 3.0.1, Java version 1.8.0271, and Scala version 2.12.10 exhibited better performance than with older versions.
476 As explained in the Related Concepts section, an internal Apache Hive table stores data in its own directory in HDFS,
while an external Apache Hive table uses data outside the Apache Hive directory in HDFS. Therefore, as expected, the
query execution times for internal tables are smaller than external tables. However, as indicated in Figure 7, as the number
of rows in an XML file increases, internal Apache Hive tables perform better than external tables. For instance, for
3,000,000 rows the query execution time takes approximately 400 milliseconds, while for an external table it takes around
1,800 milliseconds.
535 This result is consistent with the expected behavior as tables and data are stored in the same directory in HDFS.
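For readers of this response, a minimal sketch of the internal/external distinction follows; the table and path
names are hypothetical stand-ins, not the manuscript's actual DDL. The two statements differ only in the
EXTERNAL keyword and the LOCATION clause:

from pyspark.sql import SparkSession

# Hive support lets spark.sql() run HiveQL DDL statements.
spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Internal (managed) table: data lives in Hive's own warehouse
# directory in HDFS, and DROP TABLE also deletes the files.
spark.sql("""CREATE TABLE IF NOT EXISTS pm_internal
             (counter_name STRING, value DOUBLE) STORED AS TEXTFILE""")

# External table: Hive only tracks metadata; the files stay at the
# stated HDFS location and survive DROP TABLE.
spark.sql("""CREATE EXTERNAL TABLE IF NOT EXISTS pm_external
             (counter_name STRING, value DOUBLE) STORED AS TEXTFILE
             LOCATION 'hdfs:///data/pm/vendor_a/'""")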
Concern 5.
While the authors prove that their approach of catalog, deserialization and positional explode works in
Big Data Frameworks, they have not compared that with other approaches for XML parsing that the
related works have described
Author response: We consider this suggested recommendation very appropriate. However, when we tried
to compare our approach with those described in the Related Works section, there were some constraints:
some studies do not describe the details of the Big Data frameworks they use, while others use only a fragment
of an XML file or simple XML schemas. In this context, a comparison would not be entirely reliable.
Author action: We added the “Comparison with other Approaches” subsection where we describe why we
did not do benchmarking but we present the obtained results for Apache Hive and Apache Spark for a file
with 3,000,000 rows.
513 Comparison with other Approaches
As mentioned previously, multiple studies have evaluated the processing of XML files with Big Data systems. However,
these approaches involve the simplest XML schemas and are generally not suitable for complex schemas that are more
common in real life implementations. The studies by Hai et al. (2018), Hricov et al. (2017), and Hsu et al. (2012) present
the results of their experiments processing XML files in terms of query execution time. However, the features of their Big
Data ecosystem differ from ours and they do not present the versions used for the software. Therefore, we only take as
reference the query execution time of Hricov et al. (2017), which is approximately 7 s for 1,000,000 rows; by comparison,
in our work, querying approximately 3,000,000 rows takes around 2 s with Apache Hive external tables and around 14.25 s
with the Apache Spark data frame, using version 3 of the Big Data environment.
542 Moreover, we plan to include PM files from 5G mobile networks in the tests, and to create a benchmark for different
data sets and queries.
Concern 6.
The authors have not explored why Hive or Spark performs better. When they mention Hive performs
better for queries to extract individual values or attributes, what is the reason behind this? They need to
explore the open-source code to understand what is the root cause. This would improve the validity of
their findings.
Author response: Thank you very much for your valuable observation that allowed us to improve our
validations.
Author action: We added some sentences discussing the possible reasons for the obtained results:
476 As explained in the Related Concepts section, an internal Apache Hive table stores data in its own directory in HDFS,
while an external Apache Hive table uses data outside the Apache Hive directory in HDFS. Therefore, as expected, the
query execution times for internal tables are smaller than external tables. However, as indicated in Figure 7, as the number
of rows in an XML file increases, internal Apache Hive tables perform better than external tables. For instance, for
3,000,000 rows the query execution time takes approximately 400 milliseconds, while for an external table it takes around
1,800 milliseconds.
513 We can therefore conclude that, because Apache Hive is only a database engine for data warehousing, where the
data are already stored in tables inside HDFS as its default repository, it exhibits better performance than Apache Spark,
Apache (2021b).
Moreover, Apache Spark is not a database even though it can access external distributed data sets from data stores such
as HDFS. Apache Spark is able to perform in-memory analytics for large volumes of data in the RDD format; for this
reason, an extra process is needed over the data, Apache (2021c). For queries over XML files with complex schemas,
Apache Spark is no more efficient than Apache Hive; however, Apache Spark works better for complex data analytics in
terms of memory and data streaming, Ivanov and Beer (2016).
538 This result is consistent with the expected behavior as tables and data are stored in the same directory in HDFS.
540 We believe this occurs because Apache Spark requires extra in-memory processing for queries on XML files.
Concern 7.
Please compare with the latest version of the big data frameworks since they are more up-to date and the
scan processing time is shorter there.
Author response: We consider this suggested recommendation very appropriate.
Author action: The recommended changes were also performed to address concerns 3 and 4.
Concern 8.
Please explore deeper into the frameworks to find the reasons for your findings.
Author response: We agree with this comment, which helps to improve the quality of our study.
Author action: We addressed this recommendation in the author action for concern 6.
Concern 9.
Also explore big data cluster results.
Author response: We appreciate this observation of the reviewer. As we mentioned in the author response
to concern 2, we are validating that our method works and examining its behavior with Apache Spark and
Apache Hive. The behavior in a Big Data cluster will be considered in our future work.
Author action: We consider this an important recommendation for future work.
541 In future work, we plan to explore the behavior of a Big Data cluster with several nodes.
Concern 10.
The authors have a good structure for the paper, however, there are grammatical errors and sentence
construction errors in multiple places. I have described a few examples below
'However, a more common approach in that works involves the simplest examples of XML documents,
even though, the real data sets are composed of complex schemas that include nested arrays and
structures.'
'The reporting tool used is Spark SQL but no details about the implementation are presented.'
'With the purpose of reducing the lack of methods for processing XML files with complex schemas, in this
study we present our approach based on three main methods: (1) catalog, (2) deserialization, and (3)
positional explode.'
I would recommend them to correct these.
Author response: We agree with this valuable and timely observation.
Author action: We have improved the writing of the sentences in question. Furthermore, the article was
proofread by a company. The certificate is attached.
20 For these reasons, multiple studies have proposed new techniques and evaluated the processing of XML files with Big
Data systems. However, a more usual approach in such works involves the simplest XML schemas, even though real
data sets are composed of complex schemas.
205 They perform SQL queries to evaluate XPath queries and the Apache Spark SQL API. The main difference from our
study is that we evaluate the query execution times for XML documents of complex types from real-life mobile networks in
Apache Hive and Apache Spark.
52 To address the lack of methods for processing XML files with complex schemas, we present an approach in this study
based on three main methods: (1) cataloging, (2) deserialization, and (3) positional explode.
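To illustrate the third technique for readers of this response, a minimal PySpark sketch of positional explode
over a nested array column follows; the column names here are hypothetical stand-ins, not the vendor schemas
described in the manuscript.

from pyspark.sql import SparkSession
from pyspark.sql.functions import posexplode

spark = SparkSession.builder.getOrCreate()

# Toy stand-in for one deserialized XML measurement record holding
# an array of counter values, as produced after cataloging and
# deserialization.
df = spark.createDataFrame(
    [('cell-01', [1850, 1840, 1770])],
    ['meas_obj', 'counter_values'],
)

# posexplode flattens the array into one row per element and keeps
# each element's position, so every value can be matched back to its
# counter name in the catalog.
df.select('meas_obj',
          posexplode('counter_values').alias('pos', 'value')).show()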
Reviewer 2
The reference line numbers correspond to the original file submitted, as requested by the Editors.
Concern 1.
Few sentences which need to be re framed/reexamined as some how these sentences meaning is not clear.
Line numbers are mentioned below:
46-48
193-194
273-274
280-281
307-308
Author response: We agree with this valuable and timely observation.
Author action: We have improved the writing of the sentences in question. Furthermore, the article was
proofread by a company. The certificate is attached.
46-48 However, a more common approach in such works involves the simplest examples of XML documents, with few
attributes or with fragments of the complete XML file, even though real data sets are composed of complex schemas that
include nested arrays and structures.
193-194 However, we explain in detail how to implement our approach using real data sets from mobile networks for
Apache Hive and Apache Spark.
273-274 Without an index, internal queries to arrays load all the rows belonging to the array. But, with an index, only the
specific record in a table is loaded.
280-281 First, the input is the XML file denoted by Ai stored into HDFS. The XSD that belongs to Ai is named vector Xi.
Vector E is composed of the elements root, array, structure, attribute, and value elements, identified after performing the
map with the catalog proposed in Table 1.
307-308 The input is the XML file denoted by Ai stored previously in HDFS, and the XSD that belongs to Ai is named vector
Xi. Vector E is composed of the root, array, structure, attribute, and value elements, identified after performing the map
with the catalog proposed in Table 1.
Concern 2.
I found some of the fundamental papers related to work done. Authors can check and include these in
related work or wherever it seems suitable:
Dmitry Vasilenko, “An Empirical Study on XML Schema Idiosyncrasies in Big Data Processing”, in
International Journal on Computer Science and Engineering, October 2015.
Dmitry Vasilenko, Mahesh Kurapati,.” Efficient Processing of XML Documents in Hadoop Map Reduce,
IJCSE, 2014, Vol.6, No.9,p.329–333.
Song Kunfang and Hongwei Lu, “Efficient Querying Distributed Big-XML Data using MapReduce”, Int. J. Grid
High Perform. Comput. 8, 3 (July 2016), 70–79. DOI:https://doi.org/10.4018/IJGHPC.2016070105
Author response: We consider this suggested recommendation very appropriate.
Author action: The recommended 1st and 3rd related works were added:
214-216
Vasilenko and Kurapati (2015) discuss the use of complex XML in the enterprise and the constraints in Big Data processing
with these types of files. They thus propose a detailed procedure to design XML schemas. They state that the Apache
Hive XML serializer-deserializer and explode techniques are suitable for dealing with complex XML and present an
example of the creation of a table with a fragment of a complex XML file. By contrast, in our work, we propose the cataloging
procedure and also evaluate our approach for Apache Spark data frames. We additionally present the query execution
times obtained after applying our proposal.
Other studies also utilize XML documents to evaluate the processing time with HDFS and the Apache Spark engine;
however, the files used for the tests contain simple types, with a few attributes, or do not present schemas (Hai et al.,
2018; Kunfang, 2016; Zhang and Lu, 2021)
Concern 3.
At line number 444-445- I request if you can elaborate or reference why it is needed to create the raw table
at first?
Author response: We agree with this comment, which helps to improve the quality of our study.
Author action: We elaborated the writing of the sentences and added references.
444-445 It is important to highlight that it is necessary to create the raw table to deal only with parent labels of the XML file
as we are parsing complex schemas (Intel, 2013), and, furthermore, positional explode is only available for SELECT
statements (Microsoft, 2027) (Databricks, 2021).
Concern 4.
Please discuss which type of queries have been selected to evaluate the proposed algorithm. Also, mention
whether the same type of queries can be applicable for other application datasets?
Author response: Thank you very much for your comment; it helps to improve the quality of the work.
Author action: The description of the type of queries was added in the following sentences:
446 For these preliminary experiments, we utilized the Data Query Language (DQL) type from HQL for the query statements,
and no row limits were imposed.
455 Following the base expression of Appendix B, we created Apache Hive external and internal raw tables and then
column-separated tables with the positional explode method. Over these tables, we conducted the queries obtained for
each mobile network vendor from Table 4 using sentences of DQL type with HQL.
485 The tests were conducted for the same queries employed for Apache Hive from Table 4. Again, we limited the query
to 1000 rows. Each query was performed in the Scala shell and follows the query syntax of Data Retrieval Statements
(DRS).
528 Finally, we presented the execution times of the 14 identified queries for PM files from mobile network vendors A and
B. The query types used in this work can be employed for other data sets as they are composed only of SELECT
statements.
Concern 5.
The proposed algorithms can be further tested on different size big datasets to validate them for
implementation on real big datasets.
Author response: We agree with the suggested improvements.
Author action: The validation for Apache Hive and Apache Spark version 3 with different file sizes, 3000,
30000, 300000, and 3000000 rows was included:
476 We perform other tests in order to determine the behavior for external and internal Apache Hive tables version 3, with
different file sizes. We only test version 3 as this provides a better performance. The results are presented in Figure 7.
476 As explained in the Related Concepts section, an internal Apache Hive table stores data in its own directory in HDFS,
while an external Apache Hive table uses data outside the Apache Hive directory in HDFS. Therefore, as expected, the
query execution times for internal tables are smaller than external tables. However, as indicated in Figure 7, as the number
of rows in an XML file increases, internal Apache Hive tables perform better than external tables. For instance, for
3,000,000 rows the query execution time takes approximately 400 milliseconds, while for an external table it takes around
1,800 milliseconds.
Figure 7. Average Query Execution Times (milliseconds) for Internal and External Hive Tables with Different XML File
Sizes.
Concern 6.
For further research, authors can take benchmark datasets and queries.
Author response: Thank you very much for your suggestion.
Author action: We added the “Comparison with other Approaches” subsection where we described why we
did not do benchmarking in this work, but we presented the obtained results for Apache Hive and Apache
Spark for a file with 3,000,000 rows as was suggested for concern 6. In addition, we added this
recommendation as future work in the paper.
513 Comparison with other Approaches
As mentioned previously, multiple studies have evaluated the processing of XML files with Big Data systems. However,
these approaches involve the simplest XML schemas and are generally not suitable for complex schemas that are more
common in real life implementations. The studies by Hai et al. (2018), Hricov et al. (2017), and Hsu et al. (2012) present
the results of their experiments processing XML files in terms of query execution time. However, the features of their Big
Data ecosystem differ from ours and they do not present the versions used for the software. Therefore, we only take as
reference the query execution time of Hricov et al. (2017), which is approximately 7 s for 1,000,000 rows; by comparison,
in our work, querying approximately 3,000,000 rows takes around 2 s with Apache Hive external tables and around 14.25 s
with the Apache Spark data frame, using version 3 of the Big Data environment.
542 Moreover, we plan to include PM files from 5G mobile networks in the tests, and to create a benchmark for different
data sets and queries.
This is to certify that the following document has been proofread in terms of the
quality of the English used:
Evaluation of query execution times for complex XSD in Hive and Spark: A case
study for performance management files in mobile networks
The proofreader was Dr David Kinsella on behalf of Academic Proofreading.
If you have any questions, please do not hesitate to ask. We can be contacted at
prime@academicproofreading.com
Academic Proofreading
Hurworth House
17 Beechwood Terrace
Sunderland
SR2 7LY
United Kingdom
Dr. David Mercer
15th June, 2021
" | Here is a paper. Please give your review comments after reading it. |
194 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>In recent years, with the advancement of medical imaging for medical diagnosis, the initial assessment of ailments and abnormalities has become challenging for radiologists. Magnetic Resonance Imaging is one such predominant technology that is extensively utilized for the initial assessment of ailments. The article's main objective is to mechanize an efficient approach that can accurately assess the damaged region of a tumor through automated segmentation, an approach that needs minimal training, is capable of self-learning, and is computationally and technically more efficient than many other approaches such as CNN and the HARIS algorithm. The investigation and statistical analysis of the abnormality would thereby be made much easier and more convenient. The proposed approach's performance compares favorably with that of its counterparts, reaching an accuracy of 77% with minimal training of the model.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>In medical diagnostics, medical imaging has played a vital role in the preliminary identification of malignant tissues in the human body. The imaging techniques range from X-ray technology, as stated by R. <ns0:ref type='bibr'>Vaga and K. Bryant [31]</ns0:ref>, and computed tomography (C.T.) technology, as stated by Venkatesan <ns0:ref type='bibr' target='#b56'>[46]</ns0:ref>, to Magnetic Resonance Imaging (MRI) technology, as described by Chahal et al. <ns0:ref type='bibr' target='#b8'>[5]</ns0:ref>; P. Naga Srinivasu et al. <ns0:ref type='bibr' target='#b34'>[28]</ns0:ref>, and positron emission tomography (PET) scanning, as described by Norbert Galldiks et al. <ns0:ref type='bibr' target='#b32'>[26]</ns0:ref>; Schillaci et al. <ns0:ref type='bibr' target='#b43'>[36]</ns0:ref>. All the approaches mentioned above are non-invasive and capable of diagnosing malignant tissue efficiently. Such imaging is aptly suited to cases with abnormalities in the human body and helps the medical physician plan the clinical procedure with the radiologist's support. Our article focuses on the automated diagnosis of human brain lesions through M.R. imaging and on estimating the significance of the damage. Imaging technology has improved stupendously, so that every minute ailment in the human body can be elaborated and easily diagnosed.</ns0:p><ns0:p>In our proposed work, we mainly focus on recognizing and volumetrically estimating the malignant tissue in the human brain through analysis of the M.R. images. There are numerous ways in which tumorous regions are identified in M.R. images, but a majority of those approaches lack accuracy, especially the semi-automated techniques. To overcome the challenges of the former approaches, many automated approaches have been proposed involving the conventional Genetic Algorithm (G.A.), as stated by Naga Srinivasu, Parvathaneni, et al. <ns0:ref type='bibr' target='#b31'>[25]</ns0:ref>, Artificial Neural Networks (ANN), as stated by Wu, Wentao et al. <ns0:ref type='bibr' target='#b61'>[50]</ns0:ref>, and Deep Learning techniques.</ns0:p><ns0:p>The process of recognizing the ailments evolves through many stages, beginning with pre-processing of the M.R. images to address the noise introduced at various stages of rendering the image, such as speckle noise, Poisson noise, and Gaussian noise. The images are also pre-processed to enhance the contrast and texture, which assists in recognizing the malignant tissues faster and more conveniently. Once the image is pre-processed for noise removal, the skull region is removed from the M.R. image. The M.R. image is then segmented to recognize the malignant region, and volumetric estimation is performed for the recognized region. In the later stages, the results of each phase of the proposed method are refined through various optimization techniques.</ns0:p><ns0:p>The M.R. images are processed and then segmented to locate the abnormal regions based on texture information. Image segmentation plays a vital role in the identification of the malignant areas in the M.R. image. There are various semi-automated and automated ways of segmenting the M.R.
images, each of which can segment the images efficiently, but there is a considerable tradeoff between the accuracy and the computational effort demanded by each of those approaches. We now discuss them in detail.</ns0:p><ns0:p>The K-Means algorithm is one such semi-automated approach that segments the M.R. image into a pre-determined number of segments. It is one of the simpler and faster techniques for image segmentation, but its major drawback is that the total number of segments is fixed before segmentation begins: if the k value is too large, it leads to over-segmentation, and if the k value is too small, it leads to under-segmentation of the image.</ns0:p><ns0:p>The M.R. images can also be segmented through fully automated approaches like thresholding, as stated by Sowjanya Kotte et al. <ns0:ref type='bibr' target='#b46'>[39]</ns0:ref>; Kumar, Ram et al. <ns0:ref type='bibr' target='#b21'>[17]</ns0:ref>. The threshold-based method works comparatively better on homogeneous images, but utmost care must be taken in approximating the threshold value: a minor deviation from the optimal threshold leads to an exponential deviation in the resultant segmented image.</ns0:p><ns0:p>Dalvand, M et al. <ns0:ref type='bibr' target='#b11'>[7]</ns0:ref> have proposed a Region Growing approach that can efficiently address the over-segmentation and under-segmentation issues generally encountered in the K-Means based approach. S. Punitha et al. <ns0:ref type='bibr' target='#b41'>[34]</ns0:ref>, in their article, have shown experimentally that the Region Growing based approach improves the sensitivity and specificity of recognizing the malignant region in the human brain. However, the seed points must be chosen appropriately: if the initial seed points are chosen badly, the entire segmentation of the image goes wrong, and the approach requires considerably more computational effort.</ns0:p><ns0:p>Sheela, C.J.J. et al. <ns0:ref type='bibr' target='#b44'>[37]</ns0:ref>; N. Mathur et al. <ns0:ref type='bibr' target='#b30'>[24]</ns0:ref> have suggested Edge Based Segmentation, which is comparatively simple and whose threshold-setting capability has been improved through fuzzy-based k-means clustering. However, in the suggested edge-based approach, the image has to be pre-processed rigorously to elaborate the edge-related information, which involves high computational effort.</ns0:p><ns0:p>S. Pandav <ns0:ref type='bibr' target='#b40'>[33]</ns0:ref>; Liang, Yingbo & Fu, Jian <ns0:ref type='bibr' target='#b24'>[19]</ns0:ref>; Kornilov, Anton, et al. <ns0:ref type='bibr' target='#b20'>[16]</ns0:ref> have proposed Watershed-based image segmentation, in which the huge number of regions recognized from edge information is effectively reduced through marker-controlled watershed segmentation. However, considerable effort must be spent during pre-processing to separate the foreground and background regions of the image. K. Sudharania et al. <ns0:ref type='bibr' target='#b19'>[15]</ns0:ref> have proposed Morphological based segmentation, which exhibits exceedingly high accuracy on low-intensity M.R. images, but it needs several iterations to converge to a good solution, which requires more computational effort. D, Jude, et al. <ns0:ref type='bibr' target='#b10'>[6]</ns0:ref>; Madallah Alruwaili et al.
<ns0:ref type='bibr' target='#b28'>[22]</ns0:ref>; Verma et al. <ns0:ref type='bibr' target='#b57'>[47]</ns0:ref> have proposed a Fuzzy C Means (FCM) based M.R. image segmentation approach that is highly accurate, well-founded, and rapid, and can efficiently handle the unpredictable situation of assigning pixels among multiple segments by assigning each pixel to the appropriate segment based on a membership value evaluated in each iteration. In fuzzy-based segmentation, the pixels are assigned to their corresponding region based on a membership function value that lies in the range 0 to 1, where a value of 1 indicates that the pixel's value is close to that of the centroid pixel. The major challenge of FCM is that the membership value must be evaluated in each iteration for all the pixels in the image, which needs more computational effort; moreover, the adjustment of the lower and upper approximations of the randomness is complex in FCM.</ns0:p><ns0:p>Al-Shamasneh et al. <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref> have suggested M.R. image segmentation through Contour Based segmentation, which efficiently recognizes the various brain tissues in both homogeneous and heterogeneous malignant cases, but it is not very efficient for noisy images and high-intensity images. Li, Yang et al. <ns0:ref type='bibr' target='#b23'>[18]</ns0:ref> have proposed a Level Set centric technique for segmenting the M.R. image based on a pre-approximated threshold value. The level set method is a very complex approach, and the approximated threshold value determines the efficiency of the approach's segmentation.</ns0:p><ns0:p>Wang, M. et al. <ns0:ref type='bibr' target='#b58'>[48]</ns0:ref>; Diaz, Idanis and Boulanger, Pierre <ns0:ref type='bibr' target='#b14'>[10]</ns0:ref> have proposed Atlas Based Segmentation for M.R. images, a straightforward approach that is computationally faster than manual labeling and independent of the deformation model. The suggested approach merges the intensity template image and its segmented reference image to register, and then further segment, the image. In the atlas-based approach, choosing the initial seed point is crucial, as the entire segmentation scenario is based on the seed point selection; moreover, the efficiency of this approach depends on the precision of the topological graph.</ns0:p><ns0:p>Venmathi A.R. et al. <ns0:ref type='bibr' target='#b54'>[44]</ns0:ref>; Z. Liu et al. [52] have proposed Markov Random Field (MRF) based segmentation of the M.R. image through the Gaussian mixture model, which supports incorporating the association of neighboring pixels in a practical and mathematical sense. The method works on texture-based information; the MRF-based approach includes spatial information, which aids in normalizing noise and the overlapping of neighboring regions. Javaria Amin et al. <ns0:ref type='bibr' target='#b18'>[14]</ns0:ref> have suggested an approach that uses the spatial vector alongside Gabor decomposition to discriminate the malignant and non-malignant tissues in the M.R. image with a Bayesian classifier. Despite its accuracy, MRF needs more computational effort and a systematic process for picking the parameters.</ns0:p><ns0:p>Varuna Shree et al. <ns0:ref type='bibr' target='#b55'>[45]</ns0:ref>; M. K. Gibran et al.
<ns0:ref type='bibr' target='#b27'>[21]</ns0:ref> have suggested a Probabilistic Neural Network (PNN) in combination with Learning Vector Quantization (LVQ), which aids in reducing the computational time by optimizing all the hidden layers in the proposed method. The region of interest that has to be recognized for designing the network must be chosen carefully, as the quality of the image segmentation depends on it. Sandhya et al. <ns0:ref type='bibr' target='#b16'>[12]</ns0:ref>; P. A. Mei et al. <ns0:ref type='bibr' target='#b35'>[29]</ns0:ref>; De and C. Guo <ns0:ref type='bibr' target='#b12'>[8]</ns0:ref> have suggested a Self-Organized Map (SOM) based algorithm for segmenting the M.R. image that includes the spatial data and the grey-level intensity information in the segmentation and is outstanding at separating the malignant tissues. However, the SOM engine has to be trained rigorously for better accuracy, the quality of the segmentation is directly dependent on the training set, and mapping is one of the most complicated tasks in a SOM based approach. M. Havaei et al. <ns0:ref type='bibr' target='#b25'>[20]</ns0:ref>; SivaSai J.G. et al. <ns0:ref type='bibr' target='#b45'>[38]</ns0:ref>; Wentao Wu et al. <ns0:ref type='bibr' target='#b60'>[49]</ns0:ref> have proposed Deep Neural Network based approaches that are computationally efficient and highly precise. Still, the main challenge is that the implementation procedure and the machine must be computationally compatible for faster segmentation, which is not economically feasible in all cases.</ns0:p><ns0:p>Sachdeva et al. <ns0:ref type='bibr' target='#b42'>[35]</ns0:ref> have suggested multiple hybrid approaches for segmenting the M.R. images, in which the multiclass categorization of malignant tissues is done efficiently and high accuracy is attained through the implementation of machine learning and soft computing techniques. The Pulse Coded Neural Network (PCNN) is a technique used in coherence with the semi-automated methods for better segmentation. While segmenting the M.R. image, the region of interest can be perceived through a region growing approach that selects the initial points, assumed to be the seed points, in the earlier stages. Secondly, a Feed Forward Back Neural Network (FFBNN) selects the seed points and sends back the input until the input turns uniform. Qayyum, Huma et al. <ns0:ref type='bibr' target='#b36'>[30]</ns0:ref> have attained multiple sub-images with multi-resolution data by employing the Stationary Wavelet Transform (SWT). A spatial kernel is applied over the resultant sub-images to locate the demographic features, and the multi-dimensional features are built from the extracted features and the Stationary Wavelet Transform coefficients. The identified features and coefficients are the input to the Self-Organized Map, and Linear Vector Quantization is finally used to refine the results. Srinivasu, P. N et al. <ns0:ref type='bibr' target='#b49'>[41]</ns0:ref> have proposed a Twin Centric Genetic Algorithm with Social Group Optimization, which has produced precise outcomes in tumor identification from brain M.R. images. The twin centric G.A. model is comparatively faster than the conventional G.A. approach, with a faster crossover rate that results in new segments; based on the fitness value, the mutation operation is performed to reform them with other strongly correlated regions in the image.
The outcome of the Twin G.A. is refined through the Social Group Optimization approach, which refines the outcome by selecting the appropriate features in the image for the segmentation. Dey N et al. <ns0:ref type='bibr' target='#b13'>[9]</ns0:ref> have experimented with the SGO approach in fine-tuning the outcome of the classification. The resulting approach is comparatively faster, but in performing the two-point crossover there is a chance of diverging from the optimal number of regions and ending up with an over-fitting issue; furthermore, executing two computationally heavy algorithms needs a significant computational effort for segmenting the M.R. image.</ns0:p></ns0:div>
<ns0:div><ns0:head>The objective of the paper</ns0:head><ns0:p>The main objective is to formulate a mechanism that can efficiently segment a real-time image into multiple regions based on the available features while minimizing the computational effort. The proposed approach is a semi-trained strategy that upskills the algorithm with some pre-existing real-time scenarios. The proposed SLNS algorithm can itself differentiate the skull region from the brain tissues in the M.R. image without the need for any external pre-processing algorithm, and it can quickly extricate brain tissues from non-brain tissues through the available feature set. The proposed method does not need rigorous training of the data as in other supervised approaches: the model is robust enough with minimal training to recognize the tumorous region in the M.R. image. As a result, the proposed approach needs less computational effort and does not need a pre-processing phase to banish the noise and the non-brain region from the M.R. scan image. Experimentation has been performed to evaluate the accuracy of the proposed approach, and the results seem promising.</ns0:p></ns0:div>
<ns0:div><ns0:head>Weak-Learning Network Based Real-Time Segmentation</ns0:head><ns0:p>The semantic technology for real-time image segmentation is an interdisciplinary approach and an integral part of Fully Convolutional Neural Networks, which are extensively used in real-world scenarios to handle 2D images effectively. The concept of semantic segmentation has been used in coherence with multi-objective function-based algorithms for better results. The weak-learning network is based on partial training and tuning of the algorithm so that it is able to differentiate among multiple classes, and the algorithm is capable of training itself for more efficient segmentation of the image. In every stage of execution for evaluating the tumor's region from the M.R. image, the proposed method segments the image based on the trained information, and the resultant segmented image is further refined in subsequent network layers through multi-objective functions.</ns0:p></ns0:div>
<ns0:div><ns0:head>Partial Training of The Algorithm</ns0:head><ns0:p>The proposed approach is a partially trained approach that hardly needs rigorous training, unlike its counterparts. In the proposed approach, the model is trained with an accepted level of noisy data so that the algorithm is capable of addressing the issue of noisy images, and the noisy images are explicitly marked through labels, as experimented by T. Xiao et al. <ns0:ref type='bibr' target='#b52'>[43]</ns0:ref>, who explained the concept of a binary identifier to differentiate a noisy image from a non-noise labeled image; Misra et al. <ns0:ref type='bibr' target='#b29'>[23]</ns0:ref> have proposed the idea of independently labeling the noise class. Differentiating the image-based noise classes is significantly important, so that each image can be assigned among the noise levels and processed according to its noise variance.</ns0:p><ns0:p>The model is trained to differentiate the brain tissues from non-brain tissues like brain fluids, the skull region, the thalamus, the brainstem, etc., which are to be separated from the brain tissues for better segmentation and assessment of the area of the tumor. Moreover, whenever a new image is fed as input to the algorithm, upon segmentation it is also provided as input to the network's self-learning layer as an additional training image.</ns0:p></ns0:div>
<ns0:div><ns0:head>Self-Learning Network-Based Segmentation</ns0:head><ns0:p>In the automated segmentation of the brain M.R. image, the crucial step is to set up two training sets: a larger, weakly supervised training set and a smaller self-learning training set. The complete flow diagram of the proposed framework can be seen in figure <ns0:ref type='figure'>1</ns0:ref> below.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref> Represents the block diagram of the SLNS based approach</ns0:p></ns0:div>
<ns0:div><ns0:head>Layered Architecture of SLNS</ns0:head><ns0:p>In the layered architecture of the proposed Self-Learning Network-based Segmentation (SLNS) approach, many convolutional layers perform challenging tasks, such as pre-processing the image to remove its noise by applying an adaptive bilateral filter over the pixels that surround each corresponding pixel. The image is then further processed to remove the skull region in the next successive layer, in correlation with the other connected layers. The image is segmented based on the intensity values of the pixels, which represent the texture of each region. Finally, the classifier is used to discriminate the non-brain tissues and the damaged region in the human brain.</ns0:p><ns0:p>In the proposed approach, the segmentation algorithm is trained through different sources, acquiring sufficient knowledge from the predefined segmented dataset and from the knowledge gained in earlier experimentations. Each of the convolutional layers is guided through various techniques, including an adaptive bilateral filter with sublayers that decide the optimal number of clusters and the cluster centroids. The process is fed back for further refinement over multiple iterations.</ns0:p><ns0:p>The image is refined in the earlier stages using an Adaptive Bilateral Filter, which normalizes noise such as Gaussian noise and Poisson noise introduced during image acquisition. The noise reduction is carried out through equation (1):</ns0:p><ns0:formula xml:id='formula_0'>AB_F = \alpha \times \sum_{y \in N(x)} p_b(\|x - y\|) \times p_l(\|i_x - i_y\|)<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>In the above equation, p_b represents the pixel belongingness, p_l represents the pixel likeliness, and the \alpha value is determined by the contraharmonic mean of the neighbouring pixel intensities, stated in equation (2) below:</ns0:p><ns0:formula xml:id='formula_1'>\alpha = \frac{\sum_{y \in N(x)} i_y^{2}}{\sum_{y \in N(x)} i_y}<ns0:label>(2)</ns0:label></ns0:formula>
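<ns0:p>As an illustration of equations (1) and (2), a minimal Python sketch of a bilateral filter follows, using the paper's p_b/p_l naming. The Gaussian kernel shapes and the window size are illustrative assumptions, and the normalizing factor here is the standard reciprocal of the summed weights rather than the paper's contraharmonic-mean \alpha.</ns0:p><ns0:p>import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    # p_b: spatial "belongingness" kernel over the pixel distance ||x - y||
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    p_b = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    pad = np.pad(img.astype(float), radius, mode='reflect')
    out = np.zeros_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            win = pad[r:r + 2 * radius + 1, c:c + 2 * radius + 1]
            centre = pad[r + radius, c + radius]
            # p_l: range "likeliness" kernel over the intensity gap |i_x - i_y|
            p_l = np.exp(-((win - centre) ** 2) / (2 * sigma_r ** 2))
            w = p_b * p_l
            # Normalized weighted average of the neighbourhood intensities.
            out[r, c] = (w * win).sum() / w.sum()
    return out</ns0:p>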
<ns0:p>The Heuristic Approach for Real-time Image Segmentation (HARIS) algorithm is used in segmenting high-dimensional images like medical M.R. images, as a convenient way of identifying the abnormality from the M.R. images. HARIS incorporates two phases in the image segmentation process. In the initial phase, the optimal number of regions in the image is assessed, which assists in accurately recognizing the regions in the image based on features like texture, intensity, and boundary-region pixels; techniques like intraclass correlation and interclass variance are considered for the evaluation. The second phase of HARIS identifies the local best feature within the region with respect to the presumed feature element, which is presumed to be the global best, and the selected feature is used in assembling the pixels into a region based on the feature identified.</ns0:p><ns0:p>In the later stages, the image is segmented based on the intensity level by approximating the minimum number of segments to be 23, from the previous experimental results. The fitness of the number of segments is evaluated through the formula stated below:</ns0:p><ns0:formula>Obj_{fun} = \left( x \times \frac{Tot_{pix}}{pix_{seg}} \right) + \left( y \times \frac{T_s}{N_r} \right)<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>In equation (5), x and y are the deciding factors that control the proposed algorithm's accuracy and efficiency: x determines the interclass variance and y determines the intraclass variance, stated through equations (7) and (9). The maximum interclass variance is determined by equation (6):</ns0:p><ns0:formula>\sigma^2_{inter}(C_t) = \sigma^2_{total}(C_t) - \sigma^2_{prev\_inter}(C_t)<ns0:label>(6)</ns0:label></ns0:formula><ns0:formula xml:id='formula_2'>\sigma^2_{inter}(C_t) = \sum_{x = 0}^{C_t - 1} p(x) \sum_{x = C_t}^{Z - 1} p(x)\, [\mu_1(C_t) - \mu_2(C_t)]^2<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>Equation (7) is the elaborated version of equation (6). C_t in the above equations is the threshold value of the class, evaluated through the Fuzzy Entropy-based Thresholding (FET) approach, and the variables \mu_1, \mu_2 denote the means of the intensities of the image segments.</ns0:p><ns0:p>The fuzzy entropy-based <ns0:ref type='bibr' target='#b33'>[27]</ns0:ref> thresholding technique evaluates how strongly a pixel is correlated to a particular segment; the value of the threshold is determined by equation (8):</ns0:p><ns0:formula>FET = \sum_{ints = 1}^{max} \mu_{tc}(ints)\log_2 \mu_{tc}(ints) - \sum_{ints = max + 1}^{255} \mu_{ntc}(ints)\log_2 \mu_{ntc}(ints)<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>In equation (8), the variables \mu_{tc}, \mu_{ntc} represent the fuzzy memberships of the image segments associated with the tumorous class and the non-tumorous class, respectively. The equation is formulated with respect to Shannon's entropy formulation, and the value of the threshold (FET) lies above 0 and below 255.</ns0:p><ns0:p>The value of the intraclass correlation is determined by equation (9):</ns0:p><ns0:formula>I_{corl} = \frac{\sigma^2_s}{\sigma^2_s + \sigma^2_i}<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>In equation (9), \sigma_s represents the standard deviation with respect to the given image segment, and \sigma_i represents the standard deviation with respect to the entire image under consideration.</ns0:p></ns0:div>
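<ns0:p>To make the segment-fitness computation concrete, a minimal Python sketch of equations (5), (8), and (9) follows; the ramp-shaped membership functions, the split at the candidate threshold, and the weighting factors are illustrative assumptions rather than the exact choices used in the experiments.</ns0:p><ns0:p>import numpy as np

def fet_score(hist, t):
    """Equation (8): fuzzy entropy score for a candidate threshold t,
    given a normalized 256-bin intensity histogram. The ramp-shaped
    memberships mu_tc / mu_ntc below are an assumed illustrative choice."""
    ints = np.arange(256)
    mu_tc = np.clip(1.0 - ints / 255.0, 1e-12, 1.0)   # tumorous-class membership
    mu_ntc = np.clip(ints / 255.0, 1e-12, 1.0)        # non-tumorous membership
    low, high = slice(0, t + 1), slice(t + 1, 256)
    return ((hist[low] * mu_tc[low] * np.log2(mu_tc[low])).sum()
            - (hist[high] * mu_ntc[high] * np.log2(mu_ntc[high])).sum())

def intraclass_correlation(segment, image):
    """Equation (9): I_corl = sigma_s^2 / (sigma_s^2 + sigma_i^2)."""
    s2, i2 = np.var(segment), np.var(image)
    return s2 / (s2 + i2)

def segment_fitness(tot_pix, pix_seg, t_s, n_r, x=0.5, y=0.5):
    """Equation (5): weighted fitness of a candidate number of segments."""
    return x * (tot_pix / pix_seg) + y * (t_s / n_r)</ns0:p>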
<ns0:div><ns0:head>Adaptive Structural Similarity Index:</ns0:head><ns0:p>The adaptive structural similarity index (ASSI) approach is used to classify each pixel between the tumorous and non-tumorous regions. The structural similarity index relies on the likeness among the pixels within the tumor regions, and on three factors: the structural parameter, the luminance parameter, and the contrast parameter. The index is determined as the product of the three factors mentioned above. In this paper, we propose an adaptive Structural Similarity Index that assesses the membership alongside the similarity index to make the outcome more realistic. The similarity index is determined through the equations stated below:</ns0:p><ns0:formula>ASSI(p,q) = \omega \times \{ x(p,q)^{\alpha} \cdot y(p,q)^{\beta} \cdot z(p,q)^{\gamma} \}<ns0:label>(10)</ns0:label></ns0:formula><ns0:formula>x(p,q) = \frac{2\mu_p \mu_q + C_1}{\mu_p^2 + \mu_q^2 + C_1}<ns0:label>(11)</ns0:label></ns0:formula><ns0:formula>y(p,q) = \frac{2\sigma_p \sigma_q + C_2}{\sigma_p^2 + \sigma_q^2 + C_2}<ns0:label>(12)</ns0:label></ns0:formula><ns0:formula>z(p,q) = \frac{\sigma_{pq} + C_3}{\sigma_p \sigma_q + C_3}<ns0:label>(13)</ns0:label></ns0:formula><ns0:formula>\omega = \frac{\sigma_p^2}{\sigma_p^2 + \sigma_e^2}<ns0:label>(14)</ns0:label></ns0:formula><ns0:p>Equation (10) is used in assessing the likelihood that a pixel is part of the tumorous region. The variable \omega determines the probability, which is multiplied by the product of the structural parameter presented in equation (11), the luminance parameter presented in equation (12), and the contrast parameter given in equation (13). The outcome of the proposed model is promising compared to that of its counterparts. The ASSI is used alongside the trained models in the proposed self-learning model.</ns0:p><ns0:p>Algorithm
Data: x(p,q) structural parameter, y(p,q) luminance parameter, z(p,q) contrast parameter
Input: pixel (p,q), where p represents the row and q represents the column
Output: the correlation index of pix(p,q)
initialize the starting pixel
while (i < maximum_number_iterations)
    for each pixel in the input image do
        update x(p,q), y(p,q), z(p,q) and \omega
        approximate the correlation index
        select the regions in the image
        determine the suitable category to assign
        if CI > Threshold then
            assign the corresponding pixel to the tumorous region
        else
            assign the corresponding pixel to the non-tumorous region
        end if
    end for
end while</ns0:p><ns0:p>Figure <ns0:ref type='figure'>2</ns0:ref> represents the proposed model's layered architecture; the layers, such as the convolutional layer, max-pool layer, flattening layer, and the fully connected layers, closely resemble the Convolutional Neural Network model <ns0:ref type='bibr' target='#b15'>[11]</ns0:ref>. The convolutional layer in the proposed architecture is responsible for applying a sequence of kernels to the input M.R. images to extract the features that assist in classifying the various regions in the input image. The convolution layer identifies the spatial association of the pixels based on the selected features, forming the regions, which makes abnormality identification easy. The pooling layer generally follows the convolution layer and reduces the spatial size of the outcome of the previous layer, which in turn minimizes the number of parameters and features considered for further processing and minimizes the computational time. In the proposed model, the MAX pooling approach is considered in reducing the spatial size.
The convolution and max-pooling layers are implemented repeatedly for multiple rounds until all the regions are deeply refined and the features are identified.</ns0:p></ns0:div>
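<ns0:p>A minimal Python sketch of the ASSI computation in equations (10)-(14) follows; the stabilizing constants C1-C3 follow the usual SSIM convention, and the decision threshold is an illustrative assumption.</ns0:p><ns0:p>import numpy as np

C1, C2, C3 = 6.5025, 58.5225, 29.26125  # assumed SSIM-style stabilizers

def assi(patch_p, patch_q, sigma_e2=1.0, alpha=1.0, beta=1.0, gamma=1.0):
    """Equations (10)-(14): adaptive structural similarity of two patches."""
    mu_p, mu_q = patch_p.mean(), patch_q.mean()
    sd_p, sd_q = patch_p.std(), patch_q.std()
    cov_pq = np.mean((patch_p - mu_p) * (patch_q - mu_q))
    x = (2 * mu_p * mu_q + C1) / (mu_p ** 2 + mu_q ** 2 + C1)   # (11) structural parameter
    y = (2 * sd_p * sd_q + C2) / (sd_p ** 2 + sd_q ** 2 + C2)   # (12) luminance parameter
    z = (cov_pq + C3) / (sd_p * sd_q + C3)                      # (13) contrast parameter
    w = sd_p ** 2 / (sd_p ** 2 + sigma_e2)                      # (14) adaptive weight
    return w * (x ** alpha) * (y ** beta) * (z ** gamma)        # (10)

def classify_patch(patch, tumour_reference, threshold=0.5):
    """Decision rule from the algorithm above (threshold assumed)."""
    return 'tumorous' if assi(patch, tumour_reference) > threshold else 'non-tumorous'</ns0:p>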
<ns0:div><ns0:head>Figure 2 Architecture diagram of the proposed SLNS model</ns0:head><ns0:p>The flattening layer is the next successive layer after the sequence of convolution and pooling layers; it transforms the data into a 1-dimensional array that is processed in the later stages by the successive layers. The fully connected layer in the proposed architecture uses the intensity features of the regions recognized as abnormal and normal in the considered M.R. images. A Gated Recurrent Unit (GRU) is used in the proposed model to maintain the interpreted data of the previous experimental predictions, and the same knowledge is used in future abnormality recognition from the M.R. image. The Heuristic Approach for Real-time Image Segmentation (HARIS) algorithm is used to differentiate the tumorous and non-tumorous regions by identifying the appropriate pixels as the features for the prediction.</ns0:p><ns0:p>The Gated Recurrent Unit is used to maintain the knowledge acquired from the previous outcomes. The GRU is efficient, as comparatively fewer parameters are needed to maintain the training data, as stated by Guizhu Shen et al. <ns0:ref type='bibr' target='#b17'>[13]</ns0:ref>; T. Le et al. <ns0:ref type='bibr' target='#b51'>[42]</ns0:ref>. The GRU can address challenging issues like the vanishing gradient problem through its two gates, namely the reset gate and the update gate. HARIS works with both the fully connected and GRU layers to recognize the suitable pixels for the prediction. The cost function of the proposed model with respect to the tensors for the input image I(i,j) can be evaluated through equation (15):</ns0:p><ns0:formula xml:id='formula_6'>j(w,\beta; i,j) = \|f_{w,\beta}(i) - j\|_2^2<ns0:label>(15)</ns0:label></ns0:formula><ns0:p>The variable w_l represents the weight associated with layer l, the variable \beta represents the bias, and f_{w,\beta}(i) represents the kernel operation performed over the elements. The error at layer l in the proposed model is determined through the equation</ns0:p><ns0:formula>e_l = ((w_l)^T e_{l+1}) \odot f'(x_l)</ns0:formula>
<ns0:p>Here, w_l is the weight associated with layer l, and e_{l+1} represents the error associated with layer l+1. The variable f(x_l) is the activation function associated with layer l, which is the Rectified Linear Unit (ReLU) in the proposed model:</ns0:p><ns0:formula xml:id='formula_8'>ReLU(x_l) = max(0, x_l)</ns0:formula><ns0:p>The ReLU-based activation function is linear for values greater than zero and zero for all negative values.</ns0:p><ns0:p>The learning rate of the proposed model is significant in assessing the performance of the model; it controls how the weights in the network are updated with respect to the loss gradient. The learning rate is presumed to be at an optimal level, so that the model moves towards the solution by considering all the significant features in the prediction. A smaller learning rate means slower learning and a delayed solution; on the other hand, a higher learning rate results in a faster solution that may ignore a few features in the learning process. Figure <ns0:ref type='figure' target='#fig_1'>3</ns0:ref> represents the learning rate of the proposed model across various epochs. The saturation point for the learning rate in the proposed model is achieved at epoch 43 and iteration 187.</ns0:p></ns0:div>
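<ns0:p>A small Python sketch of the layer-error recursion and the ReLU activation described above follows; the array shapes and names are illustrative.</ns0:p><ns0:p>import numpy as np

def relu(x):
    """ReLU(x) = max(0, x): linear for positive values, zero otherwise."""
    return np.maximum(0.0, x)

def relu_grad(x):
    """Derivative of ReLU, used when propagating the error backwards."""
    return (x > 0).astype(float)

def layer_error(w_l, e_next, x_l):
    """e_l = ((w_l)^T e_{l+1}) * f'(x_l): backpropagated error at layer l."""
    return (w_l.T @ e_next) * relu_grad(x_l)</ns0:p>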
<ns0:div><ns0:head>Figure 3 The graph representing the Learning Rate</ns0:head><ns0:p>The other hyperparameters associated with the evaluation process include the loss and accuracy functions of the training and validation phases of the proposed model. As can be seen from figure <ns0:ref type='figure' target='#fig_4'>4</ns0:ref>, the left-side graph represents the training and validation loss of the model; after a certain number of epochs, approximately from epoch 15 onward, the training and validation losses are close to each other, which indicates that the proposed model is reasonably good at identifying the abnormalities from the M.R. images. Overfitting is the situation in which the validation loss is much higher than the training loss, which would result in inaccurate classification of the abnormal region because of too much data. Underfitting is the situation in which the training loss is higher than the validation loss, which would result in poor accuracy of the model due to inappropriate selection of the features in the data. Training and validation accuracy are the other parameters considered for evaluating the proposed model; the graph on the right side represents the accuracy measures of the proposed model.</ns0:p></ns0:div>
<ns0:div><ns0:head>Experimental Results And Observations</ns0:head><ns0:p>The experimentation has been carried out on real-time MR images acquired from the open-source LGG dataset of The Cancer Genome Atlas (TCGA), captured from patients with acute glioma. The performance of the proposed approach has been evaluated through diverse metrics like Sensitivity, Specificity, Accuracy, Jaccard Similarity Index, and Matthews Correlation Coefficient. These are assessed from the True Positive count, which designates how many times the proposed approach correctly recognizes a damaged region in the human brain as damaged; the True Negative count, which designates how many times the proposed approach identifies a non-damaged region correctly; the False Positive count, which designates how many times the proposed approach mistakenly identifies a non-damaged region as damaged; and the False Negative count, which designates how many times the proposed approach identifies a damaged region in the brain as non-damaged. Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref> presents the output screens of the proposed model. The performance of the proposed Self-Learning Network-based Segmentation (SLNS) approach is evaluated against other conventional approaches like Twin-Centric GA with SGO, the HARIS approach, and the Convolutional Neural Network with respect to the performance evaluation metrics Sensitivity, Specificity, Accuracy, Jaccard Similarity Index (JSI), and Matthews Correlation Coefficient (MCC). Table 1 presents the experimental evaluations of the proposed model. The experimentation is conducted by executing the code repeatedly for 35 images and scaling up for evaluating the confusion matrix. The accuracy of the model is assessed with a standard deviation of 0.015 in assessing the segmented image, as presented by Agrawal Ritu et al. <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>. From table <ns0:ref type='table' target='#tab_2'>2</ns0:ref> it is observed that the proposed SLNS approach outperforms its counterparts. In many of the cases, the proposed approach performs much better than CNN. The semi-trained approach needs comparatively fewer efforts than CNN at the same performance. From the computational-effort perspective, the proposed algorithm needs almost the same execution time as the CNN algorithm, but more time than the HARIS Algorithm-based segmentation approach. Fuzzy entropy based MRI image segmentation was experimented with by Rajinikanth, V., Satapathy, S.C. <ns0:ref type='bibr' target='#b39'>[32]</ns0:ref>, Y. Chao et al. <ns0:ref type='bibr' target='#b62'>[51]</ns0:ref>. Figure <ns0:ref type='figure'>6</ns0:ref> presents the comparative analysis of the various approaches.</ns0:p></ns0:div>
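<ns0:p>For reference, the evaluation metrics named above follow directly from the four confusion-matrix counts; a small Python sketch with hypothetical counts (not the paper's values) is given below.</ns0:p>
```python
# Sensitivity, Specificity, Accuracy, JSI and MCC from TP/TN/FP/FN;
# the counts here are purely illustrative.
import math

tp, tn, fp, fn = 120, 150, 18, 12

sensitivity = tp / (tp + fn)                       # damaged regions recalled
specificity = tn / (tn + fp)                       # non-damaged regions recalled
accuracy    = (tp + tn) / (tp + tn + fp + fn)
jsi         = tp / (tp + fp + fn)                  # Jaccard Similarity Index
mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) # Matthews Correlation Coefficient

print(f"Sen={sensitivity:.3f} Spe={specificity:.3f} "
      f"Acc={accuracy:.3f} JSI={jsi:.3f} MCC={mcc:.3f}")
```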
<ns0:div><ns0:head>Figure 6. Graph representing the comparative analysis of various approaches</ns0:head><ns0:p>The performance of the proposed model is assessed through the HARIS algorithm, which decides the best possible number of segments to assist in identifying the abnormalities in the image and assigns the pixels to a segment by identifying the ideal pixel in the segment. The second objective function of the HARIS algorithm is replaced with the fuzzy membership assessment model, and the performance of the resulting model is evaluated and presented in table 3. The fuzzy membership is evaluated through equation (18), as stated by A. V. Vaidya et al. <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref></ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In equation (18), the variable vertex_pix represents the vertex pixel that is considered in the image for processing. Figure <ns0:ref type='figure'>7</ns0:ref> presents the assessed computational time consumed by the various algorithms. From the graphical values it can be observed that the proposed SLNS approach is quite efficient. SLNS consumes more time than the HARIS-based approach, but from a precision point of view the proposed SLNS offers a favourable trade-off. The computational time of CNN and SLNS is almost the same, and the computational lag in the proposed approach is because of the self-training, which needs additional effort. The proposed model does not need any significant training, unlike the neural network-based models.</ns0:p><ns0:p>The proposed model's performance is assessed against various existing algorithms like thresholding, Seeded Region Growing, Fuzzy C-Means, and Artificial Neural Network models with respect to evaluation metrics like sensitivity, specificity, and accuracy, as presented by Alam MS et al. <ns0:ref type='bibr' target='#b5'>[4]</ns0:ref>. The comparative analysis of the approaches with the obtained values is shown in table <ns0:ref type='table'>4</ns0:ref>. The performances of the various algorithms compared with the proposed SLNS have also been assessed with respect to the size of the tumor. It is observed from the clinical evidence that the resultant outcomes are better than their counterparts, and the results of the proposed method are pleasing and very accurate. The tabulated values in table 4 represent the resultant experimental values.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 4. Table representing the performance analysis of SLNS approach</ns0:head><ns0:p>Table 4 represents the progress of the tumour growth, which is identified based on the texture information of the abnormal region, as stated by P. Naga Srinivasu et al. <ns0:ref type='bibr' target='#b48'>[40]</ns0:ref>. The abnormal region is classified into the Tumour Core (TC), which depicts the actual region of the tumour, and the Enhanced Tumour (ET), which depicts the recent enhancement that has taken place in the tumour region and is presumed to be the progress of the tumour. The Whole Tumour (WT) is the region that includes both the tumour core and the enhanced tumour regions. The enhanced tumour region presents the progress of the abnormality, which will assist the physician in taking suitable treatment decisions.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>It is observed from the practical implementation of the proposed approach that the resulting outcome is accurate and precise in identifying abnormality in the human brain. In conventional fully supervised approaches like Convolutional Neural Networks, the algorithms need to be rigorously trained for better accuracy, while the proposed approach generates comparably good outcomes with considerably less training of the algorithm. Moreover, the proposed approach uses a self-learning strategy that produces the foremost outcome.</ns0:p><ns0:p>The original image need not be pre-processed to remove the noise present in the image. The proposed algorithm itself can normalize the noise in the image while automatically differentiating non-brain tissues like the skull from brain tissues. However, the proposed approach could be further optimized by incorporating a self-correcting convolution layer through an ancillary kernel.</ns0:p></ns0:div>
<ns0:div><ns0:head>Future Scope</ns0:head><ns0:p>The proposed model, based on the self-learning mechanism, is suitable for handling uncertain data more effectively through its previous experiences. The model's performance can be further improved by incorporating a Long Short-Term Memory (LSTM) component for efficiently handling the training data for more accurate prediction of the progress of tumor growth, as LSTM components are efficient in holding memories for a longer time by preserving dependencies based on the network's information. The incorporated memory elements can retain the state information over specific iterations constructed through multiple gates. The proposed model can also be enhanced by incorporating a feedback component that would help assess tumor growth progress by correlating with its previous outcome.</ns0:p></ns0:div>
<ns0:div><ns0:head>Availability of Dataset</ns0:head><ns0:p>The datasets used for the experimentation are openly available over the web together with the ground truth that supports evaluating the performance of the proposed model. The TCGA-LGG dataset can be downloaded from https://wiki.cancerimagingarchive.net/display/Public/TCGA-LGG, and the dataset related to the experimentation is openly available at https://www.kaggle.com/mateuszbuda/lgg-mri-segmentation/version/1.</ns0:p></ns0:div>
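<ns0:p>For readers who wish to reproduce the experiments, a minimal Python sketch for pairing the Kaggle LGG slices with their ground-truth masks is shown below; the local directory layout is an assumption about how the downloaded archive unpacks.</ns0:p>
```python
# Minimal loader for the Kaggle LGG MRI segmentation archive; the
# "lgg-mri-segmentation/kaggle_3m" layout is an assumption.
import glob
import numpy as np
from PIL import Image

image_paths = sorted(glob.glob("lgg-mri-segmentation/kaggle_3m/*/*[0-9].tif"))
mask_paths  = [p.replace(".tif", "_mask.tif") for p in image_paths]

def load_pair(img_path, mask_path, size=(256, 256)):
    img = np.asarray(Image.open(img_path).convert("L").resize(size)) / 255.0
    mask = np.asarray(Image.open(mask_path).convert("L").resize(size)) > 0
    return img.astype(np.float32), mask.astype(np.float32)

x0, y0 = load_pair(image_paths[0], mask_paths[0])
print(x0.shape, y0.shape, int(y0.sum()))  # tumour pixels in the ground truth
```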
<ns0:figure xml:id='fig_1'><ns0:head>Equations (2)-(4)</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>α = (i₁² + i₂² + i₃² + … + iₙ²) / (i₁ + i₂ + i₃ + … + iₙ). The values of the pixel belongingness (Pb) are demonstrated in equation (3): P_b = Σ_p c_p(p)ˣ · p / Σ_p c_p(p)ˣ, where c_p determines the membership coefficient, p designates the pixel, and x designates the fuzzifier metric of belongingness. The value of the pixel likeliness (p_l) is designated through equation (4): p_l = p(s) × p(cp|s) / Σ_k p(k) × p(cp|k), where p(s) denotes the probability of likeliness with segment s, p(cp|s) represents the conditional probability concerning segment s, and the expectation in the denominator runs over all the classes.</ns0:figDesc></ns0:figure>
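<ns0:p>A short NumPy sketch of equations (2) and (3), computing the contra-harmonic mean of a pixel neighbourhood and the fuzzy pixel belongingness, follows; the neighbourhood intensities and membership coefficients are made-up values for illustration.</ns0:p>
```python
# Contra-harmonic mean (equation 2) and pixel belongingness P_b (equation 3);
# all numbers illustrative.
import numpy as np

neigh = np.array([120.0, 118.0, 131.0, 96.0, 140.0])  # neighbouring intensities
alpha = np.sum(neigh ** 2) / np.sum(neigh)            # contra-harmonic mean

cp = np.array([0.9, 0.7, 0.4, 0.2, 0.8])  # membership coefficients c_p
x = 2.0                                   # fuzzifier metric of belongingness
p_b = np.sum((cp ** x) * neigh) / np.sum(cp ** x)     # weighted belongingness

print(round(alpha, 2), round(p_b, 2))
```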
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 The image representing the hyper-parameters</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 Output screens of proposed SLNS approach</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='20,42.52,178.87,525.00,206.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,178.87,525.00,411.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,178.87,525.00,242.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,178.87,525.00,271.50' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Table represents the confusion matrix of the proposed model</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Performance analysis of the proposed approach</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>The membership is evaluated as follows <ns0:ref type='bibr' target='#b23'>(18)</ns0:ref></ns0:figDesc><ns0:table /><ns0:note>membership = vertex_pix − (vertex_pix − I′_pix) / (1 − mem_pix(I′_pix))</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>The variable vertex_pix represents the vertex pixel in the region of the MR image; I′_pix represents the instance data, that is, the pixels considered in the image for processing; and mem_pix(I′_pix) presents the minimum correlation value with respect to the MR image segment. The segment vertex is assessed through equation (19) stated below: vertex = vertex_pix + (J′_pix − vertex_pix) / (1 − mem_pix(J′_pix)), where J′_pix represents the instance pixel in the segment and mem_pix(J′_pix) represents the membership value. It can be observed from table 3 that the fuzzy membership based HARIS algorithm performs slightly better than the traditional HARIS algorithm.</ns0:figDesc><ns0:table /></ns0:figure>
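<ns0:p>A minimal sketch of the fuzzy update used in place of HARIS's second objective is given below: equation (18) scores an instance pixel against the segment vertex, and equation (19) moves the vertex toward strongly correlated pixels. The numeric values are illustrative.</ns0:p>
```python
# Fuzzy membership (equation 18) and segment-vertex update (equation 19);
# intensities and the minimum-correlation value are illustrative.
def fuzzy_membership(vertex_pix, inst_pix, min_corr):
    # equation (18): membership of instance pixel I'_pix w.r.t. the vertex
    return vertex_pix - (vertex_pix - inst_pix) / (1.0 - min_corr)

def update_vertex(vertex_pix, inst_pix, mem):
    # equation (19): shift the segment vertex toward the instance pixel
    return vertex_pix + (inst_pix - vertex_pix) / (1.0 - mem)

m = fuzzy_membership(vertex_pix=140.0, inst_pix=128.0, min_corr=0.35)
v = update_vertex(vertex_pix=140.0, inst_pix=128.0, mem=0.35)
print(m, v)
```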
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Table representing the performance analysis with the Fuzzy Component. Figure 7. Graph representing the computational time of various approaches</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "Reviewer 1
Basic reporting
Reviewer Comment#1
The authors introduce a lot of existing studies. To convince the future readers, the paper needs a comparison table, which uses columns for the existing studies and the manuscript, and rows for the attributes.
Authors Comment: The reviewer's recommendations are carefully addressed in the manuscript. A new comparative-analysis table is incorporated, in which various evaluation metrics like Sensitivity, Specificity, Accuracy, and Computational Time are considered. The new table is added at line 717, and the data related to the table together with its citation is presented in lines 717 through 725 of the manuscript.
Reviewer Comment#2
Figures 4 and 5 need complete graph legends.
Figures 4 and 5 are not cited. Please explain these figures.
Authors Comment:
As per the directions of the reviewers, the graphs are corrected by placing the legends at the appropriate positions, and the figure citations are added in the paragraph at lines 725 and 726. The figure numbers are modified to figures 6 and 7, respectively.
Reviewer Comment#3
a majority of those approaches lack inaccuracy
-> a majority of those approaches lack accuracy
Authors Comment:
The correction is made at line 92 of the manuscript, and the complete manuscript has been checked for any other such mistakes.
Reviewer Comment#4
'is presented in figure 4 below' in Line 360 should be 'is presented in figure 6 below'.
Authors Comment:
The recommended correction is made, and all the figures are re-ordered due to the addition of a few new figures, such as the learning-rate and hyperparameter plots. The figure numbers are updated and cited at the appropriate locations in the manuscript.
Reviewer Comment#5
Please add the time unit (e.g., seconds or milliseconds) to Figure 6.
Authors Comment:
The computational time is assessed in minutes, and figure 7 has been updated accordingly, as per the reviewer's recommendation.
Experimental design
Reviewer Comment#1
The authors claim that 'The article's main objective is ... computationally and technically efficient than many other approaches like CNN and HARIS algorithm.' in Abstract.
However, the computational cost of the proposed algorithm is higher than those of CNN or HARIS, according to Figure 6. The objective of the manuscript should be clarified precisely.
Authors Comment:
The reviewer has raised a pivotal point concerning the performance of the proposed model. The technical trade-off among the performance, the availability of pre-trained data, and the accuracy of the prediction is discussed in various contexts in the paper. In the abstract, the statement “The article's main objective is to mechanize an efficient approach that can accurately assess the damaged Region of the tumors through automated segmentation that needs minimal training and capable of performing self-Learning” highlights the objective of mechanizing a model that can work with less training than its counterparts. However, the proposed model is a self-learning model that needs considerable time to train itself to generate reasonable predictions. In the Experimental Results and Observations section of the manuscript, at lines 720 through 722, it is mentioned that “The computational time of CNN and SLNS is almost the same, and the computational lag in the proposed approach is because of self-training that need additional efforts.”
The self-learning process of the proposed model consumes considerable time, and the same has also been mentioned in the future scope of the document, where a Long Short-Term Memory module can be incorporated to improve the performance of the model. The architecture diagram, Figure 2, is added to the manuscript at line 586, and the corresponding paragraphs are added at lines 572 to 584 in the proposed model section of the manuscript.
Reviewer Comment#2
Please explain the HARIS algorithm in detail. Moreover, please add the reference of the paper which proposes the HARIS.
Authors Comment:
As per the recommendations of the reviewer, the information about the HARIS algorithm, along with its citation, is added to the manuscript at lines 489 through 505. The mathematical equations corresponding to the HARIS algorithm are already explained in the manuscript from lines 506 to 530.
Validity of the findings
Reviewer Comment#1
Your code does not work.
For example,
'from tensorflow.keras.preprocessing.image import ImageDataGnerator' in Line 9 of pcnn-Copy1.py should be
'from tensorflow.keras.preprocessing.image import ImageDataGenerator'.
Please upload the correct source files and also add a readme file, which contains information about the sources and the execution procedure.
Authors Comment:
The code has been checked for the database connection, and the corrected working code has been re-uploaded to the dashboard in a zip file along with the readme file.
Reviewer Comment#2
In the evaluations, the authors compared the proposed algorithm with Twin Centric GA with SGO, HARIS, and CNN.
Please describe why the authors selected these algorithms to be compared.
Further, please describe why the authors did not compare many other algorithms introduced in Introduction Section.
Moreover, please describe the details of Twin Centric GA with SGO.
Authors Comment:
The techniques like GA with SGO, HARIS, and CNN are considered for evaluation in the paper based on the author's earlier implementations, which makes comparing the proposed model with these conventional approaches convenient and feasible; that is the reason why those methods are specifically chosen for the comparative analysis in the current paper. As per the reviewer's advice, the same methods are incorporated in the introduction section. The information about the Twin Centric GA with SGO is added to the introduction section along with the citation, from lines 378 through 389.
Reviewer 2
Reviewer Comment#1 A few training time learning error convergence plots are required.
Authors Comment: The graphs related to the learning rate and the graphs related to the training and validation loss are incorporated into the manuscript as per the recommendation of the reviewer. Figure 3 at line 626 represents the learning rate, and the graphs of the hyperparameters are added as figure 4 at line 642.
Reviewer Comment#2 Some details regarding the approach adopted for fixing the network architecture should be incorporated.
Authors Comment:
The detailed implementation of the HARIS approach, along with the role of the Gated Recurrent Unit in maintaining the learning data, is elaborated in the revised manuscript at lines 588 through 624, and the block diagram that highlights the adopted approach is added to the manuscript as figure 2 at line 587.
Reviewer Comment#3 At what stage of the network, does the learning saturate?
Authors Comment:
The model is a self-learning model that learns on its own from the previous experimental outcomes with the assistance of the HARIS algorithm. The graph representing the learning rate of the proposed model is added to the Self-Learning Network-Based Segmentation section of the paper, together with the figure citation and the text about the learning rate. The saturation point for the learning rate in the proposed model is achieved at epoch=43 and iteration=187; the same has been added in the manuscript at lines 622 through 624.
Reviewer Comment#4 A confusion matrix should be used to report the best and worst time performance of the network.
Authors Comment:
As suggested by the reviewer, a new table with the confusion matrix that reports the best and worst performances of the proposed model is added to the Experimental Results and Discussion section of the paper at line 676. The corresponding text related to the added table is presented in the paragraph at lines 652-654.
Reviewer Comment#5 What shall be the difference in performance with and without the fuzzy portion adopted in the work?
Authors Comment:
The fuzzy membership is incorporated in the second phase of the HARIS algorithm for evaluating the fitness value of a neighbouring pixel to be part of the segment. A new table is added at line 715 in the revised manuscript.
Reviewer Comment#6 Some results relating the technique to the overall performance of the network should be included.
Authors Comment:
The results of the fuzzy entropy-based model obtained from the existing studies are incorporated into the manuscript in table 2 at line 678.
Reviewer Comment#7 What cost functional is used to optimize the performance of the network?
Authors Comment: The data associated with the cost function is added to the manuscript at lines 602 to 615, as per the recommendation of the reviewer.
We would like to thank the reviewers for their constructive suggestions, which have helped us build a stronger paper. We have made our best efforts to address all the reviewers' comments and incorporate them into the manuscript.
" | Here is a paper. Please give your review comments after reading it. |
196 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>In recent years, as medical imaging technology has advanced, medical diagnosis, the initial assessment of ailments, and abnormality detection have remained challenging for radiologists. Magnetic Resonance Imaging is one such predominant technology used extensively for the initial evaluation of ailments. The primary goal is to mechanize an approach that can accurately assess the damaged region of the human brain through an automated segmentation process that requires minimal training and can learn by itself from previous experimental outcomes. It is computationally more efficient than other supervised learning strategies such as CNN deep learning models. As a result, the investigation and statistical analysis of the abnormality become much more comfortable and convenient. The proposed approach's performance is quite pleasing compared to its counterparts, with an accuracy of 77% with minimal training of the model. Furthermore, the performance of the proposed training model is evaluated through various performance evaluation metrics like sensitivity, specificity, Jaccard Similarity Index, and Matthews correlation coefficient, where the proposed model is productive with minimal training.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Medical imaging technology has been critical in medical diagnostics for accurately detecting the presence of malignant tissues in the human body. There are divergent imaging technologies available for the diagnosis of abnormalities in various organs, including X-Ray technology, as stated by <ns0:ref type='bibr' target='#b49'>(Vaga & Bryant, 2016)</ns0:ref>, Computed Tomography (CT) technology, as stated by <ns0:ref type='bibr' target='#b51'>(Venkatesan et al., 2017)</ns0:ref>, Magnetic Resonance Imaging (MRI) technology <ns0:ref type='bibr' target='#b4'>(Chahal et al., 2020)</ns0:ref>, <ns0:ref type='bibr'>(Srinivasu et al., 2020)</ns0:ref>, and Positron Emission Tomography (PET) scans <ns0:ref type='bibr' target='#b28'>(Norbert et al., 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b38'>(Schillaci et al., 2019)</ns0:ref>. All the approaches mentioned above are non-invasive and capable of diagnosing malignant tissue efficiently. In many cases, imaging technology is aptly suitable for identifying abnormalities in the human body; it helps the physician provide better treatment and assists in planning the clinical procedure. The current study focuses on mechanizing a model capable of diagnosing the MRI imaging and estimating the extent of the damage.</ns0:p><ns0:p>Medical imaging technology has advanced tremendously, revealing every minute ailment in the human body, so that diseases can be diagnosed efficiently at a significantly earlier stage. The proposed study primarily focuses on tumor identification and volumetric estimation of the tumorous region in the human brain. There are numerous machine learning models available for accurately identifying the tumorous regions from MRI images. Yet most of these techniques are imprecise, computationally demanding, and largely semi-automated procedures for tumor identification. To circumvent the limitations of the above methodologies, several automated techniques have been proposed, including conventional models like the Genetic Algorithm (GA), as stated by <ns0:ref type='bibr'>(Srinivasu et al., 2020)</ns0:ref>, Artificial Neural Networks (ANN), as stated by <ns0:ref type='bibr' target='#b55'>(Wentao et al., 2020)</ns0:ref>, and Deep Learning techniques, as mentioned by <ns0:ref type='bibr'>(Deepalakshmi et al., 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b47'>(Srinivasu et al., 2021)</ns0:ref>.</ns0:p><ns0:p>Recognizing ailments involves several stages, including pre-processing the MRI images to address noise, such as speckle noise, Poisson noise, and Gaussian noise, introduced into the image at various stages during the process of rendering the image. The pre-processing step also enhances the contrast and texture of the image, which assists in recognizing the malignant tissues in a faster and more convenient way. Once the noise in the image is normalized, the MRI image is processed to remove the skull region. The MRI image is then segmented to recognize the malignant region, followed by volumetric estimation for analyzing the impact of the damage. The results of the automated segmentation approaches are refined over multiple iterations for a precise outcome.</ns0:p><ns0:p>Generally, the medical images are segmented to locate the abnormal regions in the image based on texture-based information.
The process of image segmentation plays a vital role in the identification of the malignant areas in the MRI image. There are various semi-automated and automated ways of segmenting MRI images efficiently, but there is a considerable trade-off between the accuracy and the computational effort put forward by each of those approaches. Supervisory models yield better accuracy, but they need tremendous training data, which demands more computational resources.</ns0:p><ns0:p>The K-Means algorithm is one such semi-automated approach that segments the MRI image based on a pre-determined number of segments, as stated by <ns0:ref type='bibr' target='#b2'>(Alam et al., 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b4'>(Chahal et al., 2020)</ns0:ref>. It is one of the simplest techniques for the segmentation of images. The major drawback of this approach is that the total number of segments is fixed well before the segmentation begins: if the k value is too large, it leads to over-segmentation, and if the k value is too small, it leads to under-segmentation of the image.</ns0:p><ns0:p>Segmentation of MR images is also possible using completely automated techniques such as thresholding, as stated by <ns0:ref type='bibr' target='#b42'>(Sowjanya et al., 2018)</ns0:ref>, <ns0:ref type='bibr' target='#b19'>(Kumar et al., 2015)</ns0:ref>. The threshold-based method works comparatively better on homogeneous images, but its accuracy mainly depends on the approximated threshold value. A minimal deviation from the ideal threshold value results in an exponential variance in the segmented image.</ns0:p><ns0:p>The other predominantly used segmentation technique is the Region Growing <ns0:ref type='bibr' target='#b7'>(Dalvand et al., 2020)</ns0:ref> strategy, which can effectively handle the problem of over- and under-segmentation often encountered in K-Means-based approaches. Experimental studies on the Region Growing-based approach have proven improvements in sensitivity and specificity for precise identification of the malignant region in the human brain <ns0:ref type='bibr' target='#b32'>(Punitha et al., 2018)</ns0:ref>. However, the seed point must be chosen appropriately, which needs tremendous effort in recognizing the optimal seed point. If the initial seed points are chosen inappropriately, the entire segmentation of the image goes wrong, and the approach is also computationally demanding. <ns0:ref type='bibr' target='#b39'>(Sheela et al., 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b26'>(Mathur et al., 2016)</ns0:ref> proposed edge-based segmentation, a relatively simple approach. But in the suggested edge-based approach, the image must be pre-processed rigorously to elaborate the edge-related information, which involves high computational effort. <ns0:ref type='bibr' target='#b31'>(Pandav, 2014</ns0:ref><ns0:ref type='bibr'>), (Liang&Fu, 2018</ns0:ref><ns0:ref type='bibr'>), (Kornilo et al., 2018)</ns0:ref> proposed watershed-based image segmentation, in which marker-controlled watershed segmentation effectively limits the number of segmented areas identified using edge information. Limiting the segments assists in avoiding the over-fitting problem experienced in the majority of the user-intervened models and the supervised models.
However, in watershed-based segmentation, considerable effort is needed during the pre-processing phase to separate the foreground and background regions in the image. <ns0:ref type='bibr' target='#b48'>(Sudharania et al., 2016)</ns0:ref> proposed morphological-based segmentation, which exhibits exceedingly high accuracy on low-intensity MRI images. But it needs several iterations to converge to a better solution, which requires more computational effort. <ns0:ref type='bibr' target='#b17'>(Jude et al., 2015)</ns0:ref>, <ns0:ref type='bibr' target='#b24'>(Madallah et al., 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b53'>(Verma et al., 2015)</ns0:ref> proposed Fuzzy C-Means (FCM)-based MRI image segmentation, a highly accurate, well-founded, and rapid approach that effectively handles the uncertainty of allocating pixels among multiple segments by distributing pixels to the appropriate segment depending on the membership value determined at each iteration. In fuzzy-based segmentation, the pixels are assigned to their corresponding region based on a membership function value that lies in the range of 0 to 1, where the value 1 indicates that the corresponding pixel is more likely to be associated with the corresponding segment. The biggest problem with the FCM technique is determining the membership value at each iteration for all pixels in the image, which needs additional processing effort. In addition, adjusting the lower and upper approximations for the randomness is complex in FCM. <ns0:ref type='bibr' target='#b1'>(Al-Shamasneh et al., 2020)</ns0:ref> suggested MRI image segmentation through contour-based segmentation, which efficiently recognizes various brain tissues for both homogeneous and heterogeneous malignant cases, but it is not very efficient for noisy images and high-intensity images. <ns0:ref type='bibr'>(Li et al., 2018)</ns0:ref> proposed a level-set-centric technique for MRI image segmentation based on a pre-approximated threshold value. The level-set method is a very complex approach, and the approximated threshold value determines the efficiency of the segmentation. <ns0:ref type='bibr' target='#b54'>(Wang et al., 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b11'>(Diaz & Boulanger, 2015)</ns0:ref> proposed atlas-based segmentation for MRI images, a straightforward approach that is computationally faster than manual labeling and is independent of the deformation model. The suggested approach merges the intensity template image and a segmented reference image during registration to further segment the image. In the atlas-based approach, choosing the initial seed point is crucial, as the entire segmentation scenario is based on the seed-point selection, and the efficiency of this approach depends on the precision of the topological graph. <ns0:ref type='bibr' target='#b52'>(Venmathi et al., 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b23'>(Liu et al., 2018)</ns0:ref> proposed the Markov Random Field (MRF) to segment the MRI image through a Gaussian mixture model that incorporates the association of neighboring pixels in a practical and mathematical sense. The method works on texture-based information. The Markov random field-based approach for the segmentation of the image includes spatial information that aids in normalizing noise and handling overlap between the neighboring regions.
<ns0:ref type='bibr' target='#b16'>(Javaria et al., 2019)</ns0:ref> suggested an approach that uses the spatial vector alongside Gabor decomposition to distinguish the malignant and non-malignant tissues in the MRI image through a Bayesian classifier. Despite its accuracy, MRF needs more computational effort and a systematic procedure for picking the parameters. <ns0:ref type='bibr' target='#b50'>(Varuna et al., 2018)</ns0:ref>, <ns0:ref type='bibr' target='#b13'>(Gibran et al., 2020)</ns0:ref> suggested a Probabilistic Neural Network (PNN) in combination with Learning Vector Quantization (LVQ) that assists in reducing the computational time by optimizing all the hidden layers in the proposed method. The region of interest used for designing the network must be chosen carefully, as the image segmentation quality depends on the exactness of the region of interest. <ns0:ref type='bibr' target='#b37'>(Sandhya et al., 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b30'>(Mei et al., 2015)</ns0:ref>, <ns0:ref type='bibr' target='#b8'>(De & Guo, 2015)</ns0:ref> suggested a Self-Organized Map (SOM)-based algorithm for segmentation of the MRI image that includes the spatial data and the grey-level intensity information in the segmentation of the image. It is outstanding in separating the malignant tissues, but the SOM engine has to be trained rigorously for better accuracy; the quality of the image segmentation is directly dependent on the training set, and mapping is one of the complicated tasks in a SOM-based approach. <ns0:ref type='bibr' target='#b15'>(Havaei et al., 2016</ns0:ref><ns0:ref type='bibr'>), (SivaSai et al., 2021)</ns0:ref>, <ns0:ref type='bibr' target='#b55'>(Wentao et al., 2020)</ns0:ref> proposed a Deep Neural Network-based approach that is computationally efficient and highly precise in determining the abnormality from the medical image. Yet the primary problem is that the implementation procedure and the machine must be computationally capable, with adequate processing resources, to perform the image segmentation in a reasonable time, which is not always technically feasible. <ns0:ref type='bibr' target='#b35'>(Sachdeva et al., 2016)</ns0:ref> suggested multiple hybrid approaches for the segmentation of MRI images, in which the multiclass categorization of malignant tissues is done efficiently and high accuracy is attained through machine learning and soft computing techniques. The Pulse Coded Neural Network (PCNN) is a technique used in coherence with semi-automated methods for better segmentation. While segmenting the MRI image, the region of interest can be perceived through a region-growing approach that selects the initial points assumed to be the seed points in the earlier stages. Secondly, a Feed Forward Back Neural Network (FFBNN) refines the seed points, feeding back the input until the input turns uniform. <ns0:ref type='bibr' target='#b33'>(Qayyum et al., 2017)</ns0:ref> attained multiple sub-images with multi-resolution data by employing the Stationary Wavelet Transform (SWT). A spatial kernel is applied over the resultant sub-images to locate the demographic features. With the help of the extracted features and the Stationary Wavelet Transform coefficients, the multi-dimensional features are built. The identified features and coefficients are given as input to the Self-Organized Map and Linear Vector Quantization, which are finally used to refine the results.
<ns0:ref type='bibr'>(Srinivasu et al., 2020)</ns0:ref> proposed a Twin-Centric Genetic Algorithm with Social Group Optimization (SGO) that produced precise outcomes in tumor identification from brain MR images. The twin-centric GA model is comparatively faster than the conventional GA approach, with a faster crossover rate that results in new segments. The mutation operation is performed based on the fitness value to reform segments with other strongly correlated regions in the image. The outcome of the Twin GA is refined through the Social Group Optimization approach by selecting the appropriate features in the image for the segmentation. <ns0:ref type='bibr' target='#b10'>(Dey et al., 2018)</ns0:ref> experimented with the SGO approach in fine-tuning the outcome of the classification. The current approach is comparatively faster, but in performing the two-point crossover there is a chance of diverging from the optimal number of regions and ending up with an over-fitting issue. Moreover, the execution of two computationally heavy algorithms needs significant computational effort to segment the MR image.</ns0:p><ns0:p>The main objective is to formulate a mechanism that can efficiently segment the real-time image into multiple regions based on the available features while minimizing the computational effort. The proposed approach is a self-trained strategy that upskills the algorithm with some pre-existing real-time scenarios. Previous experimental results show that the proposed SLNS algorithm can differentiate the skull region from the brain tissues in the MRI image without any external pre-processing algorithm. It can quickly extricate brain tissues from the non-brain tissues through the available feature set. The proposed model is robust in handling images with an acceptable noise level, and it needs less computational effort for training the model. The experimentation has been performed to evaluate the accuracy of the proposed approach, and the upshot seems to be promising.</ns0:p></ns0:div>
<ns0:div><ns0:head>Self-Learning Network Based Real-Time Segmentation</ns0:head><ns0:p>Cognitive technology in real-time image segmentation is a multidisciplinary technique that is an intrinsic aspect of Fully Convolutional Neural Networks. Fully Convolutional Neural Networks are widely utilized in real-world settings to successfully handle 2D images. The concept of semantic segmentation is used in coherence with multi-objective function-based algorithms for better results. The weak learning network is based on partial training of the algorithm and tuning of the algorithm so that it is able to differentiate among multiple classes. In addition, the algorithm is capable of training itself for more efficient segmentation of the image. At every stage of execution for evaluating the tumor's region from the MRI image, the proposed method segments the image based on the trained information. The resultant segmented image is further refined by the multi-objective functions of the subsequent network layers.</ns0:p></ns0:div>
<ns0:div><ns0:head>Partial Training of The Algorithm</ns0:head><ns0:p>In contrast to its competitors, the suggested methodology is a partly trained strategy that does not need extensive training. As shown by <ns0:ref type='bibr' target='#b56'>(Xiao et al., 2015)</ns0:ref>, the recommended strategy involves training with the original image and an acceptable quantity of noisy data to ensure that the algorithm can handle noisy images. Additionally, the noisy images are specified in the labels. The binary identifier notion is used to distinguish a noisy image from a non-noisy labeled image <ns0:ref type='bibr' target='#b27'>(Misra et al., 2016)</ns0:ref>, which proposed the idea of independently labeling the noise class. Differentiating images based on noise classes is significantly essential, since an image can then easily be assigned a noise level and processed according to its noise variance.</ns0:p><ns0:p>The model is trained to distinguish brain tissues from non-brain tissues such as brain fluids, the skull region, the thalamus, and the brainstem, which must be isolated from brain tissues for improved segmentation and evaluation of the tumor's extent. Whenever a new image is fed into the algorithm as input, post segmentation, the image is fed into the network's self-learning layer as input for further images.</ns0:p></ns0:div>
<ns0:div><ns0:head>Self-Learning Network-Based Segmentation</ns0:head><ns0:p>In the automated segmentation of the brain MRI image, the crucial task is to set up training sets that are large enough to train and validate the model, and it is challenging to obtain a self-sufficient dataset with ground facts for both training and validating. On the other hand, self-learning models need a relatively smaller training and validation set. Furthermore, the experimental results used in self-training the model ensure better results in future experimentation based on the model's previous experimental knowledge. The complete flow diagram of the proposed framework can be seen in figure <ns0:ref type='figure'>1</ns0:ref> stated below.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref> Represents the block diagram of the SLNS-based approach</ns0:p></ns0:div>
<ns0:div><ns0:head>Layered Architecture of SLNS</ns0:head><ns0:p>The layered architecture of the proposed Self-Learning Network-based Segmentation (SLNS) approach includes convolutional layers that perform various challenging tasks, such as pre-processing the image to remove its noise by applying an adaptive bilateral filter over the pixels that surround the actual corresponding pixel. The image is then further processed to remove the skull region in the next successive layer in correlation with the other connected layers. Next, the image is segmented based on the pixels' intensity values, which represent the region's texture. Finally, the classifier is used to discriminate the non-brain tissues and the damaged region in the human brain.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The suggested methodology trains the segmentation algorithm using a renewable avenue that gets adequate information from the preset segmentation dataset and previous experimental results. Additionally, each convolutional layer is driven by a variety of kernels, including an adaptive bilateral filter with sublayers that determine the appropriate number of clusters and cluster centroids, a Sobel filter that is used in edge detection, and other 2D convolution filters, generally of size (3x3), which slide across the original input image. The technique is repeated over multiple iterations to identify all the key features in the input image. The image is refined at the earlier stages using an adaptive bilateral filter that normalizes noise such as Gaussian noise and Poisson noise introduced during image acquisition. The noise reduction is carried forward through the following equations</ns0:p><ns0:p>(1)</ns0:p><ns0:formula xml:id='formula_0'>AB_F = α × Σ_p p_b(||x − y||) × p_l(||i_x − i_y||)</ns0:formula><ns0:p>From the above equation, the α value is determined by the contra-harmonic mean of the neighboring pixels given in equation (2) stated below, P_b represents the pixel belongingness, and p_l represents the pixel likeliness.</ns0:p></ns0:div>
<ns0:div><ns0:p>(2)</ns0:p><ns0:formula xml:id='formula_1'>α = (i₁² + i₂² + i₃² + … + iₙ²) / (i₁ + i₂ + i₃ + … + iₙ)</ns0:formula><ns0:p>The values of the pixel belongingness (P_b) are demonstrated in equation (3), and the pixel likeliness (p_l) in equation (4).</ns0:p><ns0:p>The Heuristic Approach for Real-time Image Segmentation (HARIS) algorithm is used to segment high-dimensional images like medical MR images to identify the abnormality from the MR images. The HARIS approach incorporates two phases in the image segmentation process. In the initial phase, the optimal number of regions in the image is assessed, which assists in accurately recognizing the regions in the image based on texture, intensity, and boundary-region-related pixels. Techniques like intraclass correlation and interclass variance are considered for this evaluation. The second phase of the HARIS approach identifies the local best feature within the region following the presumed feature element that is presumed to be the global best. The selected feature is used in assembling the pixels as a region based on the feature identified.</ns0:p><ns0:p>The image is segmented based on the intensity level in the later stages by approximating the minimum number of segments to 23 from the previous experimental studies. The fitness of the number of segments is evaluated through the formula stated below</ns0:p><ns0:p>(5)</ns0:p><ns0:formula>Obj_fun = (x × Tot_pix / pix_seg) + (y × T_s / N_r)</ns0:formula><ns0:p>In equation (5), x and y are the deciding factors that control the proposed algorithm's accuracy and efficiency: x determines the inter-class variance, and y determines the intraclass variance, stated through equations (7) and (9).</ns0:p><ns0:p>The maximum interclass variance is determined by equation (6) stated below</ns0:p><ns0:formula xml:id='formula_2'>σ²_inter_c(C_t) = σ²_total(C_t) − σ²_prev_inter(C_t) (6)</ns0:formula><ns0:formula xml:id='formula_3'>σ²_inter_c(C_t) = Σ_{x=0}^{C_t−1} p(x) · Σ_{x=C_t}^{Z−1} p(x) · [μ₁(C_t) − μ₂(C_t)]² (7)</ns0:formula><ns0:p>Equation (7) is the elaborated version of equation (6); C_t in the above equations is the threshold value of the class, which is evaluated through the Fuzzy Entropy-based Thresholding (FET) approach, and the variables μ₁, μ₂ determine the means of the intensities of the image segments.</ns0:p><ns0:p>The fuzzy entropy-based <ns0:ref type='bibr' target='#b29'>(Oliva et al., 2019)</ns0:ref> thresholding technique evaluates how strongly a pixel is correlated to a particular segment; the value of the threshold is determined by equation (8)</ns0:p><ns0:formula>FET = Σ_{ints=1}^{max} μ_tc(ints) log₂ μ_tc(ints) − Σ_{ints=max+1}^{255} μ_ntc(ints) log₂ μ_ntc(ints) (8)</ns0:formula><ns0:p>In equation (8), the variables μ_tc and μ_ntc represent the fuzzy membership concerning the image segments associated with the tumorous class and the non-tumorous class. The equation is formulated with respect to Shannon's entropy formulation.
The value of the threshold (FET) would be greater than 0 and lie below 255.</ns0:p><ns0:p>The value of the intraclass correlation is determined by equation (9)</ns0:p><ns0:formula xml:id='formula_4'>I_corl = σ²_s / (σ²_s + σ²_i) (9)</ns0:formula><ns0:p>In equation (9), the variable σ_s represents the standard deviation with respect to the given image segment, and the variable σ_i represents the standard deviation with respect to the entire image that is being considered.</ns0:p></ns0:div>
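<ns0:p>The two phase-1 fitness terms can be sketched in a few lines of Python: an Otsu-style sweep of the interclass variance of equation (7) over candidate thresholds, and the intraclass correlation of equation (9) for one segment. The histogram and variances below are toy data, not results from the paper.</ns0:p>
```python
# Interclass variance (equation 7) swept over thresholds C_t, plus the
# intraclass correlation of equation (9); toy data only.
import numpy as np

hist = np.random.default_rng(0).integers(1, 50, size=256).astype(float)
p = hist / hist.sum()                      # intensity probabilities p(x)

def interclass_variance(p, ct):
    w0, w1 = p[:ct].sum(), p[ct:].sum()
    if w0 == 0 or w1 == 0:
        return 0.0
    mu0 = (np.arange(ct) * p[:ct]).sum() / w0
    mu1 = (np.arange(ct, 256) * p[ct:]).sum() / w1
    return w0 * w1 * (mu0 - mu1) ** 2      # equation (7)

best_ct = max(range(1, 256), key=lambda ct: interclass_variance(p, ct))

def intraclass_correlation(seg_var, img_var):
    return seg_var / (seg_var + img_var)   # equation (9)

print(best_ct, intraclass_correlation(25.0, 110.0))
```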
<ns0:div><ns0:head>Adaptive Structural Similarity Index:</ns0:head><ns0:p>The adaptive structural similarity index (ASSI) approach is used to classify each pixel between the tumorous and non-tumorous regions. The structural similarity index relies on the likeness among pixels within tumor regions and on three factors: the structural parameter, the luminance parameter, and the contrast parameter. The index is determined as the product of the three factors mentioned above. This paper proposes an adaptive Structural Similarity Index that assesses the membership alongside the similarity index to make the outcome more realistic. The similarity index is determined through the equations stated below</ns0:p><ns0:formula xml:id='formula_5'>ASSI(p,q) = ω × {x(p,q)^α · y(p,q)^β · z(p,q)^γ} (10)
x(p,q) = (2 μ_p μ_q + C₁) / (μ_p² + μ_q² + C₁) (11)
y(p,q) = (2 σ_p σ_q + C₂) / (σ_p² + σ_q² + C₂) (12)
z(p,q) = (σ_pq + C₃) / (σ_p σ_q + C₃) (13)
ω = σ_p² / (σ_p² + σ_e²) (14)</ns0:formula><ns0:p>Equation (10) is used to assess the likelihood that a pixel is part of the tumor region. The variable ω determines the probability, which is multiplied by the product of the structural parameter presented in equation (11), the luminance parameter presented in equation (12), and the contrast parameter given in equation (13). The outcome of the proposed model is promising compared to that of its counterparts. The ASSI is used alongside the trained models in the proposed self-learning model.</ns0:p><ns0:p>Algorithm
Data: x(p,q) Structural Parameter, y(p,q) Luminance Parameter, z(p,q) Contrast Parameter
Input: pixel(p,q), where p represents the row and q represents the column
Output: Assess the correlation index
pix(p,q) ← initialize the starting pixel
while (i < maximum_number_iterations)
  for each pixel in the input image, do
    Update x(p,q), y(p,q), z(p,q) and ω</ns0:p></ns0:div>
<ns0:div><ns0:head>Approximate the Correlation Index</ns0:head></ns0:div>
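<ns0:p>A self-contained sketch of the ASSI computation of equations (10)-(14) for two image patches is shown below. The exponents α = β = γ = 1, the stabilising constants C₁-C₃, and the noise variance σ_e² are assumptions chosen for illustration.</ns0:p>
```python
# Adaptive Structural Similarity Index per equations (10)-(14);
# constants and exponents are illustrative assumptions.
import numpy as np

def assi(p, q, alpha=1.0, beta=1.0, gamma=1.0,
         c1=1e-4, c2=9e-4, sigma_e2=1e-3):
    c3 = c2 / 2.0
    mu_p, mu_q = p.mean(), q.mean()
    sd_p, sd_q = p.std(), q.std()
    cov_pq = ((p - mu_p) * (q - mu_q)).mean()
    x = (2 * mu_p * mu_q + c1) / (mu_p**2 + mu_q**2 + c1)   # equation (11)
    y = (2 * sd_p * sd_q + c2) / (sd_p**2 + sd_q**2 + c2)   # equation (12)
    z = (cov_pq + c3) / (sd_p * sd_q + c3)                  # equation (13)
    omega = sd_p**2 / (sd_p**2 + sigma_e2)                  # equation (14)
    return omega * (x**alpha) * (y**beta) * (z**gamma)      # equation (10)

rng = np.random.default_rng(1)
patch = rng.random((11, 11))
print(assi(patch, patch + 0.01 * rng.standard_normal((11, 11))))
```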
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The proposed architecture builds on a Convolutional Neural Network model <ns0:ref type='bibr' target='#b12'>(Farhana et al., 2020)</ns0:ref>. The Convolutional layer in the proposed architecture is responsible for applying a sequence of kernels to the input MR images to extract the features that assist in classifying the various regions in the input image. The convolution layer identifies the pixels' spatial association based on the selected features forming the region, making abnormality identification easy. The pooling layer generally follows the convolution layer and is meant for reducing the spatial size of the outcome of the previous layer, which minimizes the number of parameters and features deliberated for further processing and reduces the computational time. In the proposed model, the MAX pooling approach is considered for reducing the spatial size. Convolution and Max Pooling layers are used concurrently for numerous rounds until all regions are well-tuned and the features are recognized. The flattening layer is the successive layer after the Convolution and Pooling layers; it is responsible for transferring the data into the 1-dimensional array used in the subsequent layered architecture. The Fully Connected layer in the proposed architecture uses the intensity features of the regions recognized as abnormal and normal in the considered MR images. A Gated Recurrent Unit (GRU) is used in the proposed model to maintain the interpreted data of the previous experimental predictions; the same apprehension is used in future abnormality recognition from the MR image. The Heuristic Approach for Real-time Image Segmentation (HARIS) algorithm is used to differentiate the tumorous and non-tumorous regions by identifying the appropriate pixels as the features for the prediction.</ns0:p><ns0:p>The Gated Recurrent Unit maintains the acquired knowledge from the previous outcomes. GRU is efficient, with comparatively fewer parameters needed to maintain the training data <ns0:ref type='bibr' target='#b14'>(Guizhu et al., 2018)</ns0:ref>, <ns0:ref type='bibr' target='#b20'>(Le et al., 2016)</ns0:ref>. GRU can address challenging tasks like the vanishing gradient problem through its two gates, namely the reset gate and the update gate. HARIS works with both the fully connected and GRU layers to recognize the suitable pixels for the prediction. The cost function of the proposed model with respect to the tensors of the input image I(i,j) can be evaluated through equation (15): J(w,β;i,j) = ||f_{w,β}(i) − j||²₂. The variable w_l represents the weight associated with layer l, the variable β represents the bias, and f_{w,β}(i) represents the kernel used in operating over the elements. The error at layer l in the proposed model is determined through equation (16)</ns0:p><ns0:formula xml:id='formula_6'>e_l = ((w_l)ᵀ e_{l+1}) ⊙ f′(x_l) (16)</ns0:formula><ns0:p>Here w_l is the weight associated with layer l, and e_{l+1} represents the error associated with layer l + 1. The term f′(x_l) is the derivative of the activation function associated with layer l, which is determined by the Rectified Linear Unit (ReLU) in the proposed model.
ReLU(x_l) = max(0, x_l). The ReLU-based activation function is linear for values greater than zero and zero for all negative values.</ns0:p></ns0:div>
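<ns0:p>The reset and update gates mentioned above can be made concrete with a tiny NumPy GRU cell; the dimensions and random weights below are illustrative, not the trained parameters of the proposed model.</ns0:p>
```python
# A toy GRU cell: the update gate z decides how much history to keep and
# the reset gate r decides how much history to forget; sizes illustrative.
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    z = sigmoid(Wz @ x + Uz @ h)             # update gate
    r = sigmoid(Wr @ x + Ur @ h)             # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h)) # candidate state
    return (1 - z) * h + z * h_tilde         # new hidden state

rng = np.random.default_rng(2)
dim_x, dim_h = 8, 4
W = [rng.standard_normal((dim_h, dim_x)) * 0.1 for _ in range(3)]
U = [rng.standard_normal((dim_h, dim_h)) * 0.1 for _ in range(3)]
h = np.zeros(dim_h)
for _ in range(5):                            # carry knowledge across steps
    h = gru_step(rng.standard_normal(dim_x), h,
                 W[0], U[0], W[1], U[1], W[2], U[2])
print(h)
```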
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The learning rate of the proposed model is significant in assessing the performance of the model, as it generally controls how the weights in the network are adjusted with respect to the loss gradient. The learning rate is presumed to be at an optimal level so that the model moves towards the solution by considering all the significant features in the prediction. A smaller learning rate presents a slower learning ability, resulting in a delayed solution; on the other hand, a higher learning rate results in a faster solution that may ignore a few features in the learning process. Figure <ns0:ref type='figure'>3</ns0:ref> represents the learning rate of the proposed model across various epochs. The saturation point for the learning rate in the proposed model is achieved at epoch=43 and iteration=187.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref> The graph representing the Learning Rate</ns0:p><ns0:p>In the proposed self-learning-centric segmentation model, the model acquires knowledge from the earlier experimental outcomes. The learning rate is also dependent on the number of epochs that the model is designed to execute before it is evaluated. As the number of epochs increases, the learning rate of the model moves towards the saturation point, and the change in the learning rate can be observed up to that point. However, increasing the number of epochs results in consuming more computational effort in the training process.</ns0:p><ns0:p>The other hyperparameters associated with the evaluation process include the loss and accuracy functions related to the proposed model's training and validation phases. As can be seen from Figure <ns0:ref type='figure' target='#fig_4'>4</ns0:ref>, the left-side graph represents the training and validation loss of the model; after a certain number of epochs, approximately from 15, the training and validation losses are close to each other, which indicates that the proposed model is reasonably good at identifying the abnormalities from the MRI images. Overfitting occurs when the validation loss is much greater than the training loss, resulting in improper categorization of the abnormal area due to an abundance of data. Underfitting is a situation in which the training loss is higher than the validation loss, which would result in poor accuracy of the model due to inappropriate selection of the features in the data. Training and validation accuracy are the other parameters considered for evaluating the proposed model; the graph on the right side represents the accuracy measures of the proposed model.</ns0:p></ns0:div>
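<ns0:p>In practice, the learning-rate saturation and the divergence between training and validation loss discussed above can be monitored with standard Keras callbacks; the patience values and epoch count below are illustrative assumptions, not the paper's settings.</ns0:p>
```python
# Hedged sketch: lower the learning rate when validation loss plateaus and
# stop early when it keeps diverging from the training loss.
import tensorflow as tf

callbacks = [
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                         patience=5, min_lr=1e-6),
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                     restore_best_weights=True),
]
# history = model.fit(train_ds, validation_data=val_ds,
#                     epochs=60, callbacks=callbacks)
# history.history["loss"] / ["val_loss"] then give the curves of figure 4.
```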
<ns0:div><ns0:head>Experimental Results And Observations</ns0:head><ns0:p>The experimentation has been carried out on real-time MR images acquired from the open-source LGG repository of The Cancer Genome Atlas (TCGA), captured from patients with acute glioma. The performance of the proposed approach has been evaluated through various metrics, namely Sensitivity, Specificity, Accuracy, the Jaccard Similarity Index, and the Matthews Correlation Coefficient, which are assessed from the confusion-matrix counts: the True Positive count designates how many times the proposed approach correctly recognizes a damaged region in the human brain as damaged; the True Negative count designates how many times the approach correctly identifies a non-damaged region; the False Positive count designates how many times the approach marks a non-</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>damaged region as damaged; and the False Negative count designates how many times the approach mistakenly classifies a damaged (tumorous) region as non-tumorous. Figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref> presents the output screens of the proposed model. The performance of the proposed Self-Learning Network-based Segmentation (SLNS) approach is evaluated against other conventional approaches, such as the Twin-Centric GA with SGO, the HARIS approach, and a Convolutional Neural Network, with respect to the performance evaluation metrics Sensitivity, Specificity, Accuracy, Jaccard Similarity Index (JSI), and Matthews Correlation Coefficient (MCC). Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> represents the experimental evaluations of the proposed model. The experimentation is conducted by executing the code over 35 independent runs, and the resulting counts are scaled up to evaluate the confusion matrix. The model's accuracy is assessed with a standard deviation of 0.015 in assessing the segmented image, as presented by <ns0:ref type='bibr' target='#b0'>(Agrawal et al., 2019)</ns0:ref>. From Table 2, it is observed that the proposed SLNS approach outperforms its counterparts, and in many cases it compares favourably with CNN. The semi-trained approach needs comparatively less effort than CNN at the same performance. From the computational-effort perspective, the proposed algorithm needs almost the same execution time as the CNN algorithm, but more time than the HARIS algorithm-based segmentation approach. Fuzzy entropy-based MRI image segmentation was experimented with by <ns0:ref type='bibr'>(Rajinikanth & Satapath, 2018)</ns0:ref> and <ns0:ref type='bibr' target='#b6'>(Chao et al., 2016)</ns0:ref>. Figure <ns0:ref type='figure'>6</ns0:ref> presents the comparative analysis of the various approaches.</ns0:p></ns0:div>
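<ns0:p>For concreteness, the metrics listed above can be computed directly from the four confusion-matrix counts; the sketch below uses the standard formulas, with invented counts rather than the paper's results.</ns0:p>

```python
# Self-contained sketch of the evaluation metrics named above,
# computed from confusion-matrix counts; the example counts are
# illustrative, not the paper's data.
import math

def metrics(tp, tn, fp, fn):
    sens = tp / (tp + fn)                          # sensitivity (recall)
    spec = tn / (tn + fp)                          # specificity
    acc = (tp + tn) / (tp + tn + fp + fn)          # accuracy
    jsi = tp / (tp + fp + fn)                      # Jaccard similarity index
    mcc = ((tp * tn - fp * fn) /
           math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return sens, spec, acc, jsi, mcc

print(metrics(tp=77, tn=80, fp=20, fn=23))
```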
<ns0:div><ns0:head>Figure 6. Graph representing the comparative analysis of various approaches</ns0:head><ns0:p>The performance of the proposed model is assessed through the HARIS algorithm, which decides the best possible number of segments to assist in identifying the abnormalities in the image and assigns the pixels to a segment by identifying the ideal pixel in the segment. The second objective function of the HARIS algorithm is replaced with the fuzzy membership assessment model, and the performance of the proposed model is evaluated and presented in Table 3. The fuzzy membership is evaluated through equation (18), stated by <ns0:ref type='bibr' target='#b49'>(Vaidya et al., 2018)</ns0:ref>. Figure <ns0:ref type='figure'>7</ns0:ref> presents the assessed computational time consumed by the various algorithms. It can be observed from the graphed values that the proposed SLNS approach is highly efficient. SLNS consumes more time than the HARIS-based approach, but from a precision point of view the proposed SLNS offers a favourable trade-off. The computational times of CNN and SLNS are almost the same, and the computational lag in the proposed approach is because of the self-training, which needs additional effort. Unlike the neural network-based models, the proposed model does not require any significant training.</ns0:p><ns0:p>The proposed model's performance is assessed against various existing algorithms, like thresholding, Seeded Region Growing, Fuzzy C-Means, and Artificial Neural Network models, with respect to evaluation metrics like sensitivity, specificity, and accuracy, as presented by <ns0:ref type='bibr' target='#b2'>(Alam et al., 2019)</ns0:ref>. The comparative analysis of the approaches with the obtained values is shown in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>. The performances of the various algorithms compared with the proposed SLNS have been assessed with respect to the size of the tumor. It is observed from the clinical evidence that the resultant outcomes prove better than their counterparts, and the results of the proposed method are very pleasing and accurate. The tabulated values in Table 4 represent the resultant experimental values. Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref> also represents the progress of the tumor growth, identified based on the texture information of the abnormal region, as stated by <ns0:ref type='bibr'>(Naga et al., 2020)</ns0:ref>. The abnormal region is classified into the Tumour Core (TC), which depicts the actual region of the tumor, and the Enhanced Tumour (ET), which depicts the recent enhancement that has taken place in the tumor region and is presumed to be the progress of the tumor. The Whole Tumour (WT) is the region that includes both the tumor core and the enhanced tumor regions. The enhanced tumor region presents the progress of the abnormality, which will assist the physician in deciding on suitable treatment.</ns0:p><ns0:p>The self-learning-centric models are recently becoming part of the biomedical and healthcare domain, where the models are expected to work with minimal training. In the few situations where no adequate data is available for training the model, the self-learning models are proven to perform well. The model also needs less training than its counterparts. The proposed model is capable of learning from its previous experimental results.
The hyperparameters, like the learning rate presented in the current study, reach an acceptable level after 43 epochs, and other indicators, like the training and validation curves, show that the model is fine-tuned. The performance assessment metrics, namely accuracy, sensitivity, specificity, Jaccard Similarity Index (JSI), and Matthews Correlation Coefficient (MCC), are assessed through repeated autonomous executions. The obtained values prove that the model performs reasonably with minimal training of the data. The fuzzy component is added to evaluate the membership for assigning the pixels to the appropriate segment, in contrast to the HARIS algorithm's assignment of the pixels. The statistical analysis of the study has evinced the performance of the HARIS algorithm in the segmentation process.</ns0:p></ns0:div>
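<ns0:p>A hypothetical sketch of the membership update reconstructed from equations (18)-(19) as they appear with Table 3; the variable names mirror the text, and the exact form of the normalisation is an assumption.</ns0:p>

```python
# Reconstruction of the fuzzy membership of equation (18) and the
# segment vertex of equation (19); the numeric inputs are invented
# intensity/correlation values, not data from the paper.
def fuzzy_membership(vertex_pix, inst_pix, min_corr):
    # membership = vertex - (vertex - I') / (1 - mem(I'))
    return vertex_pix - (vertex_pix - inst_pix) / (1.0 - min_corr)

def segment_vertex(vertex_pix, region_pix, mem_val):
    # vertex = vertex + (J' - vertex) / (1 - mem(J'))
    return vertex_pix + (region_pix - vertex_pix) / (1.0 - mem_val)

print(fuzzy_membership(120.0, 100.0, 0.4))   # -> 86.67
print(segment_vertex(120.0, 150.0, 0.25))    # -> 160.0
```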
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>The pivotal objective of the current study is to mechanize a self-learning model that is efficient in identifying the tumor from the MRI image with minimal training. The model is efficient in working with problems for which minimal training data is available. The proposed model efficiently learns from its prior experimental outcomes and utilizes the acquired knowledge for future predictions. It is observed from the practical implementation of the proposed model that the resulting outcome is highly accurate and precise in identifying abnormality in the human brain. Fully supervised models like Convolutional Neural Networks, deep learning models, and various classification algorithms, by contrast, need to be rigorously trained for better accuracy; still, the proposed model proves efficient in generating outcomes equivalently good as those of the other models. The proposed model can itself normalize the noise in the input image and is robust in differentiating non-brain tissues, like the skull, from the brain tissues. However, the proposed approach could be further optimized by incorporating a self-correcting convolution layer through an ancillary kernel.</ns0:p><ns0:p>The self-learning models are robust in handling unusual problems where inadequate training data is available. But the self-learning models overfit in a few cases, as the developer may not have control over the level of training given to the model. The overfitting may lead to instability in the predictions made in a few contexts. Moreover, the process of debugging the issues and rectifying them in a self-learning model is challenging. The models learn from their previous experimental outcomes, and there is a possibility that the models might misinterpret an outcome based on previous experimental results.</ns0:p></ns0:div>
<ns0:div><ns0:head>Future Scope</ns0:head><ns0:p>The proposed model, based on the self-learning mechanism, is suitable for handling uncertain data more effectively through its previous experiences. The model's performance can be further improved by incorporating a Long Short-Term Memory (LSTM) component to handle the training data efficiently for more accurate prediction of the progress of tumor growth, as LSTM components are efficient in holding memories for a longer time by preserving dependencies based on the network's information. The incorporated memory elements can retain the state information over specific iterations, constructed through multiple gates. The proposed model can also be enhanced by incorporating a feedback component that would help assess tumor growth progress by correlating with its previous outcome.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>$p_b = \sum_p c_p(p)^{x}\, p \,\big/\, \sum_p c_p(p)^{x}$ \quad (3), where $c_p$ determines the membership coefficient, $p$ designates the pixel, and $x$ designates the fuzzifier metric of belongingness. The value of the pixel likeliness ($p_l$) is designated through equation (4) stated below: $p_l = p(s) \times p(cp \mid s) \,\big/\, \sum_k p(k) \times p(cp \mid k)$ \quad (4). In the above equation, $p(s)$ denotes the probability of likeliness of the segment $s$ among all the classes.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Select the regions in the image; determine the suitable category to assign. If CI > Threshold, then assign the corresponding pixel to the tumorous region, else assign the corresponding pixel to the non-tumorous region; end if; end for; end while. Figure 2 represents the proposed model's layered architecture. The convolutional layer, max-pool layer, flattening layer, and fully connected layers almost resemble the Convolutional</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2 Architecture diagram of the proposed SLNS model</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>$j(w,\beta; i, j) = \| f_{w,\beta}(i) - j \|_2^2$ \quad (15)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 The image representing the hyper-parameters</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 Output screens of proposed SLNS approach</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,178.87,525.00,206.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,178.87,525.00,411.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,178.87,525.00,242.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,178.87,525.00,271.50' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>The table represents the confusion matrix of the proposed model</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Performance analysis of the proposed approach</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>The fuzzy membership is evaluated as $membership = vertex_{pix} - (vertex_{pix} - I'_{pix}) \,/\, (1 - mem_{pix}(I'_{pix}))$ \quad (18), where $vertex_{pix}$ represents the vertices being considered in the image for processing, $I'_{pix}$ represents the instance data, that is, the pixels, and $mem_{pix}(I'_{pix})$ presents the minimum correlation value with respect to the MR image segment. The segment vertex is assessed through $vertex = vertex_{pix} + (J'_{pix} - vertex_{pix}) \,/\, (1 - mem_{pix}(J'_{pix}))$ \quad (19), where $J'_{pix}$ represents the vertex pixel in the region of the MR image, $vertex_{pix}$ represents the instance pixel in the segment, and $mem_{pix}(J'_{pix})$ represents the membership value. As can be observed from Table 3, the fuzzy membership-based HARIS algorithm performs slightly better than the traditional HARIS algorithm.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Table representing the performance analysis with the fuzzy component. Figure 7. Graph representing the computational time of various approaches</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Table representing the performance analysis of the SLNS approach</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "Dear Sir/Madam,
Reviewer, PeerJ Computer Science.
We would like to thank the reviewer for the helpful comments. We have improved the manuscript according to the recommendations. We believe that the results presented in this study are of significant interest to the community and hope that, with our responses and amendments, the reviewer will find our demonstrations and explanations satisfactory and consider the present version appropriate for publication in PeerJ Computer Science.
The reviewer's remarks are given again below, followed by our responses in blue. All changes in the manuscript are highlighted through the track-changes option.
Basic Report of the reviewer:
The paper submitted investigated the use of self-learning network-based segmentation for brain MRI through HARIS and the reported methodology achieved an accuracy of 77% towards segmenting real-time images into multiple regions. The analysis workflow is interesting in general. However, please consider the following comments.
Authors Response: We would like to thank the reviewer for the time and effort spent in carefully going through the manuscript and analysing the flow of the proposed model. We are grateful for the recommendations, have addressed all the comments in the best possible way, and believe that the suggestions will enhance the understandability of the manuscript.
Reviewer Comment#1: A major improvement and a carefully proofread spell check are required since the English in the present manuscript is not of publication quality. Indicative:
1. Lines 21-23
2. Lines 53-58
3. Line 83
4. Lines 89-91
5. Lines 94-97
6. Lines 104-106
7. Line 133, the word “discrete” is used as a verb.
8. Lines 156-157
9. Lines 192-194
10. Lines 203-208
11. Lines 233-241
12. Lines 326-328
13. Lines 332-333
14. Lines 377-378
Authors Response: We have carefully checked and revised the document in line with the comments made by the reviewer at each of the lines mentioned, and the changes have been highlighted in the revised manuscript through track changes. We believe that the corrections made in the manuscript will enhance the comprehensibility of the document; relevant corrections have also been made in other parts of the document.
Reviewer Comment#2: Lines 94-97: Could you please elaborate on the following: “the issue of the massive number of segmented regions”.
Authors Response: We have understood the reviewer's comment; the necessary information pertaining to the claim has been added at lines 99-104 of the revised manuscript and highlighted through track changes.
“Watershed-based image segmentation effectively addresses limiting the number of segmented areas, identified using edge information, through marker-controlled watershed-based segmentation. The process of limiting the segments assists in avoiding the over-fitting problem experienced in the majority of the user-intervened and supervised models. But in watershed-based segmentation, considerable effort is needed during the pre-processing phase to separate the foreground and background regions in the image.”
Reviewer Comment#3: Lines 110-111: It is hard to understand what the authors want to state here.
Authors Response: The statement of the limitations of FCM-based segmentation has been revised as per the reviewer's comment, and the change has been highlighted in the revised document at lines 115-116. The corrected statement is as follows for your perusal:
“The biggest problem with the FCM technique is determining the membership value at each iteration for all pixels in the image, which needs additional processing effort.”
Reviewer Comment#4: Lines 148-149: Please rephrase. What does computationally compatible mean? Is it computationally feasible?
Authors Response: We would like to thank the reviewer for the comment; the necessary corrections have been made in the manuscript accordingly. The change has been highlighted in the revised document on page 4, lines 223-226, through track changes, and the revised text is appended to this response.
“Yet, the primary problem is that the implementation procedure and machine must be computationally efficient with adequate processing resources in order to perform the image segmentation in reasonable time, which is not always technically feasible.”
Reviewer Comment#5: Line 205: Did you quantify an accepted level of noise for your data?
Authors Response: We fully understand the reviewer's comment. The proposed model is efficient in handling noisy images through pre-processing. No specific analysis was done on the acceptable level of noise in the image; however, the performance of the segmentation algorithm depends largely on image quality, and segmentation quality is compromised by noise in the input image.
Reviewer Comment#6: Lines 213-216: Please rephrase the following: “for additional images upon segmenting the image.”.
Authors Response: In line with the reviewer's suggestion, the necessary corrections have been made to the corresponding sentence in the manuscript on page 6, lines 293-297; the change has been highlighted in the revised document and is appended to this response.
“The model is being trained to distinguish brain tissues from non-brain tissues such as brain fluids, the skull region, the thalamus, and the brainstem, which must be isolated from brain tissues for improved segmentation and evaluation of the tumor's extent. However, whenever a new image is fed into the algorithm as input, post-segmentation, the image is fed into the network's self-learning layer as input for further images.”
Reviewer Comment#7: Lines 218-220: Please elaborate on the rationale.
Authors Response: The reviewer's suggestion has been carefully addressed, and the necessary statements about the self-learning models have been elaborated. The changes are highlighted in the revised document on page 6, lines 299-301, and appended to this comment.
“In the automated segmentation of the brain MRI image, the crucial thing is to set up a training set that is large enough to train and validate the model. It is consequently challenging to obtain a dataset that is self-sufficient for both training and validating against ground truth. Self-learning models, on the other hand, need a relatively smaller training and validation set. The outcomes of the experimental results are used in self-training the model, which ensures better results in future experimentation.”
Reviewer Comment#8: Lines 235-237: “… each of the convolutional layers is being guided through various techniques …”. This is quite confusing.
Authors Response: We have carefully checked the sentence pointed out by the reviewer; all the necessary corrections have been made and highlighted in the revised manuscript on page 7, lines 323-327, and appended to the response for kind perusal.
“Additionally, each convolutional layer is driven by a variety of kernels, including an adaptive bilateral filter with sublayers that determine the appropriate number of clusters and cluster centroids, a Sobel filter used in edge detection, and other 2D convolution filters, generally (3x3) in size, which slide across the original input image. The technique is repeated over multiple iterations to identify all the key features in the input image.”
Reviewer Comment#9: Some figures seem to be of low resolution. Please provide images of higher quality.
Authors Response: We understand the reviewer's comment regarding the quality of the images. We have snipped the images and customized them as per the resolution guidelines of PeerJ. The width and height mandated by PeerJ Computer Science required enlarging the images, which introduced very minor quality issues. We have tried our best to improve the quality and have re-uploaded the images.
Reviewer Comment#10: Figure 3 presents the loss across the epochs (as the y-axis label indicates) and not the learning rate. The authors should check if that is intended or it is misplaced.
Authors Response: We would like to thank the reviewer for the comment; indeed, the y-axis should read rate rather than loss. We have made the necessary correction to the image and re-uploaded it for kind consideration.
Reviewer Comment#11: Table 1: Please include the percentage symbol somewhere in the table. I assume that numbers are percentages.
Authors Response: Thank you for the suggestion; the percentage symbol has been added in the left-most column of Table 1, which consists of the true positive, true negative, false positive, and false negative rows. The table contents now present the values precisely, with the units used in the approximation.
Experimental design
Reviewer Comment#12: Line 366: Did the authors use a changing/decaying learning rate? If so please elaborate and specify the strategy used. If not, the learning rate should be constant.
Authors Response: We fully understand the reviewer's comment. The proposed model is a self-learning model that acquires knowledge from every experiment. The number of epochs has a significant impact on the learning rate of the model: as we keep increasing the number of epochs, the learning rate moves towards the saturation point, and until it reaches that point there is a change in the learning rate. We believe this response answers the reviewer's comment appropriately, and we also felt that adding the same information at the appropriate position in the manuscript, on page 11, lines 467-471, would enrich the legibility of the manuscript.
“In the proposed self-learning centric segmentation model, the model acquires knowledge from the earlier experimental outcomes. The learning rate also depends on the number of epochs the model is designed to execute before it is evaluated. As the number of epochs increases, the learning rate of the model moves towards the saturation point, and the change in the learning rate can be observed until that point. But increasing the number of epochs will result in consuming more computational effort in the training process.”
Reviewer Comment#13: Lines 406-409: “executing the code repeatedly for 35 images”. what do you mean? Also, what specifically is scaled up for evaluating the confusion matrix? What does “scaled up” mean?
Authors Response: We have understood the reviewer's question. The statement “executing the code repeatedly for 35 images” was used in the context of assessing the efficiency of the proposed model: in fact, we executed the model independently 35 times to assess its accuracy, and the presented values are the mean of the accuracies observed over these autonomous executions. We have identified the mistake in the corresponding line, as rightly pointed out by the reviewer, and the necessary amendments have been made at the corresponding line of the revised manuscript.
Reviewer Comment#14: Please consider describing more carefully the research question, the rationale as well as the analysis pipeline.
Authors Response: We would like to thank the reviewer for the recommendation. A paragraph covering the objective of the paper and the outcome of the implementation has been added at the end of the results and discussion part of the paper and highlighted on page 13, lines 592-610, of the revised manuscript.
“The self-learning centric models are recently becoming part of the biomedical and healthcare domain, where the models are expected to work with minimal training. In the few situations where there is no adequate data available for training the model, the self-learning models are proven to perform well, and the model also needs less training than its counterparts. The proposed model is capable of learning from its previous experimental results, and the hyperparameters like the learning rate presented in the current study are at an acceptable level after 43 epochs; other parameters, like the training and validation studies, state that the model is fine-tuned. The performance assessment metrics like accuracy, sensitivity, specificity, Jaccard Similarity Index (JSI), and Matthews Correlation Coefficient (MCC) are assessed by repeated autonomous executions, and the obtained values have proven that the model performs reasonably with minimal training of the data. The fuzzy component is added to evaluate the membership for assigning the pixels to the appropriate segment, in contrast to the HARIS algorithm for the assignment of the pixels. The statistical analysis of the study has evinced the performance of the HARIS algorithm in the segmentation process.”
Validity of the findings
Reviewer Comment#15: The impact of the method is not clearly stated. Also, the conclusion statement should be linked to the initial research question.
Authors Response: We would like to thank the reviewer for the comment about the conclusion part of the document. All the necessary corrections have been made to the conclusion by adding the impact of the model, and the conclusion statement of the current study has been made clear. All the corrections performed in the conclusion part of the revised manuscript are highlighted on page 14, lines 626-623, and the revised text is appended to this response for kind perusal.
“The pivotal objective of the current study is to mechanize a self-learning model that is efficient in identifying the tumor from the MRI image with minimal training. The model is efficient in working with problems that have minimal training data available. The proposed model efficiently learns from its prior experimental outcomes and utilizes the acquired knowledge for future predictions. It is observed from the practical implementation of the proposed model that the resulting outcome is highly accurate and precise in identifying abnormality in the human brain. Fully supervised models like Convolutional Neural Networks, deep learning models, and various classification algorithms, by contrast, need to be rigorously trained for better accuracy; still, the proposed model is proven to be efficient in generating equivalently good outcomes compared with the other models. The proposed model can itself normalize the noise in the input image and is robust in differentiating the non-brain tissues, like the skull, from the brain tissues. However, the proposed approach could be further optimized by incorporating a self-correcting convolution layer through an ancillary kernel.
The self-learning models are robust in handling unusual problems where there is inadequate training data available. But the self-learning models overfit in a few cases, as the developer may not have control over the level of training given to the model. The overfitting may lead to instability in the predictions made in a few contexts. Moreover, the process of debugging the issues and rectifying them in a self-learning model is challenging. The models learn from their previous experimental outcomes, and there is a possibility that the models might misinterpret the outcome based on previous experimental results.”
We would like to thank the anonymous reviewers for sparing their valuable time to review the manuscript and for providing the authors with constructive suggestions that improve the comprehensibility of the manuscript. All the recommendations have been addressed in the best possible way to enhance the readability of the paper.
" | Here is a paper. Please give your review comments after reading it. |
197 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>In recent years of advancement in medical imaging technology for medical diagnosis, the initial assessment of the ailment and the abnormality has become challenging for radiologists. Magnetic Resonance Imaging is one such predominant technology used extensively for the initial evaluation of ailments. The primary goal is to mechanize an approach that can accurately assess the damaged region of the human brain through an automated segmentation process that requires minimal training and can learn by itself from previous experimental outcomes. It is computationally more efficient than other supervised learning strategies, such as CNN deep learning models. As a result, the process of investigation and statistical analysis of the abnormality would be made much more comfortable and convenient. The performance of the proposed approach is pleasing compared to its counterparts, with an accuracy of 77% with minimal training of the model. Furthermore, the performance of the proposed training model is evaluated through various performance evaluation metrics, like sensitivity, specificity, the Jaccard Similarity Index, and the Matthews correlation coefficient, where the proposed model performs productively with minimal training.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Medical imaging technology has been critical in medical diagnostics for accurately detecting the presence of malignant tissues in the human body. Divergent imaging technologies are available for diagnosing abnormality in various organs, including X-Ray technology, as stated by <ns0:ref type='bibr' target='#b49'>(Vaga & Bryant, 2016)</ns0:ref>, Computed Tomography (CT) technology, as stated by <ns0:ref type='bibr' target='#b51'>(Venkatesan et al., 2017)</ns0:ref>, Magnetic Resonance Imaging (MRI) technology <ns0:ref type='bibr' target='#b4'>(Chahal et al., 2020)</ns0:ref>, <ns0:ref type='bibr'>(Srinivasu et al., 2020)</ns0:ref>, and Positron Emission Tomography (PET) scans <ns0:ref type='bibr' target='#b29'>(Norbert et al., 2017)</ns0:ref>, <ns0:ref type='bibr' target='#b39'>(Schillaci et al., 2019)</ns0:ref>. All the approaches mentioned above are non-invasive and capable of diagnosing malignant tissue efficiently. In many cases, imaging technology is aptly suitable for identifying abnormalities in the human body; it helps the physician provide better treatment and assists in planning the clinical procedure. The current study focuses on mechanizing a model capable of diagnosing MR images and estimating the extent of the damage.</ns0:p><ns0:p>Medical imaging technology has been upgraded stupendously to elaborate every minute and tiny ailment in the human body, so that disease can be diagnosed efficiently at a significantly earlier stage. The proposed study primarily focuses on tumor identification and volumetric estimation of the tumorous region in the human brain. Numerous machine learning models are available for accurately identifying the tumorous regions from MR images; yet most of these techniques are imprecise, resource-hungry, and, in particular, semi-automated procedures used in tumor identification. To circumvent the limitations of the above methodologies, several automated techniques have been proposed, including conventional models like the Genetic Algorithm (GA), as stated by <ns0:ref type='bibr'>(Srinivasu et al., 2020)</ns0:ref>, Artificial Neural Networks (ANN), as stated by <ns0:ref type='bibr' target='#b56'>(Wentao et al., 2020)</ns0:ref>, and Deep Learning techniques, as mentioned by <ns0:ref type='bibr'>(Deepalakshmi et al., 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b47'>(Srinivasu et al., 2021)</ns0:ref>.</ns0:p><ns0:p>Recognizing ailments involves several stages, including pre-processing the MR images to address noise in the original images, such as speckle noise, Poisson noise, and Gaussian noise, introduced into the image at various stages of rendering. The pre-processing step enhances the contrast and texture of the image, assisting in a faster and more convenient recognition of the malignant tissues. Once the noise in the image has been pre-processed, the MR image is fed forward to remove the skull region. The MR image is then segmented to recognize the malignant region, followed by volumetric estimation for analyzing the impact of the damage. The results of the automated segmentation approaches are refined over multiple iterations for a precise outcome.</ns0:p><ns0:p>Generally, the medical images are segmented to locate the abnormal regions in the image based on texture information.
The process of image segmentation plays a vital role in the identification of the malignant areas in the MR image. There are various semi-automated and automated ways of segmenting MR images that can segment efficiently, but there is a considerable trade-off between accuracy and the computational effort demanded by each of those approaches. Supervised models yield better accuracy, but they need tremendous amounts of training data, which requires more computational resources.</ns0:p><ns0:p>The K-Means algorithm is one semi-automated approach that segments the MR image into a pre-determined number of segments, as stated by <ns0:ref type='bibr' target='#b2'>(Alam et al., 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b4'>(Chahal et al., 2020)</ns0:ref>. It is one of the simplest techniques for the segmentation of images, but its major drawback is that the total number of segments is fixed well before segmentation begins: if the k value is too large, it leads to over-segmentation, and if the k value is too small, it leads to under-segmentation of the image.</ns0:p><ns0:p>Segmentation of MR images is indeed possible using completely automated techniques such as thresholding, as stated by <ns0:ref type='bibr' target='#b42'>(Sowjanya et al., 2018)</ns0:ref>, <ns0:ref type='bibr' target='#b20'>(Kumar et al., 2015)</ns0:ref>. The threshold-based method works comparatively better on homogeneous images, but accuracy mainly depends on the approximated threshold value: a minimal deviation from the ideal threshold results in an exponential variance in the segmented image.</ns0:p><ns0:p>The other predominantly used segmentation technique is the Region Growing strategy <ns0:ref type='bibr' target='#b7'>(Dalvand et al., 2020)</ns0:ref>, which can effectively handle the problem of over- and under-segmentation often encountered in K-Means-based approaches. Experimental studies on the Region Growing-based approach have proven to improve the sensitivity and specificity for precise identification of the malignant region in the human brain <ns0:ref type='bibr' target='#b33'>(Punitha et al., 2018)</ns0:ref>. But the seed point must be chosen appropriately, which needs tremendous effort in recognizing the optimal seed point: if the initial seed points are inappropriately chosen, the entire segmentation of the image goes wrong, and the approach also needs more computational effort. <ns0:ref type='bibr' target='#b40'>(Sheela et al., 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b27'>(Mathur et al., 2016)</ns0:ref> proposed Edge-Based Segmentation for image segmentation, a relatively simplistic approach; however, the image must be pre-processed rigorously to elaborate the edge-related information, which involves high computational effort. <ns0:ref type='bibr' target='#b32'>(Pandav, 2014</ns0:ref><ns0:ref type='bibr'>), (Liang&Fu, 2018</ns0:ref><ns0:ref type='bibr'>), (Kornilo et al., 2018)</ns0:ref> have proposed watershed-based image segmentation that effectively addresses limiting the number of segmented areas, identified using edge information, by marker-controlled watershed-based segmentation. The process of limiting the segments assists in avoiding the over-fitting problem experienced in the majority of the user-intervened and supervised models.
But in watershed-based segmentation, considerable effort is needed during the pre-processing phase to separate the foreground and background regions in the image. <ns0:ref type='bibr' target='#b48'>(Sudharania et al., 2016)</ns0:ref> have proposed morphology-based segmentation that exhibits exceedingly high accuracy on low-intensity MR images; but it needs several iterations to converge to a better solution, which demands more computational effort. <ns0:ref type='bibr' target='#b18'>(Jude et al., 2015)</ns0:ref>, <ns0:ref type='bibr' target='#b26'>(Madallah et al., 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b54'>(Verma et al., 2015)</ns0:ref> have proposed Fuzzy C-Means (FCM)-based MR image segmentation, a highly accurate, well-founded, and rapid approach that effectively handles the uncertainty of allocating pixels among multiple segments by distributing pixels to the appropriate segment depending on the membership value determined at each iteration. In fuzzy-based segmentation, the pixels are assigned to their corresponding region based on a membership function value that lies in the range 0 to 1, where the value 1 indicates the corresponding pixel is more likely to be associated with the corresponding segment. The biggest problem with the FCM technique is determining the membership value at each iteration for all pixels in the image, which needs additional processing effort. In addition, adjusting the lower and upper approximations of the randomness is complex in FCM. <ns0:ref type='bibr' target='#b1'>(Al-Shamasneh et al., 2020)</ns0:ref> have suggested MR image segmentation through contour-based segmentation that efficiently recognizes various brain tissues for both homogeneous and heterogeneous malignant cases, but it is not very efficient for noisy and high-intensity images. <ns0:ref type='bibr'>(Li et al., 2018)</ns0:ref> have proposed a level-set-centric technique for MR image segmentation based on a pre-approximated threshold value. The level-set method is a very complex approach, and the approximated threshold value determines the efficiency of the approach's segmentation. <ns0:ref type='bibr' target='#b55'>(Wang et al., 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b12'>(Diaz & Boulanger, 2015)</ns0:ref> have proposed atlas-based segmentation for MR images, a straightforward approach that is computationally faster than manual labeling and independent of the deformation model. The suggested approach merges the intensity template image and the segmented reference image to register and further segment the image. In the atlas-based approach, choosing the initial seed point is crucial, as the entire segmentation scenario is based on the seed-point selection; moreover, the efficiency of this approach depends on the precision of the topological graph. <ns0:ref type='bibr' target='#b52'>(Venmathi et al., 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b24'>(Liu et al., 2018)</ns0:ref> have proposed the Markov Random Field (MRF) to segment the MR image through a Gaussian mixture model that incorporates the association of neighboring pixels in a practical and mathematical sense. The method works on texture-based information. The Markov random field-based approach to image segmentation includes spatial information, which aids in normalizing noise and overlapping neighboring regions.
<ns0:ref type='bibr' target='#b17'>(Javaria et al., 2019)</ns0:ref> have suggested an approach that uses the spatial vector alongside Gabor decomposition to distinguish the malignant and non-malignant tissues in the MR image through a Bayesian classifier. Despite its accuracy, MRF needs more computational effort and a systematic process for picking the parameters. <ns0:ref type='bibr' target='#b50'>(Varuna et al., 2018)</ns0:ref>, <ns0:ref type='bibr' target='#b14'>(Gibran et al., 2020)</ns0:ref> have suggested a Probabilistic Neural Network (PNN) in combination with Learning Vector Quantization (LVQ) that assists in reducing the computational time by optimizing all the hidden layers in the proposed method. The region of interest that must be recognized for designing the network must be chosen carefully, as the image segmentation quality depends on the exactness of the region of interest. <ns0:ref type='bibr' target='#b38'>(Sandhya et al., 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b31'>(Mei et al., 2015)</ns0:ref>, <ns0:ref type='bibr' target='#b8'>(De & Guo, 2015)</ns0:ref> have suggested a Self-Organized Map (SOM)-based algorithm for segmentation of the MR image that includes the spatial data and the grey-level intensity information in the segmentation of the image. It is outstanding in separating the malignant tissues, but the SOM engine has to be trained rigorously for better accuracy: the quality of the image segmentation is directly dependent on the training set, and mapping is one of the most complicated tasks in a SOM-based approach. <ns0:ref type='bibr' target='#b16'>(Havaei et al., 2016</ns0:ref><ns0:ref type='bibr'>), (SivaSai et al., 2021)</ns0:ref>, <ns0:ref type='bibr' target='#b56'>(Wentao et al., 2020)</ns0:ref> have proposed a Deep Neural Network-based approach that is computationally efficient and highly precise in determining the abnormality from the medical image. Yet the primary problem is that the implementation procedure and the machine must be computationally capable, with adequate processing resources, to perform the image segmentation in a reasonable time, which is not always technically feasible. <ns0:ref type='bibr' target='#b36'>(Sachdeva et al., 2016)</ns0:ref> have suggested multiple hybrid approaches for the segmentation of MR images; the multiclass categorization of malignant tissues is done efficiently, and high accuracy is attained through machine learning and soft-computing techniques. The Pulse Coded Neural Network (PCNN) is a technique used in coherence with semi-automated methods for better segmentation. While segmenting the MR image, the region of interest can be perceived through a region-growing approach that selects initial points assumed to be the seed points in the earlier stages. Secondly, a Feed Forward Back Neural Network (FFBNN) selects the seed points, feeding the input back until the input turns uniform. <ns0:ref type='bibr' target='#b34'>(Qayyum et al., 2017)</ns0:ref> have obtained multiple sub-images with multi-resolution data by employing the Stationary Wavelet Transform (SWT). A spatial kernel is applied over the resultant sub-images to locate the demographic features. With the help of the extracted features and the Stationary Wavelet Transform coefficients, the multi-dimensional features are built. The identified features and coefficients are given as input to a Self-Organized Map, and Linear Vector Quantization is finally used to refine the results.
<ns0:ref type='bibr'>(Srinivasu et al., 2020)</ns0:ref> have proposed an approach, the Twin-Centric Genetic Algorithm with Social Group Optimization, that has produced a precise outcome in tumor identification from brain MR images. The twin-centric GA model is comparatively faster than the conventional GA approach, with a faster crossover rate that results in new segments. The mutation operation is performed based on the fitness value to reform segments with other strongly correlated regions in the image. The outcome of the twin GA is refined through the Social Group Optimization approach, which selects the appropriate features in the image for the segmentation. <ns0:ref type='bibr' target='#b11'>(Dey et al., 2018)</ns0:ref> have experimented with the SGO approach in fine-tuning the outcome of the classification. The current approach is comparatively faster, but in performing the two-point crossover there is a chance of diverging from the optimal number of regions and ending up with an over-fitting issue; moreover, executing two computationally heavy algorithms needs significant computational effort to segment the MR image.</ns0:p><ns0:p>The main objective is to formulate a mechanism that can efficiently segment a real-time image into multiple regions based on the available features while minimizing the computational effort. The proposed approach is a self-trained strategy that upskills the algorithm with some pre-existing real-time scenarios. Previous experimental results show that the proposed SLNS algorithm can differentiate the skull region from the brain tissues in the MR image without any external pre-processing algorithm, and it can quickly extricate brain tissues from the non-brain tissues through the available feature set. The proposed model is robust in handling images with an acceptable noise level, and it needs less computational effort for training the model. Experimentation has been performed to evaluate the accuracy of the proposed approach, and the results are promising.</ns0:p></ns0:div>
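<ns0:p>To make two of the surveyed families concrete, here is a minimal NumPy sketch of intensity-based K-Means (with k fixed in advance, the drawback noted above) and of the textbook fuzzy C-means membership update (the per-iteration cost noted above); both are illustrative stand-ins, not the cited authors' implementations.</ns0:p>

```python
# Illustrative intensity K-Means and FCM membership update; the random
# data, the fixed k, and the fuzzifier m are assumptions for the sketch.
import numpy as np

def kmeans_intensity(img, k=3, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    pix = img.reshape(-1, 1).astype(float)
    centers = rng.choice(pix.ravel(), k).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(pix - centers), axis=1)  # nearest centre
        for c in range(k):
            if (labels == c).any():
                centers[c] = pix[labels == c].mean()       # recentre
    return labels.reshape(img.shape)

def fcm_memberships(pix, centers, m=2.0):
    # u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)), recomputed at every
    # iteration for every pixel -- the cost criticised above.
    d = np.abs(pix[:, None] - centers[None, :]) + 1e-9
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

img = np.random.randint(0, 256, (32, 32))
print(np.unique(kmeans_intensity(img)))
print(fcm_memberships(np.array([10., 120., 200.]), np.array([50., 180.])))
```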
<ns0:div><ns0:head>Self-Learning Network Based Real-Time Segmentation</ns0:head><ns0:p>Cognitive technology in real-time image segmentation is a multidisciplinary technique that is an intrinsic aspect of Fully Convolutional Neural Networks, which are widely utilized in real-world settings to handle 2D images successfully. The concept of semantic segmentation is used in coherence with multi-objective-function-based algorithms for better results. The weak-learning network is based on partially training the algorithm and tuning it so that it is able to differentiate among multiple classes. In addition, the algorithm is capable of training itself for more efficient segmentation of the image. In every stage of execution for evaluating the tumor's region from the MR image, the proposed method segments the image based on the trained information; the resultant segmented image is further refined by the multi-objective functions of the subsequent network layers.</ns0:p></ns0:div>
<ns0:div><ns0:head>Partial Training of The Algorithm</ns0:head><ns0:p>In contrast to its competitors, the suggested methodology is a partially trained strategy that does not need extensive training. As shown by <ns0:ref type='bibr' target='#b57'>(Xiao et al., 2015)</ns0:ref>, the recommended strategy involves training on the original images together with an acceptable quantity of noisy data, to ensure that the algorithm can address noisy images; additionally, the noisy images are specified in the labels. The binary identifier notion is used to distinguish a noisy picture from a non-noisy labeled image <ns0:ref type='bibr' target='#b28'>(Misra et al., 2016)</ns0:ref>, which proposed the idea of independently labeling the noise class. The role of differentiating the image-based noise classes is significantly important: an image can easily be placed among the noise levels so that images are processed according to their noise variance.</ns0:p><ns0:p>The model is trained to distinguish brain tissues from non-brain tissues such as brain fluids, the skull region, the thalamus, and the brainstem, which must be isolated from brain tissues for improved segmentation and evaluation of the tumor's extent. Whenever a new image is fed into the algorithm as input, post-segmentation, the image is fed into the network's self-learning layer as input for further images.</ns0:p></ns0:div>
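<ns0:p>A hypothetical illustration of this binary noise-class labelling: a simple residual-based noise estimate decides whether an image is tagged noisy; the mean-filter residual and the threshold are assumptions, not the authors' criterion.</ns0:p>

```python
# Hypothetical "noisy vs clean" labelling via the standard deviation
# of the residual left by a 3x3 mean filter; threshold is invented.
import numpy as np

def noise_label(img, threshold=8.0):
    k = np.ones((3, 3)) / 9.0
    pad = np.pad(img.astype(float), 1, mode="edge")
    smooth = sum(pad[i:i + img.shape[0], j:j + img.shape[1]] * k[i, j]
                 for i in range(3) for j in range(3))   # 3x3 mean filter
    sigma = (img - smooth).std()                        # crude noise estimate
    return "noisy" if sigma > threshold else "clean"

clean = np.full((32, 32), 128.0)
print(noise_label(clean), noise_label(clean + np.random.randn(32, 32) * 20))
```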
<ns0:div><ns0:head>Self-Learning Network-Based Segmentation</ns0:head><ns0:p>In the automated segmentation of the brain MR image, the crucial thing is to set up training sets large enough to train and validate the model. It is consequently challenging to obtain a dataset that is self-sufficient for both training and validating against the ground truth associated with the imaging data concerning the abnormality for cross-validation. On the other hand, self-learning models need a relatively smaller training and validation set. Furthermore, the experimental results used in self-training the model ensure better results in future experimentation, based on the model's previous experimental knowledge. The complete flow diagram of the proposed framework can be seen in Figure <ns0:ref type='figure'>1</ns0:ref>, stated below.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref> Represents the block diagram of the SLNS-based approach</ns0:p></ns0:div>
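<ns0:p>A schematic self-training round, under stated assumptions: confident predictions on new images are pseudo-labelled and appended to the training pool, mimicking the "learning from previous outcomes" described above; the per-pixel scoring function here is a stand-in, not the authors' network.</ns0:p>

```python
# Schematic pseudo-labelling loop for a self-learning segmenter;
# predict_probs is a placeholder scorer, and the confidence cut-off
# is an illustrative assumption.
import numpy as np

def predict_probs(img):                  # stand-in per-pixel tumour scores
    return img / 255.0

def self_training_round(unlabeled, pool, confidence=0.9):
    for img in unlabeled:
        probs = predict_probs(img)
        if (probs > confidence).any():   # keep only confident outcomes
            pool.append((img, probs > 0.5))  # pseudo-label mask
    return pool                          # pool would then be used to retrain

imgs = [np.random.randint(0, 256, (16, 16)) for _ in range(3)]
pool = self_training_round(imgs, [])
print(len(pool), pool[0][1].dtype if pool else None)
```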
<ns0:div><ns0:head>Layered Architecture of SLNS</ns0:head><ns0:p>The layered architecture of the proposed Self-Learning Network-based Segmentation (SLNS) approach includes convolutional layers that perform various challenging tasks, like pre-processing the image to remove its noise by applying an adaptive bilateral filter over the pixels that surround the actual corresponding pixel. The image is then further processed to remove the skull region in the next successive layer, in correlation with the other connected layers. Next, the image is segmented based on the pixels' intensity values, representing the texture of each region. Finally, the classifier is used to discriminate the non-brain tissues and the damaged region in the human brain; a sketch of this layered stack is given below.</ns0:p></ns0:div>
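<ns0:p>A minimal sketch of such a layered stack, assuming illustrative layer sizes and a GRU over a short sequence of per-image feature vectors as a stand-in for the self-learning memory; this is not the authors' exact configuration.</ns0:p>

```python
# Illustrative Conv -> MaxPool -> Flatten -> Dense extractor with a
# GRU head, echoing the architecture of Figure 2; sizes are invented.
import tensorflow as tf
from tensorflow.keras import layers

img_size, seq_len = 128, 4

inp = tf.keras.Input(shape=(img_size, img_size, 1))
x = layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)
x = layers.Dense(64, activation="relu")(x)
extractor = tf.keras.Model(inp, x)          # per-image feature extractor

seq_in = tf.keras.Input(shape=(seq_len, img_size, img_size, 1))
feats = layers.TimeDistributed(extractor)(seq_in)
h = layers.GRU(32)(feats)                   # memory over previous outcomes
out = layers.Dense(1, activation="sigmoid")(h)  # tumorous / non-tumorous
model = tf.keras.Model(seq_in, out)
model.summary()
```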
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The suggested methodology trains the segmentation algorithm using a self-learning module that gets adequate information from the preset segmentation dataset and previous experimental results. Additionally, each convolutional layer consists of a variety of kernels, including an adaptive bilateral filter with sublayers that determine the appropriate number of clusters and the cluster centroids, a Sobel filter used in edge detection, and other 2D convolution filters, generally (3x3) in size, which slide across the original input image. The model is trained over multiple epochs until all the key features in the input image are identified. The image is refined at the earlier stages using an adaptive bilateral filter that normalizes noise, such as Gaussian noise and Poisson noise, introduced during image acquisition. The noise reduction is carried forward through the following equation:</ns0:p><ns0:formula xml:id='formula_0'>$AB_F = \alpha \times \sum_{p} p_b(\| x - y \|) \times p_l(\| i_x - i_y \|)$ \quad (1)</ns0:formula><ns0:p>In the above equation, $p_b$ represents the pixel belongingness and $p_l$ represents the pixel likeliness; the weight $\alpha$ is determined by the contra-harmonic mean of the neighboring pixels, given in equation (2).</ns0:p></ns0:div>
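<ns0:p>A hedged per-pixel sketch of equation (1): the Gaussian forms assumed for $p_b$ and $p_l$ and the final normalisation are illustrative choices, since the text leaves them implicit.</ns0:p>

```python
# One-pixel bilateral response: spatial weights p_b over ||x - y||
# and range weights p_l over ||i_x - i_y||, scaled by alpha; the
# Gaussian forms and the normalisation are assumptions.
import numpy as np

def bilateral_pixel(win, alpha, sigma_s=1.0, sigma_r=20.0):
    r = win.shape[0] // 2
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    p_b = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))         # spatial term
    p_l = np.exp(-((win - win[r, r])**2) / (2 * sigma_r**2))  # range term
    w = p_b * p_l
    return alpha * (w * win).sum() / w.sum()   # assumed normalisation

win = np.array([[100., 102., 98.], [101., 160., 99.], [100., 97., 103.]])
print(bilateral_pixel(win, alpha=1.0))  # large intensity jumps are down-weighted
```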
<ns0:div><ns0:p>The contra-harmonic mean is given by</ns0:p><ns0:formula xml:id='formula_1'>$\alpha = \dfrac{i_1^2 + i_2^2 + i_3^2 + \cdots + i_n^2}{i_1 + i_2 + i_3 + \cdots + i_n}$ \quad (2)</ns0:formula><ns0:p>The values of the pixel belongingness ($p_b$) are demonstrated in equation (3), and those of the pixel likeliness ($p_l$) in equation (4). The Heuristic Approach for Real-time Image Segmentation (HARIS) algorithm is used to segment high-dimensional images, like medical MR images, to identify the abnormality from the MR images. The HARIS approach incorporates two phases in the image segmentation process. In the initial phase, the optimal number of regions in the image is assessed, which assists in accurately recognizing the regions in the image based on texture, intensity, and boundary-region-related pixels; techniques like intraclass correlation and interclass variance are considered for this evaluation. The second phase of the HARIS approach identifies the local best feature within the region, following the feature element presumed to be the global best; the selected feature is used in assembling the pixels into a region based on the identified feature.</ns0:p><ns0:p>In the later stages the image is segmented based on the intensity level, approximating the minimum number of segments to 23 from previous experimental studies. The fitness of the number of segments is evaluated through the formula stated below:</ns0:p><ns0:formula>$Obj_{fun} = \left( x \times \dfrac{Tot_{pix}}{pix_{seg}} \right) + \left( y \times \dfrac{T_s}{N_r} \right)$ \quad (5)</ns0:formula><ns0:p>In equation (5), x and y are the deciding factors that control the proposed algorithm's accuracy and efficiency: x determines the interclass variance and y determines the intraclass variance, stated through equations (7) and (9). The maximum interclass variance is determined by equation (6) stated below:</ns0:p><ns0:formula xml:id='formula_2'>$\sigma^2_{inter_c}(C_t) = \sigma^2_{total}(C_t) - \sigma^2_{prev\_inter}(C_t)$ \quad (6)</ns0:formula><ns0:formula xml:id='formula_3'>$\sigma^2_{inter_c}(C_t) = \sum_{x=0}^{C_t - 1} p(x) \sum_{x=C_t}^{Z-1} p(x)\, [\mu_1(C_t) - \mu_2(C_t)]^2$ \quad (7)</ns0:formula><ns0:p>Equation (7) is the elaborated version of equation (6); $C_t$ is the threshold value of the class, evaluated through the Fuzzy Entropy-based Thresholding (FET) approach, and the variables $\mu_1, \mu_2$ determine the means of the intensities of the image segments. The fuzzy entropy-based thresholding technique <ns0:ref type='bibr' target='#b30'>(Oliva et al., 2019)</ns0:ref> evaluates how strongly a pixel is correlated to a particular segment; the value of the threshold is determined by equation (8):</ns0:p><ns0:formula>$FET = \sum_{ints=1}^{max} \mu_{tc}(ints)\log_2 \mu_{tc}(ints) - \sum_{ints=max+1}^{255} \mu_{ntc}(ints)\log_2 \mu_{ntc}(ints)$ \quad (8)</ns0:formula><ns0:p>In equation (8), the variables $\mu_{tc}, \mu_{ntc}$ represent the fuzzy memberships of the image segments associated with the tumorous class and the non-tumorous class. The equation is formulated following Shannon's entropy formulation.
The value of the threshold (FET) is greater than 0 and lies below 255.</ns0:p><ns0:p>The value of the intraclass correlation is determined by equation (<ns0:ref type='formula'>9</ns0:ref>):</ns0:p><ns0:formula xml:id='formula_4'>(9) I_corl = σ²_s / (σ²_s + σ²_i)</ns0:formula><ns0:p>In equation (<ns0:ref type='formula'>9</ns0:ref>), the variable σ_s represents the standard deviation of the given image segment, and the variable σ_i represents the standard deviation of the entire image under consideration.</ns0:p></ns0:div>
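The threshold-dependent variance terms and the intraclass correlation translate directly into code. Below is a minimal NumPy sketch, assuming an 8-bit grayscale image; equation (7) is read here in the usual Otsu-style between-class form, which is an interpretation of the printed formula.

import numpy as np

def interclass_variance(img, c_t):
    # Equation (7): between-class variance at threshold c_t.
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    w0, w1 = p[:c_t].sum(), p[c_t:].sum()
    if w0 == 0 or w1 == 0:
        return 0.0
    levels = np.arange(256, dtype=float)
    mu0 = (levels[:c_t] * p[:c_t]).sum() / w0
    mu1 = (levels[c_t:] * p[c_t:]).sum() / w1
    return w0 * w1 * (mu0 - mu1) ** 2

def intraclass_correlation(segment, image):
    # Equation (9): I_corl = var(segment) / (var(segment) + var(image)).
    s2, i2 = np.var(segment.astype(float)), np.var(image.astype(float))
    return s2 / (s2 + i2) if (s2 + i2) > 0 else 0.0

# Toy usage: synthetic two-level image, threshold between the levels.
img = np.concatenate([np.full(100, 40, np.uint8), np.full(100, 200, np.uint8)])
print(interclass_variance(img.reshape(10, 20), 120))
print(intraclass_correlation(img[:100], img))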
<ns0:div><ns0:head>Adaptive Structural Similarity Index:</ns0:head><ns0:p>The adaptive structural similarity index (ASSI) metric is used to classify each pixel into the tumorous or the non-tumorous region. The structural similarity index relies on the likeness among the pixels of tumor regions and on three factors: the structural parameter, the luminance parameter, and the contrast parameter. The index is determined as the product of the three factors mentioned above. This paper proposes an adaptive Structural Similarity Index that assesses the membership alongside the similarity index to make the outcome more realistic. The similarity index is determined through the equations stated below:</ns0:p><ns0:formula xml:id='formula_5'>(10) ASSI(p,q) = ω × {x(p,q)^α · y(p,q)^β · z(p,q)^γ}
(11) x(p,q) = (2μ_p μ_q + C_1) / (μ²_p + μ²_q + C_1)
(12) y(p,q) = (2σ_p σ_q + C_2) / (σ²_p + σ²_q + C_2)
(13) z(p,q) = (σ_pq + C_3) / (σ_p σ_q + C_3)
(14) ω = σ²_p / (σ²_p + σ²_e)</ns0:formula><ns0:p>Equation (<ns0:ref type='formula'>10</ns0:ref>) is used to assess the likelihood that a pixel is part of the tumor region. The variable ω determines the probability, which is multiplied by the product of the structural parameter presented in equation (<ns0:ref type='formula'>11</ns0:ref>), the luminance parameter presented in equation (<ns0:ref type='formula'>12</ns0:ref>), and the contrast parameter given in equation (<ns0:ref type='formula'>13</ns0:ref>). The outcome of the proposed model is promising when compared to that of its counterparts. The ASSI is used alongside the trained models in the proposed self-learning model.</ns0:p><ns0:p>Algorithm
Data: x(p,q) Structural Parameter, y(p,q) Luminance Parameter, z(p,q) Contrast Parameter
Input: pixel(p,q), where p represents the row and q represents the column
Output: the assessed correlation index pix(p,q)
initialize the starting pixel
while (i < maximum_number_iterations)
for each pixel in the input image, do
Update x(p,q), y(p,q), z(p,q) and ω</ns0:p></ns0:div>
<ns0:div><ns0:head>Approximate the Correlation Index</ns0:head></ns0:div>
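The ASSI computation of equations (10)-(14) can be sketched as follows. This is a hedged Python/NumPy rendering over two local image windows; the constants C1-C3 follow the common SSIM defaults for 8-bit images and the σ_e² value is an assumption, since the paper does not state them.

import numpy as np

def assi(win_p, win_q, alpha=1.0, beta=1.0, gamma=1.0,
         C1=6.5025, C2=58.5225, C3=29.26125, sigma_e2=1.0):
    # Equations (10)-(14) over two windows; constants are assumed defaults.
    p, q = win_p.astype(float), win_q.astype(float)
    mu_p, mu_q = p.mean(), q.mean()
    s_p, s_q = p.std(), q.std()
    s_pq = ((p - mu_p) * (q - mu_q)).mean()                 # covariance
    x = (2 * mu_p * mu_q + C1) / (mu_p**2 + mu_q**2 + C1)   # eq. (11)
    y = (2 * s_p * s_q + C2) / (s_p**2 + s_q**2 + C2)       # eq. (12)
    z = (s_pq + C3) / (s_p * s_q + C3)                      # eq. (13)
    w = s_p**2 / (s_p**2 + sigma_e2)                        # eq. (14)
    return w * (x**alpha) * (y**beta) * (z**gamma)          # eq. (10)

# Toy usage: compare a window against a reference tumor-region window.
a = np.random.randint(0, 255, (7, 7))
b = np.random.randint(0, 255, (7, 7))
print(assi(a, b))

A pixel (via its surrounding window) would then be assigned to the tumorous class when the returned index exceeds the threshold, mirroring the "If CI > Threshold" branch of the algorithm above.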
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Neural Network model <ns0:ref type='bibr' target='#b13'>(Farhana et al., 2020)</ns0:ref>. The convolutional layer in the proposed architecture is responsible for applying a sequence of kernels to the input MR images to extract the features that assist in classifying the various regions of the input image. The convolution layer identifies the spatial association of the pixels based on the selected features informing the region, which makes abnormality identification easy. The pooling layer generally follows the convolution layer and reduces the spatial size of the previous layer's outcome, which in turn minimizes the number of parameters and features considered for further processing and reduces the computational time. In the proposed model, the MAX pooling approach is used for reducing the spatial size. Convolution and max pooling layers are used concurrently for numerous rounds until all regions are well-tuned and the features are recognized. The flattening layer follows the convolution and pooling layers; it is responsible for transferring the data into the 1-dimensional array used in the subsequent layered architecture. The fully connected layer in the proposed architecture uses the intensity features of the regions recognized as abnormal and normal in the considered MR images. A Gated Recurrent Unit (GRU) is used in the proposed model to maintain the interpreted data of previous experimental predictions; the same knowledge is used subsequently in abnormality recognition from the MR images. The Heuristic Approach for Real-time Image Segmentation (HARIS) algorithm is used to differentiate the tumorous and non-tumorous regions by identifying the appropriate pixels as features for the prediction.</ns0:p><ns0:p>The Gated Recurrent Unit is used to maintain the knowledge acquired from previous outcomes. GRU is efficient, with comparatively fewer parameters needed to maintain the training data <ns0:ref type='bibr' target='#b15'>(Guizhu et al., 2018)</ns0:ref> <ns0:ref type='bibr' target='#b21'>(Le et al., 2016)</ns0:ref>. GRU can address challenging tasks like vanishing-gradient problems through its two gates, namely the reset gate and the update gate. HARIS works with both the fully connected and GRU layers to recognize the suitable pixels for the prediction. The cost function of the proposed model with respect to the tensors of the input image I(i,j) can be evaluated through equation (<ns0:ref type='formula'>15</ns0:ref>). The variable w_l represents the weight associated with layer l, the variable β represents the bias, and f_{w,β}(i) represents the kernel used in operating the elements. The error at layer l in the proposed model is determined through equation (<ns0:ref type='formula'>16</ns0:ref>):</ns0:p><ns0:formula xml:id='formula_6'>(16) e_l = ((w_l)^t · e_{l+1}) · f(x_l)</ns0:formula><ns0:p>Here, w_l is the weight associated with layer l, and e_{l+1} represents the error associated with layer l+1. The variable f(x_l) is the activation function associated with the layer, which is determined by the Rectified Linear Unit (ReLu) in the proposed model:</ns0:p></ns0:div>
ReLu(x_l) = max(0, x_l). The ReLu-based activation function is linear for values greater than zero and evaluates to zero for all negative values.
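The layered architecture described above — convolution, max pooling, flattening, fully connected layers, and a GRU, with ReLu activations — could be realized along the following lines. This is a hedged Keras sketch (assuming TensorFlow is installed): the layer counts, filter sizes, input shape, and the reshaping of spatial features into a sequence for the GRU are illustrative assumptions, not the paper's exact configuration.

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(128, 128, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Reshape((-1, 64)),               # spatial positions as a sequence for the GRU
    layers.GRU(64),                         # retains knowledge across the feature sequence
    layers.Dense(64, activation="relu"),    # fully connected layer on intensity features
    layers.Dense(2, activation="softmax"),  # tumorous vs. non-tumorous
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()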
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The learning rate of the proposed model is significant in assessing the performance of the model; it controls the weights of the network with respect to the loss gradient. The learning rate is presumed to be at an optimal level so that the model moves towards the solution by considering all the significant features in the prediction. A lower learning rate implies a slower learning ability, resulting in a delayed solution. On the other hand, a higher learning rate results in a faster solution that may ignore a few features in the learning process. Figure <ns0:ref type='figure'>3</ns0:ref> represents the learning rate of the proposed model across various epochs. Initially, with fewer epochs, the learning rate of the model is high, which implies that the model is learning quickly from the training data; as the number of epochs increases, the learning rate saturates, which implies that the model is learning only new insights from the training data. The same scenario can be observed in Figure <ns0:ref type='figure'>3</ns0:ref>. The saturation point for the learning rate in the proposed model is achieved at epoch 43 and iteration 187.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref> The graph representing the learning rate. In the proposed self-learning-centric segmentation model, the model acquires knowledge from earlier experimental outcomes. The learning rate also depends on the number of epochs the model is designed to execute before it is evaluated. As the number of epochs increases, the learning rate of the model moves towards the saturation point, and the change in the learning rate can be observed up to that point. However, increasing the number of epochs consumes more computational effort in the training process.</ns0:p><ns0:p>The other hyperparameters associated with the evaluation process include the loss and accuracy functions related to the proposed model's training and validation phases. As can be seen from Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>, the left-hand graph represents the training and validation loss of the model; after a certain number of epochs, approximately from epoch 15, the training and validation loss are close to each other, which indicates that the proposed model is reasonably good at identifying the abnormalities from the MR images. When the validation loss is much higher than the training loss, overfitting is assumed, which results in incorrect classification of the abnormal region. Underfitting is a situation in which the training loss is higher than the validation loss, resulting in poor accuracy of the model due to inappropriate selection of the features in the data. Thus, training and validation accuracy are the other parameters considered for evaluating the proposed model. The right-hand graph represents the accuracy measures of the proposed model.</ns0:p></ns0:div>
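The saturation point discussed above can also be detected programmatically from a recorded learning-rate history. The following sketch flags the first epoch after which consecutive changes stay below a tolerance; the tolerance and patience values are illustrative assumptions, not values from the paper.

def saturation_epoch(lr_history, eps=1e-3, patience=3):
    # Return the first epoch after which the curve changes by less than
    # `eps` for `patience` consecutive epochs (i.e., it has saturated).
    stable = 0
    for epoch in range(1, len(lr_history)):
        if abs(lr_history[epoch] - lr_history[epoch - 1]) < eps:
            stable += 1
            if stable >= patience:
                return epoch - patience + 1
        else:
            stable = 0
    return None

# Toy usage: a curve that flattens out after roughly 40 epochs,
# mimicking the saturation reported around epoch 43.
curve = [0.5 / (1 + 0.3 * e) for e in range(60)]
print(saturation_epoch(curve))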
<ns0:div><ns0:head>Experimental Results And Observations</ns0:head><ns0:p>The experimentation has been carried out on real-time MR images acquired from the open-source LGG dataset of The Cancer Genome Atlas (TCGA), captured from patients with acute glioma. The performance of the proposed approach has been evaluated through various metrics such as Sensitivity, Specificity, Accuracy, Jaccard Similarity Index,</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>and Matthews Correlation Coefficient, which are assessed from the True Positive count, designating how many times the proposed approach correctly recognizes a damaged region in the human brain as damaged; the True Negative count, designating how many times the approach correctly identifies a non-damaged region; the False Positive count, designating how many times the approach identifies a non-damaged region as damaged; and the False Negative count, designating how many times the approach mistakenly classifies a tumorous region as non-tumorous. Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref> presents the output screens of the proposed model. The performance of the proposed Self-Learning Network-based Segmentation (SLNS) approach is evaluated against other conventional approaches such as Twin-Centric GA with SGO, the HARIS approach, and the Convolutional Neural Network with respect to performance evaluation metrics such as Sensitivity, Specificity, Accuracy, Jaccard Similarity Index (JSI), and Matthews Correlation Coefficient (MCC). Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> represents the experimental evaluations of the proposed model. The experimentation was conducted by executing the code repeatedly 35 times and scaling up for evaluating the confusion matrix. The model's accuracy is assessed with a standard deviation of 0.015 in assessing the segmented image, as presented by <ns0:ref type='bibr' target='#b0'>(Agrawal et al., 2019)</ns0:ref>. From Table 2, it is observed that the proposed SLNS approach outperforms its counterparts; in many of the cases, the proposed approach performs considerably better than CNN. The semi-trained approach needs comparatively less effort than CNN at the same performance. From the computational-effort perspective, the proposed algorithm needs almost the same execution time as the CNN algorithm, but more time than the HARIS-algorithm-based segmentation approach. Fuzzy entropy-based MR image segmentation was experimented with by <ns0:ref type='bibr'>(Rajinikanth & Satapath, 2018)</ns0:ref> and <ns0:ref type='bibr' target='#b6'>(Chao et al., 2016)</ns0:ref>. Figure <ns0:ref type='figure'>6</ns0:ref> presents the comparative analysis of the various approaches.</ns0:p></ns0:div>
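The evaluation metrics named above follow directly from the confusion-matrix counts. The sketch below states them in Python; the counts in the usage example are illustrative, not the paper's values.

import numpy as np

def segmentation_metrics(tp, tn, fp, fn):
    # Standard confusion-matrix metrics: sensitivity, specificity,
    # accuracy, Jaccard Similarity Index, Matthews Correlation Coefficient.
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    jsi = tp / (tp + fp + fn)                  # Jaccard over the tumor class
    mcc = (tp * tn - fp * fn) / np.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return dict(sensitivity=sensitivity, specificity=specificity,
                accuracy=accuracy, jsi=jsi, mcc=mcc)

# Toy usage with illustrative counts.
print(segmentation_metrics(tp=90, tn=80, fp=10, fn=5))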
<ns0:div><ns0:head>Figure 6. Graph representing the comparative analysis of various approaches</ns0:head><ns0:p>The performance of the proposed model is assessed through the HARIS algorithm, which decides the best possible number of segments to assist in identifying the abnormalities in the image and assigns the pixels to a segment by identifying the ideal pixel in the segment. The second objective function of the HARIS algorithm is replaced with the fuzzy membership assessment model, and the performance of the proposed model is evaluated and presented in Table 3. The fuzzy membership is evaluated through equation (<ns0:ref type='formula'>18</ns0:ref>), as stated by <ns0:ref type='bibr' target='#b49'>(Vaidya et al., 2018)</ns0:ref>. The membership is evaluated as follows:</ns0:p><ns0:formula xml:id='formula_9'>(18) membership = vertex_pix − (vertex_pix − I'_pix) / (1 − mem_pix(I'_pix))</ns0:formula></ns0:div>
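Equations (18) and (19) translate directly into code. The sketch below assumes the membership term is strictly less than 1 so the denominators stay positive; the numbers in the usage example are illustrative.

def fuzzy_membership(vertex_pix, i_pix, mem_i):
    # Equation (18): membership used when assigning a pixel to a segment.
    return vertex_pix - (vertex_pix - i_pix) / (1.0 - mem_i)

def update_vertex(vertex_pix, j_pix, mem_j):
    # Equation (19): update of the segment vertex from an instance pixel.
    return vertex_pix + (j_pix - vertex_pix) / (1.0 - mem_j)

# Toy usage: vertex intensity 120, instance pixel 100, minimum correlation 0.2.
print(fuzzy_membership(120.0, 100.0, 0.2))   # 95.0
print(update_vertex(120.0, 100.0, 0.2))      # 95.0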
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>From equation (<ns0:ref type='formula'>18</ns0:ref>), the variable vertex_pix represents the vertices considered in the image for processing. Figure <ns0:ref type='figure'>7</ns0:ref> presents the assessed computational time consumed by the various algorithms. It can be observed from the graphical values that the proposed SLNS approach is highly efficient. SLNS consumes more time than the HARIS-based approach, but from a precision point of view the proposed SLNS offers a favorable trade-off. The computational time of CNN and SLNS is almost the same, and the computational lag in the proposed approach is because of the self-training, which needs additional effort. The proposed model does not require any significant training, unlike the neural-network-based models.</ns0:p><ns0:p>The proposed model's performance is assessed against various existing algorithms such as thresholding, Seeded Region Growing, Fuzzy C-Means, and Artificial Neural Network models with respect to evaluation metrics such as sensitivity, specificity, and accuracy, as presented by <ns0:ref type='bibr' target='#b2'>(Alam et al., 2019)</ns0:ref>. The comparative analysis of the approaches with the obtained values is shown in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>. The performances of the various algorithms compared to the proposed SLNS have been assessed with respect to the size of the tumor. It is observed from the clinical evidence that the resultant outcomes have proven to be better compared to their counterparts, and the outcomes of the proposed method are very accurate. The tabulated values in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref> represent the resultant experimental values and the progress of the tumor growth, which is identified based on the texture information of the abnormal region, as stated by <ns0:ref type='bibr'>(Naga et al., 2020)</ns0:ref>. The abnormal region is classified into the Tumour Core (TC), which depicts the actual region of the tumor, and the Enhanced Tumour (ET), which depicts the recent enhancement that has taken place in the region of the tumor and is presumed to be the progress of the tumor. The Whole Tumour (WT) is the region that includes both the tumor core and the enhanced tumor regions. The enhanced tumor region presents the progress of the abnormality, which will assist the physician in making decisions on suitable treatment.</ns0:p><ns0:p>Self-learning-centric models have recently become part of the biomedical and healthcare domain, where models are expected to work with minimal training. In situations where there is no adequate data available for training the model, self-learning models have proven to perform well; such a model also needs less training than its counterparts. The proposed model is capable of learning from its previous experimental results. Hyperparameters like the learning rate presented in the current study reach an acceptable level after 43 epochs, and other parameters like the training and validation studies indicate that the model is well fine-tuned. The performance assessment metrics like accuracy, sensitivity, specificity, Jaccard Similarity Index (JSI), and Matthews Correlation Coefficient (MCC) are assessed through repeated autonomous executions. The obtained values have proven that the model performs reasonably well with minimal training of the data.</ns0:p></ns0:div>
The fuzzy component is added to evaluate the membership for assigning the pixels to the appropriate segment, in contrast to the HARIS algorithm's original pixel assignment. The statistical analysis of the study has evinced the performance of the HARIS algorithm in the segmentation process.
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>The pivotal objective of the current study is to mechanize a self-learning model that is efficient in identifying tumors from MR images with minimal training. The model is efficient in working with problems for which minimal training data is available. The proposed model efficiently learns from its prior experimental outcomes and utilizes the acquired knowledge for future predictions. It is observed from the practical implementation of the proposed model that the resulting outcome is accurate and precise in identifying abnormality in the human brain. Fully supervised models like Convolutional Neural Networks, deep learning models, and various classification algorithms need to be rigorously trained for better accuracy; in contrast, the proposed model has proven to be efficient in generating outcomes equivalent to or better than those of such models. The proposed model itself can normalize the noise in the input image and is robust in differentiating non-brain tissues like the skull from the brain tissues. However, the proposed approach could be further optimized by incorporating a self-correcting convolution layer through an ancillary kernel.</ns0:p><ns0:p>Self-learning models are robust in handling unusual problems where inadequate training data is available. However, self-learning models overfit in a few cases, as the developer may not have control over the level of training of the model. The overfitting may lead to instability in the predictions made in a few contexts. Moreover, the process of debugging and rectifying issues in a self-learning model is challenging. The models learn from their previous experimental outcomes, and there is a possibility that the models might misinterpret an outcome based on previous experimental results.</ns0:p></ns0:div>
<ns0:div><ns0:head>Future Scope</ns0:head><ns0:p>The proposed model based on the self-learning mechanism is suitable for handling uncertain data more effectively through its previous experiences. The model's performance can be further improved by incorporating a Long Short-Term Memory (LSTM) component for efficiently handling the training data for more accurate prediction of the progress of the tumor growth, as LSTM components are efficient in holding memories for a longer time by preserving the dependencies based on the network's information. The incorporated memory elements can retain the state information over the specific iterations constructed through multiple gates. The proposed</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>(3) P_b = (Σ_p c_p(p)^x · p) / (Σ_p c_p(p)^x), where c_p determines the membership coefficient, p designates the pixel, and x designates the fuzzifier metric of belongingness. The value of the pixel likeliness (p_l) is designated through equation (4) stated below: (4) p_l = p(s) × p(cp|s) / Σ_k p(k) × p(cp|k). In equation (4), p(s) denotes the probability of likeliness with the segment s, and the sum in the denominator runs over all the classes.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Select the regions in the image; determine the suitable category to assign. If CI > Threshold, then assign the corresponding pixel to the tumorous region, else assign the corresponding pixel to the non-tumorous region; end if; end for; end while. Figure 2 represents the proposed model's layered architecture. The convolutional layer, max-pool layer, flattening layer, and fully connected layers closely resemble the Convolutional</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2 Architecture diagram of the proposed SLNS model</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>(15) j(w,β; i,j) = ||f_{w,β}(i) − j||²₂</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 The image representing the hyper-parameters</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 Output screens of the proposed SLNS approach</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,178.87,525.00,206.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,178.87,525.00,411.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,178.87,525.00,242.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,178.87,525.00,271.50' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>The table represents the confusion matrix of the proposed model</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Performance analysis of the proposed approach</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>The variable vertex_pix represents the vertices considered in the image for processing. The variable I'_pix represents the instance data, i.e., the pixels, and mem_pix(I'_pix) presents the minimum correlation value with respect to the MR image segment. The segment vertex is assessed through equation (19) stated below: (19) vertex = vertex_pix + (J'_pix − vertex_pix) / (1 − mem_pix(J'_pix)). In equation (19), the variable J'_pix represents the instance pixel in the segment, vertex_pix represents the vertex pixel in the region of the MR image, and mem_pix(J'_pix) represents the membership value. It can be observed from Table 3 that the fuzzy-membership-based HARIS algorithm has performed slightly better than the traditional HARIS algorithm.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Table representing the performance analysis with the fuzzy component. Figure 7. Graph representing the computational time of various approaches.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Table representing the performance analysis of the SLNS approach.</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "Reviewer 3
We would like to thank the reviewer for the constructive suggestions. The minor-revision comments on English grammar corrections and the necessary amendments will assist us in enhancing the comprehensibility of the manuscript.
Reviewer Comment#1 Correction towards
'to address noises in the original MRI images' -> 'to address noise...'
Authors Comment: We would like to thank the reviewer for the English correction. The necessary corrections are made at page 2, line 61 of the revised manuscript, and we have also searched the manuscript for any other such instances.
Reviewer Comment#2 Correction towards
'renewable avenue' -> What do you mean?
Authors Comment: We understand the ambiguity caused by the phrase "renewable avenue" in the description of the layered architecture of the SLNS model. We have made the necessary amendments to the manuscript concerning this statement at page 7, line 237. The same has been appended to this response for your perusal.
“The suggested methodology trains the segmentation algorithm using a self-learning module that gets adequate information from the pre-set segmentation dataset and previous experimental results.”
Reviewer Comment#3 Correction towards
'trains and validating with ground facts' -> What do you mean?
Authors Comment: We wish to convey that the information pertaining to the abnormality in each image is used to train the model, and the information associated with the testing images is used to cross-validate the performance of the proposed model. Based on the comment, we have appended a few more words pertaining to the ground facts; the corrections are made at page 6, line 222. The same has been highlighted in the revised manuscript and appended to this response for your kind perusal.
“Resultantly it is challenging to obtain a self-sufficient dataset for both trains and validating with ground facts that are associated with the imaging data concerning to abnormality for cross validation”
Reviewer Comment#4 Correction towards
“layer is driven by”: Do the authors mean consists of or comprise?
Authors Comment: We understand that the phrase "driven by" created ambiguity in the statement. We have replaced it with "consists of", as recommended by the reviewer. The necessary corrections are made to the manuscript at page 7, line 240. We would like to thank the reviewer for the comment, and the revised statement has been appended to this response for your perusal.
“Additionally, each convolutional layer consists of a variety of kernels,”
Reviewer Comment#5 Correction towards
'which are slides across the original input image' -> slides of the image?
Authors Comment: We came across the grammatical error in the statement, which affected its meaning. The statement was supposed to read "which slides across the original input image". The necessary corrections are made in the revised manuscript at page 7, line 243. The same has been highlighted in the revised manuscript and appended to this response for your kind perusal.
“Sobel filter that is used in the edge detection, and other 2D convolution filters that are generally (3x3) in size which slides across the original input image”
Reviewer Comment#6 Correction towards
'Over numerous rounds, the technique is repeated over multiple iterations to identify all the key features in the input image' -> What do the authors mean? Does this refer to the training epochs?
Authors Comment: We would like to thank the reviewer for the comment; as rightly said by the reviewer, it must be "over numerous epochs" rather than rounds. The necessary amendments are made to the manuscript at line 243 on page 7. We believe the amendments made will enhance the technical comprehensibility of the manuscript. The same has been appended to this response for your perusal.
“Over numerous epochs, the model is trained over multiple iterations to identify all the key features in the input image.”
Reviewer Comment#7 Correction towards
'The adaptive structural similarity index(ASSI) approach' -> Is this a metric and not an approach?
Authors Comment: We agree with the reviewer's statement: the ASSI is a metric that yields a value assisting in classifying data values, such as pixel intensities, among multiple classes; it is used in the current model to evaluate pixel similarity with the various regions in the MR image. The necessary corrections pertaining to the ASSI metric are made in the revised manuscript at page 8, line 306. The same has been appended to this response for your perusal.
“The adaptive structural similarity index(ASSI) metric is used to classify the pixel among the tumorous and non-tumorous regions.”
Reviewer Comment#8 Correction towards
'Overfitting occurs ...., resulting in improper categorization of the abnormal area due to an abundance of data'. -> Why this is happening? Please elaborate on that, otherwise please rephrase.
Authors Comment: We understand the ambiguity caused by that particular sentence. The necessary corrections are made to the sentence as recommended; the corrections appear at page 11, line 414 of the revised manuscript. The same has been appended to this response for your perusal.
“When the validation loss is much higher than the training loss, it is assumed as the overfitting, which results in incorrect classification of the abnormal region.”
Reviewer Comment#9 Correction towards
'The hyperparameters like the learning rate that are presented in the current study are at an acceptable level after 43 epochs, and other parameters like the training and validation studies state that the model fine-tuned” -> What do the authors mean by learning rate is at an acceptable level?
Authors Comment: We wish to convey that the learning rate saturates after 43 epochs: if the number of epochs is less than 43 (for example, 15), the model is not able to recognise all the pivotal features, which might influence the performance of the model. Based on the accuracy of the model, we consider the learning rate adequate to continue with the validations in the current study. However, the learning rate is influenced by the number of experimental instances in a self-learning model, and the numbers of iterations and epochs have an impact on the learning rate. We hope our effort to convey the authors' assumption pertaining to the learning rate is reasonable, and we thank the reviewer for the comments, which have helped us understand the audience's perspective on the hyperparameters.
Reviewer Comment#10
From the technical perspective: what happens if unlabeled data further used as new training samples are misclassified? Also, what is the minimum amount of acceptable labelled data to initially train the model?
Authors Comment: We understand the point that using unlabelled data in training might mislead the model: the model might misinterpret a non-tumour region as a tumour region and may rely on the same experimental results in future experimental instances, which would be challenging to handle when working with unlabelled data in such a context. As noted by the reviewer, we have not evaluated our model with respect to the minimum acceptable amount of labelled data needed; moreover, we considered 0.08 to be a reasonable learning rate for the model to make the predictions.
Reviewer Comment#11
I do suggest the use of 'MR images' instead of MRI images since 'I' corresponds to imaging.
Authors Comment: We would like to thank the reviewer for the comment. The necessary correction towards the use of "MR images" instead of "MRI images" has been made in all instances in the revised manuscript. The same has been highlighted through track changes.
We would like to thank the reviewers for the constructive suggestions, which have assisted us in building a stronger paper. We have made our best efforts to address and incorporate all the reviewers' comments in the manuscript. Some of the recommendations appear to have been misplaced from another PeerJ manuscript; hence, the majority of those comments seem irrelevant to the current manuscript.
" | Here is a paper. Please give your review comments after reading it. |
199 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Container-based application deployments have received significant attention in recent years. Operating system virtualization based on containers has become a popular mechanism for deploying, managing, and operating complex, large-scale software systems. Packaging application components into self-contained artifacts has brought substantial flexibility to developers and operation teams alike. However, this flexibility comes at a price. Practitioners need to respect numerous constraints, ranging from security and compliance requirements to specific regulatory conditions. Fulfilling these requirements is especially challenging in specialized domains with large numbers of stakeholders. Moreover, the rapidly growing number of container images to be managed, due to the introduction of new or updated applications and their respective components, leads to significant challenges for container management and adaptation. In this paper, we introduce Smart Brix, a framework for continuous evolution of container application deployments that tackles these challenges. Smart Brix integrates and unifies concepts of continuous integration, runtime monitoring, and operational analytics. Furthermore, it allows practitioners to define generic analytics and compensation pipelines composed of self-assembling processing components to autonomously validate and verify containers to be deployed. We illustrate the feasibility of our approach by evaluating our framework using a case study from the smart city domain.</ns0:p><ns0:p>We show that Smart Brix is horizontally scalable and that the runtime of the implemented analysis and compensation pipelines scales linearly with the number of container application packages.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head></ns0:div>
<ns0:div><ns0:p>In recent years, we have seen widespread uptake of operating system virtualization based on containers <ns0:ref type='bibr' target='#b22'>(Soltesz et al., 2007)</ns0:ref> as a mechanism to deploy and manage complex, large-scale software systems.</ns0:p></ns0:div>
<ns0:div><ns0:p>Using containers, developers create self-contained images of application components along with all dependencies that are then executed in isolation on top of a container runtime (e.g., Docker 1, rkt 2, or Triton 3). By packaging application components into self-contained artifacts, developers can ensure that the same artifact is consistently used throughout the complete software release process, from initial testing to the final production deployment. This mechanism for application deployment has become especially popular with practitioners executing projects following DevOps <ns0:ref type='bibr'>(Hüttermann, 2012)</ns0:ref> principles. Based on the convergence of development and operations, DevOps advocates a high degree of automation throughout the software development lifecycle (e.g., to implement continuous delivery <ns0:ref type='bibr'>(Humble and Farley, 2010)</ns0:ref>), along with an associated focus on deterministic creation, verification, and deployment of application artifacts using Infrastructure as Code (IaC) <ns0:ref type='bibr' target='#b16'>(Nelson-Smith, 2014)</ns0:ref> techniques, such as Dockerfiles 4 for containerized applications.</ns0:p><ns0:p>These properties allow for straightforward implementation of immutable infrastructure deployments, as advocated by IaC approaches. Application container images are usually created using a layered structure so that common base functionality can be reused by multiple container images. Application-specific artifacts are layered on top of a base file system so that for subsequent updates only the modified layers need to be transferred among different deployment environments. Container engine vendors such as Docker and CoreOS provide public repositories where practitioners can share and consume container images, both base images for common Linux distributions (e.g., Ubuntu, CoreOS, CentOS, or Alpine), to which custom functionality can subsequently be added, and prepared application images that can be directly used in a container deployment. Once uploaded to a repository, a container image is assigned a unique, immutable identifier that can subsequently be used to deterministically deploy the exact same application artifact throughout multiple deployment stages. By deploying each application component in its own container 5, practitioners can reliably execute multiple component versions on the same machine without introducing conflicts, as each component is executed in an isolated container.</ns0:p><ns0:p>However, since each container image must contain every runtime dependency of the packaged application component, each of these dependency sets must be maintained separately. This leads to several challenges for practitioners. Over time, the number of active container images grows due to the introduction of new applications, new application components, and updates to existing applications and their components. This growing number of container images inherently leads to a fragmentation of deployed runtime dependencies, making it difficult for operators to ensure that every deployed container continues to adhere to all relevant security, compliance, and regulatory requirements.
Whenever, for instance, a severe vulnerability is found in a common runtime dependency, practitioners either have to manually determine if any active container images are affected, or initiate a costly rebuild of all active containers, irrespective of the actual occurrence of the vulnerability. We argue that practitioners need a largely automated way to perform arbitrary analyses on all container images in their deployment infrastructure. Furthermore, a mechanism is required that allows for the enactment of customizable corrective actions on containers that fail to pass the performed analyses. Finally, in order to allow practitioners to deal with the possibly large number of container images, the overall approach should be able to adapt its deployment to scale out horizontally.</ns0:p><ns0:p>In this paper, we present Smart Brix, a framework for continuous evolution of container applications.</ns0:p><ns0:p>Smart Brix integrates and unifies concepts of continuous integration, runtime monitoring, and operational analytics systems. Practitioners are able to define generic analytics and compensation pipelines composed of self-assembling processing components to autonomously validate and verify containers to be deployed.</ns0:p><ns0:p>The framework supports both traditional mechanisms, such as integration tests, and custom, business-relevant processes, e.g., to implement security or compliance checks. Smart Brix not only manages the initial deployment of application containers, but is also designed to continuously monitor the complete application deployment topology to allow for timely reactions to changes (e.g., in regulatory frameworks or discovered application vulnerabilities). To enact such reactions to changes in the application environment, developers define analytics and compensation pipelines that will autonomously mitigate problems if possible, but are designed with an escalation mechanism that will eventually request human intervention if automated implementation of a change is not possible. To illustrate the feasibility of our approach, we evaluate the Smart Brix framework using a case study from the smart city domain. We show that the runtime of the implemented analysis and compensation pipelines scales linearly with the number of analyzed application packages, and that it adds little overhead compared to container acquisition times.</ns0:p><ns0:p>The remainder of this paper is structured as follows. In Section 2 we present a motivating scenario and relevant design goals for our framework. We present the Smart Brix framework in Section 3, along with a detailed discussion of the framework components. In Section 4 we evaluate our approach using a case study from the smart city domain. Related work is discussed in Section 6, followed by a conclusion and outlook for further research in Section 7.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>MOTIVATION</ns0:head><ns0:p>In this paper, we base our discussion on a scenario containing a multi-domain expert network as created within URBEM 6, a research initiative of the city of Vienna and TU Wien. To tackle the emerging challenges, URBEM builds on a smart city loop <ns0:ref type='bibr' target='#b19'>(Schleicher et al., 2015b)</ns0:ref>, which is depicted in Fig. <ns0:ref type='figure'>1</ns0:ref>. This loop outlines a reactive system that enables stakeholders to make informed decisions based on the models and analyses of interdisciplinary domain experts who in turn can access the large amounts of data provided by smart cities. In URBEM, a network consists of experts in the domains of energy, mobility, mathematics, building physics, sociology, as well as urban and regional planning. URBEM aims to provide decision support for industry stakeholders to plan for the future of the city of Vienna and represents a Distributed Analytical Environment (DAE) <ns0:ref type='bibr' target='#b20'>(Schleicher et al., 2015c)</ns0:ref>.</ns0:p><ns0:p>The experts in this scenario rely on a multitude of different models and analytical approaches to make informed decisions based on the massive amounts of data that are available about the city. In turn, these models rely on a plethora of different tools and environments that lead to complex requirements in terms of providing the right runtime environment for them to operate. The used tools range from modern systems for data analytics and stream processing like Cassandra and Spark, to proprietary tools developed by companies and research institutes with a large variance in specific versions and requirements to run them. Additionally, these domains have to deal with a broad range of different stakeholders and their specific security and compliance requirements. Models sometimes need to tailor their runtime environment to specific technology stacks to ensure compliance or to be able to access the data they need.</ns0:p><ns0:p>Managing and satisfying all these requirements is a non-trivial task and a significant factor hindering broader adoption. Therefore, this environment offers an optimal case for the advantages that come with the use of container-based approaches. Operations teams that need to integrate these models no longer need to be concerned with runtime specifics. Experts simply build containers that can be deployed in the heterogeneous infrastructures of participating stakeholders.</ns0:p><ns0:p>However, several challenges remain. In URBEM, the team of experts with their plethora of different models created over 250 different images that serve as the foundation for running containers. The models in these containers are fueled by data from several different stakeholders in the scenario, ranging from research institutions in the City of Vienna to industry stakeholders in the energy and mobility domain.</ns0:p><ns0:p>Each of them mandates a very distinct set of security and compliance requirements that need to be met in order to run them. These requirements in turn are subject to frequent changes, and the containers need to be able to evolve along with them. Additionally, even though the container approach provides isolation from the host system, it is still vital to ensure that the containers themselves are not compromised. This calls for means to check the systems running inside the container for known vulnerabilities, an issue that is subject to heavy and fast-paced change, again requiring corresponding evolution.
A recent study 7 shows that in the case of Docker, depending on the version of the images, more than 70% of the images show potential vulnerabilities, with over 25% of them being severe. This also raises the question of who is responsible for checking and fixing these vulnerabilities: the operations team or the experts who created them? Despite these security and compliance constraints, the ever-changing smart city domain itself makes it necessary for experts to stay on top of the novel toolsets that emerge in order to handle requirements stemming from topics like Big Data or IoT. This leads to a rapid creation and adaptation of models and their corresponding containers, which in turn need to be checked against these constraints again.</ns0:p><ns0:p>Last but not least, these containers need to comply with certain non-functional requirements that arise from the specific situations they are applied in. This calls for the ability to constantly check containers against certain runtime metrics that need to be met in order to ensure that these systems are able to deliver their expected results within stakeholder-specific time and resource constraints.</ns0:p><ns0:p>All these factors lead to a complex environment that calls for an ability to easily adapt and evolve containers to their ever-changing requirements. Specifically, we identify the following requirements in the context of our domain:</ns0:p><ns0:p>• The ability to check a large number of heterogeneous containers against an open set of evolving requirements. These requirements can be vulnerabilities, compliance constraints, functional tests, or any other metric of interest for the domain.</ns0:p><ns0:p>• The ability to mitigate issues and evolve these containers based on the results from the previously mentioned checks.</ns0:p><ns0:p>• An approach that is applicable in the context of operations management, while still enabling the participation of experts both for checking and for evolution.</ns0:p><ns0:p>• An approach that can be applied to existing deployments as well as utilized to test new ones.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>THE SMART BRIX FRAMEWORK</ns0:head><ns0:p>In this section, we introduce the Smart Brix framework for continuous evolution of container-based deployments, which addresses the previously introduced requirements. We start with a framework overview, followed by a detailed description of all framework elements, and conclude with a comprehensive description of our proof-of-concept implementation, including possible deployment variants.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Framework Rationales</ns0:head><ns0:p>The Smart Brix framework follows the microservice <ns0:ref type='bibr' target='#b17'>(Newman, 2015)</ns0:ref> architecture paradigm, and an overview of the main framework components is shown in Fig. <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>. The framework is logically organized into four main facets, which group areas of responsibility. Each of these facets is composed of multiple</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>components, where each of these components represents a microservice. The components in the Analyzer and Compensation Facet are managed as self-assembling components 8, an approach we already successfully applied in previous work <ns0:ref type='bibr' target='#b18'>(Schleicher et al., 2015a)</ns0:ref>. Each of these components follows the Command Pattern <ns0:ref type='bibr' target='#b3'>(Gamma et al., 1994)</ns0:ref> and consists of multiple processors that are able to accept multiple inputs and produce exactly one output. This functional approach enables a clean separation of concerns and allows us to decompose complex problems into manageable units. Fig. <ns0:ref type='figure' target='#fig_2'>3</ns0:ref> illustrates an example of auto-assembly within the Analyzer facet. We see a set of processors, where each processor is waiting for a specific type of input and clearly specifies the output it produces. The processors use a message-oriented approach to exchange input and output data, where each output and input is persistently available in the message queue and accessible by any processor. In this example, we perform an analysis of a custom-built Debian-based container that hosts the Apache HTTPD server. There are two potential processors for the input Artifact, each of them able to handle a different container format. Since in our example the Artifact is a Docker Container, only the Docker Analyzer reacts and produces as output a Docker Image. In the next step, there are two active processors, the Docker Base Image Analyzer and the Docker Package System Analyzer, both taking Docker Images as input. Since the Docker Base Image Analyzer cannot determine a base image for the given Docker Image, it produces no output. However, the Docker Package System Analyzer is able to determine that the image uses a DPKG-based package system and produces the corresponding output. Now the DPKG Package Analyzer reacts by taking two inputs, the original Artifact as well as the DPKG output, and inspects the Artifact via the DPKG command to produce a Package List. In the last step of this auto-assembly example, the Vulnerability Analyzer listens for a Package List and produces a List of Vulnerabilities. This enables a straightforward auto-assembly approach, where connecting previous outputs to desired inputs leads to an automatically assembled complex system consisting of simple, manageable processors. A processor itself can be anything and is not bound to any specific functionality, so it can be created completely flexibly depending on the task at hand. This approach further eliminates the necessity of complex composition and organization mechanisms, enabling dynamic and elastic compositions of desired functionality, where processors can be added on demand at runtime. This enables the previously mentioned creation of open and flexible analytics and compensation pipelines based on this principle.</ns0:p><ns0:p>Additionally, the components in the analyzer and compensation facets follow the principle of Confidence Elasticity, which means that a component or processor produces a result that is augmented with a confidence value (c ∈ R, 0 ≤ c ≤ 1), with 0 representing no certainty and 1 representing absolute certainty about the produced result. This allows for the specification of acceptable confidence intervals for the framework, which augment the auto-assembly mechanism.
The confidence intervals are provided as optional configuration elements for the framework. In case the provided confidence thresholds are not met, the framework follows an escalation model to find the next component or processor that is able to provide results with higher confidence, until it reaches the point where human interaction is necessary to produce a satisfactory result (illustrated in Figure 4). Each processor (p_i) from the set of active processors (P_a) provides a confidence value c_i. We define the overall confidence value of all active processors (c_a) as c_a = ∏_{p_i ∈ P_a} c_i. The compensation stops when c_a meets the specified confidence interval of the framework or a processor represents a human interaction, which has a confidence value of c_i = 1.</ns0:p></ns0:div>
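The confidence aggregation and escalation rule can be stated compactly in code. The following Python sketch is purely illustrative — the prototype described in Section 3.8 is implemented in Ruby — and the threshold value in the usage example is an assumption.

from math import prod

def aggregate_confidence(confidences):
    # Overall confidence of the active processors: c_a = product of all c_i.
    return prod(confidences)

def needs_escalation(confidences, threshold, human_involved=False):
    # Escalate to a higher-confidence processor (ultimately a human, whose
    # confidence is 1) while c_a stays below the configured interval.
    return aggregate_confidence(confidences) < threshold and not human_involved

# Toy usage: two automated processors against a threshold of 0.8.
print(needs_escalation([0.9, 0.7], threshold=0.8))   # True  (0.63 < 0.8)
print(needs_escalation([0.95, 0.9], threshold=0.8))  # False (0.855 >= 0.8)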
<ns0:div><ns0:head n='3.2'>Smart Brix Manager</ns0:head><ns0:p>In order to initiate a container evolution, the Smart Brix Manager is invoked via the Smart Brix API with the following parameters: (i) a set of Containers to be inspected, (ii) the necessary Credentials to analyze and evolve them, and optionally (iii) a set of Artifacts necessary to compensate or analyze the containers. In a first step, the Smart Brix Manager queries the Repository Manager to see if there are already known issues for the supplied containers. If any known issues are found, the Smart Brix Manager creates a corresponding compensation topic via the messaging infrastructure by publishing the container identifiers as well as the found issues. This represents an input that will subsequently be consumed by the corresponding Compensation Handlers and starts the previously described auto-assembly process in the Compensation Facet.</ns0:p><ns0:p>If no issues were found, the Smart Brix Manager hands off the supplied Containers, Credentials, and Artifacts to the Dependency Manager, which is responsible for storing them in the Dependency Repository.</ns0:p><ns0:p>As a next step, the Smart Brix Manager creates a corresponding analyzer topic via the messaging infrastructure. 8 http://techblog.netflix.com/2014/06/building-netflix-playback-with-self.html</ns0:p></ns0:div>
<ns0:div><ns0:p>Furthermore, the Smart Brix Manager provides API endpoints to query the results of analytics and compensation processes, as well as the current status via container identifiers.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3'>Repository Manager</ns0:head><ns0:p>The Repository Manager provides a repository for storing analytics results of all analyzed containers as well as their corresponding compensations. The Analytics Repository itself is a distributed key value store that enables Analyzers as well as Compensation Handlers to store information without being bound to a fixed schema. In addition, this enables the previously mentioned open extensibility of our auto-assembly approach by allowing every component to choose the required storage format. Finally, the Repository Manager provides a service interface to store and retrieve analytics and compensation information as well as an interface for querying information based on container identifiers or other attributes.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.4'>Dependency Manager</ns0:head><ns0:p>The Dependency Manager handles the necessary credentials and artifacts needed for processing containers. The Dependency Manager provides a service interface that allows the Smart Brix Manager to store artifacts and credentials associated with specific containers. Additionally, it provides a mechanism for components in the Analyzer and Compensation Facets to retrieve the necessary credentials and artifacts for the corresponding container IDs. Finally, it acts as a service registry for components in the Utility Facet and exposes them to the Compensation and Analyzer Facet. The Dependency Manager uses a distributed key-value store for its Dependency Repository in order to store the necessary information.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.5'>Utility Facet</ns0:head><ns0:p>The general role of the Utility Facet is to provide supporting services for Analyzers, Compensation Handlers, and Managers of the framework. Components in the Utility Facet register their offered services via the Dependency Manager. This provides an open and extensible approach that allows incorporating novel elements in order to address changing requirements of container evolution. In our current architecture, the Utility Facet contains three components. First, a Vulnerability Hub, which represents a service interface that allows Analyzers as well as Compensation Handlers to check artifacts for vulnerabilities. The Vulnerability Hub can utilize either public repositories (e.g., the National Vulnerability Database 9) or any other open or proprietary vulnerability repository. The second component is a Compliance Hub that allows checking for any compliance violations in the same way the Vulnerability Hub does. This is an important element in heterogeneous multi-stakeholder environments, where compliance with all specified criteria must be ensured at all times. The last element is a Metric Hub, which allows checking artifacts for certain relevant metrics in order to ensure relevant Quality-of-Service constraints for containers.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.6'>Analyzers</ns0:head><ns0:p>The task of the components within the Analyzer Facet is to test containers for potential vulnerabilities, compliance violations, or any other metrics. The facet is invoked by the Smart Brix Manager, which triggers an auto-assembly process for the given containers that should be analyzed. The Analyzer Facet can contain components for the most prominent container formats like Docker or rkt, and since we utilize the auto-assembly approach, we are able to integrate new container formats as they emerge. For analyzing a container, an analyzer follows three basic steps: (i) Determine the base layer of the container.</ns0:p><ns0:p>In order to access the containers and to perform analytics, the components within the Analyzer Facet interact with the Dependency Manager. The manager provides them with the necessary credentials for processing containers. Once the analyzers have processed a container, they publish the results, which are augmented with the confidence value, to the corresponding topic, where the Smart Brix Manager carries on as previously described.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.7'>Compensation Handlers</ns0:head><ns0:p>The components in the Compensation Facet generate potential compensations for containers with problems that have been previously identified by the Analyzers. Like the Analyzers, the Compensation Handlers are invoked by the Smart Brix Manager, which starts an auto-assembly process for the containers whose problems should be compensated. We provide components for the most prominent container formats, with the ability to extend the list as new formats emerge. The compensation handlers follow three basic steps: (i) Apply a compensation strategy for the container and the identified problem; (ii) Verify that the compensation strategy could be applied by rebuilding or restarting the container; (iii) Verify that the problems could be eliminated or reduced.</ns0:p><ns0:p>Again, every step can utilize a set of different processors, each of them with a specific confidence value, which represent different strategies. Possible processors are: (i) Container Processors, which try to use the base image's package manager to upgrade packages with identified vulnerabilities; (ii) Image Processors, which try to build a new image without the vulnerabilities; (iii) Similarity Processors, which try to compensate by applying steps from similar containers that do not show these vulnerabilities; (iv) Human Provided Processors, which are human experts that manually compensate a container.</ns0:p></ns0:div>
<ns0:div><ns0:p>The Compensation Handlers interact with the Dependency Manager in a similar way to the Analyzers to retrieve the necessary credentials to operate. As Image Processors and Similarity Processors build new images in order to compensate, they can request the artifacts associated with an image to be able to build them.</ns0:p></ns0:div>
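<ns0:div><ns0:p>A minimal Ruby sketch of the Container Processor strategy described above is shown below: upgrade packages via the distribution's package manager, restart the container, and derive the confidence value from the fraction of fixed vulnerabilities. The docker commands are real CLI calls; the container name and the vulnerability-counting helper are illustrative stubs standing in for queries against the Vulnerability Hub.</ns0:p><ns0:p>
# Upgrade commands per distribution flavor.
UPGRADE_CMDS = {
  'ubuntu' => 'apt-get update; apt-get -y upgrade',
  'debian' => 'apt-get update; apt-get -y upgrade',
  'centos' => 'yum -y update',
  'alpine' => 'apk update; apk upgrade'
}

# Stub standing in for a package-list check against the Vulnerability Hub.
def vulnerability_count(_container)
  0
end

def compensate(container, flavor)
  before = vulnerability_count(container)
  system(%(docker exec #{container} sh -c "#{UPGRADE_CMDS[flavor]}")) # step (i)
  system("docker restart #{container}")                              # step (ii)
  after = vulnerability_count(container)                             # step (iii)
  confidence = before.zero? ? 0.0 : (before - after).to_f / before
  { fixed: before - after, confidence: confidence } # published to the result topic
end

puts compensate('urbem-app', 'ubuntu')
</ns0:p></ns0:div>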
<ns0:div><ns0:head n='3.8'>Implementation</ns0:head><ns0:p>We created a proof-of-concept prototype of our framework based on a set of RESTful microservices implemented in Ruby. Each component that exposes a service interface relies on the Sinatra 10 web framework. The Repository Manager and the Dependency Manager utilize MongoDB 11 as their storage backend, which enables the previously described distributed, open, and extensible key-value store for their repositories. We implemented a Vulnerability Hub that uses a SQLite 12 storage backend to persist vulnerabilities in a structured format. It holds the recent data from the National Vulnerability Database 13 (NVD), specifically the listed Common Vulnerabilities and Exposures (CVEs). This CVE Hub imports the CVEs posted on the NVD, stores them in its repository, and supports searching for CVEs by vulnerable software name and version via its Sinatra-based REST interface.</ns0:p><ns0:p>To enable the auto-assembly mechanism for each processor within each component in the Analyzer and Compensation Facets, we use a message-oriented middleware. Specifically, we utilize RabbitMQ's 14 topic and RPC concepts by publishing each output and listening for its potential inputs on dedicated topics. We implemented a Docker Analyzer component with a Base Image Processor and a Convention Processor-based strategy. The Docker Analyzer first tries to determine the operating system distribution of the container by analyzing its history. Specifically, it uses the Docker API to generate the history for the container and selects the first layer's ID, which represents the base layer. It then matches this layer against a set of known layer IDs that map to corresponding operating system distributions, in order to determine which command to use for extracting the package list. If a match is found, it uses the corresponding commands to determine the package list: if the determined operating system is Ubuntu or Debian, dpkg is used; if it is CentOS, yum; and if it is Alpine, apk. After parsing the package command output into a processable list of packages, it checks each package name and version using the CVE Hub via its REST interface. When this step is finished, the Analyzer publishes the list of possible vulnerabilities, including the analyzed packages along with several runtime metrics. In case the base image strategy fails, the Docker Analyzer tries to determine the base layer, including the corresponding operating system, via a Convention Processor. Specifically, it tests whether the image contains any of the known package managers. Based on the results, the analyzer determines the distribution flavor and continues as described above.</ns0:p><ns0:p>We further implemented a Docker Compensation Handler with a Container Processor and an Image Processor based compensation strategy. The Container Processor tries to upgrade the container using the operating system distribution's package manager. After this operation succeeds, it checks whether the number of vulnerabilities is reduced by comparing the new package versions against the CVE Hub. If so, it augments the results with a confidence value based on the percentage of fixed vulnerabilities and publishes the results. The Image Processor tries to fix the container by generating a new container manifest (e.g., a Dockerfile). More precisely, it uses the Docker API to generate the image history and then derives a Dockerfile from this history.
After this step, the Image Processor exchanges the first layer of the Dockerfile with the newest version of its base image. In cases where it cannot uniquely identify the correct Linux flavor, it generates multiple Dockerfiles, for example one for Ubuntu and one for Debian.</ns0:p><ns0:p>It then checks the Dockerfiles' structure for potential external artifacts. Specifically, it searches for any COPY or ADD commands that are present in the Dockerfile. If such commands are present, it contacts the Dependency Manager and attempts to retrieve the missing artifacts. Once this is finished, the Image Processor tries to rebuild the image based on the generated Dockerfile. After this step is finished, the Image Processor again checks the new list of packages against the CVE Hub, and if it could improve the state of the image, it publishes the results with the corresponding confidence value. The prototype implementation is available online at https://bitbucket.org/jomis/smartbrix/.</ns0:p></ns0:div>
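<ns0:div><ns0:p>The following Ruby sketch captures the essence of this Dockerfile regeneration: derive the instruction list from the image history, exchange the base layer for the newest base image, and rebuild. The history-to-Dockerfile mapping shown here is a simplification of the prototype's logic, and the image names are illustrative.</ns0:p><ns0:p>
# Derive a Dockerfile from an image's history and exchange its base image.
def derive_dockerfile(image, new_base)
  history = `docker history --no-trunc --format '{{.CreatedBy}}' #{image}`
  lines   = history.lines.map { |line| line.strip }.reverse # base instruction first
  instructions = lines.map do |cmd|
    cmd.sub(%r{^/bin/sh -c #\(nop\)\s*}, '').sub(%r{^/bin/sh -c }, 'RUN ')
  end
  (["FROM #{new_base}"] + instructions.drop(1)).join("\n") # replace the base layer
end

dockerfile = derive_dockerfile('myapp:latest', 'ubuntu:latest')
# A scan for COPY/ADD instructions would request missing artifacts from the
# Dependency Manager at this point before rebuilding.
File.write('Dockerfile', dockerfile)
system('docker build -t myapp:compensated .') # then re-check against the CVE Hub
</ns0:p></ns0:div>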
<ns0:div><ns0:head>Figure 6. Evaluation Setup of Smart Brix running in inspection mode</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.9'>Deployment Modes</ns0:head><ns0:p>The Smart Brix Framework provides a container for each facet and therefore supports deployment on heterogeneous infrastructures. The framework wires components and facets via the containers' environment variables, which enables dynamic setups. We distinguish between two fundamental deployment modes: Inspection Mode and Introspection Mode.</ns0:p></ns0:div>
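<ns0:div><ns0:p>As a small illustration, a component could read its wiring from environment variables as in the following Ruby sketch; the variable names and defaults are assumptions, not the framework's documented configuration keys.</ns0:p><ns0:p>
# Wiring read from the container environment (illustrative variable names).
MANAGER_URL = ENV.fetch('SMARTBRIX_MANAGER_URL', 'http://manager:8080')
AMQP_URL    = ENV.fetch('SMARTBRIX_AMQP_URL', 'amqp://rabbitmq:5672')
MODE        = ENV.fetch('SMARTBRIX_MODE', 'inspection') # or 'introspection'

puts "connecting to #{MANAGER_URL} and #{AMQP_URL} in #{MODE} mode"
</ns0:p></ns0:div>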
<ns0:div><ns0:head n='3.9.1'>Inspection Mode</ns0:head><ns0:p>The Inspection Mode allows the framework to run in a dedicated inspection and compensation setting.</ns0:p><ns0:p>In this mode the framework ideally runs exclusively without any other containers and utilizes the full potential of the host systems. This means that the Smart Brix Managers wait until they receive an explicit request to analyze and compensate an artifact.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.9.2'>Introspection Mode</ns0:head><ns0:p>The Introspection Mode allows the framework to run in an active container setup. In this mode, the framework constantly watches deployed containers via the Smart Brix Manager. The Manager can be provided with a list of containers to watch via a configuration setting; this list of containers is then analyzed and compensated. If no container list is supplied, the Manager watches all running containers on the platform. It initiates a check whenever new images are added, the image of a running container changes, or new vulnerabilities are listed in the CVE Hub.</ns0:p></ns0:div>
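<ns0:div><ns0:p>A minimal sketch of such a watch loop in Ruby is shown below, polling the Docker host for new images and notifying the Manager. The Manager endpoint, polling interval, and environment variable are illustrative assumptions; the actual Manager may react to events rather than poll.</ns0:p><ns0:p>
require 'net/http'

watch_env    = ENV['SMARTBRIX_WATCH'] # optional explicit watch list
watched      = watch_env.nil? ? nil : watch_env.split(',')
known_images = []

loop do
  images = `docker images -q`.lines.map { |line| line.strip }
  (images - known_images).each do |image_id|
    next unless watched.nil? || watched.include?(image_id)
    # Trigger analysis (and subsequent compensation) at the Smart Brix Manager.
    Net::HTTP.post(URI('http://manager:8080/analyze'), image_id)
  end
  known_images = images
  sleep 30
end
</ns0:p></ns0:div>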
<ns0:div><ns0:head n='4'>EVALUATION</ns0:head></ns0:div>
<ns0:div><ns0:head n='4.1'>Setup</ns0:head><ns0:p>For our evaluation we used the following setup. We provisioned three instances in our private OpenStack cloud, each with 7.5GB of RAM and 4 virtual CPUs. Each of these instances was running Ubuntu 14.04 LTS with Docker staged via docker-machine 15 . For our evaluation, we chose the inspection deployment variant of our framework in order to stress-test the system without other interfering containers. We deployed one manager container representing the Management Facet, as well as two utility containers containing the CVE Hub and the Messaging Infrastructure, on one instance. We then distributed 12 analyzer containers with 12 compensation containers over the remaining two instances. Additionally, we deployed a cAdvisor 16 container on every instance to monitor the resource usage and performance characteristics of the running containers. Fig. <ns0:ref type='figure'>6</ns0:ref> shows an overview of the deployed evaluation setup.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Experiments</ns0:head><ns0:p>Since we currently only have around 250 images in our URBEM setting, we extended the number of images to be evaluated. In order to obtain a representative set of heterogeneous images with a certain impact, we focused on the most popular images on Docker Hub (based on their number of pulls and stars). We then extracted the name and the corresponding pull command along with the latest tag to form the URI of each image. This set of 4000 URIs represented the source for our experiments, which was then split into 3 sets containing 250, 500, and 1000 images to be tested.</ns0:p></ns0:div>
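<ns0:div><ns0:p>As a small illustration of how such test sets can be formed, the following Ruby sketch builds image URIs from a listing of image names with the latest tag and splits them into sets of the sizes used in our experiments; the input file is an assumed stand-in for the extracted Docker Hub listing.</ns0:p><ns0:p>
# Form pull URIs with the latest tag and split them into the tested set sizes.
names = File.readlines('popular_images.txt', chomp: true)
uris  = names.map { |name| "#{name}:latest" }
sets  = [250, 500, 1000].map { |size| uris.take(size) }
sets.each { |set| puts "test set with #{set.size} image URIs" }
</ns0:p></ns0:div>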
<ns0:div><ns0:head n='4.2.1'>Analyzer Experiments</ns0:head><ns0:p>We started our experiments with a focus on the Analyzer Facet of the framework. First, we started the analyzer containers on one instance and began our tests with the 250 image set. After the run finished, we repeated it with the 500 and 1000 image sets. After the tests with one instance, we repeated the experiments with two instances, where each run was repeated 3 times. During the tests we constantly monitored cAdvisor to verify that the instances were not fully utilized, ensuring this would not skew the results. The focus of our experiments was not the performance characteristics of our framework in terms of CPU, memory, or disk usage, which is why we used cAdvisor only as a monitor to rule out overloading our infrastructure. We also did not utilize any storage backend for cAdvisor, since this proved to be a significant overhead that in turn would have skewed our results.</ns0:p><ns0:p>After the runs had finished, we evaluated the vulnerability results. The analyzers logged the analyzed images, their base image flavor (e.g., Ubuntu, Debian), the processing time to analyze the image, the pull time to get the image from the Docker Hub, as well as the overall runtime, number of packages, size of the image, and number of vulnerabilities. Across all our experiments, the analyzers showed that around 93% of the analyzed images have vulnerabilities. This mainly stems from the fact that our implemented analyzers have a very high sensitivity and check for any potentially vulnerable software with any potentially vulnerable configuration. However, this does not necessarily mean that the specific combination of software and configuration in place exhibits the detected vulnerability. If we only consider the images with a high severity according to their CVSS 18 score, around 40% are affected, which is consistent with recent findings 19 . These results underline the importance of implementing the measures proposed by our framework. However, the focus of our work and the aim of our experiments was not to demonstrate the accuracy of the implemented vulnerability detection, but the overall characteristics of our framework, which we discuss in the remainder of this section. We first compared the overall runtime of our analyzers, specifically the difference between one-instance and two-instance deployments; the results are shown in Fig. <ns0:ref type='figure' target='#fig_7'>7</ns0:ref>. Based on the results, we see that our approach can be horizontally scaled over two nodes, leading to a performance improvement of around 40%. The fact that in our current evaluation setting we were not able to halve the overall runtime using two instances stems from several factors. On the one hand, we have a certain overhead in terms of management and coordination, including the fact that we only deployed one manager and storage asset. On the other hand, a lot of the runtime is caused by the acquisition time, which is clearly bound by network bandwidth.</ns0:p><ns0:p>Since our infrastructure is equipped with just one 100 Mbit uplink that is shared by all cloud resources, this is a clear bottleneck. We also see that the majority of wall-clock time is spent on acquisition and that the actual processing time only amounts to approximately 3% of the overall runtime. The fact that the acquisition time for the 1000 image set does not grow linearly like the runs with the 250 and 500 image sets stems from Docker's image layer cache.
In this case, the overall acquisition time grows more slowly because many images in the 1000 set share several layers, which, if already pulled by another analyzer in a previous run, do not need to be pulled again, hence reducing the acquisition time. Finally, we demonstrate that the average processing time of our framework is stable, as shown in Fig. <ns0:ref type='figure'>8</ns0:ref>. We further notice a small increase in average processing time for the 250 image set, which is caused by the fact that this set contains proportionally more images with large package numbers than the other tested sets, resulting in a slightly higher average processing time. As illustrated in Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref>, per-package processing times remain stable throughout the performed experiments, with a median of 0.558s and a standard deviation of 0.257s. </ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2.2'>Compensation Experiments</ns0:head><ns0:p>In the next part of our experiments we focused on the Compensation Facet of our framework. In order to test the ability to automatically handle compensations of vulnerable images, we tested the implemented Container Processor strategy. This strategy compensates found vulnerabilities via automatic upgrades of existing images. It requires no human intervention, has a very high confidence value, keeps all artifacts within the images, and is therefore well suited to test the auto-compensation ability of our framework.</ns0:p></ns0:div>
<ns0:div><ns0:p>In the process of compensation, the Container Processor generates a new image with the upgraded packages. In order to test this image for improvement, we have to store it. This means that for every tested image we have to hold the original image as well as its compensated version. Specifically, we chose to test the most vulnerable images (images with the most vulnerable packages) out of the 1000 image set we tested that are also the most prominent images in our URBEM scenario. This left us with 150 images, which we split into three sets with 50, 100, and 150 images, and started our compensation tests. We then repeated each run to demonstrate repeatability and to balance our results. Since the Compensation Facet follows the same principle as the Analyzer Facet, we omitted testing it on one instance and immediately started with two instances. After the tests finished, we compared the newly created images to the original ones and checked whether the number of vulnerabilities could be reduced.</ns0:p><ns0:p>Overall, our experiments showed that out of the 150 images we were able to auto-compensate 34 images by reducing the number of vulnerabilities. This illustrates that even a rather simple strategy leads to a significant improvement of around 22.6%, which makes this a very promising approach. In a next step, we compared the overall runtime of our compensation handlers for the three tested sets; the results are shown in Fig. <ns0:ref type='figure'>9</ns0:ref>. We can again clearly see that the major amount of time is spent on acquisition, in this case pulling the images that need to be compensated. The compensation itself only takes between 24% and 28% of the overall runtime and shows linear characteristics correlating with the number of images to be compensated. The comparatively low increase in acquisition time for the 150 image set can again be explained by the specific characteristics of Docker's layer handling.</ns0:p><ns0:p>In a next step, we compared the average processing time for each set; the results are shown in Fig. <ns0:ref type='figure'>10</ns0:ref>. We again notice characteristics similar to those of our analyzers. The average processing time as well as the median processing time are stable. The small increase for the 50 image set is explained by a larger number of images that contain more packages. This fact leads to relatively longer compensation times when upgrading them.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>DISCUSSION</ns0:head><ns0:p>Our experiments showed that our framework is able to scale horizontally. We further demonstrated that the majority of the runtime, both when analyzing and when compensating images, is caused by the image acquisition, which is bandwidth bound. Given the fact that in most application scenarios of our framework the images will not necessarily reside on Docker Hub, but instead in a local registry, the impact of this factor is greatly reduced.</ns0:p><ns0:p>The processing time itself scales linearly with the number of analyzed packages, and the same was shown for the compensation approach. Furthermore, the processing time in our current evaluation setup is mostly constrained by the prototypical vulnerability checking mechanism and the chosen storage system, neither of which is the focus of our contribution. The implementation of different vulnerability checkers, along with more efficient storage and caching of vulnerability data, could lead to further reductions in processing time and will be tackled in future work. An additional aspect we did not specifically address in this paper is the fine-grained scale-out of components in all Smart Brix facets.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1'>Threats to Applicability</ns0:head><ns0:p>While the presented framework fulfills the requirements set forth in the previously introduced URBEM project, certain threats to the general applicability of Smart Brix remain.</ns0:p><ns0:p>Currently, the auto-assembly mechanism introduced in Section 3.1 attempts to eagerly construct analysis and compensation pipelines that are loosely structured along the level of specificity of the performed analysis. In the worst case, the number of created pipelines can therefore grow exponentially with the number of candidate components per level of specificity: if all components for a given level of specificity accept all inputs produced in the previous level, and all subsequent components accept all produced outputs in turn, every combination of components forms a pipeline. This problem can be mitigated by introducing a transparent consolidation mechanism that delays the propagation of produced outputs of a certain type for a specified amount of time, orders them by their reported confidence values, and only submits one (or a few) of the outputs with the highest confidence values for further consumption by other components. Due to the relatively small number of processing components required for the URBEM use case, we left the implementation of this consolidation mechanism for future work.</ns0:p></ns0:div>
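<ns0:div><ns0:p>A minimal Ruby sketch of such a consolidation mechanism is given below: outputs of one type are buffered for a short window, ordered by confidence, and only the top-k are propagated. The window length, k, and the forwarding callback are illustrative parameters; this is a sketch of the proposed mitigation, not an implemented framework component.</ns0:p><ns0:p>
class Consolidator
  def initialize(window: 2.0, top_k: 1, forward:)
    @window  = window
    @top_k   = top_k
    @forward = forward
    @buffer  = Hash.new { |hash, key| hash[key] = [] }
    @mutex   = Mutex.new
  end

  # Collect an output of a given type together with its confidence value;
  # the first output of a type opens the consolidation window.
  def submit(type, confidence, payload)
    first = false
    @mutex.synchronize do
      first = @buffer[type].empty?
      @buffer[type].push([confidence, payload])
    end
    flush_later(type) if first
  end

  private

  # After the window elapses, forward only the highest-confidence outputs.
  def flush_later(type)
    Thread.new do
      sleep @window
      batch = @mutex.synchronize { @buffer.delete(type) } || []
      batch.sort_by { |confidence, _| -confidence }
           .first(@top_k)
           .each { |confidence, payload| @forward.call(type, confidence, payload) }
    end
  end
end

log = ->(type, confidence, payload) { puts "forwarding #{type} (#{confidence}): #{payload}" }
consolidator = Consolidator.new(window: 0.5, top_k: 1, forward: log)
consolidator.submit(:base_layer, 0.6, 'debian')
consolidator.submit(:base_layer, 0.9, 'ubuntu')
sleep 1 # only the highest-confidence result (ubuntu, 0.9) is forwarded
</ns0:p></ns0:div>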
<ns0:div><ns0:head n='6'>RELATED WORK</ns0:head><ns0:p>The rapid adoption of container-based execution environments for modern applications enables increased flexibility and fast-paced evolution. Next to this fast-paced evolution of containers, new containers are deployed whenever functionality has to be added, which leads to massive amounts of containers that need to be maintained. While the container provides an abstraction on top of the operating system, it is still vital that the underlying system complies with policies or regulations to avoid vulnerabilities. However, checking the plethora of available environments and adapting them accordingly is not a trivial task.</ns0:p><ns0:p>Among several approaches stemming from the area of SOA, such as the works of <ns0:ref type='bibr' target='#b14'>Lowis and Accorsi (2009)</ns0:ref> and <ns0:ref type='bibr' target='#b27'>Yu et al. (2006)</ns0:ref>, which deal with classic service vulnerabilities, as well as the work of <ns0:ref type='bibr' target='#b11'>Li et al. (2010)</ns0:ref>, <ns0:ref type='bibr' target='#b15'>Lowis and Accorsi (2011)</ns0:ref> propose a novel method for analyzing cloud-based services for certain types of vulnerabilities. Next to general models and methods for classifying and analyzing applications, several approaches emerged that allow vulnerability testing. They range from service-oriented approaches for penetration and automated black-box testing introduced by <ns0:ref type='bibr' target='#b1'>Bau et al. (2010)</ns0:ref> and <ns0:ref type='bibr' target='#b12'>Li et al. (2015a)</ns0:ref> to model-based vulnerability testing like the work of <ns0:ref type='bibr' target='#b10'>Lebeau et al. (2013)</ns0:ref>, as well as automated vulnerability and infrastructure testing methods (e.g., <ns0:ref type='bibr' target='#b21'>Shahriar and Zulkernine (2009)</ns0:ref>; <ns0:ref type='bibr' target='#b6'>Hummer et al. (2013)</ns0:ref>). <ns0:ref type='bibr' target='#b0'>Antunes and Vieira (2013)</ns0:ref> introduce SOA-Scanner, an extensible tool for testing service-based environments for vulnerabilities. Based on an iterative approach, the tool discovers and monitors existing resources, and automatically applies specific testing approaches. More recently, large-scale distributed vulnerability testing approaches have also been introduced (e.g., <ns0:ref type='bibr' target='#b2'>Evans et al. (2014)</ns0:ref>; Zhang et al. (2014)). In contrast to our approach, the aforementioned tools solely concentrate on testing and identifying possible security threats, but do not provide means for adapting the observed application or its environment accordingly.</ns0:p><ns0:p>More recently, container-based approaches have been applied in the literature to ease the development and operation of applications. <ns0:ref type='bibr' target='#b23'>Tosatto et al. (2015)</ns0:ref> analyze different cloud orchestration approaches based on containers and discuss ongoing research efforts as well as existing solutions. Furthermore, the authors present a broad variety of challenges and issues that emerge in this context. <ns0:ref type='bibr' target='#b26'>Wettinger et al. (2014)</ns0:ref> present an approach that facilitates container virtualization in order to provide an alternative deployment automation mechanism to convergent approaches that are based on idempotent scripts. By applying action-level compensations, implemented as fine-grained snapshots in the form of containers, the authors showed that this approach is more efficient, more robust, and easier to implement than convergent approaches.</ns0:p><ns0:p>However, compared to our approach, the authors do not provide a framework for analyzing container application deployments that, based on identified issues, triggers corresponding compensation mechanisms. <ns0:ref type='bibr' target='#b4'>Gerlach et al. (2014)</ns0:ref> introduce Skyport, a container-based execution environment for multi-cloud scientific workflows. By employing Docker containers, Skyport is able to address software deployment challenges and deficiencies in resource utilization, which are inherent to existing platforms for executing scientific workflows. In order to show the feasibility of their approach, the authors added Skyport as an extension to an existing platform and were able to reduce the complexities that arise when providing a suitable execution environment for scientific workflows. In contrast to our approach, the authors solely focus on introducing a flexible execution environment, but do not provide a mechanism for continuously evolving container-based deployments. <ns0:ref type='bibr' target='#b13'>Li et al. (2015b)</ns0:ref> present an approach that leverages Linux containers for achieving high availability of cloud applications. The authors present a middleware composed of agents that enables high availability of Linux containers. In addition, application components are encapsulated inside containers, which makes the deployment of components transparent to the application.
This allows monitoring and adapting components deployed in containers without modifying the application itself.</ns0:p><ns0:p>Although this work shares similarities with our approach, the authors do not provide a framework for testing container-based deployments that also supports semi-automatic compensation of found issues.</ns0:p><ns0:p>Next to scientific approaches, several industrial platforms have also emerged that deal with the development and management of container-based applications, the most prominent being Tutum 20 and Tectonic 21 .</ns0:p><ns0:p>These cloud-based platforms allow building, deploying, and managing dockerized applications. They are specifically built to make it easy for users to develop and operate the full spectrum of applications, ranging from single-container apps up to distributed microservices stacks. Furthermore, these platforms allow keeping applications secure and up to date by providing easy patching mechanisms and holistic system views. In contrast to our approach, these platforms only focus on one specific container technology and are not extensible. IBM recently introduced the IBM Vulnerability Advisor 22 , a tool for discovering possible vulnerabilities and compliance policy problems in IBM containers. While IBM's approach shares similarities with our work, they solely focus on Docker containers that are hosted inside their own Bluemix environment and therefore do not provide a generic approach. Furthermore, their Vulnerability Advisor only provides guidance on how to improve the security of images, but does not support mechanisms to evolve containers.</ns0:p></ns0:div>
<ns0:div><ns0:head n='7'>CONCLUSION</ns0:head><ns0:p>The numerous benefits of container-based solutions have led to a rapid adoption of this paradigm in recent years. The ability to package application components into self-contained artifacts has brought substantial flexibility to developers and operation teams alike. However, to enable this flexibility, practitioners need to respect numerous dynamic security and compliance constraints, as well as manage the rapidly growing number of container images. In order to stay on top of this complexity, it is essential to provide means to evolve these containers accordingly. In this paper we presented Smart Brix, a framework enabling continuous evolution of container application deployments. We described the URBEM scenario as a case study in the smart city context and provided a comprehensive description of its requirements in terms of container evolution. We introduced Smart Brix to address these requirements, described its architecture, and presented a proof-of-concept implementation. Smart Brix supports both traditional continuous integration processes, such as integration tests, and custom business-relevant processes, e.g., to implement security, compliance, or other regulatory checks. Furthermore, Smart Brix not only enables the initial management of application container deployments, but is also designed to continuously monitor the complete application deployment topology and allows for timely reactions to changes (e.g., discovered application vulnerabilities). This is achieved using analytics and compensation pipelines that autonomously detect and mitigate problems if possible, but are also designed with an escalation mechanism that will eventually request human intervention if the automated implementation of a change is not possible. We evaluated our framework using a representative case study, which clearly showed that the framework is feasible and that it provides an effective and efficient approach for container evolution.</ns0:p><ns0:p>As part of our ongoing and future work, we will extend the presented framework to incorporate more sophisticated checking and compensation mechanisms. We will integrate mechanisms from machine learning, specifically focusing on unsupervised learning techniques as a potential vector to advance the framework with autonomous capabilities. We also aim to integrate the Smart Brix framework with our work on IoT cloud applications <ns0:ref type='bibr' target='#b9'>(Inzinger et al., 2014;</ns0:ref><ns0:ref type='bibr'>Vögler et al., 2015b,a)</ns0:ref>. Furthermore, we plan to conduct a large-scale feasibility study of our framework in heterogeneous container application deployments.</ns0:p></ns0:div><ns0:note place='foot' n='7'>http://www.banyanops.com/blog/analyzing-docker-hub/</ns0:note>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Smart Brix Framework Overview</ns0:figDesc><ns0:graphic coords='5,141.73,63.78,413.56,209.57' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Example of auto-assembling processors within the analyzer facet.</ns0:figDesc><ns0:graphic coords='7,245.13,83.97,206.79,322.18' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Confidence Adaptation Model Escalation</ns0:figDesc><ns0:graphic coords='7,245.13,476.42,206.79,209.69' type='bitmap' /></ns0:figure>
<ns0:note place='foot' n='9'>https://nvd.nist.gov/</ns0:note>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Comparison of runtime for analytics between one instance and two instances</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Comparison of processing time for analytics with two instances</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Comparison of overall runtime for compensation over the three tested sets</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. Comparison of average processing time for compensation</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='13,183.09,115.17,330.86,203.54' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='13,183.09,451.38,330.86,203.54' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='14,183.09,63.78,330.86,203.54' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Median and standard deviation for processing time per package over all runs with two instances</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Set</ns0:cell><ns0:cell>Median Processing Time</ns0:cell><ns0:cell>Standard Deviation Processing Time</ns0:cell><ns0:cell>No. of packages</ns0:cell></ns0:row><ns0:row><ns0:cell>250</ns0:cell><ns0:cell>0.620s</ns0:cell><ns0:cell>0.255s</ns0:cell><ns0:cell>153,275</ns0:cell></ns0:row><ns0:row><ns0:cell>500</ns0:cell><ns0:cell>0.564s</ns0:cell><ns0:cell>0.263s</ns0:cell><ns0:cell>303,483</ns0:cell></ns0:row><ns0:row><ns0:cell>1000</ns0:cell><ns0:cell>0.537s</ns0:cell><ns0:cell>0.252s</ns0:cell><ns0:cell>606,721</ns0:cell></ns0:row><ns0:row><ns0:cell>Overall</ns0:cell><ns0:cell>0.558s</ns0:cell><ns0:cell>0.257s</ns0:cell><ns0:cell>1,063,479</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='10'>http://www.sinatrarb.com/</ns0:note>
<ns0:note place='foot' n='11'>https://www.mongodb.org/</ns0:note>
<ns0:note place='foot' n='12'>https://www.sqlite.org/</ns0:note>
<ns0:note place='foot' n='13'>https://nvd.nist.gov/</ns0:note>
<ns0:note place='foot' n='14'>https://www.rabbitmq.com/</ns0:note>
<ns0:note place='foot' n='20'>https://www.tutum.co</ns0:note>
<ns0:note place='foot' n='21'>https://tectonic.com</ns0:note>
</ns0:body>
" | "Response Letter
Johannes M. Schleicher, Michael Vögler, Christian Inzinger, Schahram Dustdar
May 11, 2016
Dear editor,
We thank the four anonymous reviewers for the extensive and helpful comments relating to our original
submission to PeerJ Computer Science. In this response letter, we detail each change we have made to
the manuscript in order to address the reviewers' concerns. In the revised manuscript, we mark added or
updated text in blue font to make it easier for the reviewers to assess our updates. Updated or added
listings, tables or figures are marked with blue borders. Furthermore, as an appendix to this letter, we
list and discuss each of the review comments in detail.
Finally, we have also generally improved the presentation of the paper.
Best regards,
Johannes M. Schleicher,
Michael Vögler,
Christian Inzinger,
Schahram Dustdar
1 Referee 1
1.1 Basic Reporting
The paper is well written and provide the insides of novel achievements in terms of containers management. The background is up to date and the motivation presented in the introduction is reflecting
a good knowledge of the current problems in virtualization techniques. The structure is conform to
the natural templates for computing science papers, including the state-of-the-art criticism, the presentation of the proposal, the references to the proof-of-concept implementation as well as the test
results. The figures are relevant for the proposed methodology. The text is self-containing and can be
an example of good practice in writing a paper in the field of distributed systems.
We thank the reviewer for these positive remarks.
1.2 Experimental design
The research questions can be identified in the first section of the paper and are related to real problems
of the current Cloud computing environments. The methods that are exposed are presented to a level
that can be reproduced in similar conditions.
We thank the reviewer for these positive remarks.
1.3 Validity of the findings
The conclusions of the work are relevant for the addressed community. Valid ideas for the future work
are also correctly identified.
We thank the reviewer for these positive remarks.
1.4 Comments for the author
The paper is well written and presents a valuable contribution to the state-of-the-art in a hot topic.
The related work can be improved with the latest achievements in the what concerns the large scale
experiments using container technologies. The scale of the experiments is quite small - the effect of
increasing the number of instances should be studied too.
We thank the reviewer for the constructive input. Regarding the related work, we revised and extended
this section in order to reflect on recent state-of-the-art contributions in addition to the important
foundations we already discussed. With regard to the size of our experiments, we selected the most
popular 4000 images from Docker Hub for testing based on their received number of pulls and positive
ratings (i.e., number of stars). We are confident that this image selection criterion represents a very good
metric for dispersion and potential impact since it shows a broad and recent utilization. Additionally, the
selected images contained a large number of base images that are used as foundation for many composite
images, which significantly increases the impact of these images. Therefore, we think that the selected
number of images is a very representative sample size for our experiments. With regard to the scale of
our experiments, it was our intention to demonstrate that our framework is able to scale in order to deal
with an increasing number of images that need to be analyzed by using additional framework instances.
Although we were able to demonstrate that our framework is able to scale in our experiments, we do
agree with the reviewer and plan to conduct further experiments in our future work to specifically test
the scalability of our framework and respective side effects.
2 Referee 2
2.1 Basic reporting
This paper presents the smart brix framework, which offers the ability to check sets of containers
against vulnerability breaches and other specific requirements. It also provides mechanisms for mitigating the identified issues by evolving the containers.
The paper is well written and easy to follow, the overall objective and challenge are well defined.
We thank the reviewer for these positive remarks.
2.2 Experimental design
I believe the description of the technical objectives and challenges could be improved. Moreover, smart
brix is presented as a framework for the continuous evolution of container-based deployment but I felt
that the runtime monitoring aspect was not properly presented in the paper or at least could be more
detailed.
We thank the reviewer for this critical reflection. We revised the general description of the framework as
well as the presentation of the paper as a whole to reflect the technical objectives and challenges in more
detail. Regarding the runtime monitoring aspect, we revised the evaluation section to clarify that
the runtime characteristics of the framework were not the focus of this evaluation and that we only monitored them
to avoid overloading our infrastructure resources. We, however, plan to evaluate the runtime characteristics
in a different setting in our future work.
The overall architecture of the framework is well presented but I would suggest to the authors to also
provide a description on how (technically and methodology) the framework should/could be used in
a production setting (maybe using the case study). Also, few technical details are provided on how
can be used the framework and how easy it is to implement and extend such a continuous evolution
system.
We thank the reviewer for this constructive idea. Since Smart Brix is under active development as an
open source project on Bitbucket1 we are actively extending the framework as well as its documentation.
We are in the process of incorporating these aspects in the prototype repository and will reflect them
accordingly in the accompanying documentation.
Whilst reading the formula describing how can be calculated the overall confidence value, I was wondering if some more complex options have been considered by the authors. If yes, I would suggest
discussing this. Also, regarding the confidence adaptation model escalation, it would be interesting
to evaluate and discuss what is the current limit to the automation, how often it goes to the human
interaction level and to provide an example of what is/can be provided to the human to ease its job.
We thank the reviewer for this valuable input. In the current version we only considered basic confidence
adaptation with minimal escalation to the human level. We focused on the automated aspects in order
to evaluate the basic framework concepts including the ability to horizontally scale. However, we plan to
extend the investigation of escalation, especially considering human involvement, in our future and ongoing
work. Specifically, we plan to integrate and extend mechanisms from social computing that have been
investigated within our group (e.g., [1] and [2]).
To improve the readability of Section 3.2, I would suggest to illustrate the content presented in the
paragraph starting by 'If no issues' with a Figure.
We thank the reviewer for this suggestion. We added a UML sequence diagram in addition to the
description to illustrate and clarify the overall process.
The proposed approach seems to be quite tight to the package managers; I would suggest the author
to clarify this.
1 https://bitbucket.org/jomis/smartbrix
We thank the reviewer for this remark. We used the package managers as one illustrative example since it
is a common approach to extract the package information to match it against known vulnerabilities using
the introduced CVE Hub. However, the approach itself is by no means restricted to utilizing package
managers. The auto assembly mechanism allows a processor to be anything that is suitable for the task
at hand. In the case of vulnerabilities the processor could also determine installed packages or software
artifacts by checking the filesystem or any other suitable mechanism. In the case of other container
evolution scenarios a processor might extract something entirely different like certain performance metrics
from an image or certain code metrics. We also added a corresponding remark in section 3.1 to further
clarify this.
2.3 Validity of the findings
Regarding the experiments, it is being said that the experiment was repeated 3 times maybe the authors
could justify this choice and also provide the standard deviation. These experiments are interesting
and it is good that the authors have tested their approach on large sets of images. Maybe It could also
be interesting to perform tests on very specific images and to identify the impact of the software stack
they embed both in term of size and diversity.
I would also suggest to the authors to provide a link to their data and source code (if open source).
We thank the reviewer for this comment. The experiments were performed three times as the per-package
processing times remained stable throughout. We added the according discussion to the evaluation
section. The data as well as the source code are available on Bitbucket. We emphasized the according
link in the paper and link to all artifacts in the ’supplemental information’ section in the submission form
in the PeerJ submission interface.
3 Referee 3
3.1 Basic reporting
The paper is generally well-written, clear, and well-argued. There are only a few places where I think
this is not the case, and I have the following suggestions: - Evaluation criteria are discussed for the
first time in Section 4. I think that is too late. I strongly suggest the authors add upfront a short
overview about the research goals the Smart Brix framework is supposed to achieve, how they intend
to measure those achievements and the basics of their evaluation method and criteria. This provides
a better scope for the paper, makes the reading more focused, and facilitates assessing the values of
the approach.
We thank the reviewer for this valuable input. As suggested we added a short overview outlining the
mentioned points in the introduction and updated the abstract in order to provide a clearer scope for the
paper.
- Section 3.2 explains how the Smart Brix Manager works in collaboration with the rest of the framework. The text is dense with description of all the interactions, and it is easy to get lost in them. I
would suggest a Figure with a corresponding diagram (e.g. a UML Sequence Diagram) to accompany
the text as a visual aid.
We thank the reviewer for this remark. We added a UML sequence diagram in addition to the description
to illustrate and clarify the overall process.
- In Section 5 ”Related Work”, lines 452-454 there is a rather long list of references on works that
”propose a novel method for analyzing cloud-based services for certain types of vulnerabilities”. Citing
in numbers without differentiating the works in any way from your own work, or among one another,
is not considered good form. Also, some of these ”cloud-based” papers are not from the Cloud world
(or epoch!) including one dated 2000. Please revise this part.
We thank the reviewer for this critical reflection. We revised and extended the related work section
in order to reflect on recent state-of-the-art contributions in addition to the important foundations we
already discussed.
- The main contributions and take-away claimed by the paper remain somewhat implicit and are not
summarized anywhere that I could see in a concise way. - It looks like an ”Acknowledgements” section
may be missing, as per the PeerJ guidelines.
We thank the reviewer for this remark; we added a short overview and explicitly summarized the contributions of our paper. As per PeerJ guidelines, we left the funding acknowledgments out of the paper and provided the corresponding information in the designated 'additional information and declarations'
form.
3.2 Experimental design
As said above, there is a certain lack of upfront clarity on what the research goals are, and how they
are intended to be measured. That means that the paper does not lead with a clear definition of its
research questions. Although that does not hamper the understanding of the approach and method, in
Sections 1 to 3, and its potential value, the reader remains with an unanswered question on how the
benefits of SmartBrix can be assessed until Section 4 (page 9) The method and procedure for evaluation
in Section 4 make sense and showcase some of the benefits of Smart Brix, in particular its efficiency
and performance. However, as far as I can see they do not shed light on one of the most interesting
and potentially valuable characteristics touted in the paper, i.e., the pipeline self-assembly capabilities,
which should enable to automatically compose complex workflows for Analysis and Compensation out
of a potentially large set of candidate micro-services. Since the overhead of this kind of automated
composition typically grows super-linearly with the number of candidate components, discussing the
number of components in the Analysis and Compensation sets vs. the related complexity and costs
(e.g. time, or chance of not synthesizing a suitable or correct pipeline) of the self-assembly approach
would be quite interesting, and would speak to a different kind of scalability of the approach. A more
detailed analysis of success vs. failure within the described experiments (e.g. false positive and false
negatives for the Analysis pipelines, or correctly completed vs. failed compensation attempts) seems
also like a missing, but important, aspect of evaluation.
We thank the reviewer for these valuable comments. We clarified the aim of the paper by outlining
the research goals in the introduction. We agree that the number of pipelines created using the self-assembly approach, as currently implemented, could grow exponentially with the number of candidate
services. This would especially pose a problem if there is always a large number of components that
accept outputs of previous processing steps as input, and the constructed pipeline has a large number
of subsequent processing steps. In the context of our work in the URBEM project, we found this not
to be an issue, as the created pipelines are mostly quite ’short’ (e.g., see Figure 3), and the number of
candidate components for any given output is usually less than ten. Furthermore, due to the focus of
adding information for specific aspects of the analyzed containers, the processing components roughly
group into technology-specific clusters that rarely provide output data that can be consumed by a large
number of other components. We consider an in-depth analysis of the scalability limits of the auto-assembly approach along with possible improvements to the assembly strategy to be out of scope for this
paper. We added a corresponding discussion of our planned future work along with possible improvement
strategies to the ’Discussion’ section.
3.3 Validity of the findings
This paper basically discusses the feasibility and efficiency of the Smart Brix approach towards the
managed evolution of Cloud container-based deployments. The paper does quite a good job and is
sound in that respect. The experiments are intended for the purpose and seem well designed and
simple. The kind of evaluation described though is partial and a bit shallow, as I indicated above, and
begs the question of a fuller assessment of the benefits of the Small Brix approach. I provided above
some suggestions on how to expand and strengthen the evaluation and better showcase the work. I
also think that a ”Threats to validity” and/or ”Limits of applicability” section would be appropriate
as part of the post-evaluation discussion.
We thank the reviewer for this remark. In addition to the changes described above, we added a ’Threats
to Applicability’ section to further address these remarks.
Bibliography
[1] Dustdar, S., and Bhattacharya, K. The social compute unit. IEEE Internet Computing, 3 (2011), 64–69.
[2] Scekic, O., Truong, H.-L., and Dustdar, S. Incentives and rewarding in social computing. Communications of the ACM 56, 6 (2013), 72–82.
" | Here is a paper. Please give your review comments after reading it. |
200 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The presence of abusive and vulgar language in social media has become an issue of increasing concern in recent years. However, it remains largely unaddressed in low-resource languages such as Bengali. In this paper, we provide the first comprehensive analysis of the presence of vulgarity in Bengali social media content. We develop two benchmark corpora consisting of 7245 reviews collected from YouTube and manually annotate them into vulgar and non-vulgar categories. The manual annotation reveals the ubiquity of vulgar and swear words in Bengali social media content (i.e., in the two corpora), ranging from 20% to 34%. To automatically identify vulgarity, we employ various approaches, such as classical machine learning (CML) algorithms, the Stochastic Gradient Descent (SGD) optimizer, a deep learning (DL) based architecture, and lexicon-based methods. We find that, although small in size, the swear/vulgar lexicon is effective at identifying vulgar language due to the high presence of some swear terms in Bengali social media. We observe that the performances of machine learning (ML) classifiers are affected by the class distribution of the dataset. The DL-based BiLSTM (Bidirectional Long Short Term Memory) model yields the highest recall scores for identifying vulgarity in both datasets (i.e., in both original and class-balanced settings). Besides, the analysis reveals that vulgarity is highly correlated with negative sentiment in social media comments.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Vulgarity or obscenity indicates the use of curse, swear, or taboo words in language <ns0:ref type='bibr' target='#b41'>(Wang, 2013;</ns0:ref><ns0:ref type='bibr' target='#b4'>Cachola et al., 2018)</ns0:ref>. <ns0:ref type='bibr' target='#b11'>Eder et al. (2019)</ns0:ref> conceived vulgar language as an overly lowered language with disgusting and obscene lexicalizations generally banned from any type of civilized discourse. Primarily, it involves the lexical fields of sexuality, such as sexual organs and activities, body orifices, or other specific body parts. <ns0:ref type='bibr' target='#b4'>Cachola et al. (2018)</ns0:ref> defined vulgarity as the use of swear/curse words. <ns0:ref type='bibr' target='#b19'>Jay and Janschewitz (2008)</ns0:ref> mentioned that vulgar speech includes explicit and crude sexual references. Although the terms obscenity, swearing, and vulgarity have subtle differences in their meaning and scope, they are closely linked, with some overlapping definitions. Thus, in this paper, we use them interchangeably to refer to text that falls into the above-mentioned definitions of <ns0:ref type='bibr' target='#b4'>(Cachola et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b11'>Eder et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b19'>Jay and Janschewitz, 2008)</ns0:ref>.</ns0:p><ns0:p>With the rapid growth of user-generated content in social media, vulgar words can be found in online posts, messages, and comments across languages. The occurrences of swearing or vulgar words are often linked with abusive or hateful contexts, sexism, and racism <ns0:ref type='bibr' target='#b4'>(Cachola et al., 2018)</ns0:ref>, and thus lead to abusive and offensive actions. Hence, identifying vulgar or obscene words has practical connections to understanding and monitoring online content. Furthermore, vulgar word identification can help to improve sentiment classification, as shown by various studies <ns0:ref type='bibr' target='#b4'>(Cachola et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b40'>Volkova et al., 2013)</ns0:ref>.</ns0:p><ns0:p>Social media platforms such as Twitter, Facebook, Instagram, and YouTube have made virtual social interaction popular by connecting billions of users. In social media, swearing is ubiquitous according to various studies. <ns0:ref type='bibr' target='#b42'>Wang et al. (2014)</ns0:ref> found that the rate of swear word usage on English Twitter is 1.15%, almost double its use in daily conversation (0.5%–0.7%) as reported by <ns0:ref type='bibr' target='#b19'>(Jay and Janschewitz, 2008;</ns0:ref><ns0:ref type='bibr' target='#b24'>Mehl et al., 2007)</ns0:ref>. <ns0:ref type='bibr' target='#b42'>Wang et al. (2014)</ns0:ref> also reported that 7.73% of tweets in their random sampling collection contain swear words. Based on <ns0:ref type='bibr' target='#b19'>(Jay and Janschewitz, 2008)</ns0:ref>, offensive speech can be classified into three categories: vulgar, which includes explicit and crude sexual references; pornographic; and hateful, which refers to offensive remarks targeting people's race, religion, country, etc. This categorization suggests that there exists a link between offensiveness and vulgarity.</ns0:p><ns0:p>Unlike in English, research related to vulgarity is still unexplored in Bengali.
As vulgar word usage depends on the socio-cultural context and demography <ns0:ref type='bibr' target='#b4'>(Cachola et al., 2018)</ns0:ref>, it is important to explore its usage in languages other than English. For example, the usage of f*ck, a*s, sh*t, etc. is common in many English-speaking countries as an expression to emphasize feelings, or to convey neutral/idiomatic or even positive sentiment, as shown by <ns0:ref type='bibr' target='#b4'>(Cachola et al., 2018)</ns0:ref>. However, the corresponding Bengali words are highly unlikely to be used in a similar context in Bengali, due to the socio-cultural differences of Bengali native speakers (i.e., people living in Bangladesh or India).</ns0:p><ns0:p>There is a lack of annotated vulgar or obscene datasets in Bengali, which are crucial for developing effective machine learning models. Therefore, in this work, we create resources for vulgarity analysis in Bengali. Besides, we investigate the presence of vulgarity, which is often associated with abusiveness and inappropriateness in social media. Furthermore, we focus on automatically distinguishing vulgar comments (e.g., usage of filthy language or curses towards a person), which should be monitored and regulated in online communications, from non-vulgar non-abusive negative comments, which should be allowed as part of freedom of speech.</ns0:p><ns0:p>We construct two Bengali review corpora consisting of 7245 comments and annotate them based on the presence of vulgarity. We find a high presence of vulgar words in Bengali social media comments based on the manual annotations. We provide the comparative performance of both lexicon-based and machine learning (ML) (i.e., CML and DL) based methods for automatically identifying vulgarity in Bengali social media data. As a lexicon, we utilize a Bengali vulgar lexicon, BengVulLex, which consists of 184 swear and obscene terms. We leverage two classical machine learning (CML) classifiers, Support Vector Machine (SVM) <ns0:ref type='bibr' target='#b9'>(Cortes and Vapnik, 1995)</ns0:ref> and Logistic Regression (LR), and an optimizer, Stochastic Gradient Descent (SGD) <ns0:ref type='bibr' target='#b34'>(Ruder, 2016)</ns0:ref>, to automatically identify vulgar content. In addition, we employ a deep learning architecture, Bidirectional Long Short Term Memory (BiLSTM). We observe that BengVulLex provides a high recall score in one corpus and very high precision scores in both corpora.</ns0:p><ns0:p>BiLSTM shows higher recall scores than BengVulLex in both corpora in class-balanced settings; however, it generates many false positives and thus yields a much lower precision score. The performances of the CML classifiers vary with the class distribution of the dataset. We observe that when undersampling is performed, CML classifiers provide much better performance. Class-balancing using over-sampling techniques like SMOTE <ns0:ref type='bibr' target='#b6'>(Chawla et al., 2002)</ns0:ref> or weighting classes based on sample distributions does not improve the performance of CML classifiers significantly in the two datasets.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.1'>Motivation</ns0:head><ns0:p>As vulgarity is often related to abusive comments on social media, it is required to identify its presence in the textual content. In Bengali, until now, no work has addressed this issue. Although a few papers tried to determine the offensive or hate speech in Bengali utilizing labeled data, none focused on recognizing vulgarity or obscenity. Since social media such as Facebook, Twitter, YouTube, Instagram are popular in Bangladesh, the country with the highest number of Bengali native speakers, it is necessary to distinguish vulgarity in the comments or reviews for various downstream tasks such as abusiveness or hate speech detection and understanding social behaviors. Besides, it is imperative to analyze how vulgarity is related to sentiment.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.2'>Contributions</ns0:head><ns0:p>The main contributions of this paper can be summarized as follows-</ns0:p><ns0:p>• We manually annotate two Bengali corpora consisting of 7245 reviews/comments into vulgar and non-vulgar categories and make them publicly available (the first of its kind in Bengali). 1</ns0:p><ns0:p>• We provide a quantitative analysis on the presence of vulgarity in Bengali social media content based on the manual annotation.</ns0:p><ns0:p>• We present a comparative analysis of lexicon-based, CML-based, SGD optimizer, and deep learningbased approaches for automatically recognizing vulgarity in Bengali social media content. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>• Finally, we investigate how vulgarity is related to sentiment in Bengali social media content.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>RELATED WORK</ns0:head><ns0:p>Researchers studied the existence and socio-linguistic characteristics of swearing, cursing, incivility or cyber-bullying in social media <ns0:ref type='bibr' target='#b42'>(Wang et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b35'>Sadeque et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b22'>Kurrek et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b14'>Gauthier et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b0'>Agrawal and Awekar, 2018)</ns0:ref>. <ns0:ref type='bibr' target='#b42'>Wang et al. (2014)</ns0:ref> investigated the cursing activities on Twitter, a social media platform. They studied the ubiquity, utility, and contextual dependency of swearing on Twitter. <ns0:ref type='bibr' target='#b14'>Gauthier et al. (2015)</ns0:ref> analyzed several sociolinguistic aspects of swearing on Twitter text data. <ns0:ref type='bibr' target='#b42'>Wang et al. (2014)</ns0:ref> investigated the relationship between social factors such as gender with the profanity and discovered males employ profanity much more often than females. Other social factors such as age, religiosity, or social status were also found to be related to the rate of using vulgar words <ns0:ref type='bibr' target='#b23'>(McEnery, 2004)</ns0:ref>. <ns0:ref type='bibr' target='#b23'>McEnery (2004)</ns0:ref> suggested that social rank, which is related to both education and income, is anti-correlated to the use of swear words. The level of education and income are inversely correlated with the usage of vulgarity on social media with education being slightly more strongly associated with a lack of vulgarity than income <ns0:ref type='bibr' target='#b4'>(Cachola et al., 2018)</ns0:ref>. Furthermore, liberal users tend to use vulgarity more on social media, an association on Twitter revealed by <ns0:ref type='bibr' target='#b4'>(Cachola et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b39'>Sylwester and Purver, 2015;</ns0:ref><ns0:ref type='bibr' target='#b32'>Preot ¸iuc-Pietro et al., 2017)</ns0:ref>. <ns0:ref type='bibr' target='#b11'>Eder et al. (2019)</ns0:ref> described a workflow for acquisition and semantic scaling of a lexicon that contains lexical items in the German language, which are typically considered as vulgar or obscene. The developed lexicon starts with a small seed set of rough and vulgar lexical items, and then automatically expanded using distributional semantics. <ns0:ref type='bibr' target='#b19'>Jay and Janschewitz (2008)</ns0:ref> noticed that the offensiveness of taboo words depends on their context, and found that usages of taboo words in conversational context is less offensive than the hostile context. <ns0:ref type='bibr' target='#b30'>Pinker (2007)</ns0:ref> classified the use of swear words into five categories. Since many studies related to the identification of swearing or offensive words have been conducted in English, several lexicons comprised of offensive words are available in the English language. <ns0:ref type='bibr' target='#b33'>Razavi et al. (2010)</ns0:ref> manually collected around 2,700 dictionary entries including phrases and multi-word expressions, which is one of the earliest work offensive lexicon creations. The recent work on lexicon focusing on hate speech was reported by <ns0:ref type='bibr' target='#b15'>(Gitari et al., 2015)</ns0:ref>. <ns0:ref type='bibr' target='#b10'>Davidson et al. 
(2017)</ns0:ref> studied how hate speech is different from other instances of offensive language.</ns0:p><ns0:p>They used a crowd-sourced lexicon of hate language to collect tweets containing hate speech keywords.</ns0:p><ns0:p>Using crowd-sourcing, they labeled tweets into three categories: those containing hate speech, only offensive language, and those with neither. We train a multi-class classifier to distinguish between these different categories. They analyzed when hate speech can be reliably separate from other offensive language and when this differentiation is very challenging.</ns0:p><ns0:p>In Bengali, several works investigated the presence of abusive language in social media data by leveraging supervised ML classifiers and labeled data <ns0:ref type='bibr' target='#b18'>(Ishmam and Sharmin, 2019;</ns0:ref><ns0:ref type='bibr' target='#b3'>Banik and Rahman, 2019)</ns0:ref>. <ns0:ref type='bibr' target='#b38'>Sazzed (2021)</ns0:ref> <ns0:ref type='bibr' target='#b27'>(Mikolov et al., 2013)</ns0:ref> and GloVe <ns0:ref type='bibr' target='#b29'>(Pennington et al., 2014)</ns0:ref> embedding by integrating them into several ML classifiers.</ns0:p><ns0:p>However, none of the existing works focused on recognizing vulgarity or profanity in Bengali social media data. To the best of our knowledge, it is the first attempt to identify and provide a comprehensive analysis of the presence of vulgarity in the context of Bengali social media data.</ns0:p></ns0:div>
<ns0:div><ns0:head>3/12</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:58966:1:0:NEW 23 Jun 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head n='3'>SOCIAL MEDIA CORPORA</ns0:head><ns0:p>We create two datasets consisting of 7245 comments written in Bengali. Both datasets are collected from social media, YouTube 2 .</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Drama Review Dataset</ns0:head><ns0:p>The first corpus we utilize is a drama review corpus. This corpus was created and deposited by <ns0:ref type='bibr' target='#b36'>(Sazzed, 2020a)</ns0:ref> for sentiment analysis; It consists of 8500 positive and 3307 negative reviews. However, there is no distinction between different types of negative reviews. Therefore, we manually annotate these 3307 negative reviews into two categories; one category contains reviews that convey vulgarity, while the other category consists of negative but non-vulgar reviews.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Subject-Person Dataset</ns0:head><ns0:p>The second corpus is also collected from YouTube. However, unlike the drama review corpus which represents the viewer's feedback regarding dramas, this corpus consists of comments towards a few controversial female celebrities.</ns0:p><ns0:p>We employ a web scraping tool to download the comment data from YouTube, which comes in JSON format. Then utilizing a parsing script, we retrieve the comments from the JSON data. Utilizing a language detection library 3 , we recognize the comments written in Bengali. We exclude reviews written in English and Romanized Bengali (i.e., Bengali language in the Latin script).</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>CORPORA ANNOTATION</ns0:head><ns0:p>It is common practice to compare annotations of a single source by multiple people which helps validating and improving annotation schemes and guidelines, identifying ambiguities or difficulties in the source, or assessing the range of valid interpretations <ns0:ref type='bibr' target='#b2'>(Artstein, 2017)</ns0:ref>. The comparison can be performed using a qualitative examination of the annotations, calculating agreement measures, or statistical modeling of annotator differences.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Annotation Guideline</ns0:head><ns0:p>For annotating a corpus for various NLP tasks (e.g., hate speech detection, sentiment classification, profanity detection), it is required to utilize a set of guidelines <ns0:ref type='bibr' target='#b21'>(Khan et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b25'>Mehmood et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b31'>Pradhan et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b13'>Fortuna and Nunes, 2018;</ns0:ref><ns0:ref type='bibr' target='#b36'>Sazzed, 2020a)</ns0:ref>.</ns0:p><ns0:p>Here, to distinguish the comments into vulgar and non-vulgar class, annotators are asked to consider the followings guideline-• Vulgar comments: The presence of swearing, obscene language, vulgar slang, slurs, sexual and pornographic terms in a comment <ns0:ref type='bibr' target='#b11'>(Eder et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b4'>Cachola et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b19'>Jay and Janschewitz, 2008)</ns0:ref>.</ns0:p><ns0:p>• Non-vulgar comments: The comments which do not have above mentioned characteristics.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Annotation Procedure</ns0:head><ns0:p>The annotation is performed by three annotators (A1, A2, A3); Among them, two are male and one female (A1: male, A2: female, A3: male). All of them are Bengali native speakers. The first two annotators (A1 and A2) initially annotate all the reviews. In case of disagreement in annotation, it is resolved by a third annotator (A3) by majority voting.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3'>Annotation Results</ns0:head><ns0:p>The annotation of the reviews by two reviewers (A1, A2) results two cases.</ns0:p><ns0:p>1. Agreement: The two annotators (A1, A2) assign the same label to a review. </ns0:p></ns0:div>
<ns0:div><ns0:head n='4.4'>Corpora Statistics</ns0:head><ns0:p>After annotation the drama review corpus consists of 2643 non-vulgar negative reviews and 664 vulgar reviews (Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>). The presence of 664 vulgar reviews out of 3307 negative reviews reveals a high presence of vulgarity in the dataset, around 20%. The annotated subject-person dataset consists of 1331 vulgar reviews and 2607 non-vulgar reviews, a total of 3938 reviews. This dataset contains even higher percentages of reviews labeled as vulgar, around 34%.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref> presents the top 10 vulgar words from each dataset. We find a high presence of some vulgar words in the reviews, as shown in the top few rows. Besides, we observe a high number of misspelled vulgar words, which makes identifying them a challenging task. Among the top 10 vulgar words in the subject-person dataset, we notice all of them except the last word (last row) are female-specific sexually Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div>
<ns0:div><ns0:head n='5'>BASELINE METHODS</ns0:head></ns0:div>
<ns0:div><ns0:head n='5.1'>Lexicon-based Methods</ns0:head><ns0:p>We utilize two publicly available Bengali lexicons for identifying vulgarity in a text. The first lexicon we use is a vulgar lexicon, BenVulLex 4 . The other lexicon is a sentiment lexicon, which contains a list of positive and negative sentiment words <ns0:ref type='bibr' target='#b37'>(Sazzed, 2020b)</ns0:ref>. The BenVulLex consists of 184 Bengali swear and vulgar words, semi-automatically created from a social media corpus. The sentiment lexicon consists of 690 opinion words. The goal of utilizing a sentiment lexicon for vulgarity detection is to investigate how well the negative opinion word present in sentiment lexicon can detect vulgarity. The few other</ns0:p><ns0:p>Bengali sentiment lexicons are a dictionary-based word-level translation of popular English sentiment lexicons; thus, not capable of identifying swearing or vulgarity in Bengali text.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.2'>Classical Machine Learning (CML) algorithms and SGD optimizer</ns0:head><ns0:p>Two popular CML classifiers, Logistics Regression (LR) and Support Vector Machine (SVM), and an optimizer, Stochastic Gradient Descendent (SGD), are employed to identify vulgar comments.</ns0:p><ns0:p>LR is a predictive analysis model that assigns observations into a discrete set of classes. LR assumes there are one or more independent variables that determine the outcome of the target.</ns0:p><ns0:p>SVM is a discriminative classifier defined by a separating hyperplane. Given the labeled training data, SVM generates an optimal hyperplane that categorizes unseen observations. For example, in twodimensional space, this hyperplane is a line dividing a plane into two parts where each class lays on either side (for linear kernel).</ns0:p><ns0:p>SGD is an optimization technique and does not correspond to a specific family of machine learning models. SGD can be used to fit linear classifiers and regressors such as linear SVM and LR under convex loss functions.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.2.1'>Input</ns0:head><ns0:p>We extract unigrams and bigrams from the text and calculate the tf-idf scores, which are used as an input for the CML classifiers. tf-idf refers to the term frequency-inverse document frequency, which is a numerical statistic that is aimed to reflect the importance of a word to a document in a corpus.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.2.2'>Parameter settings and library used</ns0:head><ns0:p>For LR 5 and SVM 6 , the default parameter settings of scikit-learn library <ns0:ref type='bibr' target='#b28'>(Pedregosa et al., 2011)</ns0:ref> are used. For SGD, hinge loss and l2 penalty with a maximum iteration of 1500 are employed. We use the scikit-learn library <ns0:ref type='bibr' target='#b28'>(Pedregosa et al., 2011)</ns0:ref> to implement the SVM, LR and SGD.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.3'>Deep Learning Classifier</ns0:head><ns0:p>BiLSTM (Bidirectional Long Short Term Memory) is a deep learning-based sequence processing model that consists of two LSTMs <ns0:ref type='bibr' target='#b16'>(Hochreiter and Schmidhuber, 1997)</ns0:ref>. BiLSTM takes input in both forward and backward directions, thus, provides more contextual information to the network.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.3.1'>Network architecture, hyperparameter settings and library used</ns0:head><ns0:p>The BiLSTM model starts with the Keras embedding layer <ns0:ref type='bibr' target='#b7'>(Chollet et al., 2015)</ns0:ref>. The three important parameters of the embedding layer are input dimension, which represents the size of the vocabulary, output dimensions, which is the length of the vector for each word, input length, the maximum length of a sequence. The input dimension is determined by the number of words present in a corpus, which vary in two corpora. We set the output dimensions to 64. The maximum length of a sequence is used as 200.</ns0:p><ns0:p>A drop-out rate of 0.5 is applied to the dropout layer; ReLU activation is used in the intermediate layers. In the final layer, softmax activation is applied. As an optimization function, Adam optimizer, and as a loss function, binary-cross entropy are utilized. We set the batch size to 64, use a learning rate of 0.001, and train the model for 10 epochs. We use the Keras library <ns0:ref type='bibr' target='#b7'>(Chollet et al., 2015)</ns0:ref> with the TensorFlow backend for BiLSTM implementation.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>EXPERIMENTAL SETTINGS AND RESULTS</ns0:head></ns0:div>
<ns0:div><ns0:head n='6.1'>Settings</ns0:head></ns0:div>
<ns0:div><ns0:head n='6.1.1'>Lexicon-based method</ns0:head><ns0:p>If a review contains at least one term from BengVulLex, it is considered vulgar. As BengVulLex is comprised of only manually validated slang or swear terms, referring a non-vulgar comment to vulgar (i.e., false positive) is highly unlikely; thus, a very high precision score close to 1 is expected.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.1.2'>ML-based classifiers/optimizer</ns0:head><ns0:p>The results of ML classifiers are reported based on 10-fold cross-validation. We provide the performance of various ML classifiers in four different settings based on class distribution, 1. Original setting: The original setting is class-imbalanced, where most of the comments are nonvulgar.</ns0:p><ns0:p>2. Class-balancing using class weighting: This setting considers the distribution of the samples from different classes in training data. The weight of a class is set inversely proportional to the number of samples it contains.</ns0:p><ns0:p>3. Class-balancing using undersampling: In this class-balanced setting, we use all the samples of vulgar class; however, for the non-vulgar class, we randomly select the equal number of non-vulgar comments from a pool of all non-vulgar comments.</ns0:p><ns0:p>4. Class-balancing using SMOTE: SMOTE (synthetic minority over-sampling technique) <ns0:ref type='bibr' target='#b6'>(Chawla et al., 2002)</ns0:ref> is an oversampling technique that generates synthetic samples from the minority class.</ns0:p><ns0:p>It is used to obtain a synthetically class-balanced or nearly class-balanced training set, which is then used to train the classifier.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.2'>Evaluation Metrics</ns0:head><ns0:p>We report the comparative performances of various methods utilizing precision, recall and F1 score.</ns0:p><ns0:p>The T P, FP, FN for is defined as follows-TP = vulgar review classified as vulgar Manuscript to be reviewed</ns0:p><ns0:p>Computer Science FP = non-vulgar review classified as vulgar FN = vulgar review classified as non-vulgar</ns0:p><ns0:p>The recall (R V ), precision (P V ) and F1 score (F1 V ) of vulgar class are calculated as-</ns0:p><ns0:formula xml:id='formula_0'>R V = T P T P + FN</ns0:formula><ns0:p>(1) </ns0:p><ns0:formula xml:id='formula_1'>P V = T P T P + FP (2) F1 V = 2 * R V * P V R V + P V (3)</ns0:formula></ns0:div>
<ns0:div><ns0:head n='6.3'>Comparative results for Identifying Vulgarity</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_5'>4</ns0:ref> shows that among the 664 vulgar reviews present in the drama review corpus, the sentiment lexicon identifies only 204 vulgar reviews (based on the negative score). The vulgar lexicon BengVulLex registers 564 reviews as vulgar, with a high recall score of 0.85. In the original class-imbalanced dataset, all the CML classifiers achieve very low recall scores. However, when a class-balanced dataset is selected by performing undersampling to the dominant class, the recall scores of CML classifiers increase significantly to 0.90. However, we notice precision scores decrease in the class-balanced setting due to a higher number of false-positive (FP). BiLSTM provides the highest recall scores in both original and class-balanced setting, which is 0.70 and 0.94, respectively.</ns0:p></ns0:div>
<ns0:div><ns0:head>8/12</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:58966:1:0:NEW 23 Jun 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Table <ns0:ref type='table' target='#tab_6'>5</ns0:ref> shows the performances of various methods in subject-person dataset. We find that the sentiment lexicon shows a very low recall score, only 0.18. The BengVulLex yields a recall score of 0.69. SVM, LR, and SGD exhibit low recall scores below 0.60 in the original class-imbalanced setting.</ns0:p><ns0:p>However, in the class-balanced setting with undersampling (i.e., 1331 comments from both vulgar and non-vulgar categories), a higher recall score is observed. SGD yields a recall score of 0.77. BiLSTM shows the highest recall scores in both original and all the class-balanced settings, which is around 0.8.</ns0:p><ns0:p>BiLSTM provides lower precision scores compared to CML classifiers in both settings (i.e., original class-imbalanced and class-balanced).</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.4'>Vulgarity and Sentiment</ns0:head><ns0:p>We further analyze how vulgarity is related to user sentiment in social media. As a social media corpus, we leverage the entire drama review dataset, which contains 8500 positive reviews in addition to 3307 negative reviews stated earlier. Using the BenVulLex vulgar lexicon, we identify the presence of vulgar words in the reviews. We perform a comparative analysis of the presence of vulgar words in both positive and negative reviews. We find only 37 positive reviews out of 8500 positive reviews contain any vulgar words, which is only 0.4% of the total positive reviews. Out of 3307 negative reviews, we observe the presence of vulgar words in 553 reviews, which is 16.67% of total negative reviews. Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref> shows examples of several positive reviews that contain vulgar terms.</ns0:p></ns0:div>
<ns0:div><ns0:head n='7'>DISCUSSION</ns0:head><ns0:p>The results show that the sentiment lexicon yield poor performance in identifying vulgarity in Bengali textual content, as shown by its poor performance in both datasets. The poor coverage of the sentiment lexicon is expected as it contains different types of negative words, thus may lack words that are particularly Manuscript to be reviewed Whenever a class-balanced training set is employed, all the CML classifiers yield a higher recall score.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>We find that the deep learning-based method, BiLSTM is less affected by class imbalance. Only when the difference of class proportion is very high, such as 18% vs 82% in the drama review dataset, we observe BiLSTM shows a high difference in recall score.</ns0:p><ns0:p>Besides, we analyze the motivation behind using vulgar words in Bengali social media data. Although the usage of vulgar words can be non-offensive such as when used in informal communication between closely-related groups or expressing emotion such as Twitter or Facebook status <ns0:ref type='bibr' target='#b17'>(Holgate et al., 2018)</ns0:ref>, we observe when it is used in review or targeted towards a person with no personal connection, it is inappropriate or offensive most of the time.</ns0:p></ns0:div>
<ns0:div><ns0:head n='8'>CONCLUSION</ns0:head><ns0:p>With the surge of user-generated content online, the detection of vulgar or abusive language has become a subject of utmost importance. While there have been few works in hate speech or abusive content analysis in Bengali, to the best of our knowledge, this is the first attempt to thoroughly analyze vulgarity in Bengali social media content.</ns0:p><ns0:p>This paper introduces two annotated datasets in Bengali with 7245 reviews to address the resource scarcity for Bengali vulgar language analysis. Besides, we investigate the prevalence of vulgarity in social media comments. Our analysis reveals a high presence of swearing or vulgar words in social media, ranges from 20% to 34% in two datasets. We explore the performance of different automatic approaches for vulgarity identification of Bengali and present a comparative analysis. The analysis reveals the strengths and weaknesses of different approaches and provides directions for future research.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>1 https://github.com/sazzadcsedu/Bangla-vulgar-corpus2/12PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:58966:1:0:NEW 23 Jun 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>annotated 3000 transliterated Bengali comments into two classes, abusive and non-abusive, 1500 comments for each. For baseline evaluations, the author employed several traditional machine learning (ML) and deep learning-based classifiers. Emon et al. (2019) utilized linear support vector classifier (LinearSVC), logistic regression (LR), multinomial naïve Bayes (MNB), random forest (RF), artificial neural network (ANN), recurrent neural network (RNN) with long short term memory (LSTM) to detect multi-type abusive Bengali text. They found RNN outperformed other classifiers by obtaining the highest accuracy of 82.20%. Chakraborty and Seddiqui (2019) employed machine learning and natural language processing techniques to build an automatic system for detecting abusive comments in Bengali. As input, they used Unicode emoticons and Unicode Bengali characters. They applied MNB, SVM, and Convolutional Neural Network (CNN) with LSTM and found SVM performed best with 78% accuracy. Karim et al. (2020) proposed BengFastText, a word embedding model for Bengali, and incorporated it into a Multichannel Convolutional-LSTM (MConv-LSTM) network for predicting different types of hate speech. They compared BengFastText against the Word2Vec</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Sample vulgar reviews from annotated datasets</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. (a) Top 10 vulgar words in drama review dataset. (b) Top 10 vulgar words in subject-person dataset</ns0:figDesc><ns0:graphic coords='7,203.77,63.78,289.50,149.56' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Examples of positive reviews with vulgar words in drama review corpus</ns0:figDesc><ns0:graphic coords='11,214.11,63.78,268.81,83.39' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='6,172.75,63.78,351.54,188.36' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Annotation of drama review corpus by two annotators (A1, A2)</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='2'>Vulgar Non-vulgar</ns0:cell></ns0:row><ns0:row><ns0:cell>Vulgar</ns0:cell><ns0:cell>592</ns0:cell><ns0:cell>160</ns0:cell></ns0:row><ns0:row><ns0:cell>Non-vulgar</ns0:cell><ns0:cell>53</ns0:cell><ns0:cell>2502</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Annotation of subject-person dataset by two annotators (A1, A2)</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='2'>Vulgar Non-vulgar</ns0:cell></ns0:row><ns0:row><ns0:cell>Vulgar</ns0:cell><ns0:cell>1282</ns0:cell><ns0:cell>120</ns0:cell></ns0:row><ns0:row><ns0:cell>Non-vulgar</ns0:cell><ns0:cell>163</ns0:cell><ns0:cell>2373</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>From the table 1, we see Cohen's kappa (κ ) statistic of two raters (A1, A2) is 0.8070 in the Drama</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>review dataset, which indicate almost perfect agreement. Regarding the percentages, we find both</ns0:cell></ns0:row><ns0:row><ns0:cell>reviewers agreed on 93.55% reviews.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>As shown by Table2, in the subject-person dataset, an agreement of 92.81% is observed. Cohen's κ<ns0:ref type='bibr' target='#b8'>(Cohen, 1960)</ns0:ref> provides a score of 0.8443, which refers to almost perfect agreement.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Description of two corpora after final annotations</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell cols='3'>Vulgar Non-vulgar Total</ns0:cell></ns0:row><ns0:row><ns0:cell>Drama</ns0:cell><ns0:cell>664</ns0:cell><ns0:cell>2643</ns0:cell><ns0:cell>3307</ns0:cell></ns0:row><ns0:row><ns0:cell>Subject-person</ns0:cell><ns0:cell>1331</ns0:cell><ns0:cell>2607</ns0:cell><ns0:cell>3938</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Performance of various methods for vulgarity detection in drama review dataset</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Type</ns0:cell><ns0:cell>Method</ns0:cell><ns0:cell># Correctly Identified</ns0:cell><ns0:cell>R V</ns0:cell><ns0:cell>P V</ns0:cell><ns0:cell>F1 V</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Vulgar Review</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Lexicon</ns0:cell><ns0:cell>Sentiment Lexicon</ns0:cell><ns0:cell>204 (664)</ns0:cell><ns0:cell>0.307</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>BengVulLex</ns0:cell><ns0:cell>564 (664)</ns0:cell><ns0:cell cols='3'>0.849 0.998 0.917</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>LR</ns0:cell><ns0:cell>161 (664)</ns0:cell><ns0:cell>0.245</ns0:cell><ns0:cell>1.0</ns0:cell><ns0:cell>0.394</ns0:cell></ns0:row><ns0:row><ns0:cell>ML Classifier</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>345 (664)</ns0:cell><ns0:cell cols='3'>0.534 0.994 0.686</ns0:cell></ns0:row><ns0:row><ns0:cell>(Original Setting)</ns0:cell><ns0:cell>SGD</ns0:cell><ns0:cell>386(664)</ns0:cell><ns0:cell cols='3'>0.588 0.985 0.736</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>BiLSTM</ns0:cell><ns0:cell>462(664)</ns0:cell><ns0:cell cols='3'>0.704 0.783 0.741</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>LR</ns0:cell><ns0:cell>609(664)</ns0:cell><ns0:cell cols='3'>0.917 0.801 0.855</ns0:cell></ns0:row><ns0:row><ns0:cell>ML Classifier</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>593(664)</ns0:cell><ns0:cell cols='3'>0.893 0.859 0.876</ns0:cell></ns0:row><ns0:row><ns0:cell>(Undersampling)</ns0:cell><ns0:cell>SGD</ns0:cell><ns0:cell>592(664)</ns0:cell><ns0:cell cols='3'>0.891 0.876 0.883</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>BiLSTM</ns0:cell><ns0:cell>624(664)</ns0:cell><ns0:cell cols='3'>0.940 0.851 0.893</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>LR</ns0:cell><ns0:cell>367(664)</ns0:cell><ns0:cell cols='3'>0.552 0.970 0.704</ns0:cell></ns0:row><ns0:row><ns0:cell>ML Classifier</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>386 (664)</ns0:cell><ns0:cell cols='3'>0.581 0.982 0.730</ns0:cell></ns0:row><ns0:row><ns0:cell>(SMOTE)</ns0:cell><ns0:cell>SGD</ns0:cell><ns0:cell>385 (664)</ns0:cell><ns0:cell cols='3'>0.579 0.987 0.730</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>BiLSTM</ns0:cell><ns0:cell>563(664)</ns0:cell><ns0:cell cols='3'>0.850 0.707 0.772</ns0:cell></ns0:row><ns0:row><ns0:cell>(Class weighting)</ns0:cell><ns0:cell>LR</ns0:cell><ns0:cell>385(664)</ns0:cell><ns0:cell cols='3'>0.579 0.96 0.723</ns0:cell></ns0:row><ns0:row><ns0:cell>ML Classifier</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>388(664)</ns0:cell><ns0:cell cols='3'>0.584 0.934 0.719</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>SGD</ns0:cell><ns0:cell>438(664)</ns0:cell><ns0:cell cols='3'>0.659 0.964 0.783</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>BiLSTM</ns0:cell><ns0:cell>564(664)</ns0:cell><ns0:cell cols='3'>0.854 0.667 0.749</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Performance of various methods for vulgarity detection in subject-person dataset</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Type</ns0:cell><ns0:cell>Method</ns0:cell><ns0:cell># Correctly Identified</ns0:cell><ns0:cell>R V</ns0:cell><ns0:cell>P V</ns0:cell><ns0:cell>F1 V</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Vulgar Review</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Lexicon</ns0:cell><ns0:cell>Sazzed (2020b)</ns0:cell><ns0:cell>239 (1331)</ns0:cell><ns0:cell>0.180</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>BengVulLex</ns0:cell><ns0:cell>917(1331)</ns0:cell><ns0:cell cols='3'>0.689 0.998 0.815</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>LR</ns0:cell><ns0:cell>551(1331)</ns0:cell><ns0:cell cols='3'>0.394 0.992 0.563</ns0:cell></ns0:row><ns0:row><ns0:cell>ML Classifiers</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>788(1331)</ns0:cell><ns0:cell cols='3'>0.594 0.962 0.746</ns0:cell></ns0:row><ns0:row><ns0:cell>(Original Setting)</ns0:cell><ns0:cell>SGD</ns0:cell><ns0:cell>860(1331)</ns0:cell><ns0:cell cols='3'>0.660 0.940 0.775</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>BiLSTM</ns0:cell><ns0:cell>1050(1331)</ns0:cell><ns0:cell cols='3'>0.793 0.724 0.757</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>LR</ns0:cell><ns0:cell>954(1331)</ns0:cell><ns0:cell cols='3'>0.717 0.870 0.786</ns0:cell></ns0:row><ns0:row><ns0:cell>ML Classifiers</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>969(1331)</ns0:cell><ns0:cell cols='3'>0.728 0.893 0.802</ns0:cell></ns0:row><ns0:row><ns0:cell>(Undersampling)</ns0:cell><ns0:cell>SGD</ns0:cell><ns0:cell>1027(1331)</ns0:cell><ns0:cell cols='3'>0.772 0.884 0.824</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>BiLSTM</ns0:cell><ns0:cell>1064(1331)</ns0:cell><ns0:cell cols='3'>0.786 0.866 0.824</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>LR</ns0:cell><ns0:cell>826(1331)</ns0:cell><ns0:cell cols='3'>0.620 0.892 0.731</ns0:cell></ns0:row><ns0:row><ns0:cell>ML Classifier</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>847(1331)</ns0:cell><ns0:cell cols='3'>0.636 0.941 0.759</ns0:cell></ns0:row><ns0:row><ns0:cell>(SMOTE)</ns0:cell><ns0:cell>SGD</ns0:cell><ns0:cell>866(1331)</ns0:cell><ns0:cell cols='3'>0.650 0.938 0.768</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>BiLSTM</ns0:cell><ns0:cell>1075(1331)</ns0:cell><ns0:cell cols='3'>0.809 0.737 0.771</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>LR</ns0:cell><ns0:cell>911(1331)</ns0:cell><ns0:cell cols='3'>0.684 0.814 0.743</ns0:cell></ns0:row><ns0:row><ns0:cell>ML Classifier</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>824(1331)</ns0:cell><ns0:cell cols='3'>0.619 0.912 0.737</ns0:cell></ns0:row><ns0:row><ns0:cell>(Class Weighting)</ns0:cell><ns0:cell>SGD</ns0:cell><ns0:cell>935(1331)</ns0:cell><ns0:cell cols='3'>0.702 0.904 0.790</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>BiLSTM</ns0:cell><ns0:cell>1070(1331)</ns0:cell><ns0:cell cols='3'>0.807 0.742 0.773</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='4'>https://github.com/sazzadcsedu/Bangla-Vulgar-Lexicon 6/12 PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:58966:1:0:NEW 23 Jun 2021)</ns0:note>
</ns0:body>
" | "Dear Dr. Zubiaga,
Thank you for giving me the opportunity to submit a revised draft of the manuscript “Identifying
vulgarity in Bengali social media content” for publication in the ‘PeerJ Computer Science’. I
appreciate the time and effort that you and the reviewers dedicated to providing feedback on my
manuscript and are grateful for the insightful comments on and valuable improvements to my
paper. I have incorporated most of the suggestions made by the reviewers. Those changes are
highlighted within the manuscript. Please see below for a point-by-point response to the
reviewers’ comments and concerns. All page numbers refer to the revised manuscript file with
tracked changes.
Reviewer 1:
Basic reporting
The paper aim is to distinguish between vulgar/non-vulgar categories using lexicon-based and
ML based approaches. The authors introduce a dataset of 7000 reviews and have annotators
tag them for vulgarity at review level.
Deep learning is a type of machine learning, the paper presents them as different.
- Thank you for pointing this. The term CML (classical machine learning) is introduced to
refer to classifiers like LR or SVM to remove the ambiguity. ML (machine learning) now refers
to both types of machine classifiers, CML and DL.
The first sentence in the abstract is wrong, at least according to the citations it mentions as a
reference
- I completely agree with your comments that expressing emotion was just one out of 6
functions of vulgar words identified. Thus, the sentence “Vulgarity or obscenity indicates the use
of slang, curse, swear or taboo words to express emotion in language” has been updated to
‘Vulgarity or obscenity indicates the use of curse, swear or taboo words in language’.
In (Cachola et al 2018), vulgarity is defined at a word level, and is not used to express emotion
in language. Actually, in (Holgate et al 2018), express emotion was just one out of 6 functions of
vulgar words identified.
-
Thank you for your comments. I respectfully disagree with you regarding this comment.
Cachola et al. ( 2018) used the term “vulgar” in both word and sentence/comments level. The
authors used the term ‘vulgar word’ 42 times and ‘vulgar tweets’ 14 times in the paper. They
mentioned vulgarity as-
Vulgarity: The study of vulgar language – also referred to as profanity or use of swear/curse
words despite the fact that they can be used for different goals is of interest to linguists,
psychologists and computer scientists.
Besides, Cachola et al. (2018) used express emotions to indicate the usage of vulgarity.
Vulgarity is often employed used to express emotion in language and can be used either to
express negative sentiment or emotions or to intensify the sentiment present in the tweet (Wang,
2013). In one of the examples, ‘I am stupid as f*ck’ conveys more intense anger, while ‘I am
stupid’ conveys a less emotional expression of irritation. Hence, understanding vulgar words is
expected to have practical implications in sentiment analysis on social media.
Another major issues is that slang is equated to obscene and vulgar words (27-28), which is not
true e.g. 'luv' is a slang word, but is neither obscene or vulgar.
- Thank you for your comment. I completely agree with your point that not all slang
words are obscene or vulgar. The manuscript has been updated to reflect that.
In general, it is unclear what the authors refer to by vulgarity at a sentence/review level
throughout the paper and in the annotation task, as it seems the definition changes quite a bit
(e.g. line 1 vs line 35, line 49).
- Thank you for your comments. The manuscript has been updated to clarify the
definition of vulgarity. A separate section was added which describes annotation guidelines.
Besides, the English translations were added for the sample reviews to clarify the
characteristics of the reviews.
The following guidelines were used to label a comment as vulgar- (Line 26-30)
Eder et al. (2019) conceived vulgar language as an overly lowered language with disgusting
and obscene lexicalizations generally banned from any type of civilized discourse. Primarily, it
concerns the lexical fields of sexuality, such as sexual organs and activities, body orifices, or
other specific body parts.
Cachola et al. (2018) defined vulgarity as the use of swear/curse words.
Jay et al. (2008) mentioned vulgar speech includes explicit and crude sexual references.
Experimental design
SGD - that's not a ML method, is an optimization method for a tehnique like LR or SVM. I think
the author may be referring to SGD classifier function in sklearn, which optimizes a loss that can
be associated to one of the above models.
- Thank you for your comment. I totally agree with you. In the paper, SGD is mentioned
as an optimizer. However, they were included in the ML classifier section, as SGD optimizer
uses ML models such as linear SVM and logistic regression. Based on the feedback, the
section headings now include the term optimizer in addition to ML classifiers. (Section 5.2, Line
222)
Validity of the findings
The lexicon-based methods should be evaluated for precision as well, as these is a gold set for
annotations, rather than assume precision is 1. If precision is indeed 1, then the F1 score for the
bengvullex method is better than the ML methods on the drama review dataset and probably
also best on the subject-person data set. Based on these findings, the discussion section of the
paper seems to be misguided.
-
Thanks for your comments. The precision scores have been included for the vulgar
lexicon. When precision is considered, the vulgar lexicon shows highest F1 score in one
corpus (i.e., drama review dataset). The introduction, results and discussion section
were updated to reflect the new results. (section 7)
The class imbalance can be handled through various approaches, including instance weighting
and over/under-sampling. For metrics, one could also use macro-F1, if we care about both
classes equally.
- I agree with the reviewer that over/under-sampling can be employed to address the
class imbalance issue. Various approaches of class-balancing (under-sampling, over-sampling,
and class-weighting) were employed, and corresponding results were provided. (Table 4 and
Table 5)
Comments for the author
Agreement between annotators is good, so dataset could be a useful resource if the authors
make the concept, they annotated clear (rather than vulgarity, it's probably more like
offensiveness).
-The manuscript was modified to clarify the definition of vulgarity (Line 26-30). A new
section was added for the corpus annotation guideline (Line 175-183), which describes the
guidelines to label a comment as vulgar/non-vulgar. The translations of reviews/comments were
included so that the characteristics of the reviews are clear to the non-Bengali speaker (Figure 1
and Figure 2).
The comments were annotated considering vulgarity /obscenity/profanity/swearing, which might
be offensive sometimes or most of the time, but not always. Besides, the offensiveness doesn’t
necessarily require a comment to be obscene/profane/vulgar.
Based on this task definition, the work is not very novel. Previous work on related tasks was
done even for Bengali in particular (https://arxiv.org/abs/2012.09686).
- Thank you for your comment. I have clarified the task definition in the manuscript so
that the characteristics of the dataset are not ambiguous (Line 26-30, Line 175-183). Based on
that, the datasets introduced in the paper can be referred to as obscene/profane/vulgar
datasets, which is different from the hate speech dataset. An obscene comment may or may not
be hate speech (Challenges in Discriminating Profanity from Hate Speech, Malmasi, Shervin, and
Marcos Zampieri., 2018).
Due to its current popularity and good results, authors should have tried to train a BERT-based
model.
- Thank you for your valuable suggestion. We are currently working on expanding the
size of the corpora by including datasets from multiple domains. We will include advanced DL
models and more comprehensive analysis there.
Reviewer 2:
Basic reporting
Generally, the article is structured in a professional way. However, following are a few
suggestions for improvement.
The citations are mixed up with running text causing the difficulty in reading. Please check the
guidelines for authors to improve the citation format.
- Thank you for pointing this out. The in-text citations have been fixed.
Use definite values (instead of writing 'over' or 'around' etc.): line 33, 52, 56, 74, 122, 156, 158,
171, 173, 219, 231, 233, 236, 253, 281. Why the accuracy is in a range (80%-90%)? It should
be exact scores unless there's a technical reason for indefinite values?
- Thank you for your comments. Exact values were provided based on your suggestion.
There may be a separate section of 'Annotation Guidelines'.
- Thank you for pointing this out. Based on your suggestion, a new section is added with
annotation guideline. (section 4.1)
Following are suggestions for improvement of language:
line 67: 'As social ...' ==> 'Since social ...'
line 76: 'Analysis the presence' --- not understandable
line 98: 'As many ...' --- missing 'There are'
line 100: why 'approximately' --- use exact figure if possible/known
line 105: 'labeled data Ishmam...' --- missing any boundary/separator
line 188: 'We' ==> 'we'
Table 4: column 5 heading: 'PVul' ==> 'pVul'
- Thank you for mentioning the typos. These typos have been fixed.
Figure 1,2,3: add English transliteration and translation for understanding of reader
- Based on your suggestion English translation has been added in the revised
manuscript.
Formula for F1 score is not mentioned.
- F1 score formula has been provided in the revised manuscript (line 286).
The columns of each table should be described in the text or in the caption.
- Columns header has been described (line 286).
The rows or cells having the highest scores should be BOLD in the result tables.
-
Based on your suggestion, the cells with highest F1 score were shown in Bold.
Experimental design
Methods described with sufficient detail except the following:
There is no information about separating Training and Test partitions.
If Training and Testing is same then it's certainly over-fitted result.
If it is due to cross-validation then this fact should be explicitly stated.
-
Thank you for your comments. The results are based on 10-fold cross-validation (line
264).
Validity of the findings
All underlying data have been provided and the conclusions are well stated.
Reviewer 3:
Basic reporting
Author addresses very interesting and challenging problem i.e., Identifying vulgarity in Bengali
social media content. Overall paper is well written but here some comments that needs to be
addressed.
Minor Comments
1. Add English description against each example of Bengali for better understanding.
- Thank you for the comment. The English translations have been added based on your
suggestion (Figure 1 and Figure 2).
2. Add 1 to 2 liner details of Machine Learning and Deep learning algorithms used.
- For each classifier, few lines of comments were added. (Line 225 -233)
3. Include more literature review on overall and resource poor languages like
a) Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated
hate speech detection and the problem of offensive language. In Eleventh international aaai
conference on web and social media
b) Elisabeth Eder, Ulrike Krieg-Holz, and Udo Hahn. 2019. At the Lower End of Language—
Exploring the Vulgar and Obscene Side of
German. In Proceedings of the Third Workshop on Abusive Language Online. 119–128
c) Paula Fortuna and Sérgio Nunes. 2018. A survey on automatic detection of hate speech in
text. ACM Computing Surveys (CSUR) 51, 4
(2018), 1–30.
d) Khawar Mehmood, Daryl Essam, Kamran Shafi, and Muhammad Kamran Malik. 2019.
Sentiment Analysis for a Resource Poor
Language—Roman Urdu. ACM Transactions on Asian and Low-Resource Language
Information Processing (TALLIP) 19, 1 (2019), 1–15.
- The above-mentioned references have been added. (Line 176-177)
4. include appropriate references where appropriate like (Ron Artstein. 2017. Inter-annotator
agreement. In Handbook of linguistic annotation. Springer, 297–313) for inter annotator
agreement. also include references of ML algorithms
- Thank you for your suggestion. The reference for Inter-annotator agreement and ML
classifiers were added. (Line 70, Lin 71, Line 171, Line 244)
Major Comments:
Annotation guidelines are missing. Author may take inspiration from the following paper
a) Khan, Muhammad Moin, Khurram Shahzad, and Muhammad Kamran Malik. 'Hate Speech
Detection in Roman Urdu.' ACM Transactions on Asian and Low-Resource Language
Information Processing (TALLIP) 20, no. 1 (2021): 1-19.
b) Rahul Pradhan, Ankur Chaturvedi, Aprna Tripathi, and Dilip Kumar Sharma. 2020. A Review
on Offensive Language Detection. In
Advances in Data and Information Sciences. Springer, 433–439.
- Based on your suggestion, a new section has been added with annotation guidelines.
(Line 174, section 4.1)
Experimental design
Need to mention parameter and hyper parameters of algorithm used.
- The parameter and hyper parameters values were described in detail. (section 5.2.2
and 5.3.1)
Details of word embedding is missing.
- Thank you for pointing this. The manuscript has been updated with the following
detailsThe BiLSTM model starts with the Keras embedding layer (Chollet et al., 2015). The three important
parameters of the embedding layer are input dimension, which represents the size of the vocabulary, output
dimensions, which is the length of the vector for each word, input length, the maximum length of a sequence.
The input dimension is determined by the number of words present in a corpus, which vary in two corpora.
We set the output dimensions to 64. The maximum length of a sequence is used as 200.
Also mention which libraries used in experiments like Tensorflow, keras etc
- The Keras library with the TensorFlow backend was used.(Line 255)
Validity of the findings
No Comments
" | Here is a paper. Please give your review comments after reading it. |
201 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background: Clear language makes communication easier between any two parties. A layman may have difficulty communicating with a professional due to not understanding the specialized terms common to the domain. In healthcare, it is rare to find a layman knowledgeable in medical terminology which can lead to poor understanding of their condition and/or treatment. To bridge this gap, several professional vocabularies and ontologies have been created to map laymen medical terms to professional medical terms and vice versa.</ns0:p><ns0:p>Objective: Many of the presented vocabularies are built manually or semi-automatically requiring large investments of time and human effort and consequently the slow growth of these vocabularies. In this paper, we present an automatic method to enrich laymen's vocabularies that has the benefit of being able to be applied to vocabularies in any domain.</ns0:p><ns0:p>Methods: Our entirely automatic approach uses machine learning, specifically Global Vectors for Word Embeddings (GloVe), on a corpus collected from a social media healthcare platform to extend and enhance consumer health vocabularies. Our approach further improves the consumer health vocabularies by incorporating synonyms and hyponyms from the WordNet ontology. The basic GloVe and our novel algorithms incorporating WordNet were evaluated using two laymen datasets from the National Library of Medicine (NLM), Open-Access Consumer Health Vocabulary (OAC CHV) and MedlinePlus Healthcare Vocabulary .</ns0:p></ns0:div>
<ns0:div><ns0:head>Results:</ns0:head><ns0:p>The results show that GloVe was able to find new laymen terms with an F-score of 48.44%. Furthermore, our enhanced GloVe approach outperformed basic GloVe with an average F-score of 61%, a relative improvement of 25%. Furthermore, the enhanced GloVe showed a statistical significance over the two ground truth datasets with P<.001.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions:</ns0:head><ns0:p>This paper presents an automatic approach to enrich consumer health vocabularies using the GloVe word embeddings and an auxiliary lexical source, WordNet. Our approach was evaluated used healthcare text downloaded from MedHelp.org, a healthcare social media platform using two standard laymen vocabularies, OAC CHV, and MedlinePlus. We used the WordNet ontology to expand the healthcare corpus by including synonyms, hyponyms, and hypernyms for each layman term occurrence in the corpus. Given a seed term selected from a concept in the ontology, we measured our algorithms' ability to automatically extract synonyms for those terms that appeared in the ground truth concept. We found that enhanced GloVe outperformed GloVe with a relative improvement of 25% in the F-score.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>An ontology is a formal description and representation of concepts with their definitions, relations, and classifications in a specific or general domain of discourse <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. It can decrease terminological and conceptual confusion between system software components and facilitate interoperability. Examples of ontologies in different domains are the BabelNet <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>, Arabic Ontology <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>, WordNet <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>, and Gene Ontology <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref>. Ontologies have been used in many domains such as document indexing <ns0:ref type='bibr' target='#b5'>[6,</ns0:ref><ns0:ref type='bibr' target='#b6'>7]</ns0:ref>, personalizing user's profiles for information retrieval systems <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref><ns0:ref type='bibr' target='#b8'>[9]</ns0:ref><ns0:ref type='bibr' target='#b9'>[10]</ns0:ref><ns0:ref type='bibr' target='#b10'>[11]</ns0:ref><ns0:ref type='bibr' target='#b11'>[12]</ns0:ref><ns0:ref type='bibr' target='#b12'>[13]</ns0:ref>, and providing readable data for semantic web applications <ns0:ref type='bibr' target='#b13'>[14]</ns0:ref><ns0:ref type='bibr' target='#b14'>[15]</ns0:ref><ns0:ref type='bibr' target='#b15'>[16]</ns0:ref><ns0:ref type='bibr' target='#b16'>[17]</ns0:ref>. Several ontologies have been developed and/or proposed for the healthcare domain. One of the biggest healthcare ontologies in the field of biomedicine is the Unified Medical Language system (UMLS). This ontology consists of more than 3,800,000 professional biomedicine concepts. It lists biomedical concepts from different resources, including their part of speech and variant forms <ns0:ref type='bibr' target='#b17'>[18]</ns0:ref>. The National Library of Medicine (NLM) manages the UMLS ontology and updates it yearly. As examples of the professional vocabularies included in the UMLS, the Gene Ontology (GO) <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref>, Disease Ontology (DO) <ns0:ref type='bibr' target='#b18'>[19]</ns0:ref>, and Medical Subject Headings (MeSH) <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>. The UMLS not only has professional vocabularies but also included laymen vocabularies. These vocabularies provide straightforward terms mapped to the professional medical concepts. With the advancement of medical technology and the emergence of internet social media, people are more connected than before. In terms of medical technology, there are many efforts to build smart devices that can interact and provide health information. On social media, people started not only sharing their climate concerns, politics, or social problems, but also their health problems. The Pew Research Center conducted a telephone survey in 2010 and reported that 80% of the United States internet users looked for a healthcare information. The survey showed that 66% of those users looked for a specific disease or medical issue and roughly 55% of them looked for a remedies treat to their medical problems <ns0:ref type='bibr' target='#b19'>[20]</ns0:ref>. Another study showed that the rate of using social media by physicians grew from 41% in 2010 to 90% in 2011 <ns0:ref type='bibr' target='#b21'>[21]</ns0:ref><ns0:ref type='bibr' target='#b22'>[22]</ns0:ref><ns0:ref type='bibr' target='#b23'>[23]</ns0:ref>. 
In all these cases, any retrieval system will not be able to interact effectively with laypeople unless they have a lexical source or ontology that defines the medical terminology. Medical professionals are well-versed in specialized medical terminology developed to be a precise way for healthcare professionals to communicate with each other. However, this medical jargon is obscure to laymen and may require patients to ask for more details to be sure that they understand their condition and treatment plans <ns0:ref type='bibr' target='#b24'>[24]</ns0:ref>. A recent study <ns0:ref type='bibr' target='#b26'>[25]</ns0:ref> showed the effect of health literacy on the accuracy of the information laymen are seeking related to coronavirus (COVID-19) and cancer. The study concluded that much of the cancer and COVID-19 information available does match with patients' health literacy because much of the present information has been written using professional terminology which is hard for laymen to understand. Having a way to map professional medical terminology to easier to understand laymen terms could close the communication gap between patients and the healthcare professional. Recently, steps have been taken to close the gap between the vocabulary the professionals use in healthcare and what laymen use. It was reported by <ns0:ref type='bibr' target='#b27'>[26]</ns0:ref> that approximately five million doctor letters are sent to patients each month. Using words like liver instead of hepatic and brain instead of cerebral could make the doctor's letters much easier to laymen <ns0:ref type='bibr' target='#b29'>[27]</ns0:ref>. Thus, the Academy of Medical Royal Colleges started an initiative in 2017 in which the doctors asked to write to patients directly using plain English instead of medical terminology <ns0:ref type='bibr' target='#b30'>[28]</ns0:ref>. The twentieth century witnessed steps of building a lot of vocabularies that maps professional medical concepts to their laymen terms and vice versa. These vocabularies are commonly known as consumer health vocabularies. Many of these presented vocabularies are built manually or semiautomatically requiring large investments of time and human effort and consequently these vocabularies grow very slowly. In this paper, we present an automatic method to enrich laymen's vocabularies that has the benefit of being able to be applied to vocabularies in any domain.</ns0:p></ns0:div>
<ns0:div><ns0:head>Consumer Health Vocabularies</ns0:head><ns0:p>Consumer health vocabularies can decrease the gap between laymen and professional language and help humans and machines understand both languages. Zeng et al. reported poor search results when a layman searched for the term heart attack because physicians discuss that condition using the professional concept myocardial infarction <ns0:ref type='bibr' target='#b31'>[29]</ns0:ref>. There are many laymen vocabularies proposed in the field of biomedicine, such as the Open-Access and Collaborative Consumer Health Vocabulary (OAC CHV) <ns0:ref type='bibr' target='#b32'>[30]</ns0:ref>, the MedlinePlus topics <ns0:ref type='bibr' target='#b33'>[31]</ns0:ref>, and the Personal Health Terminology (PHT) <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref>. These laymen vocabularies should grow from time to time to cover new terms coined by laypeople and to keep the systems that use them updated. Our research studies two common English laymen vocabularies, the Open-Access and Collaborative Consumer Health Vocabulary (OAC CHV) and the MedlinePlus vocabularies. The NLM integrated these vocabularies into the UMLS ontology. Roughly 56,000 UMLS concepts were mapped to OAC CHV concepts. Many of these medical concepts have more than one associated layman term. This vocabulary has been updated many times to include new terms, and the last update was in 2011 <ns0:ref type='bibr' target='#b32'>[30]</ns0:ref>. All of the updates incorporated human evaluation of the laymen terms before adding them to the concepts. We did an experiment using all 56,000 UMLS concepts that had associated OAC CHV laymen terms. We stemmed and downcased the concepts and their laymen terms and removed all stopwords, punctuation, and numbers. We then compared the tokens in the concept names with the set of laymen terms. From this experiment, we found that 48% of the included laymen terms were just morphological variations of the professional medical terms with minor changes, such as replacing lowercase/uppercase letters, switching between plural/singular word forms, or adding numbers or punctuation. Table <ns0:ref type='table'>1</ns0:ref> shows a few examples of laymen terms and their associated professional UMLS concepts, demonstrating the close relationships between the two. The CUI in this table refers to the Concept Unique Identifier that the UMLS uses to identify its biomedicine concepts. The MedlinePlus vocabulary was constructed to be the source of index terms for the MedlinePlus search engine <ns0:ref type='bibr' target='#b33'>[31]</ns0:ref>. The NLM updates this resource yearly. In the 2018 UMLS version, there were 2,112 professional concepts mapped to their laymen terms from the MedlinePlus topics.</ns0:p><ns0:p>Due to the extensive human effort required, when we compared the 2018 UMLS version to the 2020 UMLS version, we found that only 28 new concepts had been mapped to their associated laymen terms. This slow rate of growth motivates the development of tools and algorithms to boost progress in mapping between professional and laymen terms. Our research enriches laymen vocabularies automatically based on healthcare text and seed terms from existing laymen vocabularies. Our system uses Global Vectors for Word Representation (GloVe) to build word embeddings that identify words similar to already existing laymen terms in a consumer health vocabulary. These potential matches are ranked by similarity, and top matches are added to their associated medical concept. 
To improve the identification of new laymen terms, the GloVe results were enhanced by adding hyponyms, hypernyms, and synonyms from a well-known English ontology, WordNet. We make three main contributions:</ns0:p><ns0:p>• Developing an entirely automatic algorithm that enriches consumer health vocabularies by identifying new laymen terms. • Developing an entirely automatic algorithm that adds laymen terms to formal medical concepts that currently have no associated laymen terms. • Improving the GloVe algorithm's results by using WordNet with small, domain-specific corpora to build more accurate word embedding vectors. Our work differs from others in that it is not restricted to a specific healthcare domain such as cancer, diabetes, or dermatology. Moreover, our improvement is tied to enhancing the text and works with the unmodified GloVe algorithm, which allows different word embedding algorithms to be applied. Furthermore, expanding a small corpus with words from standard sources such as WordNet eliminates the need to download a very large corpus, which is especially valuable in domains for which large amounts of related text are hard to find.</ns0:p></ns0:div>
<ns0:div><ns0:head>Related Work</ns0:head></ns0:div>
<ns0:div><ns0:head>Ontology Creation</ns0:head><ns0:p>The past few years have witnessed an increased demand for ontologies in different domains <ns0:ref type='bibr' target='#b34'>[32]</ns0:ref>. According to Gruber, any ontology should comply with criteria such as clarity, coherence, and extensibility to be considered a source of knowledge that can provide a shared conceptualization <ns0:ref type='bibr' target='#b35'>[33]</ns0:ref>. However, building ontologies from scratch is immensely time-consuming and requires a lot of human effort <ns0:ref type='bibr' target='#b13'>[14]</ns0:ref>. Algorithms that can build an ontology automatically or semi-automatically can help reduce the time and labor required to construct that ontology. Zavitsanos et al. presented an automatic method to build an ontology from scratch using text documents. They built that ontology using the Latent Dirichlet Allocation model and an existing ontology <ns0:ref type='bibr' target='#b36'>[34]</ns0:ref>. Kietz et al. <ns0:ref type='bibr' target='#b37'>[35]</ns0:ref> prototyped a company ontology semi-automatically with the help of a general-domain ontology and a domain-specific dictionary. They started with a general-domain ontology called GermaNet. Many other recent works have been presented to build ontologies from scratch, automatically or semi-automatically, such as <ns0:ref type='bibr' target='#b38'>[36]</ns0:ref><ns0:ref type='bibr' target='#b39'>[37]</ns0:ref><ns0:ref type='bibr' target='#b40'>[38]</ns0:ref><ns0:ref type='bibr' target='#b41'>[39]</ns0:ref>. Ontologies should not be static. Rather, they should grow as their domains develop, enriching existing ontologies with new terms and concepts. <ns0:ref type='bibr' target='#b42'>Agirre et al. (2000)</ns0:ref> used internet documents to enrich the concepts of the WordNet ontology. They built their corpus by submitting each concept's senses, along with their information, as queries to retrieve the most relevant webpages. They used statistical approaches to rank the new terms <ns0:ref type='bibr' target='#b42'>[40]</ns0:ref>. A group at the University of Arkansas applied two approaches to enrich ontologies: 1) a lexical expansion approach using WordNet; and 2) a text mining approach. They projected concepts and their instances, extracted from an already existing ontology, onto WordNet and selected the most similar senses using distance metrics <ns0:ref type='bibr' target='#b43'>[41]</ns0:ref><ns0:ref type='bibr' target='#b44'>[42]</ns0:ref><ns0:ref type='bibr' target='#b45'>[43]</ns0:ref><ns0:ref type='bibr' target='#b46'>[44]</ns0:ref><ns0:ref type='bibr' target='#b47'>[45]</ns0:ref><ns0:ref type='bibr' target='#b49'>[46]</ns0:ref><ns0:ref type='bibr' target='#b50'>[47]</ns0:ref>. Recently, Ali and his team employed multilingual ontologies and documents to enrich not only domain-specific ontologies but also multilingual and multi-domain ontologies <ns0:ref type='bibr' target='#b51'>[48]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Medical Ontologies</ns0:head><ns0:p>The emphasis on developing an Electronic Health Record (EHR) for patients in the United States encouraged the development of medical ontologies to ensure interoperability between multiple medical information systems <ns0:ref type='bibr' target='#b52'>[49,</ns0:ref><ns0:ref type='bibr' target='#b53'>50]</ns0:ref>. There are several healthcare vocabularies that provide human- and machine-readable medical terminologies. The Systematized Nomenclature of Medicine Clinical Terms, SNOMED CT, is a comprehensive clinical ontology. It contains more than 300,000 professional medical concepts in multiple languages and has been adopted by many healthcare practitioners <ns0:ref type='bibr' target='#b53'>[50]</ns0:ref>. Another professional vocabulary is the Royal Society of Chemistry's Name Reaction Ontology (RXNO). The RXNO has over 500 entries describing different chemical reactions involving organic compounds <ns0:ref type='bibr' target='#b54'>[51]</ns0:ref>. Recently, He and his team presented the coronavirus ontology, with the purpose of providing machine-readable terms related to the coronavirus pandemic that emerged in 2020. This ontology includes all related coronavirus topics, such as diagnosis, treatment, transmission, and prevention <ns0:ref type='bibr' target='#b55'>[52]</ns0:ref>. Medical ontologies, like all other ontologies, need to grow and adapt from time to time. Zheng and Wang <ns0:ref type='bibr' target='#b56'>[53]</ns0:ref> prototyped the Gene Ontology Enrichment Analysis Software Tool (GOEST), a web-based tool that uses a list of genes from the Gene Ontology and enriches them using statistical methods. Recently, Shanavas et al. <ns0:ref type='bibr' target='#b57'>[54]</ns0:ref> presented a method to enrich UMLS concepts with related documents from a pool of professional healthcare documents. Their aim was to provide retrieval systems with more information about medical concepts.</ns0:p></ns0:div>
<ns0:div><ns0:head>Consumer Health Vocabularies</ns0:head><ns0:p>Ontologies developed to organize professional vocabularies are of limited benefit in retrieval systems used by laypeople. Laymen usually use lay language to express their healthcare concerns. Having a consumer health vocabulary can bridge the gap between users' expression of their health questions and documents written using professional language. Consumer health vocabularies are particularly difficult to construct because they typically require knowledge of a specialized domain (medicine), a source of laymen's discussion about medicine, and finally a mechanism to map between the two. Zeng et al. detected and mapped a list of consumer-friendly display (CFD) names into their matched UMLS concepts. Their semi-automatic approach used a corpus collected from queries submitted to the MedlinePlus website. Their manual evaluation ended with mapping CFD names to about 1,000 concepts <ns0:ref type='bibr' target='#b58'>[55,</ns0:ref><ns0:ref type='bibr' target='#b59'>56]</ns0:ref>. Zeng's team continued working on that list of names to build what is now called the OAC CHV.</ns0:p><ns0:p>In their last official update to this vocabulary, they were able to define associated laymen terms for about 56,000 UMLS medical concepts <ns0:ref type='bibr' target='#b32'>[30]</ns0:ref>. Several methods have been proposed to enrich such consumer vocabularies. For example, He et al. <ns0:ref type='bibr' target='#b60'>[57]</ns0:ref> used a similarity-based technique to find a list of terms similar to a seed term collected from the OAC CHV, and Gu <ns0:ref type='bibr' target='#b61'>[58]</ns0:ref> tried to enrich the laymen vocabularies by leveraging recent word embedding methods. More recent work presented a method to enhance the consumer health vocabulary by associating laymen terms with relations from the MeSH ontology <ns0:ref type='bibr' target='#b62'>[59]</ns0:ref>. Other recent work showed the variety of consumer vocabularies people use when writing their reviews <ns0:ref type='bibr' target='#b63'>[60]</ns0:ref>. Previous research on enriching consumer health vocabularies was either semi-automatic or did not produce an automatic system accurate enough to be used in practice. Our automatic approach uses a recent word embedding algorithm, GloVe, which is further enhanced by incorporating a lexical ontology, WordNet. We work with gold standard datasets that are already listed in the biggest biomedicine ontology, the UMLS. This paper extends work we published in <ns0:ref type='bibr' target='#b64'>[61]</ns0:ref> by including additional datasets for evaluation and incorporating new approaches to improve the GloVe algorithm. These approaches leverage standard auxiliary resources to enrich the occurrence of laymen terms in the corpus. Although there are many large text corpora for general Natural Language Processing (NLP) research, there are far fewer resources specific to the healthcare domain. In order to extract information about domain-specific word usage by laymen for healthcare, we need to construct a domain-specific corpus of laymen's text. To our knowledge, we are the first to leverage MedlinePlus in order to automatically develop consumer health vocabularies.</ns0:p></ns0:div>
<ns0:div><ns0:head>Finding Synonyms to Enrich Laymen Vocabularies</ns0:head><ns0:p>Our work focuses on finding new synonyms, words with the same meaning, for already existing laymen terms. Recent methods of finding synonyms are based on the idea that a word can be defined by its surroundings. Thus, words that appear in similar contexts are likely to be similar in meaning. To study words in text, they need to be represented in a way that allows for computational processing. Word vector representations are a popular technique that represents each word using a vector of feature weights learned from training texts. In general, there are two main families of vector-learning models: the first incorporates global matrix factorization, whereas the second focuses on local context windows. The global matrix factorization models generally begin by building a corpus-wide co-occurrence matrix and then apply dimensionality reduction. Early examples of this type of model are Latent Semantic Analysis (LSA) <ns0:ref type='bibr' target='#b65'>[62]</ns0:ref> and Latent Dirichlet Allocation (LDA) <ns0:ref type='bibr' target='#b66'>[63]</ns0:ref>. The context-window models are based on the idea that a word can be defined by its surroundings. Examples of such models are the skip-gram model <ns0:ref type='bibr' target='#b67'>[64]</ns0:ref> proposed by Mikolov in 2013 and the model proposed by Gauch et al. <ns0:ref type='bibr' target='#b68'>[65]</ns0:ref>. Word2Vec <ns0:ref type='bibr' target='#b69'>[66]</ns0:ref>, FastText <ns0:ref type='bibr' target='#b71'>[67]</ns0:ref>, GloVe <ns0:ref type='bibr' target='#b72'>[68]</ns0:ref>, and WOVe <ns0:ref type='bibr' target='#b73'>[69]</ns0:ref> are all examples of vector learning methods that have been shown to be superior to traditional NLP methods in different text mining applications <ns0:ref type='bibr' target='#b58'>[55,</ns0:ref><ns0:ref type='bibr' target='#b67'>64]</ns0:ref>. Some of these techniques have been applied in the medical field to build medical ontologies, such as <ns0:ref type='bibr' target='#b75'>[71]</ns0:ref><ns0:ref type='bibr' target='#b76'>[72]</ns0:ref><ns0:ref type='bibr' target='#b77'>[73]</ns0:ref><ns0:ref type='bibr' target='#b78'>[74]</ns0:ref><ns0:ref type='bibr' target='#b79'>[75]</ns0:ref>. Our work focuses on the word similarity task, or more specifically the word synonym task. In order to find these synonyms, we leveraged the GloVe algorithm, which has outperformed many vector learning techniques in the task of finding word similarity <ns0:ref type='bibr' target='#b72'>[68]</ns0:ref>. It combines the advantages of two vector learning techniques: global matrix factorization methods and local context window methods <ns0:ref type='bibr' target='#b72'>[68]</ns0:ref>. The GloVe algorithm has many applications in different fields, such as text similarity <ns0:ref type='bibr' target='#b80'>[76]</ns0:ref>, node representations <ns0:ref type='bibr' target='#b81'>[77]</ns0:ref>, emotion detection <ns0:ref type='bibr' target='#b82'>[78]</ns0:ref>, and many others. It has also found its way into many biomedicine applications, such as finding semantic similarity <ns0:ref type='bibr' target='#b83'>[79]</ns0:ref>, extracting Adverse Drug Reactions (ADR) <ns0:ref type='bibr' target='#b84'>[80]</ns0:ref>, and analyzing protein sequences <ns0:ref type='bibr' target='#b85'>[81]</ns0:ref>. 
GloVe is generally used with very large corpora, e.g., a 2010 Wikipedia corpus (1 billion tokens), a 2014 Wikipedia corpus (1.6 billion tokens), and Gigaword 5 (4.3 billion tokens) <ns0:ref type='bibr' target='#b72'>[68]</ns0:ref>. In comparison, our corpus is specialized and much smaller, approximately 1,365,000 tokens. To compensate for the relative lack of training text, we incorporate an auxiliary source of vocabulary, WordNet. WordNet is a machine-readable English ontology proposed by Professor George A. Miller at Princeton University. The most recent version has about 118,000 synsets (synonym sets) of different word categories such as noun, verb, adjective, and adverb. For every synset, WordNet provides a short definition and sometimes an example sentence. It also includes a network of relations between its synsets: synonymy, antonymy, hyponymy, hypernymy, meronymy, and several others are all semantic relations that WordNet provides <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>. WordNet has been used in many fields to help enrich ontologies in different domains, such as <ns0:ref type='bibr' target='#b86'>[82]</ns0:ref><ns0:ref type='bibr' target='#b87'>[83]</ns0:ref><ns0:ref type='bibr' target='#b88'>[84]</ns0:ref>.</ns0:p></ns0:div>
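To make these relations concrete, the following minimal sketch retrieves them with NLTK's WordNet interface. The API calls are NLTK's own, but the example seed term and the decision to pool lemmas across all senses are illustrative choices, not part of the method described in this paper.

```python
# Minimal sketch: retrieving WordNet relations for a seed term with NLTK.
# Assumes NLTK is installed and the 'wordnet' corpus has been downloaded.
from nltk.corpus import wordnet as wn

def wordnet_relations(seed_term):
    """Collect synonym, hyponym, and hypernym lemmas across all senses."""
    synonyms, hyponyms, hypernyms = set(), set(), set()
    for synset in wn.synsets(seed_term):
        synonyms.update(lemma.name() for lemma in synset.lemmas())
        for hypo in synset.hyponyms():
            hyponyms.update(lemma.name() for lemma in hypo.lemmas())
        for hyper in synset.hypernyms():
            hypernyms.update(lemma.name() for lemma in hyper.lemmas())
    synonyms.discard(seed_term)
    return synonyms, hyponyms, hypernyms

syns, hypos, hypers = wordnet_relations('headache')  # syns includes, e.g., 'cephalalgia'
```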
<ns0:div><ns0:head>Methodology</ns0:head><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref> illustrates the main steps of our algorithm. Our method starts with a corpus collected from a healthcare social media platform, which is used as the source of the new laymen terms. Using this corpus, the GloVe algorithm builds word embeddings. For every UMLS medical concept, there is a list of its associated laymen terms, from which we select a seed term for the concept. Using the GloVe vectors and a similarity metric, we identify the words most similar to this seed term and choose the top-ranked candidates as new laymen terms. The next sections explain the methodology steps in detail.</ns0:p></ns0:div>
<ns0:div><ns0:head>Healthcare Corpus</ns0:head><ns0:p>To find new laymen terms, we need text documents that can be used as a source of new laymen terms. Because of the specialized nature of medical terminology, we need domain-specific text related to the field of healthcare. MedHelp.org is a healthcare social media platform that provides a question-and-answer forum for people who share their healthcare issues. On this platform, lay language is used more than formal medical terminology. Instead of writing a short query on the internet that may not retrieve what a user is looking for, whole sentences and paragraphs can be posted on such media <ns0:ref type='bibr' target='#b89'>[85]</ns0:ref>, and other members of the community can provide answers. People might use sentences such as 'I can't fall asleep all night' to refer to the medical term 'insomnia' and 'head spinning a little' to refer to 'dizziness' <ns0:ref type='bibr' target='#b91'>[86]</ns0:ref>. Such social media can be an excellent source from which to extract new laymen terms.</ns0:p></ns0:div>
<ns0:div><ns0:head>Seed term list</ns0:head><ns0:p>Our task is to enrich formal medical concepts that already have associated laymen terms by identifying additional related layperson terms. These associated terms are used as seed terms for which the system finds synonyms, and these synonyms are then added to the corresponding medical concept. To do so, we need an existing ontology of medical concepts with associated laymen vocabulary. For our experiment, we used two sources of laymen terms: the OAC CHV <ns0:ref type='bibr' target='#b32'>[30]</ns0:ref> and the MedlinePlus consumer vocabulary <ns0:ref type='bibr' target='#b33'>[31]</ns0:ref>. The OAC CHV covers about 56,000 concepts of the UMLS, and MedlinePlus maps to about 2,000 UMLS concepts.</ns0:p></ns0:div>
<ns0:div><ns0:head>Synonym Identification Algorithms</ns0:head><ns0:p>This paper reports the results of applying several algorithms to automatically identify synonyms of the seed terms to add to existing laymen medical concepts. The algorithms we evaluated are described in the following sections.</ns0:p></ns0:div>
<ns0:div><ns0:head>Global Vectors for Word Representation (GloVe)</ns0:head><ns0:p>Our baseline approach uses an unmodified version of GloVe, trained only on our unmodified corpus, to find the new laymen terms. As reported in <ns0:ref type='bibr' target='#b72'>[68]</ns0:ref>, GloVe starts collecting word contexts using its global word-to-word co-occurrence matrix. This matrix is very large and very sparse and is built during a one-time pass over the whole corpus. Given a word to process, i.e., the pivot word, GloVe counts co-occurrences of words around the pivot word within a window of a given size. As the window shifts over the corpus, the pivot word and the context around it continually change until the matrix is complete. GloVe then builds a word vector for each word that summarizes the contexts in which that word was found. Because the co-occurrence matrix is very sparse, GloVe uses a log-bilinear regression model to reduce the dimensionality of the co-occurrence matrix. This model also optimizes the word vectors by tuning its weights and reducing errors iteratively until it finds the best word representations. By comparing the seed terms' word vectors with all other word vectors using the cosine similarity measure, highly similar words, i.e., potential new laymen terms, can be located. The unmodified GloVe algorithm is the baseline against which we compare the GloVe improvement methods.</ns0:p></ns0:div>
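As a rough illustration of GloVe's first step, the sketch below performs window-based co-occurrence counting with the 1/distance weighting that GloVe's reference implementation applies; the whitespace tokenization and window size are placeholder choices, and the subsequent log-bilinear fitting described above is not shown.

```python
# Minimal sketch of GloVe-style co-occurrence counting (illustrative only).
from collections import defaultdict

def cooccurrence_counts(tokens, window_size=10):
    """(pivot, context) -> co-occurrence weight, weighted by 1/distance."""
    counts = defaultdict(float)
    for i, pivot in enumerate(tokens):
        for j in range(max(0, i - window_size), i):  # scan left context only;
            distance = i - j                          # symmetry added explicitly
            counts[(pivot, tokens[j])] += 1.0 / distance
            counts[(tokens[j], pivot)] += 1.0 / distance
    return counts

counts = cooccurrence_counts("i had a headache all night".split(), window_size=3)
```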
<ns0:div><ns0:head>GloVe with WordNet</ns0:head><ns0:p>Word embedding algorithms usually use a very large corpus to build their word representations; e.g., the 6B-word Google News corpus is used to train the word2vec vectors <ns0:ref type='bibr' target='#b69'>[66,</ns0:ref><ns0:ref type='bibr' target='#b92'>87]</ns0:ref>. In the case of a narrow domain such as healthcare, it is hard to find or build an immense corpus, which increases the sparsity of the co-occurrence matrix and impacts the accuracy of the resulting word vectors. Thus, one of our goals is to investigate the ability of an external ontology to increase the accuracy of word embeddings for smaller corpora. In particular, we present methods to exploit a standard English ontology, WordNet, to enhance GloVe's accuracy on a healthcare domain corpus. WordNet provides a network of relations between its synsets, such as synonymy, antonymy, hyponymy, hypernymy, and meronymy. In our research, we investigate using the synonym, hyponym, and hypernym relations to augment our corpus prior to running GloVe (a code sketch of this expansion step is given at the end of this section). We only expand the seed terms in the corpus with their relational synsets. For each seed term, we locate the relational synsets of interest, e.g., hyponyms, and sort them by similarity to the seed term using the Resnik <ns0:ref type='bibr' target='#b93'>[88]</ns0:ref> similarity measurement. Then, we limit the list to at most the 10 most similar synsets. We split them evenly into two subsets of roughly equal total similarity using a round-robin algorithm. We then expand the corpus by adding the first subset of relational synset words immediately before each seed term occurrence and the second subset immediately after each seed term occurrence. One of the issues that arises when using WordNet is the polysemy of its synsets. Many words are ambiguous and thus map to synsets with different meanings, adding noise to the expanded context vectors. By limiting the number of words used from WordNet and incorporating words from the context around the seed term, the effect of noise on GloVe's model is decreased. In the future, we could explore using the context words around the seed terms to identify the best synsets to use for expansion. Figure <ns0:ref type='figure'>2</ns0:ref> shows the methodology of our system with the WordNet ontology. Expressing the WordNet method formally, let S = {s_1, s_2, s_3, ..., s_n} be a set of n seed terms. Let T = 'w_1 w_2 w_3 ... w_k' be a text of words in the training corpus. Let X = {x_1, x_2, x_3, ..., x_z} be the set of relational synset terms for the seed term s_i, where i = 1, 2, ..., n. These relational synsets are sorted according to their degree of similarity to s_i using the Resnik similarity measurement <ns0:ref type='bibr' target='#b93'>[88]</ns0:ref>, and the sorted X is divided into two sets, X_1 and X_2, one for each side of s_i. Now, let s_i = w_{j+2} in T, where j = 0, 1, 2, ..., k. Then, the new text T̂ after adding the relational synsets will look like this: T̂ = 'w_j w_{j+1} X_1 w_{j+2} X_2 ... w_k'. Further, consider the effect of T̂ on the GloVe co-occurrence vectors. Assume that s_i has the vector V_{s_i} and that X has the vector V_X. 
After expanding the training corpus with the relational synsets, the new vector V̂_{s_i} will equal:</ns0:p><ns0:formula xml:id='formula_0'>V̂_{s_i} = V_{s_i} + V_X <ns0:label>(1)</ns0:label></ns0:formula><ns0:p>The co-occurrence weights of relational synsets that are already in the corpus will be increased incrementally in the vector, while those that are new to the corpus will expand the vector, and their co-occurrence weights will be calculated according to their co-occurrence with the seed term. The following sections outline the WordNet approach above with the three types of relational synsets we used: synonyms, hyponyms, and hypernyms. GloVe WordNet Synonyms (GloVeSyno): Synonyms are words that share the same meaning. For example, the words auto, machine, and automobile are all synonyms of the word car. Having synonyms around a seed term adds more information about that seed term and helps build more accurate seed term vectors. When a seed term is found in the training corpus, WordNet provides a list of its synonyms. These synonyms are sorted according to their degree of similarity to the seed term. After that, the synonyms are divided into two lists, and each list goes to one side of the seed term. Here is an example that demonstrates this process. Let T = 'I had a headache' be a text in the training corpus. T has the seed term s = headache. The WordNet synonyms of this seed term are {concern, worry, vexation, cephalalgia}. Sorting this set according to degree of similarity yields the following set: {worry, cephalalgia, concern, vexation}. This set is divided into two sets, {worry, cephalalgia} and {concern, vexation}, which are added to the left and right of s in T. So, T̂ equals:</ns0:p><ns0:formula xml:id='formula_1'>T̂ = 'I had a worry cephalalgia headache concern vexation'</ns0:formula><ns0:p>We can see from T̂ that words that are new to the corpus vocabulary expand the vector V_s, and their weights are calculated according to their co-occurrence with the seed term, while the weights of words already in the vector, such as worry, are increased incrementally. GloVe WordNet Hyponyms (GloVeHypo): Hyponyms are words with more specific meanings; e.g., Jeep is a hyponym of car. The idea here is to find more specific names for a seed term and add them to the context of that seed term. To explain this method, we use the same example as in the previous section. The hyponyms of the seed term headache that WordNet provides are {dead_weight, burden, fardel, imposition, bugaboo, pill, business}. Sorting these hyponyms according to their degree of similarity to the seed term yields the set {dead_weight, burden, fardel, bugaboo, imposition, business, pill}. This list is divided into two sets, and each set goes to one of the seed term's sides. The rest of the process is the same as in the GloVeSyno method. GloVe WordNet Hypernyms (GloVeHyper): Hypernyms are the opposite of hyponyms: words with more general meanings; e.g., car is a hypernym of Jeep. The idea here is to surround a seed term with more general information that represents its ontology. Having this information leads to a more descriptive vector representing that seed term. An example of a seed term's hypernyms is the hypernyms of the seed term headache, which are {entity, stimulation, negative_stimulus, information, cognition, psychological_feature, abstraction}. We can see that these hypernyms are broader than the seed term headache. 
We use the same steps for this relational synset as in the GloVeSyno method, sorting, dividing, and distributing these hypernyms around the seed term in the corpus. After that, GloVe builds its co-occurrence matrix from the expanded corpus and builds the word vectors that are used to extract the terms most similar to the seed terms from the corpus.</ns0:p></ns0:div>
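A minimal sketch of this expansion step follows, assuming NLTK's WordNet interface and its Resnik similarity computed over the Brown information-content file. The function names (`ranked_synonyms`, `expand_corpus`) are ours, only the synonym relation is shown, and hyponyms or hypernyms would be handled analogously.

```python
# Minimal sketch of seed-term expansion with WordNet synonyms (illustrative).
# Assumes NLTK with the 'wordnet' and 'wordnet_ic' corpora downloaded.
from nltk.corpus import wordnet as wn, wordnet_ic

ic = wordnet_ic.ic('ic-brown.dat')  # information content for Resnik similarity

def ranked_synonyms(seed, max_terms=10):
    """Synonym lemmas of `seed`, ranked by Resnik similarity of their synsets."""
    seed_synsets = wn.synsets(seed, pos=wn.NOUN)
    scored = []
    for synset in seed_synsets:
        try:
            score = seed_synsets[0].res_similarity(synset, ic)
        except Exception:  # Resnik is undefined for some synset pairs
            score = 0.0
        scored.extend((score, l.name()) for l in synset.lemmas() if l.name() != seed)
    scored.sort(reverse=True)
    return [name for _, name in scored[:max_terms]]

def expand_corpus(tokens, seeds):
    """Insert half of each seed's ranked synonyms before it and half after it."""
    expanded = []
    for token in tokens:
        if token in seeds:
            syns = ranked_synonyms(token)
            left, right = syns[0::2], syns[1::2]  # round-robin split
            expanded.extend(left + [token] + right)
        else:
            expanded.append(token)
    return expanded
```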
<ns0:div><ns0:head>Similarity Measurement</ns0:head><ns0:p>We use cosine similarity between word vectors to find the terms most similar to a seed term. This metric is widely used for vector comparisons in textual tasks such as <ns0:ref type='bibr' target='#b87'>[83]</ns0:ref><ns0:ref type='bibr' target='#b88'>[84]</ns0:ref><ns0:ref type='bibr' target='#b89'>[85]</ns0:ref><ns0:ref type='bibr' target='#b91'>[86]</ns0:ref>. Because it focuses on the angle between vectors rather than the distance between their endpoints, cosine similarity can handle vector divergence in large text documents better than Euclidean distance <ns0:ref type='bibr' target='#b98'>[93]</ns0:ref>.</ns0:p><ns0:p>As such, it is one of the most commonly used similarity-based metrics <ns0:ref type='bibr' target='#b93'>[88]</ns0:ref><ns0:ref type='bibr' target='#b94'>[89]</ns0:ref><ns0:ref type='bibr' target='#b95'>[90]</ns0:ref>. Cosine similarity (Equation <ns0:ref type='formula'>2</ns0:ref>) produces a score between 0 and 1, and the higher the score between two vectors, the more similar they are <ns0:ref type='bibr' target='#b104'>[97]</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_2'>cos_sim(v_1, v_2) = (v_1 · v_2) / (|v_1| |v_2|) <ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where v_1 is the vector of a seed term in the seed term list, and v_2 is the vector of a word in the corpus built by the GloVe model. We rank the corpus words by this score and consider the most similar terms candidate terms for inclusion. The top n candidate terms are the new laymen terms that we add to the UMLS concept.</ns0:p></ns0:div>
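A minimal sketch of this candidate retrieval, assuming a dictionary that maps each corpus word to its GloVe vector (the names `vectors` and `top_candidates` are illustrative):

```python
# Minimal sketch of candidate retrieval by cosine similarity (Equation 2).
import numpy as np

def cos_sim(v1, v2):
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

def top_candidates(seed, vectors, n=10):
    """Return the n corpus words whose vectors are most similar to the seed's."""
    seed_vec = vectors[seed]
    scored = [(cos_sim(seed_vec, vec), word)
              for word, vec in vectors.items() if word != seed]
    return [word for _, word in sorted(scored, reverse=True)[:n]]
```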
<ns0:div><ns0:head>Evaluation</ns0:head><ns0:p>Corpus: MedHelp.org has many communities that discuss different healthcare issues, such as Diabetes Mellitus, Heart Disease, Ear and Eye Care, and many others. To select the communities to include in our dataset, we did an informal experiment to find the occurrences of laymen terms from the OAC CHV vocabulary on MedHelp.org. We found that the highest density of these OAC CHV terms occurs in communities such as Pregnancy, Women's Health, Neurology, Addiction, Hepatitis C, Heart Disease, Gastroenterology, Dermatology, and Sexually Transmitted Diseases and Infections (STDs/STIs). We thus chose these nine communities for our testbed and downloaded all the user-posted questions and their answers from MedHelp.org from WHEN to April 20, 2019. The resulting corpus is roughly 1.3 GB and contains approximately 135,000,000 tokens. Table <ns0:ref type='table'>2</ns0:ref> shows the downloaded communities with their statistics. We removed all stopwords, numbers, and punctuation from this corpus. We also removed corpus-specific stopwords such as test, doctor, symptom, and physician; within our domain-specific corpus, these ubiquitous words have little information content. Finally, we stemmed the text using the Snowball stemmer <ns0:ref type='bibr' target='#b105'>[98]</ns0:ref> and removed any word less than 3 characters long (a short sketch of this preprocessing is given below). The final corpus size was ~900 MB. This corpus is available for download from <ns0:ref type='bibr' target='#b106'>[99]</ns0:ref>.</ns0:p><ns0:p>Seed terms: We built the seed term list from the OAC CHV and MedlinePlus vocabularies, choosing seed terms with a unigram form, such as flu, fever, fatigue, and swelling. Two reasons led us to choose only unigrams for our work: first, GloVe embeddings handle only single-word vectors, and second, the existing laymen vocabularies are rich with such seed terms. We also chose the professional medical concepts that had a unigram form and then picked their unigram associated laymen terms. In many cases, a medical concept in these two vocabularies has associated laymen terms with the same name as the concept except for different morphological forms, such as the plural 's', uppercase/lowercase letters, punctuation, or numbers. We treated these cases and removed any common medical words. After that, we stemmed the terms and listed only the unique terms. For example, the medical concept Tiredness has the laymen terms fatigue, fatigues, fatigued, and fatiguing; after stemming, only the term 'fatigu' was kept. To focus on terms for which sufficient contextual data was available, we kept only those laymen terms that occur in the corpus more than 100 times. To validate our system, we need at least two terms for each professional medical concept: one to be used as the seed layman term and one to be used as the target term for evaluation. Thus, we kept only those medical concepts that have at least two related terms. From the two vocabularies, we were able to create an OAC CHV ground truth dataset of 944 medical concepts with 2,103 seed terms and a MedlinePlus ground truth dataset of 101 medical concepts with 227 seed terms. Table <ns0:ref type='table'>3</ns0:ref> shows examples of UMLS medical concepts and their seed terms from the MedlinePlus dataset. When we run our experiments, we select one seed term at random from each of the 944 concepts in the ground truth dataset for testing. 
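As an aside, the corpus preprocessing described above can be sketched as follows. The stopword list and Snowball stemmer are standard NLTK components, while the regular-expression tokenizer and the short list of domain stopwords are illustrative stand-ins for the full cleaning pipeline.

```python
# Minimal sketch of the corpus preprocessing (illustrative).
import re
from nltk.corpus import stopwords
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer('english')
stop = set(stopwords.words('english')) | {'test', 'doctor', 'symptom', 'physician'}

def preprocess(text):
    tokens = re.findall(r'[a-z]+', text.lower())   # drops numbers and punctuation
    stems = [stemmer.stem(t) for t in tokens if t not in stop]
    return [s for s in stems if len(s) >= 3]       # drops words under 3 characters
```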
We evaluate each algorithm's ability to identify that seed term's ground truth synonyms using the metrics described below.</ns0:p><ns0:p>The OAC CHV dataset is nine times bigger than the MedlinePlus dataset (see Figure <ns0:ref type='figure'>3a</ns0:ref>); the OAC CHV vocabulary covers 56,000 of the UMLS concepts, whereas MedlinePlus covers only 2,112 UMLS concepts. Although it is smaller, MedlinePlus represents the future of laymen terms because the NLM updates this resource annually. In contrast, the last update to the OAC CHV was in 2011. Figure <ns0:ref type='figure'>3b</ns0:ref> shows that 37% of the 101 concepts in MedlinePlus also appear in the OAC CHV dataset and share the same concepts and laymen terms. This indicates that the OAC CHV is still a good source of laymen terms. Baselines and metrics: We consider the basic GloVe results the baseline for comparison with the WordNet expansion algorithms. First, we tune the baseline to its best setting. We then compare its results with those obtained using the same settings on our WordNet-expanded corpora. We evaluate our approach using precision (P), recall (R), and F-score (F), which is the harmonic mean of the previous two <ns0:ref type='bibr' target='#b107'>[100]</ns0:ref>. We also include the number of concepts (NumCon) for which the system could find one or more synonyms. Moreover, we include the Mean Reciprocal Rank (MRR) <ns0:ref type='bibr' target='#b108'>[101]</ns0:ref>, which measures the rank of the first true candidate term in the candidate list. It has a value between 0 and 1, and the closer the MRR is to 1, the closer the first true candidate term is to the top of the candidate list.</ns0:p><ns0:p>Based on a set of medical concepts for which we have a seed term and at least one synonym in the ground truth dataset, we can measure the precision, recall, and F-score metrics according to two criteria: (1) the number of concepts for which the system was able to find at least one synonym; and (2) the total number of synonyms for seed terms the system was able to find across all concepts. We call the metrics used to measure these two criteria the macro and micro average metrics, respectively. The macro average measures the number of concepts for which the algorithm found a match in the ground truth dataset, while the micro average measures the number of new terms found. The micro precision and recall are computed according to these equations:</ns0:p><ns0:formula xml:id='formula_4'>P_micro = # of true synonyms in the candidate lists / total # of terms in the candidate lists <ns0:label>(3)</ns0:label></ns0:formula><ns0:formula xml:id='formula_5'>R_micro = # of true synonyms in the candidate lists / total # of synonyms in the ground truth dataset <ns0:label>(4)</ns0:label></ns0:formula><ns0:p>We illustrate these measurements in the following example. Suppose we have a ground truth dataset of 25 concepts, and every concept has four synonym terms. For every concept, a random synonym term is selected to be the seed term. The remaining 75 synonyms will be used for evaluation. Suppose the algorithm retrieves five candidate terms for each seed term and is able to generate results for 20 of the seed terms, creating 20 candidate term lists. That makes 100 candidate terms in total. Assume that only 15 of the 20 candidate lists contain a true synonym, and each of those 15 lists includes two true synonyms. Thus, the algorithm extracted 30 true laymen terms. 
Given all this information, P_micro = 30/100, R_micro = 30/75, P_macro = 15/20, and R_macro = 15/25.</ns0:p></ns0:div>
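A minimal sketch of these micro and macro computations, using illustrative data structures (the function name and argument layout are ours); running it on the worked example above reproduces P_micro = 30/100, R_micro = 30/75, P_macro = 15/20, and R_macro = 15/25.

```python
# Minimal sketch of the micro/macro metrics of Equations 3-4 (illustrative).
def evaluate(candidates, truth):
    """candidates: concept -> list of candidate terms (one list per seed term);
    truth: concept -> set of ground-truth synonyms, seed term excluded."""
    true_found = sum(len(set(c) & truth[k]) for k, c in candidates.items())
    total_candidates = sum(len(c) for c in candidates.values())
    total_truth = sum(len(s) for s in truth.values())
    hit_concepts = sum(1 for k, c in candidates.items() if set(c) & truth[k])
    p_micro = true_found / total_candidates
    r_micro = true_found / total_truth
    p_macro = hit_concepts / len(candidates)
    r_macro = hit_concepts / len(truth)
    return p_micro, r_micro, p_macro, r_macro
```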
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Experiment 1: Tuning GloVe to the Best Setting</ns0:head><ns0:p>To tune the GloVe algorithm to its best setting, we varied GloVe's parameters on an unmodified corpus, using the larger of our two datasets, the OAC CHV, for testing. The GloVe algorithm has many hyperparameters, but the vector size and the window size have the biggest effect on the results. We evaluated GloVe using the 944 concepts in this dataset with different vector sizes (100, 200, 300, 400), varying the window size (10, 20, 30, 40) for each vector size. We set the candidate list size to n = 10. Figure <ns0:ref type='figure'>4</ns0:ref> shows the macro F-score results of the GloVe algorithm for these different vector and window sizes. In general, the F-score results declined with any window size greater than 30. Table <ns0:ref type='table'>4</ns0:ref> reports the micro precision for GloVe over the same parameter settings. We can see that the micro precision is very low due to the size of the candidate lists created. In particular, we are testing with 944 concept seed terms, and the size of the candidate list is set to 10, so we generate 944 x 10 = 9,440 candidate terms. However, there are only 2,103 true synonyms, so the micro averages are guaranteed to be quite low. To compensate, we need to determine a good size for the candidate list that balances recall and precision. This is discussed further in Section 5.4. The highest F-score was reported with a vector of size 400 and a window of size 30. Thus, we used these settings for all following experiments.</ns0:p></ns0:div>
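The sweep itself amounts to a simple grid search, sketched below. Here `train_glove` and `macro_f_score` are hypothetical wrappers standing in for GloVe training and the evaluation described above; the actual experiments used the GloVe tool directly.

```python
# Illustrative grid search over GloVe's two main hyperparameters.
# `train_glove` and `macro_f_score` are hypothetical stand-ins, not real APIs.
best = None
for vector_size in (100, 200, 300, 400):
    for window_size in (10, 20, 30, 40):
        vectors = train_glove(corpus, vector_size=vector_size,
                              window_size=window_size)
        score = macro_f_score(vectors, ground_truth, n_candidates=10)
        if best is None or score > best[0]:
            best = (score, vector_size, window_size)
# Best setting found in our experiments: vector size 400, window size 30.
```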
<ns0:div><ns0:head>Experiment 2: GloVe with WordNet</ns0:head><ns0:p>Using the best GloVe settings reported in the previous experiment, we next evaluate the GloVeSyno, GloVeHypo, and GloVeHyper algorithms to determine whether or not they can improve on basic GloVe's ability to find laymen terms. After processing the corpus, we expand it with synonyms, hyponyms, and hypernyms from WordNet, respectively. These expanded corpora are then input to the GloVe algorithm using 400 for the vector size and 30 for the window size. Table <ns0:ref type='table'>5</ns0:ref> shows a comparison between the results of these WordNet algorithms and our baseline GloVe for the OAC CHV and MedlinePlus datasets. The evaluation was done using a candidate list of size n = 10. We report here the macro accuracy of the system for the three algorithms, which is based on the number of concepts for which a ground truth result was found. We can see from Table <ns0:ref type='table'>5</ns0:ref> that GloVeSyno outperformed the other algorithms. It was able to add synonyms to 57% (546) of the medical concepts listed in the OAC CHV dataset and more than 62% (63) of the concepts in the MedlinePlus dataset. Table <ns0:ref type='table'>6</ns0:ref> presents the algorithms' performance averaged over the two datasets. On average, the GloVeSyno algorithm produced an F-score relative improvement of 25% compared to basic GloVe. Moreover, GloVeSyno reported the highest MRR of all the algorithms, which shows that the first true candidate term fell approximately in the 2nd position of the candidate list. Furthermore, GloVeSyno showed high statistical significance over the two ground truth datasets, with p < .001. The GloVeHypo and GloVeHyper results were not good compared to the other algorithms. The reason is that the hyponyms provide very specific layman term synsets. For example, the hyponyms of the layman term edema are angioedema, atrophedema, giant hives, periodic edema, Quincke's edema, papilledema, and anasarca. Such hyponyms are specific names for the layman term edema, and they might not be listed in ground truth datasets. We believe that the GloVeHypo algorithm's results are promising, but a more generalized and larger ground truth dataset is required to prove that. On the other hand, the GloVeHyper algorithm did not perform well compared to the basic GloVe algorithm, although it did better than the GloVeHypo algorithm. The reason this algorithm did not get a good result is the degree of abstraction that the hypernym relations provide. For example, the hypernym contagious_disease represents many laymen terms, such as flu, rubeola, and scarlatina. Having such a hypernym in the context of a layman term did not lead to good results; contagious_disease is a very general relation that can represent different kinds of diseases. To illustrate the effectiveness of the GloVeSyno algorithm, we show a seed term and the candidate synonyms for a selection of concepts in Table <ns0:ref type='table'>7</ns0:ref>. The candidate synonyms that appear in the ground truth list of synonyms are shown in bold. Although only 14 true synonyms from 7 concepts were found, we note that many of the other candidate synonyms seem to be good matches even though they do not appear in the official vocabulary. These results are promising and could be used to enrich medical concepts with missing laymen terms. They could also be used by healthcare retrieval systems to direct laypersons to the correct healthcare topic. On average, the GloVeSyno algorithm outperformed all the others, producing an F-score relative improvement of 25% compared to basic GloVe. 
The results were statistically significant (p<.001). Additionally, this algorithm found many potentially relevant laymen terms that were not already in the ground truth. We further examined the effect of the WordNet terms on the set of candidate terms extracted from the corpus. With GloVeSyno, 60% of the true positives appeared in the WordNet synsets used for corpus expansion versus 41% with unmodified GloVe. Thus, even after expanding with WordNet, 40% of the true positives appeared in the corpus and not in the nearest WordNet synsets, indicating that the external lexicon and GloVe's word vectors find complementary sets of synonyms.</ns0:p></ns0:div>
<ns0:div><ns0:head>Experiment 3: Improving the GloVeSyno Micro Accuracy</ns0:head><ns0:p>From the previous experiment, we conclude that the GloVeSyno algorithm was the most effective. We next explore it in more detail to see if we can improve its accuracy by selecting an appropriate number of candidate synonyms from the candidate lists. We report evaluation results for both ground truth datasets, OAC CHV and MedlinePlus. We varied the number of synonyms selected from the candidate lists from n=1 to n=100 and measured the micro recall, precision, and F-score. Figure <ns0:ref type='figure'>5</ns0:ref> shows the F-score results and the number of concepts for which at least one true synonym was extracted. This figure reports the results of the GloVeSyno algorithm over the OAC CHV dataset. The F-score reaches 19.06% at n=3, with 365 out of 944 concepts enriched. After that, it starts to decline quickly: at n=20 the F-score is only 6.75%, and it further declines to 1.7% at n=100. We note that the number of concepts affected rose quickly until n=7 but then grew more slowly. The best results are with n=2, with an F-score of 19.11%. At this setting, 287 of the 944 concepts are enriched, with a micro precision of 15.43% and recall of 25.11%. The evaluation results over the MedlinePlus dataset look much like the results reported for the OAC CHV dataset (see Figure <ns0:ref type='figure'>6</ns0:ref>). The F-score was at its highest at n=2, with an F-score of 23.12% and 33 out of 101 concepts enriched. The F-score decreased quickly after n=30 and was at its lowest at n=100, with an F-score of 1.81%. The number of enriched concepts grew quickly until n=6 and stabilized after n=9 at between 64 and 74 enriched concepts. Over the two datasets, the best results are with n=2. Figure <ns0:ref type='figure'>7</ns0:ref> shows the F-score over the precision and recall for the two datasets. Despite the difference in the number of concepts between the two ground truth datasets, the results show that the F-score is best at n=2. The figure shows that the behavior of GloVeSyno over the two datasets is almost the same across different candidate list settings. Based on these results, we conclude that the best performance for automatically enriching a laymen vocabulary with terms suggested by GloVeSyno is achieved by adding the top two results.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusion and Future Work</ns0:head><ns0:p>This paper presents an automatic approach to enriching consumer health vocabularies using GloVe word embeddings and an auxiliary lexical source, WordNet. Our approach was evaluated using healthcare text downloaded from MedHelp.org, a healthcare social media platform, and two standard laymen vocabularies, the OAC CHV and MedlinePlus. We used the WordNet ontology to expand the healthcare corpus by including synonyms, hyponyms, and hypernyms for each layman term occurrence in the corpus. Given a seed term selected from a concept in the ontology, we measured our algorithms' ability to automatically extract synonyms for those terms that appeared in the ground truth concept. We found that GloVeSyno and GloVeHypo both outperformed GloVe on the unmodified corpus; however, including hypernyms actually degraded performance. GloVeSyno was the best performing algorithm, with a relative improvement of 25% in F-score versus the basic GloVe algorithm. Furthermore, GloVeSyno's improvement was statistically significant across the two ground truth datasets, with p < .001. The results of the system were in general promising, and the approach can be applied not only to enrich laymen vocabularies for medicine but to enrich ontologies for any domain, given an appropriate corpus for that domain. Our approach is applicable to narrow domains that may not have the huge training corpora typically used with word embedding approaches. In essence, by incorporating an external source of linguistic information, WordNet, and expanding the training corpus, we are getting more out of our training corpus. For future work, we plan to use our expanded corpus to train and evaluate state-of-the-art word embedding algorithms, such as BERT <ns0:ref type='bibr' target='#b109'>[102]</ns0:ref>, GPT-2 <ns0:ref type='bibr' target='#b110'>[103]</ns0:ref>, CTRL <ns0:ref type='bibr' target='#b111'>[104]</ns0:ref>, and GPT-3 <ns0:ref type='bibr' target='#b112'>[105]</ns0:ref>. Furthermore, we plan to use our collected ground truth datasets to evaluate the recent work done by Huang and his team, called ClinicalBERT <ns0:ref type='bibr'>[106]</ns0:ref>. We also plan further improvements to the GloVeSyno, GloVeHypo, and GloVeHyper algorithms, testing them with the UMLS semantics to explore more laymen term relationships. In our experiments, we implemented our algorithms on only unigram seed terms; we plan to explore applying these algorithms to word grams of different lengths. Moreover, in this work we used the MedHelp.org corpus to find new laymen terms. Even though this corpus was rich with laymen information, our plan is to use larger healthcare datasets, and perhaps multilanguage datasets, to find laymen terms in different languages. In addition, we are currently exploring an iterative feedback approach to expand the corpus with words found by GloVe itself rather than those in an external linguistic resource. We are also working on another project that tackles the problem of adding those laymen terms that laypeople use but that are not covered in the laymen vocabularies.</ns0:p></ns0:div>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,178.87,525.00,136.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,178.87,525.00,133.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,199.12,525.00,240.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,199.12,525.00,303.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,199.12,525.00,330.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,199.12,525.00,201.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>The vector of the seed term s = headache before expansion (V_s) and after expanding the training corpus with the WordNet synonyms (V̂_s). Words that are new to the corpus expand the vector, with weights calculated from their co-occurrence with the seed term, while words already in the vector, such as worry, have their weights increased incrementally.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>word</ns0:cell><ns0:cell>dizzy</ns0:cell><ns0:cell>pain</ns0:cell><ns0:cell>I</ns0:cell><ns0:cell>had</ns0:cell><ns0:cell>a</ns0:cell><ns0:cell>for</ns0:cell><ns0:cell>worry</ns0:cell><ns0:cell>please</ns0:cell><ns0:cell>sleep</ns0:cell></ns0:row><ns0:row><ns0:cell>V_s</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>50</ns0:cell></ns0:row><ns0:row><ns0:cell>word</ns0:cell><ns0:cell>cephalalgia</ns0:cell><ns0:cell>dizzy</ns0:cell><ns0:cell>pain</ns0:cell><ns0:cell>I</ns0:cell><ns0:cell>had</ns0:cell><ns0:cell>a</ns0:cell><ns0:cell>for</ns0:cell><ns0:cell>concern</ns0:cell><ns0:cell>worry</ns0:cell><ns0:cell>please</ns0:cell><ns0:cell>vexation</ns0:cell><ns0:cell>sleep</ns0:cell></ns0:row><ns0:row><ns0:cell>V̂_s</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>50</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "
Dear Editors,
We would like to thank all reviewers for their constructive comments regarding changes that helped us improve the paper.
We have added recent literature, shown the significance of our work, and corrected all grammar errors. We believe we have made all the recommended changes and that our paper is ready to be published in PeerJ.
Mohammed Ibrahim, Ph.D.
Department of Computer Science and Computer Engineering
University of Arkansas, Fayetteville, AR
E-mail: msibrahi@uark.edu
Phone: 479-276-3531
*Note: Our replies to all reviews were reported according to the line numbers in the tracked version of our paper and not the clean one.
Reviewer 1 (Anonymous)
1. The significance of the work should be highlighted at the beginning.
a. We have showed that at the end of the introduction line 100.
2. Justification about the use of similarity metric.
a. We have justified this by citing several works that have used this metric for their textual tasks (see Line 427: this metric is …). We also supported it with a statement showing that cosine similarity performs better with large texts than the Euclidean distance metric (see Line 428).
3. Comparisons of the similarities and differences in different algorithms used should be provided, and link this with the results.
a. The sections “GloVe WordNet Synonyms (GloVeSyno) (Line 373)”, “Glove WordNet Hyponyms (Line 403)”, and “GloVe WordNet Hypernyms (Line 413)” show the difference between each approach. Also, the results section discusses the performance of every approach and how the GloVeSyno did better than all other approaches (See Lines 545-600)
4. What are the research and practice implications? This should be included in the discussion section.
a. That was explained in section “Experiment 2: GloVe with WordNet” (See Lines 582-588) and in the Conclusion section (See Lines 645-650).
5. Most references are out-of-date, that is, published five years ago. More recent works in this field (that is, published in recent five years, particularly 2020-2021) should be added. Also, some references have incomplete compilation, e.g., missing volume/issue/page numbering.
a. We have added many recent works (reference [25] Line 83, reference [59] Line 223, reference [60] Line 225, references [38] and reference [39] Line 173)
6. Double-check both definition and usage of acronyms: every acronym should be defined only once (at the first occurrence) and always used afterwards (except for the abstract). There are mistakes in this issue. For example, NLP.
a. We have fixed that over all sections in our paper.
7. The manuscript presents some bad English constructions and grammar mistakes: a professional language editing and careful proofread is strongly needed to sufficiently improve the paper's presentation quality.
a. We have assigned our paper to an English editor who helped having it presented in the best way.
Reviewer 2 (Anonymous)
Basic reporting
The article describes an automatic method to enrich consumer health vocabularies such as OAC CHV and MedLine Plus, through GloVe word embeddings and wordNet. It is an extension of previous work from [1], including additional datasets for evaluation, but it should be detailed.
The article uses an unambiguous text, but there are a few typos:
• Line 95: vice versa, These
• We have fixed that.
• Line 122: refers instead of refer.
• We have fixed that.
• Some double spaces between lines 322 and 335.
• We have fixed that.
• Use of the CHV acronym on the abstract but not on the text.
• We have fixed that.
The article structure is also great, but some of the pictures seem not to be very relevant (3a and 3b)
• This picture is to show that the OAC CHV is still a good source to be used in research.
[1] Ibrahim, M., Gauch, S., Salman, O., & Alqahatani, M. (2020). Enriching consumer health vocabulary using enhanced GloVe word embedding. arXiv preprint arXiv:2004.00150.
Experimental design
The article perfectly fits the scope of the journal. It contains a very complete state of the art with a very rigorous and technical investigation following relevant technologies.
The article is an extension of previous work including additional datasets for evaluation, but it must be detailed to clarify the new points.
• We have added clarification of the new points in Line 232.
Comments for the Author
The article presents an automated method for enriching consumer health vocabularies through GloVe and WordNet word embeddings. It is a great investigation perfectly documented in this article, as it is shown on the quality of the state of the art and the good results obtained. I should only note a few small comments for publication, as well as suggest the possibility of using the UMLS semantic types to improve the semantics of the extensions.
• We have planned that in our future work (see Line 656)
" | Here is a paper. Please give your review comments after reading it. |
202 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>OncoLnc is a tool for interactively exploring survival correlations, and for downloading clinical data coupled to expression data for mRNAs, miRNAs, or lncRNAs. OncoLnc contains survival data for 8,647 patients from 21 cancer studies performed by The Cancer Genome Atlas (TCGA), along with RNA-SEQ expression for mRNAs and miRNAs from TCGA, and lncRNA expression from MiTranscriptome beta. Storing this data gives users the ability to separate patients by gene expression, and then create publication-quality Kaplan-Meier plots or download the data for further analyses. OncoLnc also stores precomputed survival analyses, allowing users to quickly explore survival correlations for up to 21 cancers in a single click. This resource allows researchers studying a specific gene to quickly investigate if it may have a role in cancer, and the supporting data allows researchers studying a specific cancer to identify the mRNAs, miRNAs, and lncRNAs most correlated with survival, and provides researchers looking for a novel lncRNA involved in cancer with lists of potential candidates. OncoLnc is available at http://www.oncolnc.org.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>The Cancer Genome Atlas (TCGA) provides researchers with unprecedented amounts of molecular data along with clinical and histopathological information (http://cancergenome.nih.gov/). This data set has not only led to increases in our understanding of cancer <ns0:ref type='bibr' target='#b5'>(Ciriello et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b8'>Hoadley et al. 2014</ns0:ref>), but its scale has also allowed for previously impossible projects such as a comprehensive cataloguing of the human transcriptome <ns0:ref type='bibr' target='#b7'>(Han et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b9'>Iyer et al. 2015)</ns0:ref>. However, the size and complexity of this unique data set make it difficult for cancer researchers to access and fully utilize.</ns0:p><ns0:p>Multiple resources exist to help researchers download or explore TCGA data <ns0:ref type='bibr' target='#b4'>(Cerami et al. 2012;</ns0:ref><ns0:ref type='bibr' target='#b6'>Gyorffy et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b11'>Koch et al. 2015)</ns0:ref>. However, a tool does not exist that focuses on survival analyses with TCGA data. Although cBioPortal (http://www.cbioportal.org/) does allow for simple Kaplan-Meier analyses for a range of TCGA cancer studies, it does not allow for more rigorous analyses with Cox regression. More importantly, the p-value obtained from a single analysis can be misleading, and it may be more informative to consider the relative strength of the correlation <ns0:ref type='bibr' target='#b0'>(Anaya et al. 2016)</ns0:ref>.</ns0:p><ns0:p>Furthermore, current tools for survival analyses only allow users to view the results from one cancer at a time. This makes it difficult for a researcher to perform a comprehensive survival study with their gene of interest, and makes it possible that researchers will miss an interesting correlation. Current tools also do not allow for generation of Kaplan-Meier plots with user-defined upper and lower groups, and do not allow for simple download of the survival data coupled to the expression data.</ns0:p><ns0:p>It is also important to note that the gene names used in TCGA data are outdated. While many online portals use current mRNA definitions, there is not an online data portal that uses modern miRNA definitions. This is because the TCGA Tier 3 read counts for the 5p and 3p arms are aggregated into one count for the stem-loop, which makes it difficult for researchers who want to obtain information for a specific mature miRNA.</ns0:p><ns0:p>In addition, although the role of long noncoding RNAs (lncRNAs) in cancer is beginning to be appreciated <ns0:ref type='bibr' target='#b18'>(Yarmishyn & Kurochkin 2015)</ns0:ref>, the Tier 3 TCGA mRNA files contain expression data for only the limited number of lncRNAs that were known at the initiation of the TCGA project. As a result, tools for exploring TCGA data will not contain many lncRNAs currently being studied. Although a platform has already been developed to fill this gap <ns0:ref type='bibr' target='#b13'>(Li et al. 2015)</ns0:ref>, to help the scientific community study lncRNAs OncoLnc incorporates analyses and data for MiTranscriptome beta lncRNAs, http://mitranscriptome.org/, in addition to Tier 3 TCGA mRNAs and miRNAs.</ns0:p></ns0:div><ns0:div><ns0:head>Materials and Methods</ns0:head><ns0:p>All the clinical data was downloaded from https://tcga-data.nci.nih.gov/tcga/ on January 5th and 6th, 2016. Cancers were chosen for the study based on the quality and amount of overall survival data.
It is possible for a cancer such as PRAD to have a large amount of survival information, but a low number of events (deaths), making a survival analysis difficult. The Tier 3 mRNA and miRNA data was also downloaded from https://tcga-data.nci.nih.gov/tcga/, while the MiTranscriptome data was downloaded from http://mitranscriptome.org/. Definitions for miRNAs were downloaded from http://www.mirbase.org/.</ns0:p><ns0:p>For each cancer, only patients who contained all the necessary clinical information were included in the analysis. In addition, patients had to have a follow up time or time to death greater than 0 days. For each cancer, only genes which met an expression cutoff were included in the analysis (see below for more details). In general only primary solid tumors were included in analyses, and this is implemented by only using samples with '01' in the patient barcode. The exceptions are LAML, which is a blood derived cancer, and therefore has the designation '03', and SKCM, which contains primarily metastatic tumors, and therefore designations '01' and '06' were allowed for SKCM analyses. It is possible for a patient to have more than one sequencing file, and in these cases the counts were averaged.</ns0:p></ns0:div>
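To make these selection rules concrete, a hypothetical pandas sketch follows; the file layout and column names (clinical.csv, expression.csv, barcode, patient_id) are invented for illustration and are not the actual TCGA file formats.

```python
import pandas as pd

# Hypothetical sketch of the sample-selection rules above; file and column
# names are invented, not the actual TCGA file formats.
clinical = pd.read_csv("clinical.csv")
expr = pd.read_csv("expression.csv")  # one row per sequencing file

# Require complete clinical information and a follow-up/death time > 0 days.
clinical = clinical.dropna(subset=["age", "sex", "grade", "days"])
clinical = clinical[clinical["days"] > 0]

# Keep primary solid tumors: sample-type code '01' in the TCGA barcode
# (fourth dash-separated field, first two characters).
expr["sample_type"] = expr["barcode"].str.split("-").str[3].str[:2]
expr = expr[expr["sample_type"] == "01"]

# Average the counts when a patient has more than one sequencing file.
expr = expr.groupby("patient_id").mean(numeric_only=True)
```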
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Overview of OncoLnc</ns0:head><ns0:p>OncoLnc stores over 400,000 analyses, which includes Cox regression results as well as mean and median expression of each gene. For the Cox regression results, in addition to p-values, OncoLnc stores the rank of the correlation. Different cancers contain very different p-value distributions <ns0:ref type='bibr' target='#b0'>(Anaya et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b17'>Yang et al. 2014)</ns0:ref>, and it is unclear what causes this difference. As a result, using one p-value cutoff across cancers is not possible, and the rank of the correlation is a simple way to measure the relative strength of the correlation. The rank is calculated per cancer, per data type. Tables 1-3 contain information about how many genes there are for each cancer and each data type.</ns0:p><ns0:p>The mRNA and miRNA identifiers used by TCGA are out of date, and the identifiers in OncoLnc have been manually curated using NCBI Gene: http://www.ncbi.nlm.nih.gov/gene, and recent miRBase definitions: http://www.mirbase.org/. Over 2,000 mRNA symbols were updated, and these are listed in Table <ns0:ref type='table'>S4</ns0:ref>. Genes which have had their Entrez Gene ID removed from NCBI Gene, or could not be confidently mapped to a single identifier, are not included in OncoLnc but are still included in Table <ns0:ref type='table'>S1</ns0:ref>.</ns0:p><ns0:p>Using OncoLnc is very straightforward. The preferred method of using OncoLnc is to submit a gene at the home page, and this submission is not case sensitive. If a user submits a gene not in the database they will be notified and provided with links to all the possible gene names and IDs. Submission of a valid gene identifier will return correlation results for up to 21 cancers for mRNAs and miRNAs, or 18 cancers for MiTranscriptome beta lncRNAs (Fig. <ns0:ref type='figure'>1</ns0:ref>). If a gene does not meet the expression cutoff for the analysis, it will not be present in the database, and therefore a user may receive less than the maximum possible number of results. For users using OncoLnc on smaller devices, it is possible to perform a single cancer search. The link for this search is on the home page, and the user must submit the TCGA cancer abbreviation along with the gene of interest.</ns0:p><ns0:p>At the results page is a link to perform a Kaplan-Meier analysis for each cancer (Fig. <ns0:ref type='figure'>1</ns0:ref>). The user will be asked how they would like to divide the patients. Patients can be split into any nonoverlapping upper and lower slices, for example upper 25 percent and lower 25 percent. Upon submission users will be presented with a PNG Kaplan-Meier plot, a logrank p-value for the analysis, and text boxes with the data that was plotted (Fig. <ns0:ref type='figure'>2</ns0:ref>). If a user simply wants all the data for that cancer and that gene, the user can submit 100 for 'Lower Percentile', and 0 for 'Upper Percentile'.</ns0:p><ns0:p>Users then have the option to either go to a PDF of the Kaplan-Meier plot, or download a CSV file of the data plotted. In both cases the file name will be the cancer, gene ID, lower percentile, upper percentile, separated by underscores. Gene ID had to be used instead of gene name because there are multiple HUGO gene symbol conflicts between TCGA Tier 3 mRNAs and MiTranscriptome beta, as well as between TCGA mRNA HUGO gene symbols and updated mRNA HUGO gene symbols. 
In the case that a user performs a search for a name with a conflict, OncoLnc presents a warning message and instructs the user how to proceed.</ns0:p></ns0:div>
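OncoLnc itself performs its statistics through rpy2 (see the rebuttal letter that follows this paper), but the non-overlapping percentile split and the logrank test can be sketched in Python with the lifelines package; the data and function name below are invented for illustration.

```python
import numpy as np
from lifelines.statistics import logrank_test

def split_by_expression(expr, lower_pct, upper_pct):
    """Indices of the lower and upper percentile slices of patients
    ranked by expression (non-overlapping, as in OncoLnc)."""
    order = np.argsort(expr)
    n = len(expr)
    low = order[: int(n * lower_pct / 100)]
    high = order[n - int(n * upper_pct / 100):]
    return low, high

# Hypothetical survival data: days, death indicator, gene expression.
rng = np.random.default_rng(1)
days = rng.exponential(1000, size=200)
death = rng.integers(0, 2, size=200)
expr = rng.lognormal(size=200)

# Upper 25 percent versus lower 25 percent, as in the example above.
low, high = split_by_expression(expr, 25, 25)
result = logrank_test(days[low], days[high], death[low], death[high])
print(result.p_value)
```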
<ns0:div><ns0:head>mRNAs</ns0:head><ns0:p>Table 1 contains information about the patients for each Tier 3 mRNA study included in OncoLnc, and how many gene analyses are present in OncoLnc for each study. Tier 3 RNASeqV2 was used for all 21 cancers, and expression was taken from the 'rsem.genes.normalized_results' files. As a result, the expression data in OncoLnc for Tier 3 mRNAs is in normalized RSEM values. Table <ns0:ref type='table'>1</ns0:ref> contains different numbers of genes for the different cancers because an expression cutoff was used to determine if a gene would be included in the analysis. For mRNAs this cutoff was a median expression greater than 1 RSEM, and less than a fourth of the patients with an expression of 0.</ns0:p><ns0:p>The results of every Tier 3 mRNA Cox regression performed are included in Table <ns0:ref type='table'>S1</ns0:ref>. The Tier 3 expression files contain both a HUGO gene symbol and Entrez Gene ID for each gene, but these IDs and gene symbols are not current. To update the gene symbols I downloaded every human gene from NCBI Gene, and updated any symbol for which the Entrez Gene ID was still current. For genes that had deleted or changed Entrez Gene IDs I had to manually curate the Gene IDs and gene symbols. Genes which I could not confidently assign to a modern ID are not included in OncoLnc, but are still included in Table <ns0:ref type='table'>S1</ns0:ref>. Table <ns0:ref type='table'>S1</ns0:ref> includes the original TCGA IDs and symbols along with the updated names and symbols, and Table <ns0:ref type='table'>S4</ns0:ref> lists genes which had either the symbol or ID changed. OncoLnc allows users to search mRNAs using either an updated HUGO gene symbol or Entrez Gene ID.</ns0:p></ns0:div>
<ns0:div><ns0:head>miRNAs</ns0:head><ns0:p>Table 2 contains information about the patients for each Tier 3 miRNA study included in OncoLnc, and how many gene analyses are present in OncoLnc for each study. Tier 3 miRNASeq was used for every cancer except GBM, which only had microarray data available. The results of every Cox regression performed are included in Table <ns0:ref type='table'>S2</ns0:ref>. Many of the miRBase IDs, or possibly read counts, present in Table <ns0:ref type='table'>S2</ns0:ref> and OncoLnc will be different from the IDs and read counts in TCGA data files and available at other data portals for TCGA data. This is because I went through each expression file and updated the IDs and read counts.</ns0:p><ns0:p>The 'isoform.quantification' files contain both miRBase IDs as well as accession numbers. In these files the 5p and 3p arms of miRNAs are referred to with the same ID; for example, hsa-let-7b-5p and hsa-let-7b-3p would both be listed as hsa-let-7b. In order to update the names and read counts for the Tier 3 miRNAs I used the read counts assigned to each accession number to obtain reads per million miRNAs mapped for each accession number, and updated the ID with the current miRBase ID. When an accession number was not available I used the genomic coordinates provided to identify the accession number, and therefore the ID. GBM names were updated using the 'aliases' file from the miRBase FTP site, and if an alias could not be confidently identified the miRNA was not included in OncoLnc, but is still in Table <ns0:ref type='table'>S2</ns0:ref>.</ns0:p><ns0:p>As a result, all expression values in Table <ns0:ref type='table'>S2</ns0:ref> and in OncoLnc are reads per million miRNAs mapped for every cancer except GBM, whose values are microarray normalized values. The numbers of miRNAs in Table <ns0:ref type='table'>2</ns0:ref> differ because the miRNA may not have been in the expression files for that cancer, or may not have met the expression cutoff. An expression cutoff of a median of .5 reads per million miRNAs mapped, and less than one fourth of the patients with 0 expression, was used. OncoLnc allows users to search for miRNAs with either a miRBase version 21 mature accession number or ID.</ns0:p></ns0:div>
<ns0:div><ns0:head>lncRNAs</ns0:head><ns0:p>Table <ns0:ref type='table'>3</ns0:ref> contains information about the patients for each MiTranscriptome beta lncRNA analysis, along with how many lncRNAs are included in OncoLnc for each cancer. Normalized lncRNA counts were downloaded from http://mitranscriptome.org/, and these were mapped to patient barcodes using the library information provided. MiTranscriptome beta contains over 8,000 of the most differentially expressed lncRNAs in the entire MiTranscriptome dataset, but the actual number of lncRNAs in OncoLnc for each cancer is far fewer due to the expression cutoff used: a median of .1 normalized counts, and less than a fourth of patients with 0 expression. Table <ns0:ref type='table'>S3</ns0:ref> contains every lncRNA Cox regression performed, and these are all included in OncoLnc. OncoLnc allows users to search for MiTranscriptome beta lncRNAs using either a name or transcript ID.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Depending on the researcher, OncoLnc should be used in different ways. If a researcher is studying a specific gene and looking for a cancer association, they should go to http://www.oncolnc.org and perform a search with their gene of interest. Instead of focusing on p-values, I would focus more on the rank of the correlations for the different cancers, and also on the sign of the Cox coefficients. A positive Cox coefficient indicates high expression of the gene increases the risk of death, while a negative Cox coefficient indicates the opposite. A gene with a high rank in multiple cancers (indicated by a low number, 1 being the best), and Cox coefficients with the same sign could be very interesting. It is also important to look at the level of expression of the gene. Different genes obviously require different levels of expression to exert their effects, but genes with expression near 0 should be treated with caution. In addition, users can investigate the range of expression of the gene at the Kaplan-Meier plotting page. Genes that have large fold increases from the low expression to high expression group could be interesting candidates.</ns0:p><ns0:p>A researcher studying a specific cancer should download Tables <ns0:ref type='table'>S1, S2</ns0:ref>, and S3 to see which mRNAs, miRNAs, and lncRNAs are most correlated to survival for their cancer. Once they identify some genes of interest they can go to http://www.oncolnc.org to perform further analyses such as checking the range of expression of the gene, or if it is associated with survival in other cancers. Similarly, bioinformaticians looking to perform large-scale analyses of prognostic genes can use these tables as a starting point, or if a user wants to change the Cox models they can use the GitHub code to alter the models.</ns0:p><ns0:p>The importance of the ability to perform survival correlations with lncRNAs must be emphasized. There are multiple techniques for identifying protein coding genes that are involved in cancer because mutations that occur in protein coding genes can result in missense mutations, and methods have been developed for identifying which of these mutations are drivers as opposed to simply passengers <ns0:ref type='bibr' target='#b3'>(Carter et al. 2009;</ns0:ref><ns0:ref type='bibr' target='#b10'>Kaminker et al. 2007;</ns0:ref><ns0:ref type='bibr' target='#b19'>Youn & Simon 2011)</ns0:ref>. In contrast, because it is unclear how mutations will affect lncRNA function, methods to identify lncRNAs involved in cancer must rely on lncRNA expression. As a result, OncoLnc is one of the few resources available for finding lncRNAs involved in cancer, and if a lncRNA researcher is searching for a novel lncRNA to study, Table <ns0:ref type='table'>S3</ns0:ref> would be a good place to start.</ns0:p><ns0:p>When using OncoLnc it is important to remember that the correlations observed, regardless of p-value, are still only correlations. Perhaps the largest limitation of OncoLnc is that the Cox models do not account for intra-cancer subtypes. For example, GBM and BRCA both have well-established subtypes <ns0:ref type='bibr' target='#b2'>(Brennan et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b14'>Perou et al. 2000)</ns0:ref>. If the expression of a gene correlates with cancer subtypes, and those subtypes correlate with survival, subtype would be a confounding variable.
As subtype definitions for the different cancers improve, a future version of OncoLnc may be able to incorporate the subtypes in the Cox models.</ns0:p><ns0:p>An analysis is only as good as the data available, and the Tier 3 TCGA RNA-SEQ analyses were performed with outdated software and transcript information. There have been some attempts to reanalyze both the TCGA mRNA RNA-SEQ data and miRNA-SEQ data <ns0:ref type='bibr' target='#b12'>(Kuo et al. 2015;</ns0:ref><ns0:ref type='bibr' target='#b16'>Rahman et al. 2015)</ns0:ref>. In the event that TCGA or the scientific community releases a gold standard analysis of TCGA data, a future version of OncoLnc could incorporate this data.</ns0:p><ns0:p>Current data portals for TCGA data only allow users to view the results for one cancer at a time, may or may not offer Cox regression results, do not allow for complete control over separating patients during Kaplan-Meier analysis, and do not allow for download of the data used in the analysis. To my knowledge OncoLnc is the only online resource for TCGA data that includes these features, is the only resource that uses modern gene definitions for TCGA mRNA and miRNA data, and is the only resource for survival analysis of MiTranscriptome beta lncRNAs. In addition, current methods for survival analysis rely on a p-value cutoff of .05 for significance, which may lead to either the study of genes not actually correlated with survival or missing genes that are correlated with survival, depending on the cancer. By storing the results of the correlation for every gene, OncoLnc can provide a context for the significance of a correlation. As a result, used correctly, OncoLnc can increase not only the sensitivity of finding genes involved in cancer, but also the specificity. This combination of ease of use, results for complex analyses, and tools for exploring and downloading data makes OncoLnc an invaluable resource for cancer researchers.</ns0:p></ns0:div>
</ns0:body>
" | "Rebuttal Letter for 'OncoLnc: Linking TCGA survival data to mRNAs, miRNAs, and lncRNAs'
Jordan Anaya1
1omnesres.com, email: omnesresnetwork@gmail.com, twitter: @omnesresnetwork
Corresponding Author:
Jordan Anaya
Charlottesville, VA, US
email address: omnesresnetwork@gmail.com
Dear Editor and Reviewers,
Thank you for taking the time to read and review my submission, 'OncoLnc: Linking TCGA survival data to mRNAs, miRNAs, and lncRNAs'. My responses are below:
Reviewer 1
Comments for the author
The database name is OncoLnc, which is a play on the words 'onco', referring to cancer, and 'link', which sounds like the 'lnc' in lncRNA. It was chosen because OncoLnc is one of only two tools I know of for investigating survival analysis of lncRNAs, and the only tool for performing survival analysis of MiTranscriptome beta lncRNAs. The paper describing MiTranscriptome has been cited 113 times (Iyer et al. 2015).
1. The data for Tables S1-S3 is in OncoLnc, it is what is returned in the search results. I understand that someone studying a specific cancer may just want to look to see what the most highly correlated genes are, which is why I created the supplemental tables. If your requested feature was implemented the user would first have to click on whether they wanted a table for mRNAs, miRNAs, or lncRNAs. Then they would have to click on which cancer they want, and then they would be presented the results of up to 16000 genes. If they are interested in viewing the data in this manner I think it is just easier for them to download the supplemental tables. However, it may be useful to make the tables available to download on OncoLnc, or maybe even better, provide links for the individual sheets.
The advantage of accessing the data via OncoLnc is it provides the results from up to 21 excel sheets on one page. When you search for TP53 for example, OncoLnc is searching through the 21 sheets in Table S1 and presenting the results sorted by cancer name. It would be painful for someone to look through the 21 excel sheets themselves and extract the data for TP53 in each sheet. This is basically what someone has to do when they perform a survival analysis with cBioPortal (or any other currently available tool) since they can only query one cancer at a time.
2. The search results page is only one page removed from the main page, so you simply have to go back to get to the main page. The Kaplan-Meier plotting page opens in a new window, so you should never lose your search results page and always be able to go back to the main page from there. I don't want to introduce unnecessary clutter on the search results or Kaplan pages.
3. Yes, I agree that OncoLnc should provide this. It is a little more complicated than it seems, for example TP53 can be known as BCC7, LFS1, P53, or TRP53. I will work on implementing this feature for mRNAs.
Reviewer 2
Validity of the findings
OncoLnc does not make any claims that the correlations observed indicate the genes are involved in cancer or will be useful biomarkers. OncoLnc is simply a tool that allows users to quickly and easily view the correlations, perform custom Kaplan-Meier calculations, and download the survival and expression data. With that said, the Cox regressions used in OncoLnc are very similar to the Cox regressions I performed in my previous publication which focused only on mRNAs (Anaya et al. 2016), in which I found the Cox regressions to be associated with meaningful biology.
For example, KIRC is a cancer that is classically associated with the 'Warburg effect': the cancer cells rely on glycolysis for their energy and thus are no longer dependent on oxygen (Linehan et al. 2010). My analysis showed that genes associated with fatty acid oxidation tended to have strong negative Cox coefficients, implying these genes are protective. This is consistent with the idea that a metabolic shift is important since expression of these genes suggests the cancers are still relying on oxygen.
In addition, EGFR is a commonly mutated gene in GBM (Brennan et al. 2013), and my results showed that genes involved in EGF signaling have strong positive Cox coefficients in this cancer, implying that activation (expression) of these genes is a poor sign for survival. Similarly, in LUSC EGFR inhibitors have been shown to be unexpectedly effective (Chiu et al. 2014), and my analysis showed that like GBM, LUSC had large positive Cox coefficients for EGF signaling.
If someone is concerned about the accuracy of the calculations in OncoLnc they could always check the result with another data portal that allows survival analysis such as cBioPortal, http://www.cbioportal.org/, or TANRIC, http://ibl.mdanderson.org/tanric/_design/basic/query.html. I should also note that OncoLnc is the only data portal that allows for simple download of the clinical data and expression data if users would prefer to perform their own survival analysis. OncoLnc is also the only resource that makes all the code for performing the Cox regressions publicly available.
Comments for the author
1. Yes, setting the gene value as either 1 or 0 is a way to perform Cox regression, and that is basically what a Kaplan-Meier analysis does (although you can't include grade and age information). If a user is interested in this type of analysis it is very easy to view the expression on the Kaplan-Meier page, and users could use that information to guide them in their Kaplan plotting.
OncoLnc runs on rpy2, so it is possible to allow users to perform custom Cox regressions, and I have thought about adding this feature. For example, maybe a user does not want an analysis that includes grade, or wants to add a term such as ER status for BRCA or IDH1 status for LGG. The problem is that when the user gets a p-value for their analysis, what does that p-value mean? In my previous publication I showed that cancers such as KIRC and LGG have unusually good p-values when performing Cox regression (or any survival analysis), while cancers such as STAD and OV have poor p-values. This phenomenon has also been observed by (Yang et al. 2014). As a result, most custom analyses with KIRC will give a p-value below .05, which is therefore meaningless. One of the most important features of OncoLnc is that it provides the rank of each correlation to give the p-values meaning, which is only possible by running the model for all the genes in that cancer. In order to provide a rank for a custom analysis OncoLnc would need to run around 16,000 regressions for mRNAs, which probably isn't feasible, not because of the speed of rpy2, but because it would have to query the SQLite3 database for 16,000 genes.
On a more practical note, I'm also not sure of all the types of custom analyses users would want. OncoLnc is not supported by funding, so would I be expected to take the time to add a new data option for a custom regression every time a user requested it? I have made all the code for performing the Cox regressions publicly available, so if it is essential to someone's research to perform a different analysis it wouldn't be too difficult assuming they or a lab member have knowledge of Python.
2. Do you mean a manual on the OncoLnc website? I could add a page on OncoLnc that discusses the analysis performed. In the manuscript I added more information in the methods section about the Cox regressions.
3. I answered this in response to Reviewer 1.
4. I added the ability to sort each column.
5. I don't believe this is a useful feature. For example, if you type in P53, based on text similarity OncoLnc would suggest TP53 and TP53I3, along with others. If you don't know what the name is supposed to be how would you know to select TP53? cBioPortal seems to agree and does not offer suggestions based on text similarity, while MEXPRESS does offer suggestions, but only if you get the start of the name correct. For example, MEXPRESS will not suggest TP53 if you type in P53. More useful would be to allow gene synonyms, which is what cBioPortal does and which Reviewer 1 requested and I am working on.
Reviewer 3
Basic reporting
I included more information in the introduction about the limitations of current TCGA data portals that OncoLnc addresses. I do not agree that the importance of survival analyses needs to be justified. Countless papers have been published that attempt to identify prognostic markers for cancers using either mRNA or miRNA expression, and cBioPortal, which is the data portal most similar to OncoLnc, has been cited over 1800 times and has grown to a development team of over 30 people (https://summerofcode.withgoogle.com/organizations/5111396454891520/).
Experimental design
I added more details in the methods section of the paper about the Cox regressions. To answer your questions, age, sex, and grade were chosen simply because they are common variables used as multivariates, and the TCGA does a good job of recording these data. If a patient is missing the information needed for the model they are not included in the analysis and are not present in OncoLnc.
The data in the study is only from two places, the TCGA and MiTranscriptome (which was a reanalysis of TCGA data). I have listed links to the sites in the publication. Although there have been individual TCGA publications that detail the cancer studies for some of the data used in OncoLnc, at the time of those publications there is no guarantee that the RNA-SEQ data was described. Furthermore, these publications do not contain links for download of the data. TCGA data is stored at the official TCGA site: https://tcga-data.nci.nih.gov/tcga/. As per TCGA's publication guidelines, I have acknowledged the TCGA Research Network in the acknowledgments.
Validity of the findings
mRNAs, miRNAs, and lncRNAs are all known to be important in cancer, which is why the TCGA makes this data available and multiple tools have been developed to investigate this data: cBioPortal and the UCSC cancer browser allow for investigations with mRNAs and miRNAs, KM plotter was developed for survival analyses of mRNAs, MEXPRESS allows for analyses with mRNAs, and two independent tools have been developed for lncRNAs, MiTranscriptome and TANRIC. I don't think the importance of these molecules needs to be addressed. You could make an argument that protein data should also be considered, but the TCGA contains limited protein data.
I am confused what type of analysis you are suggesting. Are you suggesting that I allow researchers to perform a survival analysis with multiple genes at the same time? That is a more complicated analysis and is not the goal of OncoLnc. It is sounding like a network analysis which I am not experienced in. Net-Cox, http://compbio.cs.umn.edu/Net-Cox/, is a tool developed for a network type of survival analysis.
I do not agree that it is better to integrate multiple tumor types. To identify the role of a gene in survival it is important to reduce the heterogeneity of the data to limit confounding factors. A cancer researcher is only interested in genes which are prognostic in the cancer they are studying, not which genes are prognostic when five different cancer data sets are mixed together. None of the available data portals mix data from the different cancer studies.
Comments for the author
I discussed the problems with adding custom analyses in response to Reviewer 2.
Anaya J, Reon B, Chen W, Bekiranov S, and Dutta A. 2016. A pan-cancer analysis of prognostic genes. PeerJ 3:e1499. 10.7717/peerj.1499
Brennan CW, Verhaak RG, McKenna A, Campos B, Noushmehr H, Salama SR, Zheng S, Chakravarty D, Sanborn JZ, Berman SH, Beroukhim R, Bernard B, Wu CJ, Genovese G, Shmulevich I, Barnholtz-Sloan J, Zou L, Vegesna R, Shukla SA, Ciriello G, Yung WK, Zhang W, Sougnez C, Mikkelsen T, Aldape K, Bigner DD, Van Meir EG, Prados M, Sloan A, Black KL, Eschbacher J, Finocchiaro G, Friedman W, Andrews DW, Guha A, Iacocca M, O'Neill BP, Foltz G, Myers J, Weisenberger DJ, Penny R, Kucherlapati R, Perou CM, Hayes DN, Gibbs R, Marra M, Mills GB, Lander E, Spellman P, Wilson R, Sander C, Weinstein J, Meyerson M, Gabriel S, Laird PW, Haussler D, Getz G, Chin L, and Network TR. 2013. The somatic genomic landscape of glioblastoma. Cell 155:462-477. 10.1016/j.cell.2013.09.034
Chiu CH, Chou TY, Chiang CL, and Tsai CM. 2014. Should EGFR mutations be tested in advanced lung squamous cell carcinomas to guide frontline treatment? Cancer Chemother Pharmacol 74:661-665. 10.1007/s00280-014-2536-3
Iyer MK, Niknafs YS, Malik R, Singhal U, Sahu A, Hosono Y, Barrette TR, Prensner JR, Evans JR, Zhao S, Poliakov A, Cao X, Dhanasekaran SM, Wu YM, Robinson DR, Beer DG, Feng FY, Iyer HK, and Chinnaiyan AM. 2015. The landscape of long noncoding RNAs in the human transcriptome. Nat Genet 47:199-208. 10.1038/ng.3192
Linehan WM, Srinivasan R, and Schmidt LS. 2010. The genetic basis of kidney cancer: a metabolic disease. Nat Rev Urol 7:277-285. 10.1038/nrurol.2010.47
Yang Y, Han L, Yuan Y, Li J, Hei N, and Liang H. 2014. Gene co-expression network analysis reveals common system-level properties of prognostic genes across cancer types. Nat Commun 5:3231. 10.1038/ncomms4231
" | Here is a paper. Please give your review comments after reading it. |
204 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The Coronavirus pandemic caused by the novel SARS-CoV-2 has significantly impacted human health and the economy, especially in countries struggling with financial resources for medical testing and treatment, such as Brazil, the third country most affected by the pandemic. In this scenario, machine learning techniques have been heavily employed to analyze different types of medical data and aid decision making, offering a low-cost alternative. Due to the urgency to fight the pandemic, a massive number of works have applied machine learning approaches to clinical data, including complete blood count (CBC) tests, which are among the most widely available medical tests. In this work, we review the most employed machine learning classifiers for CBC data, together with popular sampling methods to deal with class imbalance. Additionally, we describe and critically analyze three publicly available Brazilian COVID-19 CBC datasets and evaluate the performance of eight classifiers and five sampling techniques on the selected datasets.</ns0:p><ns0:p>Our work provides a panorama of which classifiers and sampling methods provide the best results for different relevant metrics and discusses their impact on future analyses. The metrics and algorithms are introduced in a way that aids newcomers to the field. Finally, the panorama discussed here can significantly benefit the comparison of the results of new ML algorithms.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The Coronavirus disease (COVID-19) caused by the novel SARS-CoV-2 spread from China and was quickly transmitted to other countries. Since the beginning of 2020, the COVID-19 pandemic has significantly impacted human health and severely affected the global economy and financial markets (82; 83), especially in countries that cannot test their population and develop strategies to manage the crisis.</ns0:p><ns0:p>In a scenario of large numbers of asymptomatic patients and shortages of tests, targeted testing within the population is essential <ns0:ref type='bibr' target='#b100'>(86)</ns0:ref>. The objective is to identify people whose immunity can be demonstrated and allow their safe return to their routines.</ns0:p><ns0:p>The diagnosis of COVID-19 is based on the clinical and epidemiological history of the patient <ns0:ref type='bibr' target='#b53'>(46)</ns0:ref> and the findings of complementary tests, such as chest tomography (CT-scan) (14; 38) or nucleic acid testing (74; 25). Nevertheless, the symptoms expressed by COVID-19 patients are nonspecific and cannot be
<ns0:div><ns0:head n='1'>PRELIMINARIES 1.Datasets</ns0:head><ns0:p>At the time of this work, Brazil was the third country most affected by the COVID-19 pandemic, reaching more than 16 million confirmed cases. Thus, discussing data gathered from Brazil can become invaluable to understand SARS-CoV-2 data. Complete datasets used in the present study were obtained from an open repository of COVID-19-related cases in Brazil. The database is part of the COVID-19 Data Sharing/BR initiative <ns0:ref type='bibr' target='#b93'>(79)</ns0:ref>, and it is comprised of information about approximately 177, 000 clinical cases. Patient data were collected from three distinct private health services providers in the São Paulo State, namely the Fleury Group 1 , the Albert Einstein Hospital 2 and the Sírio-Libanês Hospital 3 , and a database for patients from each institution was built. The data from COVID-19 patients was collected from February 26th, 2020 to June 30th, 2020, and the control data (individuals without COVID-19) was collected from November 1st, 2019 to June 30th, 2020.</ns0:p><ns0:p>Patient data is provided in an anonymized form. Three distinct types of patients information are provided in this repository: (i) patients demographic data (including sex, year of birth, and residence zip code); (ii) clinical and/or laboratory exams results (including different combinations of the following data: hemogram and blood cell count results, blood tests for a biochemical profile, pulmonary function tests, and blood gas analysis, diverse urinalysis parameters, detection of a panel of different infectious diseases, pulmonary imaging results (x-ray or CT scans), among others. COVID-19 detection by RT-PCR tests is described for all patients, and serology diagnosis (in the form of specific IgG and IgM antibody detection) is provided for some samples; and (iii) when available, information on each patient clinical progression and transfers, hospitalization history, as well as the disease outcome (primary endpoints, as death or recuperation). Available information is not complete for all patients, with a distinct combination of results provided individually.</ns0:p><ns0:p>Overall baseline characteristics can be found on the complete database, available at the FAPESP COVID-19 Data Sharing/BR 4 . The most common clinical test results available for all patients is the hemogram data. As such, it was selected for the testing of the current sample set. Twenty distinct hemogram test parameters were obtained from the database, including hematocrit (%), hemoglobin (g/dl), platelets (×10 3 µl), mean platelet volume ( f l), red blood cells (×10 6 µl), lymphocytes (×10 3 µl), leukocytes (×10 3 µl), basophils (×10 3 µl), eosinophils (×10 3 µl), monocytes (×10 3 µl), neutrophils (×10 3 µl), mean corpuscular volume (MCV) ( f l), mean corpuscular hemoglobin (MCH) (pg), mean corpuscular hemoglobin concentration (MCHC) (g/dl), red blood cell distribution width (RDW) (%), % Basophils, % Eosinophils, % Lymphocytes % Monocytes, and % Neutrophils (Fig. <ns0:ref type='figure' target='#fig_3'>2</ns0:ref> and Fig. <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>).</ns0:p><ns0:p>Patients with incomplete (missing data) or no data available for the above parameters were not included in the present analysis. For patients with more than a single test result available, a unique hemogram test was used, with the selection based on the blood test date. 
In this sense, results from the same day as the RT-PCR collection date were adopted as a reference, or from the day closest to the test.</ns0:p><ns0:p>More information regarding the three distinct datasets' distributions can be found in Figs. <ns0:ref type='figure' target='#fig_5'>2 and 3</ns0:ref>.</ns0:p><ns0:p>The most relevant information assessed in the present study is the database size, the number of available clinical test results, the gender distribution (male or female), and the ratio of COVID-19 RT-PCR test results (classified as positive or negative). The parameters for each data subset are described for the original dataset and for the subset of selected samples used in this study (after removal of patients containing missing values), as seen in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>.</ns0:p><ns0:p>The column 'class ratio' in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> shows the level of class imbalance for each dataset. It was computed by dividing the number of positive samples by the number of negative samples. The number of negative samples from the Albert Einstein Hospital and the Fleury Group exceeds the number of positive samples. This is expected for disease data, since the number of infections is small compared to the entire population. However, in the Sírio-Libanês Hospital data, there are over forty times as many positive samples as negative samples. This represents another source of bias in the data acquisition: the dataset consists of patients tested because they had already shown COVID-19-like symptoms, skewing the data toward positive samples. This is crucial because the decision to test a patient for COVID-19 in institutions that struggle with funds is a common judgment call. For data characterization, we use two metrics: the Bhattacharyya Distance (BD) and the Kolmogorov-Smirnov (KS) statistic. We will now present both metrics, followed by a discussion of their results in the studied datasets. The goal is to determine the separability between the negative and positive classes among the three datasets. BD calculates the separability between two Gaussian distributions <ns0:ref type='bibr' target='#b4'>(5)</ns0:ref>. However, it depends on the inverse covariance matrix for multivariate cases, which can be nonviable for datasets with high dimensionality, such as the ones employed in this paper. Therefore, we will use its univariate form, as in Equation 1 <ns0:ref type='bibr' target='#b35'>(33)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_0'>B_j(b, s) = \frac{1}{4} \ln\left[\frac{1}{4}\left(\frac{\sigma^2_{b_j}}{\sigma^2_{s_j}} + \frac{\sigma^2_{s_j}}{\sigma^2_{b_j}} + 2\right)\right] + \frac{1}{4}\,\frac{(\upsilon_{b_j} - \upsilon_{s_j})^2}{\sigma^2_{b_j} + \sigma^2_{s_j}}<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where υ and σ² denote the mean and the variance of hemogram parameter j in each of the two compared classes (subscripts b and s). The second metric is the Kolmogorov-Smirnov test (KS test). The KS test is a non-parametric approach that quantifies the maximum difference between samples' univariate empirical cumulative distribution values (i.e., the maximum separability between two distributions) (69) (Eq. 2).</ns0:p><ns0:formula xml:id='formula_2'>D_w = \max_x (|F_1(x) - F_2(x)|)<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>where D_w is the D statistic, such that w denotes which hemogram result is being analyzed, F_1 and F_2 are the empirical cumulative distributions of classes 1 and 2, and x ranges over the obtained hemogram results.
D_w values belong to the [0, 1] interval, where values closer to one suggest higher separability between classes (103).</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref> shows the D statistics and BD for all variables in the three datasets. First, we discuss the BD and D statistic results for each dataset, and then compare such results among all datasets.</ns0:p><ns0:p>Regarding the separability in the HSL dataset, the D statistic yields Basophils, Basophils#, Monocytes, and Eosinophils as the variables with the largest distance between the cumulative probability functions of positively and negatively diagnosed patients. Complementing this analysis with the BD and the Probability Density Function (PDF) represented in Fig. <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>, the distributions of Basophils, Basophils#, and Eosinophils from the negative patients have a higher mean. Despite the higher D statistics, the BD is lower than for other variables, indicating that the distributions are similar; however, one group (in this case, the negative group) has systematically higher values. On the other hand, the other variable with a high D statistic (Monocytes) has a flattened distribution for the negative patients, increasing its variance and consequently its BD, since the variance of the positive cases is small. The small sample size may jeopardize such a distribution for negative patients. Complementarily, it is notable that this variable does not have a linear separation between classes.</ns0:p><ns0:p>As for the HAE dataset, the variables Basophils, Lymphocytes, Eosinophils, and Leukocytes yield the highest D statistics. All of them have the same characteristic: similar distributions, but with the negative distribution shifted toward higher values. It is noteworthy that the higher BD values (Basophils and Eosinophils) can be explained as follows. Regarding the Basophils distributions, the curve from negative cases is flatter than the positive case curve. Even so, it is notable that the negative distribution has higher values, and both variances are small, resulting in a high BD. For the comparison of the Eosinophils in positive and negative cases, the existence of spurious values increases the variance of both distributions. However, the D statistic indicates that this variable provides good separability between classes.</ns0:p><ns0:p>Moreover, besides having variables with more potential separability (D statistics), the imbalance between classes is much more significant in the HSL dataset, which may bias such analysis. Both the HAE and Fleury datasets have similar characteristics regarding the sample size proportions of the classes. However, the HAE dataset has more variables with high D statistics, and its values are higher as well.</ns0:p><ns0:p>On a final note, CBC data is highly prone to fluctuations. Some variables, such as age and sex, are among the most discussed sources of immunological difference, but others are sometimes unaccounted for.</ns0:p><ns0:p>For example, a systematic review in 2015 by Paynter et al. <ns0:ref type='bibr' target='#b98'>(84)</ns0:ref> demonstrated that the immune system is significantly modulated by distinct seasonal changes in different countries, which, in turn, impact respiratory and infectious diseases. Similarly, the circadian rhythm can also impact the circulation levels of different leukocytes <ns0:ref type='bibr' target='#b101'>(87)</ns0:ref>.
Distinct countries have specific seasonal fluctuations and, sometimes, extreme circadian regulation; thus, the inherent sensitivity of immune responses should always be considered a potential source of bias. This also impacts the comparison between different computational approaches that use datasets from other researchers for testing or training. While this work focuses on the application of ML and sampling algorithms to these data, a more in-depth biological analysis regarding the interaction between sex, age, and systemic inflammation in these Brazilian datasets can be found in the work of Ten-Caten et al. <ns0:ref type='bibr' target='#b109'>(95)</ns0:ref>.</ns0:p></ns0:div>
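To make the two separability metrics concrete, the minimal sketch below computes the univariate Bhattacharyya distance of Eq. (1) and the KS D statistic of Eq. (2) for one hemogram parameter; the samples and the function name are illustrative assumptions, not values from the three datasets.

```python
import numpy as np
from scipy.stats import ks_2samp

def bhattacharyya_univariate(pos, neg):
    """Univariate Bhattacharyya distance (Eq. 1) between Gaussians fitted
    to the positive and negative samples of one hemogram parameter."""
    mu_p, mu_n = np.mean(pos), np.mean(neg)
    var_p, var_n = np.var(pos), np.var(neg)
    return (0.25 * np.log(0.25 * (var_p / var_n + var_n / var_p + 2))
            + 0.25 * (mu_p - mu_n) ** 2 / (var_p + var_n))

# Hypothetical basophil counts for negative and positive patients.
rng = np.random.default_rng(0)
neg = rng.normal(0.05, 0.02, size=500)
pos = rng.normal(0.03, 0.01, size=100)

bd = bhattacharyya_univariate(pos, neg)
d_stat = ks_2samp(pos, neg).statistic  # D statistic of Eq. 2
print(f"BD = {bd:.3f}, KS D = {d_stat:.3f}")
```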
<ns0:div><ns0:head n='1.2'>Evaluation Metrics</ns0:head><ns0:p>The metrics to evaluate how well a classifier performs in discriminating between the target condition (positive for COVID-19) and health can be derived from a 'confusion matrix' (Table <ns0:ref type='table'>3</ns0:ref>) that contrasts the 'true' labels obtained from the 'gold standard' with the predicted labels. From it, we have four possible outcomes: either the classifier correctly assigns a sample as positive (with the target condition) or as negative (without the target condition), in which case we have true positives and true negatives, or the prediction is wrong, leading to false positives or false negatives. Some metrics can assess the discriminative property of the test, while others can determine its predictive ability <ns0:ref type='bibr' target='#b107'>(93)</ns0:ref>, and not all are well suited for diagnostic tasks because of imbalanced data <ns0:ref type='bibr' target='#b112'>(97)</ns0:ref>.</ns0:p><ns0:p>For instance, accuracy, sometimes also referred to as diagnostic effectiveness, is one of the most used classification performance metrics <ns0:ref type='bibr' target='#b112'>(97)</ns0:ref>. Still, it is greatly affected by the disease prevalence, and it increases as the disease prevalence decreases <ns0:ref type='bibr' target='#b107'>(93)</ns0:ref>. Overall, prediction metrics alone will not reflect the biological meaning of the results. Consequently, especially in diagnostic tasks, ML approaches should always be accompanied by expert decisions on the final results.</ns0:p><ns0:p>This review focuses on six distinct metrics commonly used in classification and diagnostic tasks that are well suited for imbalanced data (93; 97). This also allows for a more straightforward comparison of results in the literature. Each of these metrics evaluates a different aspect of the predictions and is listed in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref> together with a formula showing how it can be computed from the results of the confusion matrix.</ns0:p><ns0:p>Sensitivity (also known as 'recall') is the proportion of correctly classified positive samples among all positive samples. It can be understood as the probability of getting a positive prediction in subjects with the disease, or a model's ability to recognize samples from patients (or subjects) with the disease.</ns0:p><ns0:p>Analogously, specificity is the proportion of correctly classified negative samples among all negative samples, describing how well the model identifies subjects without the disease. Sensitivity and specificity do not depend on the disease prevalence in the examined groups <ns0:ref type='bibr' target='#b107'>(93)</ns0:ref>.</ns0:p><ns0:p>The likelihood ratio (LR) is a combination of sensitivity and specificity used in diagnostic tests. It can range from zero to infinity, and a test is only useful with values larger than 1.0 <ns0:ref type='bibr' target='#b57'>(50)</ns0:ref>. The last metric used in this work is the F1-score, also known as the F-measure. It ranges from zero to one and is a metric of general classification performance (97).</ns0:p></ns0:div>
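As a minimal sketch, an illustrative subset of these metrics follows directly from the four confusion-matrix counts (the exact set of six metrics and their formulas is the one listed in Table 4; the counts in the example are made up).

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Illustrative subset of confusion-matrix-derived metrics."""
    sens = tp / (tp + fn)                 # sensitivity (recall)
    spec = tn / (tn + fp)                 # specificity
    lr_pos = sens / (1 - spec)            # positive likelihood ratio
    lr_neg = (1 - sens) / spec            # negative likelihood ratio
    prec = tp / (tp + fp)                 # precision, needed for F1
    f1 = 2 * prec * sens / (prec + sens)  # F1-score
    return {"sensitivity": sens, "specificity": spec,
            "LR+": lr_pos, "LR-": lr_neg, "F1": f1}

# Made-up counts: 80 TP, 100 FP, 900 TN, 20 FN.
print(diagnostic_metrics(tp=80, fp=100, tn=900, fn=20))
```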
<ns0:div><ns0:head n='1.3'>Machine Learning Approaches</ns0:head><ns0:p>Among the many ML applications in real-world situations, classification tasks stand out as one of the most relevant, ranging from the classification of types of plants and animals to the identification of different disease prognoses, such as cancer (42; 44; 43; 52), H1N1 Flu <ns0:ref type='bibr' target='#b29'>(27)</ns0:ref>, Dengue <ns0:ref type='bibr' target='#b127'>(109)</ns0:ref>, and COVID-19 (Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref>). The use of these algorithms in the context of hemogram data from COVID-19 patients is summarized in Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref>. The number of features and the characteristics of different datasets might be a barrier for distinct classification learning techniques. Furthermore, a better understanding and characterization of the strengths and drawbacks of each classification technique used is of extreme importance <ns0:ref type='bibr' target='#b85'>(72)</ns0:ref>. The following classifiers were chosen based on their use as listed in Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref>, as they are the most likely to be used in experiments with COVID-19 data.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.3.1'>Naïve Bayes</ns0:head><ns0:p>One of the first ML classification techniques is based on the Bayes theorem (Eq. 3). The Naïve Bayes classification technique is a probabilistic classifier that calculates a set of probabilities by counting the frequency and combinations of values in the dataset. The Naïve Bayes classifier assumes that all attributes are conditionally independent, given the target value <ns0:ref type='bibr' target='#b75'>(64)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_3'>P(A|B) = \frac{P(A) P(B|A)}{P(B)}<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>where P(A) is the probability of the occurrence of event A, P(B) is the probability of the occurrence of event B, and P(A|B) is the probability of the occurrence of event A when B also occurs. Likewise, P(B|A) is the probability of event B when A also occurs.</ns0:p><ns0:p>On imbalanced datasets, the Naïve Bayes classification algorithm biases its results toward the majority class, as happens with most classification algorithms. To handle imbalanced datasets in biomedical applications, the work of (80) evaluated different sampling techniques with NB classification. The sampling techniques used did not show a significant difference in comparison with the imbalanced dataset.</ns0:p></ns0:div>
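A minimal, self-contained sketch of a Gaussian Naïve Bayes classifier follows; it assumes scikit-learn and uses synthetic data as a stand-in for CBC features, so the numbers are not from the datasets above.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for CBC data: 20 features, a 9:1 class imbalance.
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# GaussianNB fits one Gaussian per class and feature, then applies Eq. 3
# under the conditional-independence assumption.
clf = GaussianNB().fit(X_tr, y_tr)
print(clf.predict_proba(X_te[:3]))  # posterior P(class | features)
```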
<ns0:div><ns0:head n='1.3.2'>Support Vector Machines</ns0:head><ns0:p>Support Vector Machine (SVM) (34) is a classical supervised learning method for classification that works by finding the hyperplane (being just a line in 2D or a plane in 3D) capable of splitting data points into different classes. The 'learning' consists of finding a separating hyperplane that maximizes the distance between itself and the closest data points from each class, called the support vectors. In the cases where the data is not linearly separable, kernels are used to transform the data by mapping it to higher dimensions where a separating hyperplane can be found <ns0:ref type='bibr' target='#b69'>(58)</ns0:ref>. SVM usually performs well on new datasets without the need for modifications. It is also not computationally expensive, has low generalization errors, and is interpretable in the case of data with low dimensionality. However, it is sensitive to kernel choice and parameter tuning and can only perform binary classification without algorithmic extensions <ns0:ref type='bibr' target='#b69'>(58)</ns0:ref>.</ns0:p><ns0:p>Although SVM achieves impressive results on balanced datasets, when an imbalanced dataset is used, the classification performance degrades as with other methods. Batuwita and Palade <ns0:ref type='bibr' target='#b11'>(12)</ns0:ref> identified that when SVM is used with imbalanced datasets, the hyperplane is tilted toward the majority class. This bias can cause the formation of more false-negative predictions, a significant problem for medical data. To minimize this problem and reduce the total number of misclassifications in SVM learning, the separating hyperplane can be shifted (or tilted) toward the minority class <ns0:ref type='bibr' target='#b11'>(12)</ns0:ref>. However, in our previous study, we noticed that for curated microarray gene expression analyses, even in imbalanced datasets, SVM generally outperformed the other classifiers <ns0:ref type='bibr' target='#b49'>(42)</ns0:ref>. Similar results were highlighted in other reviews (4).</ns0:p></ns0:div>
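The sketch below, assuming scikit-learn and synthetic imbalanced data, illustrates an RBF-kernel SVM with cost-sensitive class weighting, one standard way to counteract the hyperplane tilt described above (it is not the specific correction proposed by Batuwita and Palade).

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)

# class_weight='balanced' raises the misclassification cost of the
# minority class, pushing the hyperplane back toward the majority class.
svm = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", C=1.0, gamma="scale",
                        class_weight="balanced"))
svm.fit(X, y)
print(svm.score(X, y))
```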
<ns0:div><ns0:head n='1.3.3'>K-Nearest Neighbors</ns0:head><ns0:p>The nearest neighbor algorithm is based on the principle that instances with similar properties lie close to each other in the feature space <ns0:ref type='bibr' target='#b85'>(72)</ns0:ref>. In this way, when unclassified data appears, it receives a label according to its nearest neighbors. The extension of the algorithm, known as k-Nearest Neighbors (kNN), considers a parameter k, defining the number of neighbors to be considered. The determination of the class is straightforward: the unclassified data receives the most frequent label among its neighbors. To determine the k nearest neighbors, the algorithm considers a distance metric. In our case, the Euclidean Distance (Eq. 4) is used:</ns0:p><ns0:formula xml:id='formula_4'>D(x, y) = \sqrt{\sum_{i=1}^{n} |x_i - y_i|^2}<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>where x and y are two instances with n comparable characteristics. Although the kNN algorithm is a versatile technique for classification tasks, it has some drawbacks, such as the lack of a secure way of choosing the k parameter, being sensitive to the similarity (distance) function used <ns0:ref type='bibr' target='#b85'>(72)</ns0:ref>, and requiring a large amount of storage for large datasets <ns0:ref type='bibr' target='#b69'>(58)</ns0:ref>. As kNN considers the most frequent class among the nearest neighbors, it is intuitive to conclude that for imbalanced datasets, the method will bias the results towards the majority class in the training dataset <ns0:ref type='bibr' target='#b81'>(68)</ns0:ref>.</ns0:p><ns0:p>For biological datasets, kNN is particularly useful for data from non-characterized organisms, where there is little-to-no prior information to identify molecules and their respective bioprocesses correctly. Thus, this 'guilty by association' approach becomes necessary. This logic can be extrapolated to all types of biological datasets that possess such characteristics.</ns0:p></ns0:div>
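The sketch below illustrates kNN with scikit-learn; the helper function reproduces the Euclidean distance of Eq. 4 and, like the dataset and parameter values, is purely illustrative.

```python
# Minimal sketch of kNN classification with the Euclidean metric.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

def euclidean(x, y):
    # D(x, y) = sqrt(sum_i (x_i - y_i)^2), as in Eq. 4
    return np.sqrt(np.sum((np.asarray(x) - np.asarray(y)) ** 2))

X, y = make_classification(n_samples=500, n_features=20, random_state=42)
knn = KNeighborsClassifier(n_neighbors=5, metric='euclidean')
knn.fit(X, y)
print(knn.predict(X[:3]))
print("Distance between two instances:", euclidean(X[0], X[1]))
```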
<ns0:div><ns0:head n='1.3.4'>Decision Trees</ns0:head><ns0:p>Decision trees are one of the most used techniques for classification tasks <ns0:ref type='bibr' target='#b69'>(58)</ns0:ref>, although they can also be used for regression. Decision trees classify data according to their features, where each node represents a feature, and each branch represents a value that the node can assume <ns0:ref type='bibr' target='#b85'>(72)</ns0:ref>. To classify data, a binary tree is built with the feature that best divides the data as the root node. New subsets are created in an incremental process until all data can be categorized <ns0:ref type='bibr' target='#b69'>(58)</ns0:ref>. The first limitation of this technique is the complexity of constructing an optimal binary tree (considered an NP-Complete problem). Different heuristics have been proposed to handle this, such as the CART algorithm <ns0:ref type='bibr' target='#b19'>(19)</ns0:ref>. Another important fact is that decision trees are more susceptible to overfitting <ns0:ref type='bibr' target='#b69'>(58)</ns0:ref>, requiring the use of a pruning strategy.</ns0:p><ns0:p>Since the choice of splitting features is directly related to the performance of the trained model, handling the challenges imposed by imbalanced datasets is essential to improve performance and avoid bias towards the majority class. The effect of imbalanced datasets on decision trees can be observed in <ns0:ref type='bibr' target='#b34'>(32)</ns0:ref>. The results attested that decision tree learning models can reach better performance when a sampling method for imbalanced data is applied.</ns0:p></ns0:div>
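A minimal sketch of a CART-style tree with scikit-learn follows; max_depth and ccp_alpha act as pruning strategies against the overfitting noted above, and their values are illustrative assumptions.

```python
# Minimal sketch of a pruned decision tree on imbalanced data.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=42)
# max_depth limits growth; ccp_alpha applies cost-complexity pruning
tree = DecisionTreeClassifier(criterion='gini', max_depth=5,
                              ccp_alpha=0.001, random_state=42)
tree.fit(X, y)
print("Tree depth:", tree.get_depth())
```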
<ns0:div><ns0:head n='1.3.5'>Random Forest</ns0:head><ns0:p>Random Forests are an ensemble learning approach that uses multiple non-pruned decision trees for classification and regression tasks. To generate a random forest classifier, each decision tree is created from a subset of the data's features. After many trees are generated, each tree votes for the class of the new instance <ns0:ref type='bibr' target='#b18'>(18)</ns0:ref>. As random forest creates each tree based on a bootstrap sample of the data, the minority class might not be represented in these samples, resulting in trees with poor performance that are biased towards the majority class <ns0:ref type='bibr' target='#b31'>(29)</ns0:ref>. Methods to handle highly imbalanced data were compared by <ns0:ref type='bibr' target='#b31'>(29)</ns0:ref>, including incorporating class-level weights, making the learning models cost-sensitive, and reducing the amount of majority class data for a more balanced data set. In all cases, the overall performance increased.</ns0:p></ns0:div>
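The sketch below shows one of the strategies compared in (29), class-level weights, realized through scikit-learn; the dataset and parameter values are illustrative only.

```python
# Minimal sketch of a random forest with class-level weights.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=42)
# 'balanced_subsample' reweights classes within each bootstrap sample
rf = RandomForestClassifier(n_estimators=100, criterion='entropy',
                            max_depth=6,
                            class_weight='balanced_subsample',
                            random_state=42)
rf.fit(X, y)
print(rf.predict_proba(X[:3]))
```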
<ns0:div><ns0:head n='1.3.6'>XGBoost</ns0:head><ns0:p>The XGBoost framework, created by Chen and Guestrin <ns0:ref type='bibr' target='#b32'>(30)</ns0:ref>, builds on decision tree ensemble methods, following the concept of learning from previous errors. More specifically, XGBoost uses the gradient of the loss function of the existing model to calculate pseudo-residuals between the predicted and real labels. Moreover, it extends the gradient boosting algorithm into a parallel approach, achieving faster training than other learning techniques while maintaining accuracy.</ns0:p><ns0:p>The performance of gradient boosting on imbalanced data sets can be found in <ns0:ref type='bibr' target='#b23'>(22)</ns0:ref>, where it outperforms other classifiers such as SVM, decision trees, and kNN in credit scoring analysis. The eXtreme Gradient Boosting was also applied to credit risk assessment with imbalanced datasets in <ns0:ref type='bibr' target='#b28'>(26)</ns0:ref>, achieving better results than its competitors.</ns0:p></ns0:div>
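A minimal sketch using the xgboost Python package (assumed installed) follows; scale_pos_weight is a common lever for imbalanced binary problems, and all values shown are illustrative assumptions rather than the experimental configuration.

```python
# Minimal sketch of XGBoost with reweighting for class imbalance.
import numpy as np
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=42)
# ratio of negative to positive samples in the training data
ratio = np.sum(y == 0) / np.sum(y == 1)
xgb = XGBClassifier(n_estimators=100, max_depth=4, learning_rate=0.01,
                    scale_pos_weight=ratio)
xgb.fit(X, y)
print(xgb.predict(X[:3]))
```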
<ns0:div><ns0:head n='1.3.7'>Logistic Regression</ns0:head><ns0:p>Logistic regression is a supervised classification algorithm that builds a regression model to predict the class of a given data point based on a sigmoid function (Eq. 5). As in linear models, logistic regression computes a weighted sum of the input features plus a bias term <ns0:ref type='bibr' target='#b54'>(47)</ns0:ref>. Once the logistic model estimates the probability p of a given label, the corresponding label is assigned when p ≥ 50% in binary classification.</ns0:p><ns0:formula xml:id='formula_5'>g(z) = \frac{1}{1 + e^{-z}}<ns0:label>(5)</ns0:label></ns0:formula></ns0:div>
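The sketch below ties Eq. 5 to the scikit-learn implementation; the helper function is the sigmoid, and for binary problems its output on the weighted sum matches predict_proba. Dataset and names are illustrative.

```python
# Minimal sketch of logistic regression and the sigmoid of Eq. 5.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def sigmoid(z):
    # g(z) = 1 / (1 + e^-z), Eq. 5
    return 1.0 / (1.0 + np.exp(-z))

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
lr = LogisticRegression(max_iter=1000)
lr.fit(X, y)
# decision_function returns the weighted sum z; sigmoid(z) equals the
# predicted probability of the positive class
z = lr.decision_function(X[:3])
print(sigmoid(z), lr.predict_proba(X[:3])[:, 1])
```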
<ns0:div><ns0:head n='1.3.8'>Multilayer Perceptron</ns0:head><ns0:p>A multilayer perceptron (MLP) is a fully connected neural network with at least three layers of neurons: one input layer, one hidden layer, and an output layer. The basic unit of a neural network is the neuron, represented as a node in the network, which has an activation function, generally a sigmoid (Eq. 5), activated according to the sum of the weighted signals arriving from previous layers.</ns0:p><ns0:p>For classification tasks, each output neuron represents a class, and the value reported by the i-th output neuron is the amount of evidence in support of the i-th class (73), i.e., if an MLP has two output neurons, meaning that there are two classes, the output evidence could be (0.2, 0.8), resulting in the classification as the class supported by the highest value, in this case, 0.8. Based on the mean square error of the learning model's predictions, the weights assigned to each connection are adjusted via the backpropagation learning algorithm <ns0:ref type='bibr' target='#b86'>(73)</ns0:ref>. Although MLPs have shown impressive results in many real-world applications, some drawbacks must be highlighted. The first one is the determination of the number of hidden layers. An underestimation of the number of neurons can cause poor classification capability, while an excess of them can lead to an overfitting scenario, compromising the model's generalization. Another concern is the computational cost of backpropagation, since minimizing the MSE takes long runs of simulations and training. Furthermore, one of the major characteristics is that MLPs are black-box methods, making it hard to understand the reason for their output <ns0:ref type='bibr' target='#b85'>(72)</ns0:ref>.</ns0:p><ns0:p>Regarding the capabilities of MLPs on biased data, an empirical study is provided by <ns0:ref type='bibr' target='#b84'>(71)</ns0:ref>, showing that MLP can achieve satisfactory results on noisy and imbalanced datasets even without sampling techniques for balancing the datasets. The analysis provided by the authors showed that the difference between the MLP with and without sampling was minimal.</ns0:p></ns0:div>
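A minimal sketch of an MLP using scikit-learn follows; the architecture and hyperparameters are drawn from the ranges in Table 6 but the particular combination is an illustrative assumption.

```python
# Minimal sketch of a multilayer perceptron with early stopping.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=42)
mlp = MLPClassifier(hidden_layer_sizes=(10, 10), activation='relu',
                    solver='adam', alpha=0.001, learning_rate_init=0.001,
                    batch_size=64, early_stopping=True, max_iter=500,
                    random_state=42)
mlp.fit(X, y)
print("Training accuracy:", mlp.score(X, y))
```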
<ns0:div><ns0:head n='1.4'>Techniques to Handle Imbalanced Data</ns0:head><ns0:p>As introduced before, the COVID-19 CBC data is highly imbalanced. In a binary classification problem, class imbalance occurs when one class, the minority group, contains significantly fewer samples than the other class, the majority group. In such a situation, most classifiers are biased towards the larger class and have meager classification rates for the smaller class. It is also possible that the classifier predicts everything as the largest class and ignores the smaller class. This problem is faced not only in binary class data but also in multi-class data <ns0:ref type='bibr' target='#b113'>(98)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:p>A significant number of techniques have been proposed in the last decade to handle the imbalanced data problem. In general, we can classify these different approaches as sampling methods (pre-processing) and cost-sensitive learning <ns0:ref type='bibr' target='#b66'>(56)</ns0:ref>. In cost-sensitive learning models, the misclassification of a minority class instance has a higher relevance (cost) than the misclassification of a majority class instance. Although this can be a practical approach for imbalanced datasets, it can be challenging to set values for the required cost matrix <ns0:ref type='bibr' target='#b66'>(56)</ns0:ref>.</ns0:p><ns0:p>The use of sampling techniques is more accessible than cost-sensitive learning, requiring no specific information about the classification problem. In these approaches, a new dataset is created to balance the classes, giving the classifiers a better opportunity to distinguish the decision boundary between them <ns0:ref type='bibr' target='#b70'>(59)</ns0:ref>.</ns0:p><ns0:p>In this work, the following sampling techniques are used, chosen due to their prominence in the literature: Random Over-Sampling (ROS), Random Under-Sampling (RUS), Synthetic Minority Over-sampling Technique (SMOTE), Synthetic Minority Over-sampling Technique with Tomek Link (SMOTETomek), and Adaptive Synthetic Sampling (ADASYN). All of them are briefly described in this section. A t-SNE visualization of each sampling technique's effect for the three datasets used can be seen in Fig. <ns0:ref type='figure'>4</ns0:ref>.</ns0:p><ns0:p>Figure 4. Visualization of the negative (purple) and positive (green) samples from the Albert Einstein Hospital (AE), Fleury Laboratory (FLEURY) and Hospital Sírio-Libanês (HSL) using t-SNE for all the different sampling schemes.</ns0:p></ns0:div>
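A projection like the one in Fig. 4 can be produced with scikit-learn's t-SNE implementation; the sketch below is illustrative, with a synthetic dataset standing in for the hospital data.

```python
# Minimal sketch of a t-SNE projection of a (re)sampled dataset.
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.manifold import TSNE

X_res, y_res = make_classification(n_samples=500, n_features=20,
                                   weights=[0.9, 0.1], random_state=42)
emb = TSNE(n_components=2, random_state=42).fit_transform(X_res)
plt.scatter(emb[:, 0], emb[:, 1], c=y_res, s=8)
plt.title('t-SNE projection, colored by class')
plt.show()
```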
<ns0:div><ns0:head n='1.4.1'>Random Sampling</ns0:head><ns0:p>In classification tasks that use imbalanced datasets, sampling techniques have become standard approaches for reducing the difference between the majority and minority classes. Among the different methods, the simplest ones are RUS and ROS. In both cases, the training dataset is adjusted to create a new dataset with a more balanced class distribution <ns0:ref type='bibr' target='#b70'>(59)</ns0:ref>.</ns0:p><ns0:p>In the under-sampling approach, majority class instances are discarded until a more balanced data distribution is reached. This data dumping process is done randomly. Considering a dataset with 100 minority class instances and 1000 majority class instances, a total of 900 majority class instances would be randomly removed by the RUS technique. At the end of the process, the dataset will be balanced with 200 instances: the majority class will be represented with 100 instances, while the minority will also have 100.</ns0:p><ns0:p>In contrast, the random over-sampling technique duplicates minority class data to achieve a better data distribution. Using the same example as before, with 100 instances of the minority class and 1000 majority class instances, each data instance from the minority class would be replicated ten times until both classes have 1000 instances. This approach increases the number of instances in the dataset, leading to 2000 instances in the modified dataset.</ns0:p><ns0:p>However, some drawbacks must be explained. In RUS, the data dumping process can discard a considerable amount of data, making the learning process harder and resulting in poor classification performance. On the other hand, in ROS, the instances are duplicated, which might cause the learning model to overfit, inducing poor generalization capacity and, again, leading to lower classification performance (59).</ns0:p></ns0:div>
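The imbalanced-learn library (assumed installed) implements both samplers; the sketch below roughly reproduces the 100 vs. 1000 example from the text on a synthetic dataset.

```python
# Minimal sketch of ROS and RUS with imbalanced-learn.
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler

# ~1000 majority vs. ~100 minority samples
X, y = make_classification(n_samples=1100, n_features=20,
                           weights=[1000 / 1100, 100 / 1100],
                           random_state=42)
print("Original:", Counter(y))

X_ros, y_ros = RandomOverSampler(random_state=42).fit_resample(X, y)
print("After ROS:", Counter(y_ros))   # both classes ~1000

X_rus, y_rus = RandomUnderSampler(random_state=42).fit_resample(X, y)
print("After RUS:", Counter(y_rus))   # both classes ~100
```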
<ns0:div><ns0:head n='1.4.2'>Synthetic Minority Over-sampling Technique (SMOTE)</ns0:head><ns0:p>To overcome the generalization problem resulting from the random over-sampling technique, <ns0:ref type='bibr' target='#b30'>(28)</ns0:ref> created a method to generate synthetic data in the dataset. This technique is known as SMOTE. To balance the minority class in the dataset, SMOTE first randomly selects a minority class data instance M a.</ns0:p><ns0:p>Then, the k nearest neighbors of M a within the minority class are identified. A second data instance M b is then selected from the set of k nearest neighbors. In this way, M a and M b are connected, forming a line segment in the feature space. The new synthetic data point is then generated as a convex combination of M a and M b. This procedure is repeated until the dataset is balanced between the minority and majority classes. Because of the effectiveness of SMOTE, different extensions of this over-sampling technique were created.</ns0:p><ns0:p>As SMOTE uses the interpolation of two instances to create the synthetic data, if the minority class is sparse, the newly generated data can result in a class mixture, which makes the learning task harder <ns0:ref type='bibr' target='#b17'>(17)</ns0:ref>.</ns0:p><ns0:p>Since SMOTE is an effective over-sampling technique but still has some drawbacks, different variations of the method have been proposed by different authors. A full review of these different types can be found in <ns0:ref type='bibr'>(17)</ns0:ref> and <ns0:ref type='bibr'>(59)</ns0:ref>.</ns0:p></ns0:div>
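A minimal sketch of SMOTE via imbalanced-learn follows; k_neighbors controls the neighborhood used to interpolate new minority samples, and all values are illustrative.

```python
# Minimal sketch of SMOTE over-sampling.
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=42)
X_res, y_res = SMOTE(k_neighbors=5, random_state=42).fit_resample(X, y)
print(Counter(y), "->", Counter(y_res))
```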
<ns0:div><ns0:head n='1.4.3'>Synthetic Minority Over-sampling Technique with Tomek link</ns0:head><ns0:p>Although the SMOTE technique achieved better results than random sampling methods, data sparseness can be a problem, particularly in datasets containing a significant occurrence of outliers. In many datasets, instances of different classes might invade each other's space. When a decision tree is used as a classifier on such a mixed dataset, it might create several specialized branches to distinguish the classes <ns0:ref type='bibr' target='#b10'>(11)</ns0:ref>. This behavior might create an over-fitted model with poor generalization.</ns0:p><ns0:p>In light of this fact, the SMOTE technique was extended with Tomek links (99) by <ns0:ref type='bibr' target='#b10'>(11)</ns0:ref> for balancing data and creating more well-separated class instances. In this approach, every data instance that forms a Tomek link is discarded, both from the minority and the majority class. A Tomek link can be defined as follows: given two samples with different classes S A and S B, and a distance d(S A, S B), this pair (S A, S B) is a Tomek link if there is no instance S C such that d(S A, S C) < d(S A, S B) and d(S B, S C) < d(S B, S A). In this way, noisy data is removed from the dataset, improving the capability of class identification.</ns0:p><ns0:p>In the SMOTE technique, the new synthetic samples are created equally for each minority class data point. However, this might not be an optimal way to produce synthetic data since it can concentrate most of the data points in a small portion of the feature space.</ns0:p></ns0:div>
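The combined SMOTE-plus-Tomek-link cleaning described above is available in imbalanced-learn; the sketch below is illustrative and uses a synthetic dataset.

```python
# Minimal sketch of SMOTE followed by Tomek-link cleaning.
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.combine import SMOTETomek

X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=42)
X_res, y_res = SMOTETomek(random_state=42).fit_resample(X, y)
print(Counter(y), "->", Counter(y_res))
```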
<ns0:div><ns0:head n='1.4.4'>Adaptive Synthetic Sampling</ns0:head><ns0:p>In the adaptive synthetic sampling algorithm, ADASYN (60), a density estimation metric is used as a criterion to decide the number of synthetic samples for each minority class example. With this, it is possible to balance the minority and majority classes and concentrate the synthetic data on the samples that are harder to learn. The synthetic data generation occurs as follows: the first step is to calculate the number of new samples needed to create a balanced dataset. After that, the density estimation is obtained via the k-nearest neighbors of each minority class sample (Eq. 6), followed by normalization (Eq. 7). Then the number of needed samples for each data point is calculated (Eq. 8), and the new synthetic data is created.</ns0:p><ns0:formula xml:id='formula_6'>r_i = \frac{\Delta_i}{K}, \quad i = 1, \ldots, m_s<ns0:label>(6)</ns0:label></ns0:formula><ns0:formula xml:id='formula_7'>\hat{r}_i = \frac{r_i}{\sum_{i=1}^{m_s} r_i}<ns0:label>(7)</ns0:label></ns0:formula><ns0:formula xml:id='formula_8'>g_i = \hat{r}_i \times G<ns0:label>(8)</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:p>where m_s is the number of minority class instances, Δ_i is the number of examples among the K nearest neighbors of sample i that belong to the majority class, g_i defines the number of synthetic samples to generate for each data point, and G is the total number of synthetic samples that need to be generated to achieve balance between the classes.</ns0:p></ns0:div>
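A minimal sketch of ADASYN with imbalanced-learn follows; its n_neighbors parameter roughly corresponds to K in Eq. 6, and the per-point number of generated samples follows g_i of Eq. 8 internally. Values shown are illustrative.

```python
# Minimal sketch of ADASYN adaptive over-sampling.
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import ADASYN

X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=42)
X_res, y_res = ADASYN(n_neighbors=5, random_state=42).fit_resample(X, y)
print(Counter(y), "->", Counter(y_res))
```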
<ns0:div><ns0:head n='2'>EXPERIMENTS AND RESULTS</ns0:head><ns0:p>To evaluate the impact of the data imbalance on the Brazilian CBC datasets, we have applied the sampling techniques described in Section 1.4. They are discussed in three different aspects. The first one is the comparison between classification methods without resampling. In this way, we can compare how each classifier deals with the imbalance. The second aspect is related to the efficiency of the sampling methods compared to the original datasets. Each classification model was trained with the same training set (70% of samples) and tested on the same test set (30% of samples). The features were normalized using the z-score. Evaluation metrics were generated over 31 runs, considering a random data distribution in each partition. The proposed approach was implemented in Python 3 using Scikit-Learn (85) as a backend. The COVID-19 classes were defined using the RT-PCR results from the datasets. Sampling techniques were applied only on the training set. Hyperparameters were optimized using the Randomized Parameter Optimization approach available in scikit-learn and the values in Table <ns0:ref type='table' target='#tab_8'>6</ns0:ref>. The aim of optimizing the hyperparameters is to find a model that returns the best and most accurate performance on a validation set. Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> schematizes the methodological steps used in this work.</ns0:p><ns0:p>Different results were obtained for each classification method with an imbalanced dataset, as can be seen in Figs. <ns0:ref type='figure' target='#fig_11'>5, 6</ns0:ref>, and 7<ns0:note place='foot' n='5'>The statistical comparison between the algorithms (Dunn's Multiple Comparison Test with Bonferroni correction) is available in the GitHub repository: https://github.com/sbcblab/sampling-covid</ns0:note>. In terms of F1 Score, all classification models achieved values ranging from 0.5 to 0.65 for the Albert Einstein and Fleury datasets. Although the F1 Score is widely used to evaluate classification tasks, it must be carefully analyzed in our case since the misclassification has more impact, especially in false-negative cases, making it necessary to observe other indexes. When the sensitivity index is considered, it draws attention to the disparity between the NB classification model and the others. The NB model achieved a sensitivity of around 0.77 for the Albert Einstein dataset (Fig. <ns0:ref type='figure' target='#fig_10'>5a</ns0:ref>) and around 0.72 for the Fleury dataset (Fig. <ns0:ref type='figure' target='#fig_11'>6a</ns0:ref>). Hence, it is possible to consider NB as the classification model that best detects the true positive cases (minority class) in these data sets. However, when considering the specificity (Figs. <ns0:ref type='figure' target='#fig_11'>5b and 6b</ns0:ref>), it is notable that NB achieved the worst performance overall. A possible explanation for this disparity is that NB classifies most of the data as positive for possible SARS-CoV-2 infection.
This hypothesis is then confirmed when we analyze the other two indexes (DOR and LR+),</ns0:p><ns0:p>showing the NB bias towards the minority class. When considering the other classification models regarding sensitivity and specificity, the LR, RF, and SVM achieved better results, ranging from 0.55 to 0.59 for sensitivity and 0.89 to 0.93 for specificity. This better balance between sensitivity and specificity is mirrored in the F1 Score, where RF, SVM, and LR achieved better performance than the other methods (and comparable to NB) while achieving better DOR and LR+. With the sampling techniques, for the Albert Einstein and Fleury datasets, the sensitivity improves at the cost of specificity. For the HSL dataset, we see the opposite; resampling decreases the sensitivity and improves the specificity. This happens because, while for Albert Einstein and Fleury the majority class is negative, the majority class is positive for HSL. Furthermore, with sampling techniques, the DOR was improved in the Albert Einstein dataset. With the Fleury data, the learning models with sampling did not achieve tangible DOR results. A possible explanation for this outcome can be related to data sparseness, an ordinary circumstance observed in medical or clinical data. This is further corroborated by the data visualization using t-SNE in Fig. <ns0:ref type='figure'>4</ns0:ref>.</ns0:p><ns0:p>Moreover, the number of samples used with the Fleury dataset could be a determining factor in the poor performance. Nevertheless, overall, no sampling technique appears to be a clear winner, especially considering the standard deviation. The performance of each sampling technique is conditioned by the classifier and the dataset used.</ns0:p><ns0:p>None of the combinations of classifiers and sampling methods achieved satisfactory results for the Sírio-Libanês Hospital dataset (Fig. <ns0:ref type='figure' target='#fig_13'>7</ns0:ref>). The sensitivity of all options was close to one, and the specificity was close to zero, indicating that almost all samples are being predicted as the majority class (in this case, the positive). This was expected due to the large imbalance of this dataset, and even the sampling methods, although able to narrow the gap, were not enough to achieve satisfactory results. Due to these poor results, the other metrics are non-satisfactory, and their results can be misleading. For instance, if one were only to check the F1 Score, the classification results would seem satisfactory. As listed in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> and illustrated in Fig. <ns0:ref type='figure' target='#fig_13'>7</ns0:ref>, this dataset had the largest imbalance, with over forty times more positive than negative samples. Moreover, the total number of available samples was the smallest among the three datasets. The results suggest that using standard ML classifiers is not useful for such drastic cases even when sampling techniques are applied, and researchers should be cautious when dealing with similar datasets (low sample quantity and highly imbalanced data).</ns0:p><ns0:p>The results obtained in our simulations showed that ML classification techniques could be applied as an assistance tool for COVID-19 diagnosis in datasets with a large enough number of samples and moderate levels of imbalance (less than 50%), even though some of them achieved poor performance or biased results.
It is essential to notice that the NB algorithm reached better classification when targeting the positive cases for SARS-CoV-2. However, it skews the classification in favor of the minority class.</ns0:p><ns0:p>Hence, we believe that the SVM, LR, and RF approaches are more suitable for the problem.</ns0:p><ns0:p>Future research can be conducted with these limitations in mind, building ensemble learning models with RF, SVM, and LR, and exploring different approaches to handle the imbalanced data sets, such as the use of cost-sensitive methods. It is also important to note that some of these classifiers, such as MLP, cannot be considered easily interpretable. This presents a challenge for their use on medical data, where one should be able to explain the model's decisions. Both issues could be tackled in the future using feature selection (4; 52) or algorithms for explainable artificial intelligence <ns0:ref type='bibr'>(106; 81; 6)</ns0:ref>. The method of relevance aggregation, for instance, can be used to extract which features from tabular data were most relevant for the decision making of neural networks and was shown to work on biological data <ns0:ref type='bibr' target='#b60'>(53)</ns0:ref>. Feature selection algorithms can also be used to spare computational resources by training smaller models and to improve the performance of models by removing useless features.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>CONCLUSIONS</ns0:head><ns0:p>The COVID-19 pandemic has significantly impacted countries that cannot test their population extensively or develop strategies to manage the crisis, particularly those with substantial financial limitations. Artificial intelligence and ML play a crucial role in better understanding and addressing the COVID-19 emergency and in devising low-cost alternatives to aid decision making in the medical field. In this sense, ML techniques are being applied to analyze different data sources, seeking to identify and prioritize patients tested by RT-PCR. Some features that appear to be the most representative of the three analyzed datasets are basophils and eosinophils, which are among the expected results. The work of Banerjee et al. <ns0:ref type='bibr' target='#b8'>(9)</ns0:ref> showed that patients displayed a significant decrease in basophils, as well as eosinophils, something also discussed in other works <ns0:ref type='bibr' target='#b12'>(13)</ns0:ref>.</ns0:p><ns0:p>Imbalanced data is common, and it is especially prevalent when working with biological datasets, in particular disease data, where there are usually more healthy control samples than disease cases; this is an inherent issue in acquiring clinical data. This work reviews the leading ML methods used to analyze CBC data from Brazilian patients with or without COVID-19 through different sampling and classification methods.</ns0:p><ns0:p>Our results show the feasibility of using these techniques and CBC data as a low-cost and widely accessible way to screen patients suspected of being infected by COVID-19. Overall, RF, LR, and SVM achieved the best general results, but each classifier's efficacy will depend on the evaluated data and metrics. Regarding sampling techniques, they can alleviate the bias towards the majority class and improve the general classification, but no single method was a clear winner. This shows that the data should be evaluated on a case-by-case basis. More importantly, our data point out that researchers should never rely on the results of a single metric when analyzing clinical data, since the metrics fluctuate depending on the classifier and sampling method.</ns0:p><ns0:p>However, the application of ML classifiers, with or without sampling methods, is not enough in the presence of datasets with few samples available and large class imbalance. For such cases, which are often faced in clinical practice, ML is not yet advised. Even for adequate datasets and algorithms, the selection of proper metrics is fundamental. Sometimes, the values can camouflage biases in the results and poor performance, as in the NB classifier's case. Our recommendation is to inspect several distinct metrics together to see the bigger picture.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Distributions of white blood cell related variables for positive (purple) and negative (green) classes of the three datasets: Albert Einstein Hospital (HAE), Fleury Group (FLE), and Sírio-Libanês Hospital (HSL). The central white dot is the median.</ns0:figDesc><ns0:graphic coords='6,141.73,63.78,413.64,508.57' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Distributions of red blood cells related variables for positive (purple) and negative (green) classes of the three datasets: Albert Einstein Hospital (HAE), Fleury Group (FLE), and Sírio-Libanês Hospital (HSL). The central white dot is the median.</ns0:figDesc><ns0:graphic coords='7,141.73,63.78,413.69,305.30' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figs. 2 and 3</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>From Figs. 2 and 3 it can be noticed that such distributions present different means (as corroborated by the D statistic) combined with spurious values far from the modal distribution point, resulting in a larger variance. The variables yielding the highest D statistic on the FLEURY dataset are Basophils and Eosinophils.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Table 3.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Confusion matrix of binary classification. True positives = TP; True negatives = TN; False positives = FP; False negatives = FN. Rows give the classifier prediction and columns the 'gold standard': predicted as positive yields TP for subjects with the disease and FP (Type I error) for subjects without the disease; predicted as negative yields FN (Type II error) for subjects with the disease and TN for subjects without the disease.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Average test results from 31 independent runs for several classifiers and sampling schemes trained on the Albert Einstein Hospital data. Black lines represent the standard deviation, while the white circle represents the median.</ns0:figDesc><ns0:graphic coords='16,141.73,230.37,413.68,431.22' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 6.</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Average test results from 31 independent runs for several classifiers and sampling schemes trained on the Fleury Group data. Black lines represent the standard deviation, while the white circle represents the median.</ns0:figDesc><ns0:graphic coords='17,141.73,63.78,413.63,418.52' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 7.</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Average test results from 31 independent runs for several classifiers and sampling schemes trained on the Sírio-Libanês Hospital data. Black lines represent the standard deviation, while the white circle represents the median.</ns0:figDesc><ns0:graphic coords='18,141.73,63.78,413.62,415.01' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='13,141.73,243.62,413.69,216.77' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Data summary of the initial full dataset and selected subsets of samples. Albert Einstein Hospital (HAE); Fleury Group (FLE) and Sírio-Libanês Hospital (HSL). Class ratio is represented as the ratio of the total of selected positive/negative samples.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell cols='2'>Samples (Original, Selected)</ns0:cell><ns0:cell cols='3'>PCR Positive (Male, Female, Total)</ns0:cell><ns0:cell cols='3'>PCR Negative (Male, Female, Total)</ns0:cell><ns0:cell>Class ratio</ns0:cell></ns0:row><ns0:row><ns0:cell>HAE</ns0:cell><ns0:cell>44879</ns0:cell><ns0:cell>4567</ns0:cell><ns0:cell>758</ns0:cell><ns0:cell>642</ns0:cell><ns0:cell>1400</ns0:cell><ns0:cell>1461</ns0:cell><ns0:cell>1706</ns0:cell><ns0:cell>3167</ns0:cell><ns0:cell>0.442</ns0:cell></ns0:row><ns0:row><ns0:cell>FLE</ns0:cell><ns0:cell>129597</ns0:cell><ns0:cell>803</ns0:cell><ns0:cell>111</ns0:cell><ns0:cell>145</ns0:cell><ns0:cell>256</ns0:cell><ns0:cell>225</ns0:cell><ns0:cell>322</ns0:cell><ns0:cell>547</ns0:cell><ns0:cell>0.468</ns0:cell></ns0:row><ns0:row><ns0:cell>HSL</ns0:cell><ns0:cell>2732</ns0:cell><ns0:cell>515</ns0:cell><ns0:cell>301</ns0:cell><ns0:cell>202</ns0:cell><ns0:cell>503</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>41.916</ns0:cell></ns0:row><ns0:row><ns0:cell>Total samples</ns0:cell><ns0:cell>177208</ns0:cell><ns0:cell>5885</ns0:cell><ns0:cell>1170</ns0:cell><ns0:cell>989</ns0:cell><ns0:cell>2159</ns0:cell><ns0:cell>1695</ns0:cell><ns0:cell>2031</ns0:cell><ns0:cell>3726</ns0:cell><ns0:cell>0.579</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Separability between the negative and positive classes among the three datasets: Albert Einstein Hospital (HAE), Fleury Group (FLE), and Sírio-Libanês Hospital (HSL). The measurements use the D statistic from the two samples Kolmogorov-Smirnov test and the Bhattacharyya Distance (BD). Results discussed in the main text are in bold.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>HAE</ns0:cell><ns0:cell /><ns0:cell>Fleury</ns0:cell><ns0:cell /><ns0:cell>HSL</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Metric</ns0:cell><ns0:cell>D</ns0:cell><ns0:cell>BD</ns0:cell><ns0:cell>D</ns0:cell><ns0:cell>BD</ns0:cell><ns0:cell>D</ns0:cell><ns0:cell>BD</ns0:cell></ns0:row><ns0:row><ns0:cell>Basophils</ns0:cell><ns0:cell cols='6'>0.44347422 0.109347544 0.39832324 0.123909468 0.5530152 0.128723071</ns0:cell></ns0:row><ns0:row><ns0:cell>Basophils#</ns0:cell><ns0:cell cols='6'>0.26185597 0.044430226 0.26696041 0.072622562 0.5134195 0.060896636</ns0:cell></ns0:row><ns0:row><ns0:cell>Eosinophils</ns0:cell><ns0:cell cols='6'>0.36455659 0.122101467 0.37579268 0.025115306 0.4035785 0.064312109</ns0:cell></ns0:row><ns0:row><ns0:cell>Eosinophils#</ns0:cell><ns0:cell cols='6'>0.27756710 0.079652624 0.29135483 0.021484520 0.2544732 0.053442595</ns0:cell></ns0:row><ns0:row><ns0:cell>Hematocrit</ns0:cell><ns0:cell cols='6'>0.04615025 0.001316698 0.06618487 0.000919540 0.2186879 0.102645043</ns0:cell></ns0:row><ns0:row><ns0:cell>Hemoglobin</ns0:cell><ns0:cell cols='6'>0.04477401 0.001378191 0.04644653 0.000805809 0.2246521 0.075094381</ns0:cell></ns0:row><ns0:row><ns0:cell>Leukocytes</ns0:cell><ns0:cell cols='6'>0.33311854 0.047967439 0.26427531 0.052397912 0.3838635 0.039014095</ns0:cell></ns0:row><ns0:row><ns0:cell>Lymphocytes</ns0:cell><ns0:cell cols='6'>0.36963620 0.062785398 0.25569870 0.061582964 0.3767396 0.403552831</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>Lymphocytes# 0.12035703 0.005370303 0.07941756 0.004850382 0.2118953 0.028591171</ns0:cell></ns0:row><ns0:row><ns0:cell>MCH</ns0:cell><ns0:cell cols='6'>0.04013623 0.001484414 0.12921332 0.003343355 0.1332008 0.046784507</ns0:cell></ns0:row><ns0:row><ns0:cell>MCHC</ns0:cell><ns0:cell cols='4'>0.06169042 0.002329520 0.08791562 0.000733363</ns0:cell><ns0:cell cols='2'>0.241385 0.026525034</ns0:cell></ns0:row><ns0:row><ns0:cell>MCV</ns0:cell><ns0:cell cols='6'>0.02552077 0.001154490 0.11282421 0.003395792 0.1262425 0.005559854</ns0:cell></ns0:row><ns0:row><ns0:cell>MPV</ns0:cell><ns0:cell cols='6'>0.08549484 0.004425247 0.10364774 0.001952862 0.3941352 0.061132905</ns0:cell></ns0:row><ns0:row><ns0:cell>Monocytes</ns0:cell><ns0:cell cols='6'>0.13738870 0.009761443 0.08424503 0.000729551 0.5071239 0.298304067</ns0:cell></ns0:row><ns0:row><ns0:cell>Monocytes#</ns0:cell><ns0:cell cols='6'>0.21209752 0.039584296 0.25387054 0.065947897 0.2866137 0.047593113</ns0:cell></ns0:row><ns0:row><ns0:cell>Neutrophils</ns0:cell><ns0:cell cols='6'>0.20823515 0.017075831 0.20752399 0.028776921 0.2147117 0.017162327</ns0:cell></ns0:row><ns0:row><ns0:cell>Neutrophils#</ns0:cell><ns0:cell cols='6'>0.10296563 0.004152348 0.11590208 0.006048081 0.2412194 0.035741201</ns0:cell></ns0:row><ns0:row><ns0:cell>Platelets</ns0:cell><ns0:cell cols='6'>0.19882651 0.017029215 0.25435615 0.025766064 0.1380053 0.006938773</ns0:cell></ns0:row><ns0:row><ns0:cell>RDW</ns0:cell><ns0:cell cols='6'>0.05403243 0.000959759 0.05025994 0.001709993 0.2370775 
0.053964873</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>RedbloodCells 0.03797104 0.001037148 0.07508998 0.003083063 0.1618622 0.040136647</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>attributed to outliers. From</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Metrics used to compare the algorithms. True positives = TP; True negatives = TN; False positives = FP; False negatives = FN.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Metric</ns0:cell><ns0:cell>Formula</ns0:cell><ns0:cell>Range</ns0:cell><ns0:cell>Target value</ns0:cell></ns0:row><ns0:row><ns0:cell>Sensitivity</ns0:cell><ns0:cell>TP/(TP+FN)</ns0:cell><ns0:cell>[0, 1]</ns0:cell><ns0:cell>∼ 1</ns0:cell></ns0:row><ns0:row><ns0:cell>Specificity</ns0:cell><ns0:cell>TN/(FP+TN)</ns0:cell><ns0:cell>[0, 1]</ns0:cell><ns0:cell>∼ 1</ns0:cell></ns0:row><ns0:row><ns0:cell>LR+</ns0:cell><ns0:cell>sensitivity / (1-specificity)</ns0:cell><ns0:cell>[0, +∞)</ns0:cell><ns0:cell>&gt; 10</ns0:cell></ns0:row><ns0:row><ns0:cell>LR-</ns0:cell><ns0:cell>(1-sensitivity) / specificity</ns0:cell><ns0:cell>[0, +∞)</ns0:cell><ns0:cell>&lt; 0.1</ns0:cell></ns0:row><ns0:row><ns0:cell>DOR</ns0:cell><ns0:cell>(TP/FN)/(FP/TN)</ns0:cell><ns0:cell>[0, +∞)</ns0:cell><ns0:cell>&gt; 1</ns0:cell></ns0:row><ns0:row><ns0:cell>F1-score</ns0:cell><ns0:cell>TP / (TP + 1/2 (FP + FN))</ns0:cell><ns0:cell>[0, 1]</ns0:cell><ns0:cell>∼ 1</ns0:cell></ns0:row></ns0:table><ns0:note>The likelihood ratios (LR) are the ratios of the expected test results in samples from patients (or subjects) with the disease to the samples without the disease. LR+ measures how much more likely it is to get a positive test result in samples with the disease than in samples without the disease, and thus, it is a good indicator for ruling-in diagnosis. Good diagnostic tests usually have an LR+ larger than 10 (93). Similarly, LR- measures how much less likely it is to get a negative test result in samples with the disease when compared to samples without the disease, being used as an indicator for ruling-out the diagnosis. A good diagnostic test should have an LR- smaller than 0.1 <ns0:ref type='bibr' target='#b107'>(93)</ns0:ref>. Another global metric for the comparison of diagnostic tests is the diagnostic odds ratio (DOR). It represents the ratio between LR+ and LR- (97), or the ratio of the probability of a positive test result if the sample has the disease to the likelihood of a positive result if the sample does not have the disease.</ns0:note></ns0:figure>
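The metrics of Table 4 can be computed directly from a confusion matrix. The sketch below uses scikit-learn; the label vectors are illustrative placeholders, not real patient data.

```python
# Minimal sketch computing the Table 4 metrics from a confusion matrix.
from sklearn.metrics import confusion_matrix, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # illustrative ground truth
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]   # illustrative predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (fp + tn)
lr_pos = sensitivity / (1 - specificity)   # target > 10
lr_neg = (1 - sensitivity) / specificity   # target < 0.1
dor = lr_pos / lr_neg                      # target > 1
print(sensitivity, specificity, lr_pos, lr_neg, dor,
      f1_score(y_true, y_pred))
```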
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Studies that use ML algorithms on COVID-19 hemogram data (in alphabetical order by the surname of the first author).</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Source</ns0:cell><ns0:cell>Data</ns0:cell><ns0:cell>Algorithms</ns0:cell></ns0:row><ns0:row><ns0:cell>AlJame et al. (2)</ns0:cell><ns0:cell>CBC, Albert Einstein Hospital, Brazil</ns0:cell><ns0:cell>XGBoost</ns0:cell></ns0:row><ns0:row><ns0:cell>Alves et al. (3)</ns0:cell><ns0:cell>CBC, Albert Einstein Hospital, Brazil</ns0:cell><ns0:cell>Random Forest, Decision Tree, Criteria Graphs</ns0:cell></ns0:row><ns0:row><ns0:cell>Assaf et al. (7)</ns0:cell><ns0:cell>Clinical and CBC profile, Sheba Medical Center, Israel</ns0:cell><ns0:cell>MLP, Random Forest, Decision Tree</ns0:cell></ns0:row><ns0:row><ns0:cell>Avila et al. (8)</ns0:cell><ns0:cell>CBC, Albert Einstein Hospital, Brazil</ns0:cell><ns0:cell>Naïve-Bayes</ns0:cell></ns0:row><ns0:row><ns0:cell>Banerjee et al. (9)</ns0:cell><ns0:cell>CBC, Albert Einstein Hospital, Brazil</ns0:cell><ns0:cell>MLP, Random Forest, Logistic Regression</ns0:cell></ns0:row><ns0:row><ns0:cell>Bao et al. (10)</ns0:cell><ns0:cell cols='2'>CBC, Wuhan Union Hosp; Kunshan People's Hosp, China Random Forest, SVM</ns0:cell></ns0:row><ns0:row><ns0:cell>Bhandari et al. (15)</ns0:cell><ns0:cell>Clinical and CBC profile of (non) survivors, India</ns0:cell><ns0:cell>Logistic Regression</ns0:cell></ns0:row><ns0:row><ns0:cell>Brinati et al. (21)</ns0:cell><ns0:cell>CBC, San Raffaele Hospital, Italy</ns0:cell><ns0:cell>Random Forest, Naïve-Bayes, Logistic Regression, SVM, kNN</ns0:cell></ns0:row><ns0:row><ns0:cell>Cabitza et al. (23)</ns0:cell><ns0:cell>CBC, San Raffaele Hospital, Italy</ns0:cell><ns0:cell>Random Forest, Naïve-Bayes, Logistic Regression, SVM, kNN</ns0:cell></ns0:row><ns0:row><ns0:cell>Delafiori et al. (36)</ns0:cell><ns0:cell>Mass spectrometry, COVID-19, plasma samples, Brazil</ns0:cell><ns0:cell>Tree Boosting, Random Forest</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>de Freitas Barbosa et al. (35) CBC, Albert Einstein Hospital, Brazil</ns0:cell><ns0:cell>MLP, SVM, Random Forest, Naïve-Bayes</ns0:cell></ns0:row><ns0:row><ns0:cell>Joshi et al. (67)</ns0:cell><ns0:cell>CBC of patients from USA and South Korea</ns0:cell><ns0:cell>Logistic Regression</ns0:cell></ns0:row><ns0:row><ns0:cell>Silveira (92)</ns0:cell><ns0:cell>CBC, Albert Einstein Hospital, Brazil</ns0:cell><ns0:cell>XGBoost</ns0:cell></ns0:row><ns0:row><ns0:cell>Shaban et al. (90)</ns0:cell><ns0:cell>CBC, San Raffaele Hospital, Italy</ns0:cell><ns0:cell>Fuzzy inference engine, Deep Neural Network</ns0:cell></ns0:row><ns0:row><ns0:cell>Soares et al. (94)</ns0:cell><ns0:cell>CBC, Albert Einstein Hospital, Brazil</ns0:cell><ns0:cell>SVM, SMOTEBoost, kNN</ns0:cell></ns0:row><ns0:row><ns0:cell>Yan et al. (105)</ns0:cell><ns0:cell>Laboratory test results and mortality outcome, Wuhan</ns0:cell><ns0:cell>XGBoost</ns0:cell></ns0:row><ns0:row><ns0:cell>Zhou et al. (110)</ns0:cell><ns0:cell>CBC, Tongji Hospital, China</ns0:cell><ns0:cell>Logistic Regression</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 6.</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Hyperparameter ranges used in our analyses.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Classifier</ns0:cell><ns0:cell>Parameters</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Naïve Bayes</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Support Vector Machines kernel:</ns0:cell><ns0:cell>rbf; linear</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>gamma:</ns0:cell><ns0:cell>0.0001 - 0.001</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>C:</ns0:cell><ns0:cell>1 - 1000</ns0:cell></ns0:row><ns0:row><ns0:cell>Random Forest</ns0:cell><ns0:cell>n-estimators:</ns0:cell><ns0:cell>50; 100; 200</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>criterion:</ns0:cell><ns0:cell>gini; entropy</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>max depth:</ns0:cell><ns0:cell>3 - 10</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>min samples split:</ns0:cell><ns0:cell>0.1 - 0.9</ns0:cell></ns0:row><ns0:row><ns0:cell>XGBoost</ns0:cell><ns0:cell>n-estimators:</ns0:cell><ns0:cell>50; 100; 200</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>max depth:</ns0:cell><ns0:cell>3 - 10</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>learning rate:</ns0:cell><ns0:cell>0.0001 - 0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>Decision Tree</ns0:cell><ns0:cell>criterion:</ns0:cell><ns0:cell>gini; entropy</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>max depth:</ns0:cell><ns0:cell>3 - 10</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>min samples split:</ns0:cell><ns0:cell>0.1 - 1.0</ns0:cell></ns0:row><ns0:row><ns0:cell>K-Nearest Neighbors</ns0:cell><ns0:cell>n neighbors:</ns0:cell><ns0:cell>3; 5; 7; 10; 15; 50</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>weights:</ns0:cell><ns0:cell>uniform; distance</ns0:cell></ns0:row><ns0:row><ns0:cell>Logistic Regression</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Multi Layer Perceptron</ns0:cell><ns0:cell>activation:</ns0:cell><ns0:cell>logistic; tanh; relu</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>solver:</ns0:cell><ns0:cell>sgd; adam</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>alpha:</ns0:cell><ns0:cell>0.0001; 0.001; 0.01</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>learning rate init:</ns0:cell><ns0:cell>0.0001; 0.001; 0.01</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>early stopping:</ns0:cell><ns0:cell>True; False</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>batch size:</ns0:cell><ns0:cell>16; 64; 128</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>hidden layer sizes: (10, 10, 2); (5, 10, 5); (10); (10, 20, 5); (10, 10); (100); (30, 10)</ns0:cell></ns0:row></ns0:table></ns0:figure>
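Putting the pieces together, the sketch below illustrates the evaluation protocol described in Section 2: a 70/30 split, z-score normalization, sampling applied only to the training set, and randomized hyperparameter search over ranges like those in Table 6. The synthetic dataset and the specific choice of SMOTE with a random forest are illustrative assumptions, not the full 31-run experimental setup.

```python
# Minimal sketch of the evaluation protocol on synthetic data.
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.preprocessing import StandardScaler
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=42)

scaler = StandardScaler().fit(X_tr)            # z-score from training data
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)
X_tr, y_tr = SMOTE(random_state=42).fit_resample(X_tr, y_tr)  # train only

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=42),
    param_distributions={'n_estimators': [50, 100, 200],
                         'criterion': ['gini', 'entropy'],
                         'max_depth': randint(3, 11)},
    n_iter=10, cv=5, random_state=42)
search.fit(X_tr, y_tr)
print("Test accuracy:", search.score(X_te, y_te))
```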
<ns0:note place='foot' n='1'>1 https://www.fleury.com.br 2 https://www.einstein.br 3 https://www.hospitalsiriolibanes.org.br</ns0:note>
</ns0:body>
" | "Response letter regarding the manuscript 'Comparison of machine
learning techniques to handle imbalanced COVID-19 CBC datasets'
(#CS-2021:03:59682:0:1:REVIEW)
We thank the editors and reviewers for their attentive revisions and careful reading of the
manuscript, leading to an improved version of the text. Below we list all the comments raised
and our replies in blue. In the manuscript, removed sentences are found in red, new
information is colored blue, and translocated text is colored in green.
Editor comments (Helder Nakaya)
1) Thank you for your submission to PeerJ Computer Science.
It is my opinion as the Academic Editor for your article - Comparison of machine learning
techniques to handle imbalanced COVID-19 CBC datasets - that it requires a number of Minor
Revisions.
My suggested changes and reviewer comments are shown below and on your article
'Overview' screen.
Please address these changes and resubmit. Although not a hard deadline please try to
submit your revision within the next 10 days.
[# PeerJ Staff Note: Please ensure that all review comments are addressed in a
response letter and any edits or clarifications mentioned in the letter are also inserted into the
revised manuscript where appropriate. It is a common mistake to address reviewer questions in
the response letter but not in the revised manuscript. If a reviewer raised a question then your
readers will probably have the same question so you should ensure that the manuscript can
stand alone without the response letter. Directions on how to prepare a response letter can be
found at: https://peerj.com/benefits/academic-rebuttal-letters/ #]
Answer: We thank the Editor and the PeerJ Staff for their comments and contributions.
Our answers to the reviewers' questions are listed below.
Reviewer 1 (Anonymous)
2) The authors follow all the suggested guidances: clear and unambiguous, professional
English used throughout; literature references, sufficient field background/context
provided; professional article structure, figures, tables; raw data shared; self-contained
with relevant results to hypotheses; and formal results should include clear definitions of
all terms and theorems, and detailed proofs.
The authors also follow the guidance of this area: original primary research within Aims
and Scope of the journal; Research question well defined, relevant & meaningful; rigorous
investigation performed to a high technical & ethical standard; and methods described with
sufficient detail & information to replicate.
The authors again follow the guidelines: all underlying data have been provided; and the
conclusions are well stated, linked to original research question & limited to supporting results.
This is a study of a public database of covid-19 in Brazil with the objective of comparing
various techniques of data rebalancing. This is a strategy routinely carried out in most health
studies, since the imbalance between classes is frequent in this area. The article does not bring
new insights to the area, but it is technically correct and provides a good tutorial on recent
practical applications of machine learning algorithms.
Answer: We thank the reviewer for the kind comments.
3) The major problem is the quality of the database provided. Although the data comes
from renowned health institutions in Brazil, there have been several reports of
inconsistencies in this database, which is the reason why many ML groups have avoided
using them. For example, the positive class for covid-19 is not expected to be 40 times
more prevalent than the negative (as in the case of HSL), even if only symptomatic
cases are tested.
Answer: We thank the reviewer for the consideration. We highlight that our article aims
to provide a methodology to deal with imbalanced COVID-19 datasets. In this sense, even if
some researchers debate the quality of these datasets, they are still useful examples. This was
highlighted in the following parts of the text:
● Section 1.1, 6th ¶: “The column 'class ratio' in Table 1 shows the level of class
imbalance for each dataset. It was computed by dividing the number of positive
samples by the number of negative samples. The number of negative samples
from the Albert Einstein Hospital and the Fleury Group exceeds the positive
samples. This is expected from disease data since the number of infections will
be small compared to the entire population. However, in the Sírio-Libanês
Hospital data, there is over forty times the amount of positive samples compared
to negative samples. This represents another source of bias in the data
acquisition: the dataset consists of patients tested because they had already
shown COVID-19-like symptoms, skewing the data to positive samples. This is
crucial because the decision to test a patient for COVID-19 in institutions that
struggle with funds is a common judgment call.”
● Results, subsection Experiments and Results, 8th ¶: “None of the
combinations of classifiers and sampling methods achieved satisfactory results
for the Sírio-Libanês Hospital dataset (Fig. 7). The sensitivity of all options was
close to one, and the specificity was close to zero, indicating that almost all
samples are being predicted as the majority class (in this case, the positive). This
was expected due to the large imbalance of this dataset, and even the sampling
methods, although able to narrow the gap, were not enough to achieve
satisfactory results.
Due to these poor results, the other metrics are
non-satisfactory, and their results can be misleading. For instance, if one were
only to check the F1 Score, the classification results would seem satisfactory. As
listed in Table 1 and illustrated in Fig. 7, this dataset had the largest imbalance,
with over forty times more positive than negative samples. Moreover, the total
number of available samples was the smallest among the three datasets. The
results suggest that using standard ML classifiers is not useful for such drastic
cases even when sampling techniques are applied, and researchers should be
cautious when dealing with similar datasets (low sample quantity and high
imbalanced data).”
As can be seen from the text, the datasets are being critically evaluated and used as
examples for the discussion of the manuscript. We also only kept the data from samples with
the full CBC available. We did not perform data imputation.
Additionally, as can be seen on Table 5, numerous studies, from different countries,
have employed these datasets. We also update Table 5 with new studies that employed them.
Regarding the biological analysis of these datasets, we added the following sentence citing a
recent study about them:
● Section 1.1.1, last ¶: “While this work focuses on the application of machine
learning and sampling algorithms to this data, a more in-depth biological analysis
regarding the interaction between sex, age, and systemic inflammation from
these Brazilian datasets can be found in the work of Ten-Caten et al. (96).”
4) The authors address this issue by analyzing the bivariate distribution of variables
according to the two classes. I believe that in this case it would be more appropriate to
use statistical analysis to compare whether the variables have similar distributions for the
three centers, regardless of the outcome.
Answer: We thank the reviewer for raising this topic. In this sense, Fig. 2 and Fig. 3
already provide the required information, and the main text also explains the raised point, as
follows:
● Preliminaries, Dataset characterization subsection, 4th ¶: “All of them have
the same characteristic: similar distribution but with negative distribution with
higher values. It is noteworthy that the higher BD (Basophils and Eosinophils)
can be attributed to outliers. From Fig. 2 and 3 it can be noticed that such
distributions present different means (as corroborated by the D statistic)
combined with spurious values with high distance from the modal distribution
point, resulting in a larger variance.”
● Preliminaries, Datasets subsection, 5th ¶: “More information regarding the
three distinct datasets' distributions can be found in Figs.2 and 3. The most
relevant information assessed in the present study is database size, the number
of available clinical test results, gender distribution (male or female), and
COVID-19 RT-PCR test result (classified as positive or negative) ratio. The
parameters for each data subset are described for the original dataset and for the
subset of selected samples used in this study (after removal of patients
containing missing values), as seen in Table 1.”
5) Another important issue is that the authors present the results of several predictive
metrics, but not the one most used in machine learning models (AUROC). I believe this
metric should be used as the main result of the analysis.
Answer: We thank the reviewer for the consideration and we agree that the AUROC is
one of the most used metrics in machine learning. For this work, however, we chose to present
metrics more related to distinct aspects of a medical classification (LR and DOR for instance).
We believe that by showing six distinct metrics, among them Sensitivity and Specificity, there
should be no loss from the absence of the AUROC. Thus, adding even more metrics would end
up adding redundancy to the plots and making the results less clear. Additional metrics for each
classifier/sampling and dataset can be found in the results available in the Github
(https://github.com/sbcblab/sampling-covid):
accuracy_score; f1_score; f1_score-macro;
f1_score-micro; precision_score; roc_auc_score; recall_score; balanced_accuracy_score;
specificity; sensitivity; DOR; LR+; LR-
Reviewer 2 (Patricia Gonzalez)
6) I strongly suggest a review in English for the article. Some adjustments are necessary to
avoid long and redundant sentences. These adjustments will make the article more
straightforward and easier to read.
Answer: We agree with the reviewer that the article can be more objective. Therefore,
we removed several redundant sentences, or information that, overall, had no impact on the
discussion, introduction, or conclusion.
7) The database used has information on a large number of patient samples from a highly
mixed population. The authors can discuss more deeply the influence of this kind of
variability in obtaining predictive models. Likewise, they can describe the strengths and
weaknesses of this variability in obtaining predictive models with good performance.
Answer: We thank the reviewer for the consideration, and we agree that this is a
relevant point. Currently, some studies evaluate the impact of specific genetic variants (SNPs)
on the incidence and severity of COVID-19. Nevertheless, we emphasize that this paper aims to
evaluate different sampling techniques on the selected datasets. Therefore, analyzing the
genetic variability of the population is beyond the scope of our work.
8) The manuscript is not in exactly the same format suggested by the Journal. There is no
discussion section. The discussion took place in the results section.
Answer: We thank the reviewer for the consideration. However, there is flexibility to
organize the paper in this fashion (we employed the PeerJ Comput. Sci. template). Additionally,
since the other reviewers, the handling editor, and the technical team made no objections about
the format of the manuscript, we chose to maintain it.
9) Thinking about different reference values between men and women, as well, the age
impact on the CBC values, did you evaluate the impact of adding sex information as a
feature?
Answer: We thank the reviewer for the consideration. Although this information is
important, it is beyond the scope of our work. However, we agree with the reviewer that it is
crucial, and we now acknowledge this in our text by adding the following sentence to redirect
the readers:
● Section 1.1.1, last ¶: “While this work focuses on the application of machine
learning and sampling algorithms to this data, a more in-depth biological analysis
regarding the interaction between sex, age, and systemic inflammation from
these Brazilian datasets can be found in the work of Ten-Caten et al. (96).”
10) In addition to eliminating samples that did not have all CBC records, and using the
complete blood count of the PCR date, was any other manipulation of the data done by
the authors? It was not clear in the text whether the authors did extra preprocessing of
the datasets.
Answer: We thank the reviewer for the question. In this sense, we indeed made one
preprocessing step. The following was added to the text to clarify it for the readers:
● Results, subsection Experiments and Results, 2nd ¶: “The features were
normalized using the z-score”.
11) In line 137 the authors described the features and scale. Were the scales the same
among the 3 datasets? Did the authors perform any scale process in these datasets?
Answer: We thank the reviewer for the comment. As mentioned in the previous answer,
we used z-score to normalize the features.
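For clarity, a minimal sketch of this preprocessing step is given below. It assumes scikit-learn's StandardScaler; the arrays are illustrative placeholders, not the actual CBC data:

import numpy as np
from sklearn.preprocessing import StandardScaler

# Illustrative feature matrices; rows are patients, columns are CBC parameters.
X_train = np.array([[4.5, 210.0], [6.1, 180.0], [5.2, 250.0]])
X_test = np.array([[5.0, 200.0]])

scaler = StandardScaler()                     # z-score: (x - mean) / std per feature
X_train_std = scaler.fit_transform(X_train)   # statistics fitted on the training data only
X_test_std = scaler.transform(X_test)         # reused on the test data to avoid leakage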
12) The time period during which these data sets were collected isn't mentioned, were the
records collected over which period of time?
Answer: We thank the reviewer for addressing this issue. The data was collected from
Feb 26th to June 30th, 2020. We added this information to the manuscript, as follows:
● Preliminaries, Subsection Datasets, 1st ¶: “The data from COVID-19 patients
was collected from February 26th, 2020 to June 30th, 2020, and the control data
(individuals without COVID-19) was collected from November 1st, 2019 to June
30th, 2020”.
13) How the authors defined the positive diagnosis for SARS Cov 2 is not described in the
manuscript. Was the COVID-19 class defined using RT-PCR result?
Answer: As described in Section 1.1, the datasets report the COVID-19 RT-PCR test
result (classified as positive or negative). To make this clearer the following sentence was
added:
● Results, subsection Experiments and Results, 2nd ¶: “The COVID-19 classes
were defined using RT-PCR results from the datasets.”
14) Is the y column in the datasets the SARS Cov 2 classification? The authors can describe
the class definition with more details.
Answer: We agree with the reviewer that the information could be presented in more
detail. Because of this we wrote a proper README file for the GitHub page
(https://github.com/sbcblab/sampling-covid). Under the new “Datasets” section on this page we
wrote:
● “The classes are listed in the 'y' column in the .csv files. Values of 0 indicate
negative RT-PCR results and values of 1 indicate positive RT-PCR results. For
more information regarding the datasets please refer to the main publication.”
15) The authors did not present any statistical test showing whether there is or not a
significant difference between the assessments obtained with each method and/or
algorithm. I recommend the addition of the statistical test results. This information will
help to better guide the discussion. Furthermore, in this way the evaluation does not
appear to be only qualitative or even arbitrary.
Answer: We agree with the reviewer that there is a lack of statistical tests in the original
version. In this way, we opted to provide all the information regarding the post hoc statistical tests
in the supplementary material. As it is not possible to guarantee that all runs belong to the same
data distribution, the Dunn's Multiple Comparison Test with Bonferroni correction was used. All
tables, and code, are provided through the supplementary material. In the paper, we add a
footnote (Section Experiments and Results) to advise the reader that the comparison test is in
the supplementary material. The results were not added directly to the text because the
combination of algorithms, datasets, and metrics demands several large tables that would not fit
the manuscript.
● Section 2, footnote: “The statistical comparison between the algorithms (Dunn’s
Multiple Comparison Test with Bonferroni correction) is available in the GitHub
repository: https://github.com/sbcblab/sampling-covid”
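For readers who want to reproduce the comparison, a minimal sketch follows. It assumes the scikit-posthocs package is installed; the score lists are placeholders, not the repository's actual run results:

import scikit_posthocs as sp

# Placeholder scores from repeated runs of three algorithms (illustrative only).
scores = [
    [0.61, 0.59, 0.63, 0.60],   # e.g., SVM
    [0.55, 0.54, 0.58, 0.56],   # e.g., Naive Bayes
    [0.62, 0.64, 0.61, 0.63],   # e.g., Random Forest
]
# Dunn's multiple comparison test with Bonferroni correction;
# returns a matrix of pairwise adjusted p-values.
p_values = sp.posthoc_dunn(scores, p_adjust='bonferroni')
print(p_values)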
16) Starting in line 214 the authors discuss the possible influence of seasonal fluctuations
and, sometimes, extreme circadian regulations in the datasets variability. However, the
datasets were obtained from a population from the same country. Therefore, it would be
interesting to make clear the characteristics of this population. Subsequently, describe
the possible influence of seasonal fluctuations and extreme circadian regulations, mainly
on datasets obtained from different populations.
Answer: We thank the reviewer for the consideration. However, this statement was used
to explain a biological reality: the immunological system is prone to fluctuation by numerous
factors, including circadian regulators, as the reference points out. There are numerous other
well-known variables that could alter immunological responses, such as diet, lifestyle, and
exposure to pollutants. Therefore, we did not aim to actually explain
how circadian regulators could affect the used datasets, it was used to provide an overview that
all immune system-related datasets could have an inherited bias. Cancer datasets, for example,
also possess the same problem, as we discussed in previous works (Feltes, et al., Frontiers in
Genetics, 2020; Feltes et al., Journal of Computational Biology, 2019; Feltes et al., Journal of
Computational Biology 2021 - in press).
Additionally, discussing the possible impacts of circadian regulators in the Brazilian
population, which displays exceptional regional differences in weather, sunlight exposure,
day-night cycles, and rainy seasons between the southern and northern parts of the country, is
more appropriate for biologically focused works, since it is a vastly complicated subject,
completely outside the computational focus of our work.
17) Line 548:“Both issues could be tackled in the future using feature selection. Line 549: (3;
48) or algorithms for explainable artificial intelligence (100; 76; 5).” The authors discuss
the need for feature selection mainly for the interpretability of models obtained with
Multi-Layer Perceptron (MLP). However, at no time do they discuss the possibility of
applying feature selection methods to improve the predictive performance of the model.
Or even to simplify the model, and consequently reduce the use of computational
resources.
Answer: We agree with the reviewer that feature selection methods are useful in many
aspects, not only interpretability. To reflect this we added the following sentence at the end of
the last paragraph in Section 2:
● Results, subsection Experiments and Results, last ¶: “The method of
relevance aggregation, for instance, can be used to extract which features from
tabular data were more relevant for the decision making of neural networks and
was shown to work on biological data. Feature selection algorithms can also be
used to spare computational resources by training smaller models and to improve
the performance of models by removing useless features.”
18) The study is interesting, mainly, due to the need for more accessible alternatives for
prioritizing patients to perform the RT-PCR test. The CBC is an interesting biological
information base. In addition, the database used has information on a large number of
patient samples from a highly mixed population. What makes the datasets valuable.
However, the authors found many differences between the classification results obtained
with the 3 datasets. In this way, they could have explored the possibility of obtaining
predictive models from the integration of the 3 datasets. I understand the limitation and
the difficulty of this approach, especially when dealing with such diverse distributions, but
it could be a good complement to the work. Show the impact of generalization and the
increase in the number of samples on the performance of predictive models. In order to
assist other researchers in decision making. They can decide to use only a single
dataset obtained from the same institution, or to use more datasets to define their
predictive covid-19 models.
Answer: We thank the reviewer for the consideration. We have kept the datasets
individualized because we investigate how sampling and classification techniques perform in
different scenarios, i.e., with few/many samples and balanced/unbalanced datasets. We want to
evaluate how to proceed in constructing a classifier when data of this type are available.
Aggregating the data into a single dataset is beyond the scope of this work. In addition, there is
not enough information about the procedures adopted by each laboratory (for example,
equipment used). Differences, even slight ones, could introduce a bias in the experiments.
19) In the line 65 and 66 the authors talked about some works, but you mentioned only one.
“and some works suggest routine blood tests as a potential diagnostic tool for 66 COVID-19
(40).“ The authors can fix the phrase or add more references to the sentence.
Answer: We thank the reviewer for spotting this mistake. Because the reference list is
already lengthy, we opted to rewrite the sentence:
● Introduction, 3rd ¶: “and Ferrari et al. (45) suggest routine blood tests as a potential
diagnostic tool for COVID-19.”.
20) Line 73: “Recently, artificial intelligence techniques, especially Machine Learning (ML),
have been employed to analyze CBC data and assist in screening of patients with
suspected COVID-19 infection (99; 101; 47; 1;7; 19; 60; 96)”. The authors cite in the text
several studies using CBC data to COVID-19 patients screening, but they did not
describe the findings of these works. Besides, they also do not discuss how the findings
of the other authors corroborate their findings in this work.
Answer: We understand the reviewer's reasoning. However, the goal of the cited studies
(listed in Table 5) in the context of the manuscript was met: to show that several groups are
working on the application of machine learning methods to COVID-19 CBC data, and which
algorithms and datasets are being used. The main message of these studies is already present
and discussed in our text; to review the specificities of each one would be beyond the scope of
this research, and would massively increase the article's already considerable length. Our main
focus was the use and comparison of sampling methods, so the revision of several works about
classification and not sampling itself would deviate from the main research.
21) Figure 4: I suggest that the authors arrange the distribution of the t-SNE visualization.
Place each sampling method in a line and each dataset in a column. This will make it
easier to compare sampling methods between datasets. So, instead of having 5 rows
and 4 columns, the Figure would have 6 rows and 3 columns.
Answer: We agree with the reviewer that we can improve the visualization of Fig.4. In
this sense, we aligned the figures as requested by the reviewer and placed the figure in the
“landscape” orientation.
22) Supplementary figures comparing the same sampling method and the same algorithm
for the different datasets could be interesting. It would make visualization easier when
comparing datasets and not just methods.
Answer: We agree with the reviewer that this visualization would be interesting for
comparing datasets. However, our goal is to evaluate and compare different sampling
techniques considering the particularities of each dataset.
23) In the same sense, a stacked Bar Chart showing the number of samples and the
proportion of positive and negative COVID-19 records would be interesting. This figure
would facilitate the reader's understanding of the number of samples and their
distribution among the datasets.
Answer: We thank the reviewer for the suggestion. However, Table 1 already provides
this information. Therefore, it would be redundant to create another figure to depict the data.
24) In the datasets available in
https://github.com/sbcblab/sampling-covid/blob/main/AE/DATA, the features sexo ("sex") and
idade ("age") are written in Portuguese. Please translate to English for keeping all features in
the same pattern.
Answer: We thank the reviewer for noticing this issue. It was corrected
accordingly. We chose to remove the age column as it was not used in our experiments
and analysis, and because it was not available for several samples.
Line 103: typo: “TEchnique”. This is not a typo: “TEchnique” refers to the “TE” in SMOTE.
Line 116: “more than ≈ 13 million cases” The ≈ is not necessary and the numbers are
more than 14 millions in this moment. V
Line 138: typo: “pla-telets” V
Line 141: typo: “red blo-od” V
Line 142: typo: “% Eosino-phils” V
Line: 143 “Patients with incomplete (missing data) or no data available for the above
parameters were not included in the present analysis. For patients with more than a single test
result available, a unique hemogram test was used, with the selection based on the blood test
date - same-day results to the PCR-test collection date was adopted as a reference, or the day
closest to this date.” These sentences must be rewritten to facilitate the reader's understanding.
V
Line 198: typo: “FLEU-RY” V
Line 261: “the identification of different diseases prognoses, such as cancer (39; 48) and
COVID-19 (7)”. It would be interesting to cite more examples. We agree with the reviewer that
this segment could be improved with more examples. Now this sentence reads:
● Section 1.3, 1st ¶: “the identification of different diseases prognoses, such as
cancer (42;53), H1N1 Flu (27), Dengue (110), and COVID-19 (Table 5).”
Line 300: ”Similar results were highlighted in other reviews (3)”. It would be better to cite
more reviews or write in the singular. V
Line 330: “Since defining features for splitting the decision tree is directly related to the
training model performance, dealing with imbalanced datasets is essential to improve the model
performance, avoiding bias towards the majority class.” The meaning of the phrase at first
reading seems controversial. It makes us believe that they want to use the unbalanced data,
and not deal with the date in order to correct this imbalance. A slight adjustment in English
would avoid this mistake on the part of the reader. V
Line 401: typo: “Synthetic Minority Over-sampling Technique (SMO-TE)” V
Line 559: It would be better to remove the citation in the conclusion topic. We chose to
maintain this reference since it provides a reference to the statement of the paragraph.
Line 561: It would be better to remove the citation in the conclusion topic We chose to
maintain this reference since it provides a reference to the statement of the paragraph.
F1 Score sometimes is written with a different font and sometimes not. V
Answer: We thank the reviewer for noticing the typos and mistakes. They were all
corrected, accordingly.
Reviewer 3 (Anonymous)
25) The manuscript is clear, very well written and organized. It explores the pros and cons of
different machine learning techniques to handle imbalanced biological datasets (in this
case, focused on blood sampling related to COVID19).
The literature references are adequate, and the background information is detailed
enough for non-experts to understand the work presented. All figures are clear and informative,
summarizing the results and the overall strategy employed. All definitions necessary are
presented in a clear fashion.
The research question is clearly stated, and well grounded by the literature cited.
It fills a gap in the knowledge regarding machine learning applied to clinical data and
diagnostics. All methods are clearly stated.
The results are sound and the conclusions are clearly stated. All data required to support
the conclusions is presented in the article.
This is a very well written article, dealing with a critical step in the use of machine
learning for diagnostic purposes.
Answer: We thank the reviewer for the kind comments.
26) I see only minor points to be corrected:
l. 3 (title): define CBC V
l. 29: change 'to CBC' to 'for CBC' V
l. 31: change 'evaluated' to 'evaluate' V
l. 61: change 'most optimal' to 'optimal' V
l. 68: remove nominal reference V
l. 91: change 'applications of different' to 'applications on different' V
l. 142: remove dash from 'Eosino-phils' V
l. 161: change 'founds' to 'funds' V
l. 190: change 'Complementary' to 'Complementarily' V
l. 198: remove dash from 'FLEU-RY' V
l. 207: change 'too' to 'as well' V
l. 212: remove nominal reference V
l. 214: remove nominal reference V
l. 234: change 'allows a more' to 'allows for a more' V
l. 397: remove dash from 'ab-out' V
l. 428: remove 'issue' V
l. 483: change 'the best and accurate' to 'the best and most accurate' V
l. 496, 505, 542: change 'Sars-Cov-2' to 'SARS-CoV-2' V
l. 574: change 'point-out' to 'point out' V
l. 577: change 'carefulness to be taken' to 'carefulness must be taken' V
Answer: We thank the reviewer for noticing the typos and mistakes. They were all
corrected, accordingly.
" | Here is a paper. Please give your review comments after reading it. |
205 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The Coronavirus pandemic caused by the novel SARS-CoV-2 has significantly impacted human health and the economy, especially in countries struggling with financial resources for medical testing and treatment, such as Brazil's case, the third most affected country by the pandemic. In this scenario, machine learning techniques have been heavily employed to analyze different types of medical data, and aid decision making, offering a low-cost alternative. Due to the urgency to fight the pandemic, a massive amount of works are applying machine learning approaches to clinical data, including complete blood count (CBC) tests, which are among the most widely available medical tests. In this work, we review the most employed machine learning classifiers for CBC data, together with popular sampling methods to deal with the class imbalance. Additionally, we describe and critically analyze three publicly available Brazilian COVID-19 CBC datasets and evaluate the performance of eight classifiers and five sampling techniques on the selected datasets.</ns0:p><ns0:p>Our work provides a panorama of which classifier and sampling methods provide the best results for different relevant metrics and discuss their impact on future analyses. The metrics and algorithms are introduced in a way to aid newcomers to the field. Finally, the panorama discussed here can significantly benefit the comparison of the results of new ML algorithms.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The Coronavirus disease (COVID-19) caused by the novel SARS-CoV-2 has spread from China and quickly transmitted to other countries. Since the beginning of 2020, the COVID-19 pandemic has significantly impacted human health and severely affected the global economy and financial markets (82; 83), especially in countries that cannot test their population and develop strategies to manage the crisis.</ns0:p><ns0:p>In a scenario of large numbers of asymptomatic patients and shortages of tests, targeted testing is essential within the population (86). The objective is to identify people whose immunity can be demonstrated and allow their safe return to their routine.</ns0:p><ns0:p>The diagnosis of COVID-19 is based on the clinical and epidemiological history of the patient (46) and the findings of complementary tests, such as chest tomography (CT-scan) (14; 38) or nucleic acid testing (RT-PCR). A survey exposing the main advantages, drawbacks, and challenges of the field can significantly aid future works. A workflow summarizing the steps taken in this work can be found in Fig. 1. The survey will first explain the employed methodology, the tested datasets' characteristics, and the chosen evaluation metrics. Afterward, a brief review of the major ML predictors used on CBC COVID-19 datasets is conducted, followed by a review of techniques to handle imbalanced data. Section 1.3 also shows the diversity of approaches already applied to COVID-19 CBC data. This exposition is succeeded by describing the main findings, listing the lessons learned from the survey, and conclusions.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1'>PRELIMINARIES</ns0:head></ns0:div><ns0:div><ns0:head n='1.1'>Datasets</ns0:head><ns0:p>At the time of this work, Brazil was the third country most affected by the COVID-19 pandemic, reaching more than 18 million confirmed cases. Thus, discussing data gathered from Brazil can become invaluable to understand SARS-CoV-2 data. Complete datasets used in the present study were obtained from an open repository of COVID-19-related cases in Brazil. The database is part of the COVID-19 Data Sharing/BR initiative (79), and it is comprised of information about approximately 177,000 clinical cases. Patient data were collected from three distinct private health services providers in the São Paulo State, namely the Fleury Group, the Albert Einstein Hospital, and the Sírio-Libanês Hospital, and a database for patients from each institution was built. The data from COVID-19 patients was collected from February 26th, 2020 to June 30th, 2020, and the control data (individuals without COVID-19) was collected from November 1st, 2019 to June 30th, 2020.</ns0:p><ns0:p>Patient data is provided in an anonymized form. Three distinct types of patient information are provided in this repository: (i) patient demographic data (including sex, year of birth, and residence zip code); (ii) clinical and/or laboratory exam results (including different combinations of the following data: hemogram and blood cell count results, blood tests for a biochemical profile, pulmonary function tests and blood gas analysis, diverse urinalysis parameters, detection of a panel of different infectious diseases, and pulmonary imaging results (x-ray or CT scans), among others). COVID-19 detection by RT-PCR tests is described for all patients, and serology diagnosis (in the form of specific IgG and IgM antibody detection) is provided for some samples; and (iii) when available, information on each patient's clinical progression and transfers, hospitalization history, as well as the disease outcome (primary endpoints, such as death or recuperation). Available information is not complete for all patients, with a distinct combination of results provided individually.</ns0:p><ns0:p>Overall baseline characteristics can be found on the complete database, available at the FAPESP COVID-19 Data Sharing/BR repository. The most common clinical test result available for all patients is the hemogram data. As such, it was selected for the testing of the current sample set. Twenty distinct hemogram test parameters were obtained from the database, including hematocrit (%), hemoglobin (g/dl), platelets (×10³ µl), mean platelet volume (fl), red blood cells (×10⁶ µl), lymphocytes (×10³ µl), leukocytes (×10³ µl), basophils (×10³ µl), eosinophils (×10³ µl), monocytes (×10³ µl), neutrophils (×10³ µl), mean corpuscular volume (MCV) (fl), mean corpuscular hemoglobin (MCH) (pg), mean corpuscular hemoglobin concentration (MCHC) (g/dl), red blood cell distribution width (RDW) (%), % Basophils, % Eosinophils, % Lymphocytes, % Monocytes, and % Neutrophils (Fig. 2 and Fig. 3).</ns0:p><ns0:p>Patients with incomplete (missing data) or no data available for the above parameters were not included in the present analysis. For patients with more than a single test result available, a unique hemogram test was used, with the selection based on the blood test date.
In this sense, same-day results to the PCR-test collection date were adopted as a reference, or the day closest to the test.</ns0:p><ns0:p>More information regarding the three distinct datasets' distributions can be found in Figs. 2 and 3. The most relevant information assessed in the present study is database size, the number of available clinical test results, gender distribution (male or female), and COVID-19 RT-PCR test result (classified as positive or negative) ratio. The parameters for each data subset are described for the original dataset and for the subset of selected samples used in this study (after removal of patients containing missing values), as seen in Table 1.</ns0:p><ns0:p>The column 'class ratio' in Table 1 shows the level of class imbalance for each dataset. It was computed by dividing the number of positive samples by the number of negative samples. The number of negative samples from the Albert Einstein Hospital and the Fleury Group exceeds the positive samples. This is expected from disease data since the number of infections will be small compared to the entire population. However, in the Sírio-Libanês Hospital data, there is over forty times the amount of positive samples compared to negative samples. This represents another source of bias in the data acquisition: the dataset consists of patients tested because they had already shown COVID-19-like symptoms, skewing the data to positive samples. This is crucial because the decision to test a patient for COVID-19 in institutions that struggle with funds is a common judgment call. Datasets with an apparently biased disease prevalence, as is the case with the Sírio-Libanês Hospital data (in reality, the positive class for COVID-19 is not expected to be 40 times more prevalent), should be discarded from biological analysis. These datasets are being critically evaluated and used only as examples for this research. The last columns of Table 2 also show that, in general, the variables do not belong to the same distribution for the three centers, regardless of the classes.</ns0:p><ns0:p>Table 1. Data Summary of the initial full dataset and selected subsets of samples. Albert Einstein Hospital (HAE); Fleury Group (FLE) and Sírio-Libanês Hospital (HSL). Class ratio is represented as the ratio of the total of selected positive/negative samples.</ns0:p><ns0:p>For data characterization, we use two metrics: the Bhattacharyya Distance (BD) and the Kolmogorov-Smirnov statistic (KS). We will now present both metrics, followed by a discussion of their results in the studied datasets. The goal is to determine the separability between the negative and positive classes among the three datasets. BD calculates the separability between two Gaussian distributions (5). However, it depends on the inverse covariance matrix for multivariate cases, which can be nonviable for datasets with high dimensionality, such as the ones employed in this paper. Therefore, we will use its univariate form, as in Equation 1 (33):</ns0:p><ns0:formula xml:id='formula_0'>B_j(b, s) = \frac{1}{4} \ln\left[ \frac{1}{4} \left( \frac{\sigma^2_{b_j}}{\sigma^2_{s_j}} + \frac{\sigma^2_{s_j}}{\sigma^2_{b_j}} + 2 \right) \right] + \frac{1}{4}\,\frac{(\upsilon_{b_j} - \upsilon_{s_j})^2}{\sigma^2_{b_j} + \sigma^2_{s_j}} \quad (1)</ns0:formula><ns0:p>where \sigma^2 and \upsilon are the variance and mean of the statistical distributions of the j-th variable for groups b and s, respectively. The first part of Eq. 1 distinguishes classes by the differences between variances, while the second part distinguishes classes by the differences between their weighted means. For classification purposes, we would expect low variance within classes and a high difference between means. Therefore, we will complement the BD value by analyzing the probability density functions to verify which part of Eq. 1 influences the highest BD values.</ns0:p><ns0:p>The other employed characterization metric is the D statistic from the two-sample Kolmogorov-Smirnov test (KS test). The KS test is a non-parametric approach that quantifies the maximum difference between samples' univariate empirical cumulative distribution values (i.e., the maximum separability between two distributions) (69) (Eq. 2):</ns0:p><ns0:formula xml:id='formula_2'>D_w = \max_x \left( |F_1(x) - F_2(x)| \right) \quad (2)</ns0:formula><ns0:p>where D is the D statistic, such that w denotes which hemogram result is being analyzed, F_1 and F_2 are the cumulative empirical distributions of classes 1 and 2, and x is the obtained hemogram result. D_w values belong to the [0, 1] interval, where values closer to one suggest higher separability between classes (103).</ns0:p><ns0:p>Table 2 shows the D statistics and BD for all variables for the three datasets. Firstly, we will discuss the BD and D statistic results for each dataset, followed by comparing such results among all datasets. Regarding the dataset separability in the HSL dataset, the D statistic yields Basophils, Basophils#, Monocytes, and Eosinophils as the variables with the highest distance between the cumulative probability functions of positive and negative diagnosed patients. Complementing this analysis by the BD and the Probability Density Function (PDF) represented in Fig. 2, the distribution of Basophils, Basophils#, and Eosinophils from the negative patients has a higher mean. Besides the higher D statistics, the BD is lower than for other variables, indicating that the distributions are similar; however, one group (in this case, the negative group) has systematically higher values. On the other hand, the other variable with high
D statistic (Monocytes) has a flattened distribution for the negative patients, increasing its variance and consequently its BD, once the positive cases' variance is small. The small sample size may jeopardize such distribution for negative patients. Complementarily, it is notable that this variable does not have a linear separation between classes. The variables yielding the highest D statistic on the FLEURY dataset are Basophils and Eosinophils.</ns0:p><ns0:p>Regarding the Basophils distributions, the curve from negative cases is flatter than the positive case curve. Even so, it is notable that the negative distribution has higher values, and both variances are small, resulting in a high BD. For the comparison of the Eosinophils in positive and negative cases, the existence of spurious values increases the variance for both distributions. However, the D statistic indicates that this variable provides good separability between classes.</ns0:p><ns0:p>Moreover, besides having variables with more potential separability (D statistics), the imbalance between classes is much more significant on the HSL dataset, which may bias such analysis. Both HAE and Fleury datasets have similar characteristics regarding classes' sample size proportions. However, the HAE has more variables with high D statistics, and its values are higher as well.</ns0:p><ns0:p>On a final note, CBC data is highly prone to fluctuations. Some variables, such as age and sex, are among the most discussed sources of immunological difference, but others are sometimes unaccounted for. For example, a systematic review in 2015 by Paynter et al. (84) demonstrated that the immune system is significantly modulated by distinct seasonal changes in different countries, which, in turn, impact respiratory and infectious diseases. Similarly, circadian rhythm can also impact the circulation levels of different leukocytes (87). Distinct countries have specific seasonal fluctuations and, sometimes, extreme circadian regulations, so any immune-system-related dataset may carry an inherent bias of this kind.</ns0:p></ns0:div>
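To make the two characterization metrics concrete, the sketch below computes the univariate BD of Eq. 1 and the KS D statistic of Eq. 2 for one variable. It is a minimal illustration, not the authors' exact code: NumPy and SciPy are assumed, and the two sample arrays are synthetic stand-ins for a hemogram variable.

import numpy as np
from scipy.stats import ks_2samp

def bhattacharyya_univariate(b, s):
    # Univariate Bhattacharyya distance between two Gaussian-modelled groups (Eq. 1).
    vb, vs = np.var(b), np.var(s)
    mb, ms = np.mean(b), np.mean(s)
    return 0.25 * np.log(0.25 * (vb / vs + vs / vb + 2)) \
        + 0.25 * (mb - ms) ** 2 / (vb + vs)

rng = np.random.default_rng(0)
neg = rng.normal(0.05, 0.02, 300)   # illustrative basophil counts, negative class
pos = rng.normal(0.03, 0.01, 100)   # illustrative basophil counts, positive class

bd = bhattacharyya_univariate(neg, pos)
d_stat, p_value = ks_2samp(neg, pos)   # D statistic (Eq. 2) and its p-value
print(bd, d_stat)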
<ns0:div><ns0:head n='1.2'>Evaluation Metrics</ns0:head><ns0:p>The metrics to evaluate how well a classifier performs in discriminating between the target condition (positive for COVID-19) and health can be derived from a 'confusion matrix' (Table 3) that contrasts the 'true' labels obtained from the 'gold standard' to the predicted labels. From it, we have four possible outcomes: either the classifier correctly assigns a sample as positive (with the target condition) or as negative (without the target condition), and in this case, we have true positives and true negatives, or the prediction is wrong, leading to false positives or false negatives. Some metrics can assess the discriminative property of the test, while others can determine its predictive ability (93), and not all are well suited for diagnostic tasks because of imbalanced data (97).</ns0:p><ns0:p>For instance, accuracy, sometimes also referred to as diagnostic effectiveness, is one of the most used classification performance metrics (97). Still, it is greatly affected by the disease prevalence, and increases as the disease prevalence decreases (93). Overall, prediction metrics alone will not reflect the biological meaning of the results. Consequently, especially in diagnostic tasks, ML approaches should always be accompanied by expert decisions on the final results.</ns0:p><ns0:p>This review focuses on seven distinct metrics commonly used in classification and diagnostic tasks that are well suited for imbalanced data (93; 97). This also allows for a more straightforward comparison of results in the literature. Each of these metrics evaluates a different aspect of the predictions and is listed in Table 4 together with a formula on how they can be computed from the results of the confusion matrix.</ns0:p><ns0:p>Sensitivity (also known as 'recall') is the proportion of correctly classified positive samples among all positive samples. It can be understood as the probability of getting a positive prediction in subjects with the disease or a model's ability to recognize samples from patients (or subjects) with the disease. Analogously, specificity is the proportion of correctly classified negative samples among all negative samples, describing how well the model identifies subjects without the disease. Sensitivity and specificity do not depend on the disease prevalence in examined groups (93).</ns0:p><ns0:p>The likelihood ratio (LR) is a combination of sensitivity and specificity used in diagnostic tests. It is the ratio of the expected test results in samples from patients (or subjects) with the disease to the samples without the disease. LR+ measures how much more likely it is to get a positive test result in samples with the disease than in samples without it.</ns0:p></ns0:div><ns0:div><ns0:head n='1.3'>Machine Learning Classifiers</ns0:head><ns0:p>The number of features and characteristics of different datasets might be a barrier for distinctive classification learning techniques. Furthermore, a better understanding and characterization of the strengths and drawbacks of each classification technique used is of extreme importance (72).
The following classifiers' choice was based on their use as listed in Table <ns0:ref type='table' target='#tab_6'>5</ns0:ref>, as they are the most likely to be used in experiments with COVID-19 data.</ns0:p></ns0:div>
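As a concrete reference, the sketch below derives the main diagnostic metrics of Table 4 from raw confusion-matrix counts. It is a minimal illustration with made-up counts, not output from the study's experiments:

# Illustrative confusion-matrix counts (not from the actual experiments).
tp, fn, fp, tn = 80, 20, 30, 170

sensitivity = tp / (tp + fn)              # recall: hit rate on diseased samples
specificity = tn / (tn + fp)              # hit rate on healthy samples
lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio (LR+)
lr_neg = (1 - sensitivity) / specificity  # negative likelihood ratio (LR-)
dor = lr_pos / lr_neg                     # diagnostic odds ratio (DOR)
f1 = 2 * tp / (2 * tp + fp + fn)          # F1 score
print(sensitivity, specificity, lr_pos, lr_neg, dor, f1)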
<ns0:div><ns0:head n='1.3.1'>Naïve Bayes</ns0:head><ns0:p>One of the first ML classification techniques is based on the Bayes theorem (Eq. 3). The Naïve Bayes classification technique is a probabilistic classifier that calculates a set of probabilities by counting the frequency and combinations of values in the dataset. The Naïve Bayes classifier has the assumption that all attributes are conditionally independent, given the target value (64).</ns0:p><ns0:formula xml:id='formula_4'>P(A|B) = \frac{P(A)\,P(B|A)}{P(B)} \quad (3)</ns0:formula><ns0:p>where P(A) is the probability of the occurrence of event A, P(B) is the probability of occurrence of event B, and P(A|B) is the probability of occurrence of event A when B also occurs. Likewise, P(B|A) is the probability of event B when A also occurs.</ns0:p><ns0:p>In imbalanced datasets, the Naïve Bayes classification algorithm biases its results towards the majority class of the dataset, as happens with most classification algorithms. To handle imbalanced datasets in biomedical applications, the work of (80) evaluated different sampling techniques with the NB classification. The used sampling techniques did not show a significant difference in comparison with the imbalanced dataset.</ns0:p></ns0:div>
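A minimal Gaussian Naïve Bayes sketch with scikit-learn follows; the synthetic, roughly 9:1-imbalanced data merely stands in for a CBC feature table and is not the study's pipeline:

from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for CBC data: 20 features, roughly 9:1 class imbalance.
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
clf = GaussianNB().fit(X, y)
proba = clf.predict_proba(X[:5])   # per-class posterior probabilities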
<ns0:div><ns0:head n='1.3.2'>Support Vector Machines</ns0:head><ns0:p>Support Vector Machine (SVM) (34) is a classical supervised learning method for classification that works by finding the hyperplane (being just a line in 2D or a plane in 3D) capable of splitting data points into different classes. The 'learning' consists of finding a separating hyperplane that maximizes the distance between itself and the closest data points from each class, called the support vectors. In the cases where the data is not linearly separable, kernels are used to transform the data by mapping it to higher dimensions where a separating hyperplane can be found (58). SVM usually performs well on new datasets without the need for modifications. It is also not computationally expensive, has low generalization errors, and is interpretable in the case of the data's low dimensionality. However, it is sensitive to kernel choice and parameter tuning and can only perform binary classification without algorithmic extensions (58).</ns0:p><ns0:p>Although SVM achieves impressive results in balanced datasets, when an imbalanced dataset is used, the classification performance degrades as with other methods. In Batuwita and Palade (12), it was identified that when SVM is used with imbalanced datasets, the hyperplane is tilted to the majority class. This bias can cause the formation of more false-negative predictions, a significant problem for medical data. To minimize this problem and reduce the total number of misclassifications in SVM learning, the separating hyperplane can be shifted (or tilted) to the minority class (12). However, in our previous study, we noticed that for curated microarray gene expression analyses, even in imbalanced datasets, SVM generally outperformed the other classifiers (42). Similar results were highlighted in other reviews (4).</ns0:p></ns0:div>
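One common way to shift the separating hyperplane toward the minority class, as discussed above, is class weighting. A hedged sketch with scikit-learn's SVC on synthetic imbalanced data:

from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
# class_weight='balanced' raises the penalty on minority-class errors,
# counteracting the hyperplane tilt toward the majority class.
clf = SVC(kernel='rbf', class_weight='balanced').fit(X, y)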
<ns0:div><ns0:head n='1.3.3'>K-Nearest Neighbors</ns0:head><ns0:p>The nearest neighbor algorithm is based on the principle that instances from a dataset are close to each other regarding similar properties (72). In this way, when unclassified data appears, it will receive the label according to its nearest neighbors. The extension of the algorithm, known as k-Nearest Neighbors (kNN), considers a parameter k, defining the number of neighbors to be considered. The class's determination is straightforward, where the unclassified data receives the most frequent label of its neighbors. To determine the k nearest neighbors, the algorithm considers a distance metric. In our case, the Euclidean Distance (Eq. 4) is used:</ns0:p><ns0:formula xml:id='formula_5'>D(x, y) = \sqrt{\sum_{i=1}^{n} |x_i - y_i|^2} \quad (4)</ns0:formula><ns0:p>where x and y are two instances with n comparable characteristics. Although the kNN algorithm is a versatile technique for classification tasks, it has some drawbacks, such as determining a secure way of choosing the k parameter, being sensitive to the similarity (distance) function used (72), and requiring a large amount of storage for large datasets (58). As the kNN considers the most frequent class of its nearest neighbors, it is intuitive to conclude that for imbalanced datasets, the method will bias the results towards the majority class in the training dataset (68).</ns0:p><ns0:p>For biological datasets, kNN is particularly useful for data from non-characterized organisms, where there is little-to-no previous information to identify molecules and their respective bioprocesses correctly. Thus, this 'guilty by association' approach becomes necessary. This logic can be extrapolated to all types of biological datasets that possess such characteristics.</ns0:p></ns0:div>
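A minimal kNN sketch with scikit-learn; k and the distance metric are the two key choices, and all values here are illustrative:

from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
# Euclidean distance (Eq. 4) is the metric; the k=5 nearest neighbors vote.
clf = KNeighborsClassifier(n_neighbors=5, metric='euclidean').fit(X, y)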
<ns0:div><ns0:head n='1.3.4'>Decision Trees</ns0:head><ns0:p>Decision trees are one of the most used techniques for classification tasks (58), although they can also be used for regression. Decision trees classify data according to their features, where each node represents a feature, and each branch represents the value that the node can assume (72). To classify data, a binary tree needs to be built based on the feature that better divides the data as a root node. New subsets are created in an incremental process until all data can be categorized (58). The first limitation of this technique is the complexity of constructing a binary tree (considered an NP-Complete problem). Different heuristics were already proposed to handle this, such as the CART algorithm (19). Another important fact is that decision trees are more susceptible to overfitting (58), requiring the usage of a pruning strategy.</ns0:p><ns0:p>Since defining features for splitting the decision tree is directly related to the training model performance, knowing how to treat the challenges imposed by imbalanced datasets is essential to improve the model performance, avoiding bias towards the majority class. The effect of imbalanced datasets in decision trees could be observed in (32). The results attested that decision tree learning models could reach better performance when a sampling method for imbalanced data is applied.</ns0:p></ns0:div>
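A minimal decision-tree sketch with scikit-learn, using depth limits and cost-complexity pruning as the anti-overfitting controls mentioned above (parameter values are illustrative):

from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
# max_depth and ccp_alpha act as pruning controls against overfitting.
clf = DecisionTreeClassifier(max_depth=5, ccp_alpha=0.001,
                             random_state=0).fit(X, y)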
<ns0:div><ns0:head n='1.3.5'>Random Forest</ns0:head><ns0:p>Random Forests are an ensemble learning approach that uses multiple non-pruned decision trees for classification and regression tasks. To generate a random forest classifier, each decision tree is created from a subset of the data's features. After many trees are generated, each tree votes for the class of the new instance (18). As random forest creates each tree based on a bootstrap sample of the data, the minority class might not be represented in these samples, resulting in trees with poor performance and biased towards the majority class (29). Methods to handle highly imbalanced data were compared by (29), including incorporating class-level weights, making the learning models cost-sensitive, and reducing the amount of the majority class data for a more balanced dataset. In all cases, the overall performance increased.</ns0:p></ns0:div>
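A sketch of the class-weighting remedy mentioned above, with scikit-learn's random forest (illustrative data and settings):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
# 'balanced_subsample' reweights classes inside each bootstrap sample,
# one of the cost-sensitive remedies compared in (29).
clf = RandomForestClassifier(n_estimators=100,
                             class_weight='balanced_subsample',
                             random_state=0).fit(X, y)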
<ns0:div><ns0:head n='1.3.6'>XGBoost</ns0:head><ns0:p>The XGBoost framework was created by Chen and Guestrin (30), and is used on decision tree ensemble methods, following the concept of learning from previous errors. More specifically, XGBoost uses the gradient of the loss function in the existing model for pseudo-residual calculation between the predicted and real label. Moreover, it extends the gradient boosting algorithm into a parallel approach, achieving faster model training than other learning techniques while maintaining accuracy.</ns0:p><ns0:p>The gradient boosting performance in imbalanced datasets can be found in (22), where it outperforms other classifiers such as SVM, decision trees, and kNN in credit scoring analysis. The eXtreme Gradient Boost was also applied to credit risk assignment with imbalanced datasets in (26), achieving better results than its competitors.</ns0:p></ns0:div>
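A minimal sketch assuming the xgboost Python package; scale_pos_weight is the usual imbalance knob, set here to the negative/positive ratio (everything is illustrative):

from sklearn.datasets import make_classification
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
# scale_pos_weight ~ (#negatives / #positives) rebalances the gradient updates.
ratio = float((y == 0).sum()) / float((y == 1).sum())
clf = XGBClassifier(n_estimators=100, scale_pos_weight=ratio).fit(X, y)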
<ns0:div><ns0:head n='1.3.7'>Logistic Regression</ns0:head><ns0:p>Logistic regression is a supervised classification algorithm that builds a regression model to predict the class of a given data point based on a sigmoid function (Eq. 5). As occurs in linear models, in logistic regression, learning models compute a weighted sum of the input features with a bias (47). Once the logistic model estimates the probability p of a given label, in binary classification the label with p ≥ 50% is assigned to the data point.</ns0:p><ns0:formula xml:id='formula_6'>g(z) = \frac{1}{1 + e^{-z}} \quad (5)</ns0:formula></ns0:div>
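A minimal logistic-regression sketch with scikit-learn, showing the sigmoid-derived probability and the 0.5 decision threshold (illustrative data):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)
p = clf.predict_proba(X[:5])[:, 1]   # sigmoid output g(z) for the positive class
labels = (p >= 0.5).astype(int)      # label 1 assigned when p >= 50%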
<ns0:div><ns0:head n='1.3.8'>Multilayer Perceptron</ns0:head><ns0:p>A multilayer perceptron is a fully connected neural network with at least three layers of neurons: one input layer, one hidden layer, and an output layer. The basic unit of a neural network is a neuron, represented as a node in the network, with an activation function, generally a sigmoid function (Eq. 5), which is activated according to the sum of the arriving weighted signals from previous layers.</ns0:p><ns0:p>For classification tasks, each output neuron represents a class, and the value reported by the i-th output neuron is the amount of evidence supporting the i-th class (73), i.e., if an MLP has two output neurons - meaning that there are two classes - the output evidence could be (0.2, 0.8), favoring the second class.</ns0:p><ns0:p>Regarding the capabilities of MLPs in biased data, an empirical study is provided by (71), showing that MLP can achieve satisfactory results in noisy and imbalanced datasets even without sampling techniques for balancing the datasets. The analysis provided by the authors showed that the difference between the MLP with and without sampling was minimal.</ns0:p></ns0:div>
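A minimal MLP sketch with scikit-learn; the two-hidden-layer topology below is only illustrative (the actual search space is listed in Table 6):

from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
# Two hidden layers (30 and 10 neurons) as an illustrative topology.
clf = MLPClassifier(hidden_layer_sizes=(30, 10), max_iter=500,
                    random_state=0).fit(X, y)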
<ns0:div><ns0:head n='1.4'>Techniques to Handle Imbalanced Data</ns0:head><ns0:p>As introduced before, the COVID-19 CBC data is highly imbalanced. In a binary classification problem, class imbalance occurs when one class, the minority group, contains significantly fewer samples than the other class, the majority group. In such a situation, most classifiers are biased towards the larger classes and have meager classification rates in the smaller classes. It is also possible that the classifier considers everything as the largest class and ignores the smaller class. This problem is faced not only in binary-class data but also in multi-class data (98).</ns0:p><ns0:p>A significant number of techniques have been proposed in the last decade to handle the imbalanced data problem. In general, we can classify these different approaches as sampling methods (pre-processing) and cost-sensitive learning (56). In cost-sensitive learning models, the minority class misclassification has a higher relevance (cost) than the majority class instance misclassification. Although this can be a practical approach for imbalanced datasets, it can be challenging to set values for the needed cost matrix (56).</ns0:p><ns0:p>The use of sampling techniques is more accessible than cost-sensitive learning, requiring no specific information about the classification problem. For these approaches, a new dataset is created to balance the classes, giving the classifiers a better opportunity to distinguish the decision boundary between them (59).</ns0:p><ns0:p>In this work, the following sampling techniques are used, chosen due to their prominence in the literature: Random Over-Sampling (ROS), Random Under-Sampling (RUS), Synthetic Minority Over-sampling TEchnique (SMOTE), Synthetic Minority Over-sampling Technique with Tomek Link (SMOTETomek), and Adaptive Synthetic Sampling (ADASYN). All of them are briefly described in this section. A t-SNE visualization of each sampling technique's effect for the three datasets used can be seen in Fig. 4.</ns0:p></ns0:div><ns0:div><ns0:head n='1.4.1'>Random Over-Sampling and Random Under-Sampling</ns0:head><ns0:p>For the under-sampling approach, majority class instances are discarded until a more balanced data distribution is reached. This data dumping process is done randomly. Considering a dataset with 100 minority class instances and 1000 majority class instances, a total of 900 majority class instances would be randomly removed in the RUS technique. At the end of the process, the dataset will be balanced with 200 instances: the majority class will be represented by 100 instances, while the minority will also have 100.</ns0:p><ns0:p>In contrast, the random over-sampling technique duplicates minority class data to achieve better data distribution. Using the same example given before, with 100 instances of the minority class and 1000 majority class instances, each data instance from the minority class would be replicated ten times until both classes have 1000 instances. This approach increases the number of instances in the dataset, leading us to 2000 instances in the modified dataset.</ns0:p><ns0:p>However, some drawbacks must be explained. In RUS, the data dumping process can discard a considerable amount of data, making the learning process harder and resulting in poor classification performance. On the other hand, for ROS, the instances are duplicated, which might cause learning model overfitting, inducing the model to a lousy generalization capacity and, again, leading to lower classification performance (59).</ns0:p></ns0:div>
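A hedged sketch of ROS and RUS with the imbalanced-learn package (assuming it is installed; the synthetic data roughly matches the 100:1000 example above):

from sklearn.datasets import make_classification
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler

X, y = make_classification(n_samples=1100, n_features=20,
                           weights=[1000 / 1100, 100 / 1100], random_state=0)
X_ros, y_ros = RandomOverSampler(random_state=0).fit_resample(X, y)   # duplicates minority rows
X_rus, y_rus = RandomUnderSampler(random_state=0).fit_resample(X, y)  # discards majority rows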
<ns0:div><ns0:head n='1.4.2'>Synthetic Minority Over-sampling Technique (SMOTE)</ns0:head><ns0:p>To overcome the problem of generalization resulting from the random over-sampling technique, (28) created a method to generate synthetic data in the dataset. This technique is known as SMOTE. To balance the minority class in the dataset, SMOTE first selects a minority class data instance M_a randomly. Then, the k nearest neighbors of M_a, regarding the minority class, are identified. A second data instance M_b is then selected from the k nearest neighbors set. In this way, M_a and M_b are connected, forming a line segment in the feature space. The new synthetic data is then generated as a convex combination between M_a and M_b. This procedure occurs until the dataset is balanced between the minority and majority classes. Because of the effectiveness of SMOTE, different extensions of this over-sampling technique were created.</ns0:p><ns0:p>As SMOTE uses the interpolation of two instances to create the synthetic data, if the minority class is sparse, the newly generated data can result in a class mixture, which makes the learning task harder (17). Because SMOTE became an effective over-sampling technique and still has some drawbacks, different variations of the method were proposed by different authors. A full review of these different types can be found in (17) and (59).</ns0:p></ns0:div>
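A minimal SMOTE sketch with imbalanced-learn; k_neighbors controls which minority neighbours are interpolated (all values are illustrative):

from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
# Each synthetic point is a convex combination of a minority sample
# and one of its k minority-class neighbours.
X_res, y_res = SMOTE(k_neighbors=5, random_state=0).fit_resample(X, y)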
<ns0:div><ns0:head n='1.4.3'>Synthetic Minority Over-sampling Technique with Tomek link</ns0:head><ns0:p>Although the SMOTE technique achieved better results than random sampling methods, data sparseness can be a problem, particularly in datasets containing a significant outlier occurrence. In many datasets, it is possible to identify that different data classes might invade each other's class space. When considering a decision tree as a classifier with this mixed dataset, the classifier might create several specialized branches to distinguish the data class (11). This behavior might create an over-fitted model with poor generalization.</ns0:p><ns0:p>In light of this fact, the SMOTE technique was extended considering Tomek links (99) by (11) for balancing data and creating more well-separated class instances. In this approach, every data instance that forms a Tomek link is discarded, both from minority and majority classes. A Tomek link can be defined as follows: given two samples with different classes S_A and S_B, and a distance d(S_A, S_B), this pair (S_A, S_B) is a Tomek link if there is no case S_C such that d(S_A, S_C) < d(S_A, S_B) and d(S_B, S_C) < d(S_B, S_A). In this way, noisy data is removed from the dataset, improving the capability of class identification.</ns0:p><ns0:p>In the SMOTE technique, the new synthetic samples are equally created for each minority class data point. However, this might not be an optimized way to produce synthetic data since it can concentrate most of the data points in a small portion of the feature space.</ns0:p></ns0:div>
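A minimal SMOTETomek sketch with imbalanced-learn, combining SMOTE over-sampling with Tomek-link cleaning (illustrative data):

from sklearn.datasets import make_classification
from imblearn.combine import SMOTETomek

X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
# Over-samples with SMOTE, then removes Tomek-link pairs to clean class overlap.
X_res, y_res = SMOTETomek(random_state=0).fit_resample(X, y)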
<ns0:div><ns0:head n='1.4.4'>Adaptive Synthetic Sampling</ns0:head><ns0:p>Using the adaptive synthetic sampling algorithm, ADASYN (60), a density estimation metric is used as a criterion to decide the number of synthetic samples for each minority class example. With this, it is possible to balance the minority and majority classes and create synthetic data where the samples are difficult to learn. The synthetic data generation occurs as follows: the first step is to calculate the number of new samples needed to create a balanced dataset. After that, the density estimation is obtained by the Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>k-nearest neighbors for each minority class sample (Eq. 6) and normalization (Eq. 7). Then the number of needed samples for each data point is calculated (Eq. 8), and new synthetic data is created.</ns0:p><ns0:formula xml:id='formula_8'>r i = ∆ i K , i = 1, ..., m s (6) ri = r i m s ∑ i=1 r i<ns0:label>(7)</ns0:label></ns0:formula><ns0:formula xml:id='formula_9'>g i = ri × G<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>where m s is the set of instances representing the minority classes, ∆ i the number of examples in the K nearest neighbors belonging to the majority class, g i defines the number of synthetic samples for each data point, and G is the number of synthetic data samples that need to be generated to achieve the balance between the classes.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>EXPERIMENTS AND RESULTS</ns0:head><ns0:p>To evaluate the impact of the data imbalance on the Brazilian CBC datasets, we have applied the sampling techniques described in Section 1.4. They are discussed in three different aspects. The first one is the comparison between classification methods without resampling. In this way, we can compare how each classifier deals with the imbalance. The second aspect is related to the sampling methods of efficiency compared to the original datasets. <ns0:ref type='bibr' target='#b4'>(5,</ns0:ref><ns0:ref type='bibr' target='#b9'>10,</ns0:ref><ns0:ref type='bibr' target='#b4'>5)</ns0:ref>; <ns0:ref type='bibr' target='#b9'>(10)</ns0:ref>; <ns0:ref type='bibr' target='#b9'>(10,</ns0:ref><ns0:ref type='bibr' target='#b21'>20,</ns0:ref><ns0:ref type='bibr' target='#b4'>5)</ns0:ref>; <ns0:ref type='bibr' target='#b9'>(10,</ns0:ref><ns0:ref type='bibr' target='#b9'>10)</ns0:ref>; (100); <ns0:ref type='bibr' target='#b32'>(30,</ns0:ref><ns0:ref type='bibr' target='#b9'>10)</ns0:ref> Each classification model was trained with the same training set (70% of samples) and was tested to the same test set (30% of samples). The features were normalized using the z-score. Evaluation metrics were generated by 31 runs considering random data distribution in each partition. The proposed approach Hyperparameters were optimized using the Randomized Parameter Optimization approach available in scikit-learn and the values in Table <ns0:ref type='table' target='#tab_7'>6</ns0:ref>. The aim of optimizing the hyperparameters is to find a model that returns the best and most accurate performance obtained on a validation set. Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> schematizes the methodological steps used in this work.</ns0:p><ns0:p>Different results were obtained for each classification method with an imbalanced dataset, as can be seen in Figs. <ns0:ref type='figure' target='#fig_14'>5, 6</ns0:ref>, and 7 5 . In terms of F1 Score all classification models achieved values ranging around 0.5 to 0.65 for the Albert Einstein and Fleury datasets. Although the F1 Score is widely used to evaluate classification tasks, it must be carefully analyzed in our case since the misclassification has more impact, especially in false-negative cases, making it necessary to observe other indexes. When the sensitivity index is considered, it draws attention to the disparity between the NB classification model and the others.</ns0:p><ns0:p>The NB model achieved a sensitivity of around 0.77 for the Albert Einstein dataset (Fig. <ns0:ref type='figure' target='#fig_12'>5 a</ns0:ref>) and around 0.72 for the Fleury dataset (Fig. <ns0:ref type='figure' target='#fig_14'>6 a</ns0:ref>). Hence, it is possible to consider the NB as the classification model to better detect the true positive cases (minority class) in these data sets. However, when considering the specificity (Figs. 5 b and 6 b), it is notable that NB achieved the worst performance overall. A possible explanation for this disparity is that NB classifies most of the data as positive for possible SARS-CoV-2 infection. This hypothesis is then confirmed when we analyze the other two indexes (DOR and LR + ), showing NB bias to the minority class. When considering other classification models regarding sensitivity and specificity, the LR, RF, and SVM achieved better results, ranging from 0.55 to 0.59 for sensitivity and 0.89 to 0.93 for specificity. 
This better balance between sensitivity and sensibility is mirrored in the F1 Score, where RF, SVM, and LR achieved better performance than other methods (and comparable to NB) while achieving better DOR and LR + .</ns0:p><ns0:p>When considering four key indexes (F1 Score, ROC-AUC Score, Sensitivity, and Specificity), we can observe that the sampling techniques improved the learning models regarding the classification of positive cases of SARS-CoV-2 from the Albert Einstein dataset in comparison with the original data, except for NB. Thus, reducing the bias to the majority class observed in the original data set, especially when considering the specificity (the proportion of correctly classified negative samples among all negative samples). For Albert Einstein and Fleury datasets, sampling techniques improve the sensitivity and lower all classifiers' specificity. For the HSL dataset, we see the opposite; resampling decreases the sensitivity and improves the specificity. This happens because while for Albert Einstein and Fleury, the majority class is negative, the majority class is positive for HSL. Furthermore, with sampling techniques, the DOR was improved in the Albert Einstein dataset. With Fleury data, the learning models with sampling did not achieve tangible DOR results. A possible explanation of this outcome can be related to the data sparseness, an ordinary circumstance observed in medical or clinical data. This is further corroborated by the data visualization using t-SNE in Fig. <ns0:ref type='figure'>4</ns0:ref>.</ns0:p><ns0:p>Moreover, the number of samples used with the Fleury dataset could be determinant for the poor performance. Nevertheless, overall, no sampling technique appears to be a clear winner, especially considering the standard deviation. The performance of each sampling technique is conditioned by the data, metric, and classifier at hand. Regarding the decrease in LR+ when the Albert Einstein or Fleury data is balanced, LR+ represents the probability of samples classified as positive being truly positive. The difference of LR+ values in the original datasets compared to the resampled data is due to the classifier trained on the original data labeling most samples as negative, even when facing a positive sample. Thus, it is important to note that when the data is balanced, the bias towards the negative class diminishes, and the model has more instances being classified as (true or false) positives.</ns0:p><ns0:p>None of the combinations of classifiers and sampling methods achieved satisfactory results for the Sírio-Libanês Hospital dataset (Fig. <ns0:ref type='figure' target='#fig_15'>7</ns0:ref>). The sensitivity of all options was close to one, and the specificity was close to zero, indicating that almost all samples are being predicted as the majority class (in this case, the positive). This was expected due to the large imbalance of this dataset, and even the sampling methods, although able to narrow the gap, were not enough to achieve satisfactory results. Due to these poor results, the other metrics are non-satisfactory, and their results can be misleading. For instance, if one were only to check the F1 Score, the classification results would seem satisfactory. As listed in 5 Manuscript to be reviewed Table <ns0:ref type='table'>1</ns0:ref> and illustrated in Fig. <ns0:ref type='figure' target='#fig_15'>7</ns0:ref>, this dataset had the largest imbalance, with over forty times more positive 530 than negative samples. 
Moreover, the total number of available samples was the smallest among the three 531 datasets. The results suggest that using standard ML classifiers is not useful for such drastic cases even Manuscript to be reviewed Manuscript to be reviewed Hence, we believe that SVM, LR, and RF approaches are more suitable to the problem. Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:note type='other'>Computer Science</ns0:note><ns0:note type='other'>Computer Science</ns0:note></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Future research can be conducted with these limitations in mind, building ensemble learning models with RF, SVM, and LR, and different approaches to handle the imbalanced data sets, such as the use of cost-sensitive methods. It is also important to note that some of these classifiers, such as MLP, cannot be considered easily interpretable. This presents a challenge for their use of medical data, in which one should be able to explain their decisions. Both issues could be tackled in the future using feature selection (4; 52) or algorithms for explainable artificial intelligence (106; 81; 6). The method of relevance aggregation, for instance, can be used to extract which features from tabular data were more relevant for the decision making of neural networks and was shown to work on biological data <ns0:ref type='bibr' target='#b59'>(53)</ns0:ref>. Feature selection algorithms can also be used to spare computational resources by training smaller models and to improve the performance of models by removing useless features.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>CONCLUSIONS</ns0:head><ns0:p>The COVID-19 pandemic has significantly impacted countries that cannot test their population and develop strategies to manage the crisis and those with substantial financial limitations. Artificial intelligence and ML play a crucial role in better understanding and addressing the COVID-19 emergency and devising low-cost alternatives to aid decision making in the medical field. In this sense, ML techniques are being applied to analyze different data sources seeking to identify and prioritize patients tested by RT-PCR. Some features that appear to be the most representative of the three analyzed datasets are basophils and eosinophils, which are among the expected results. The work of Banerjee et al. <ns0:ref type='bibr' target='#b8'>(9)</ns0:ref> showed that patients displayed a significant decrease in basophils, as well as eosinophils, something also discussed in other works <ns0:ref type='bibr' target='#b12'>(13)</ns0:ref>.</ns0:p><ns0:p>Having imbalanced data is common, but it is especially prevalent when working with biological datasets, and especially with disease data, where we usually have more healthy control samples than disease cases, and an inherent issue in acquiring clinical data. This work reviews the leading ML methods used to analyze CBC data from Brazilian patients with or without COVID-19 by different sampling and classification methods.</ns0:p><ns0:p>Our results show the feasibility of using these techniques and CBC data as a low-cost and widely accessible way to screen patients suspected of being infected by COVID-19. Overall, RF, LR, and SVM achieved the best general results, but each classifier's efficacy will depend on the evaluated data and metrics. Regarding sampling techniques, they can alleviate the bias towards the majority class and improve the general classification, but no single method was a clear winner. This shows that the data should be evaluated on a case-by-case scenario. More importantly, our data point out that researchers should never rely on the results of a single metric when analyzing clinical data since they show fluctuations, depending on the classifier and sampling method.</ns0:p><ns0:p>However, the application of ML classifiers, with or without sampling methods, is not enough in the presence of datasets with few samples available and large class imbalance. For such cases, that more often than not are faced in the clinical practice, ML is not yet advised. If the data is clearly biased, like the HSL data, the dataset should be discarded. Even for adequate datasets and algorithms, the selection of proper metrics is fundamental. Sometimes, the values can camouflage biases in the results and poor performance, like the NB classifier's case. Our recommendation is to inspect several and distinct metrics together to see the greater picture.</ns0:p></ns0:div>
<ns0:div><ns0:head>DATA AVAILABILITY</ns0:head><ns0:p>The source code used for the experiments can be accessed in GitHub:https://github.com/sbcblab/samplingcovid. This study was financed in part by the Coordenac ¸ão de Aperfeic ¸oamento de Pessoal de Nível Superior -Brasil (CAPES) -Finance Code 001. There was no additional external funding received for this study; all external funding or sources of support received during this study were mentioned.</ns0:p></ns0:div>
<ns0:div><ns0:head>ACKNOWLEDGMENTS</ns0:head></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Methodological steps used in this work.</ns0:figDesc><ns0:graphic coords='4,145.47,103.89,413.58,227.83' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Distributions of white blood cells related variables for positive (purple) and negative (green) classes of the three datasets: Albert Einstein Hospital (HAE), Fleury Group (FLE), and Sírio-Libanês Hospital (HSL). The central white dot is the median.</ns0:figDesc><ns0:graphic coords='6,141.73,63.78,413.64,508.57' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>) 5/ 25 PeerJ</ns0:head><ns0:label>25</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2021:03:59682:2:1:NEW 13 Jul 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Distributions of red blood cells related variables for positive (purple) and negative (green) classes of the three datasets: Albert Einstein Hospital (HAE), Fleury Group (FLE), and Sírio-Libanês Hospital (HSL). The central white dot is the median.</ns0:figDesc><ns0:graphic coords='7,141.73,63.78,413.69,305.30' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>7 / 25 PeerJ</ns0:head><ns0:label>725</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2021:03:59682:2:1:NEW 13 Jul 2021)Manuscript to be reviewed Computer Science circadian regulations -thus, immune responses' inherent sensibility should always be considered a potential bias. This also impacts the comparison between different computational approaches that use datasets from other researchers for testing or training. While this work focuses on the application of ML and sampling algorithms to this data, a more in-depth biological analysis regarding the interaction between sex, age, and systemic inflammation from these Brazilian datasets can be found in the work of Ten-Caten et al.<ns0:ref type='bibr' target='#b108'>(95)</ns0:ref>.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>9 / 25 PeerJ</ns0:head><ns0:label>925</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2021:03:59682:2:1:NEW 13 Jul 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>10 / 25 PeerJ</ns0:head><ns0:label>1025</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2021:03:59682:2:1:NEW 13 Jul 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>25 PeerJ</ns0:head><ns0:label>25</ns0:label><ns0:figDesc><ns0:ref type='bibr' target='#b7'>8)</ns0:ref>, resulting to the classification of the class supported by the highest value, in this case, 0.8. Based on the learning model prediction's mean square error, each connection assigned weights are adjusted based on the backpropagation learning algorithm<ns0:ref type='bibr' target='#b85'>(73)</ns0:ref>. Although the MLPs have shown impressive results in many real-world applications, some drawbacks must be highlighted. The first one is the determination of the number of hidden layers. An underestimation of the neurons number can cause a poor classification capability, while the excess of them can lead to an overfitting scenario, compromising the model generalization. Another concern is related to the computational cost of the backpropagation, where the process of minimizing the MSE takes long runs of simulations and training. Furthermore, one of the major characteristics is that MLPs are black-box methods, making it hard to understand the reason for their output<ns0:ref type='bibr' target='#b84'>(72)</ns0:ref>.11/Comput. Sci. reviewing PDF | (CS-2021:03:59682:2:1:NEW 13 Jul 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 4 . 25 PeerJ</ns0:head><ns0:label>425</ns0:label><ns0:figDesc>Figure 4. Visualization of the negative (purple) and positive (green) samples from the Albert Einstein Hospital (AE), Fleury Laboratory (FLEURY) and Hospital Sirio Libanês (HSL) using t-SNE for all the different sampling schemes.</ns0:figDesc><ns0:graphic coords='13,144.21,388.09,411.21,215.47' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>13 / 25 PeerJ</ns0:head><ns0:label>1325</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2021:03:59682:2:1:NEW 13 Jul 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>14 / 25 PeerJ</ns0:head><ns0:label>1425</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2021:03:59682:2:1:NEW 13 Jul 2021) Manuscript to be reviewed Computer Science was implemented in Python 3 using Scikit-Learn (85) as a backend. The COVID-19 classes were defined using RT-PCR results from the datasets. Sampling techniques were applied only on the training set.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>The statistical comparison between the algorithms (Dunn's Multiple Comparison Test with Bonferroni correction) is available in the GitHub repository: https://github.com/sbcblab/sampling-covid 15/25 PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59682:2:1:NEW 13 Jul 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Average test results from 31 independent runs for several classifiers and sampling schemes trained on the Albert Einstein Hospital data. Black lines represent the standard deviation, while the white circle represents the median.</ns0:figDesc><ns0:graphic coords='17,144.21,63.78,411.12,558.04' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>532 16 / 25 PeerJ</ns0:head><ns0:label>1625</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2021:03:59682:2:1:NEW 13 Jul 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Average test from 31 independent runs for several classifiers and sampling schemes trained on the Fleury Group data. Black lines represent the standard deviation, while the white circle represents the median.</ns0:figDesc><ns0:graphic coords='18,144.21,63.78,411.10,549.55' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Average test results from 31 independent runs for several classifiers and sampling schemes trained on the Sírio-Libanês Hospital. Black lines represent the standard deviation, while the white circle represents the median.</ns0:figDesc><ns0:graphic coords='19,144.21,63.78,411.10,551.92' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>18 / 25 PeerJ</ns0:head><ns0:label>1825</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2021:03:59682:2:1:NEW 13 Jul 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head /><ns0:label /><ns0:figDesc>This work was supported by grants from the Fundac ¸ão de Amparo à Pesquisa do Estado do Rio Grande do Sul -FAPERGS [19/2551-0001906-8], Conselho Nacional de Desenvolvimento Científico e Tecnológico -CNPq [311611 / 2018-4], and the Coordenac ¸ão de Aperfeic ¸oamento de Pessoal de Nível Superior -STICAMSUD [ 88881.522073 / 2020-01] and DAAD/CAPES PROBRAL [88881.198766 / 2018-01] .</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>6/25 PeerJ</ns0:head><ns0:label /><ns0:figDesc /><ns0:table /><ns0:note>Comput. Sci. reviewing PDF | (CS-2021:03:59682:2:1:NEW 13 Jul 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Separability between the negative and positive classes among the three datasets: Albert Einstein Hospital (HAE), Fleury Group (FLE), and Sírio-Libanês Hospital (HSL). The measurements use the D statistic from the two samples Kolmogorov-Smirnov test and the Bhattacharyya Distance (BD). Results discussed in the main text are in bold. The last two columns show the Kruskal-Wallis H test (KW) together with its p-value, to compare the variables distributions for the three centers, regardless of the outcome. In this case, results rejecting the Null Hypothesis that data belongs to the same distribution are in bold.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>HAE</ns0:cell><ns0:cell /><ns0:cell>Fleury</ns0:cell><ns0:cell /><ns0:cell>HSL</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Metric</ns0:cell><ns0:cell>D</ns0:cell><ns0:cell>BD</ns0:cell><ns0:cell>D</ns0:cell><ns0:cell>BD</ns0:cell><ns0:cell>D</ns0:cell><ns0:cell>BD</ns0:cell><ns0:cell>KW</ns0:cell><ns0:cell>p-value</ns0:cell></ns0:row><ns0:row><ns0:cell>Basophils</ns0:cell><ns0:cell cols='7'>0.443474 0.1093475 0.398323 0.1239094 0.55301 0.1287230 117.02</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Basophils#</ns0:cell><ns0:cell cols='7'>0.261855 0.0444302 0.266960 0.0726225 0.51341 0.0608966 110.27</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Eosinophils</ns0:cell><ns0:cell cols='6'>0.364556 0.1221014 0.375792 0.0251153 0.40357 0.0643121</ns0:cell><ns0:cell>81.37</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Eosinophils#</ns0:cell><ns0:cell cols='6'>0.277567 0.0796526 0.291354 0.0214845 0.25447 0.0534425</ns0:cell><ns0:cell>64.08</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Hematocrit</ns0:cell><ns0:cell cols='6'>0.046150 0.0013166 0.066184 0.0009195 0.21868 0.1026450</ns0:cell><ns0:cell>69.75</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Hemoglobin</ns0:cell><ns0:cell cols='6'>0.044774 0.0013781 0.046446 0.0008058 0.22465 0.0750943</ns0:cell><ns0:cell>29.04</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Leukocytes</ns0:cell><ns0:cell cols='7'>0.333118 0.0479674 0.264275 0.0523979 0.38386 0.0390140 197.68</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Lymphocytes</ns0:cell><ns0:cell cols='7'>0.369636 0.0627853 0.255698 0.0615829 0.37673 0.4035528 141.34</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell cols='8'>Lymphocytes# 0.120357 0.0053703 0.079417 0.0048503 0.21189 0.0285911 187.50</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>MCH</ns0:cell><ns0:cell cols='6'>0.040136 0.0014844 0.129213 0.0033433 0.13320 0.0467845</ns0:cell><ns0:cell>7.46</ns0:cell><ns0:cell>0.024</ns0:cell></ns0:row><ns0:row><ns0:cell>MCHC</ns0:cell><ns0:cell cols='7'>0.061690 0.0023295 0.087915 0.0007333 0.24138 0.0265250 511.26</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>MCV</ns0:cell><ns0:cell cols='7'>0.025520 0.0011544 0.112824 0.0033957 0.12624 0.0055598 118.32</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>MPV</ns0:cell><ns0:cell cols='6'>0.085494 0.0044252 0.103647 0.0019528 0.39413 0.0611329</ns0:cell><ns0:cell>77.42</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Monocytes</ns0:cell><ns0:cell cols='6'>0.137388 0.0097614 0.084245 0.0007295 0.50712 
0.2983040</ns0:cell><ns0:cell>68.42</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Monocytes#</ns0:cell><ns0:cell cols='6'>0.212097 0.0395842 0.253870 0.0659478 0.28661 0.0475931</ns0:cell><ns0:cell>26.77</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Neutrophils</ns0:cell><ns0:cell cols='7'>0.208235 0.0170758 0.207523 0.0287769 0.21471 0.0171623 208.71</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Neutrophils#</ns0:cell><ns0:cell cols='7'>0.102965 0.0041523 0.115902 0.0060480 0.24121 0.0357412 175.81</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Platelets</ns0:cell><ns0:cell cols='6'>0.198826 0.0170292 0.254356 0.0257660 0.13800 0.0069387</ns0:cell><ns0:cell>39.01</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>RDW</ns0:cell><ns0:cell cols='6'>0.054032 0.0009597 0.050259 0.0017099 0.23707 0.0539648</ns0:cell><ns0:cell>4.28</ns0:cell><ns0:cell>0.1171</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>RedbloodCells 0.037971 0.0010371 0.075089 0.0030830 0.16186 0.0401366</ns0:cell><ns0:cell>30.13</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell cols='9'>As for the HAE dataset, the variables Basophils, Lymphocytes, Eosinophils and Leukocytes yields</ns0:cell></ns0:row><ns0:row><ns0:cell cols='9'>the higher D statistic. All of them have the same characteristic: similar distribution but with negative</ns0:cell></ns0:row><ns0:row><ns0:cell cols='9'>distribution with higher values. It is noteworthy that the higher BD (Basophils and Eosinophils) can be</ns0:cell></ns0:row><ns0:row><ns0:cell cols='9'>attributed to outliers. From Fig. 2 and 3 it can be noticed that such distributions present different means</ns0:cell></ns0:row><ns0:row><ns0:cell cols='9'>(as corroborated by the D statistic) combined with spurious values with high distance from the modal</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>distribution point, resulting in a larger variance.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Confusion matrix of binary classification. True positives = TP; True negatives = TN; False positives = FP; False negatives = FN.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='2'>'Gold standard'</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Subjects with the disease Subjects without the disease</ns0:cell></ns0:row><ns0:row><ns0:cell>Classifier Predicted as positive</ns0:cell><ns0:cell>TP</ns0:cell><ns0:cell>FP (Type I Error)</ns0:cell></ns0:row><ns0:row><ns0:cell>Predicted as negative</ns0:cell><ns0:cell>FN (Type II Error)</ns0:cell><ns0:cell>TN</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Metrics used to compare the algorithms. True positives = TP; True negatives = TN; False positives = FP; False negatives = FN. It represents the ratio between LR+ and LR-(97), or the ratio of the probability of a positive test result if the sample has the disease to the likelihood of a positive result if the sample does not have the disease.DOR can range from zero to infinity, and a test is only useful with values larger than 1.0<ns0:ref type='bibr' target='#b56'>(50)</ns0:ref>. The last metrics used in this work are commonly used to evaluate machine learning classification results. The F 1 -score, also known as F-measure, ranges from zero to one and is the harmonic mean of the precision and recall<ns0:ref type='bibr' target='#b110'>(97)</ns0:ref>. The area under the receiver operating characteristic (AUROC) describes the model's ability to discriminate between positive and negative examples measuring the trade-off between the true positive rate and the false positive rate across different thresholds.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Metric</ns0:cell><ns0:cell>Formula</ns0:cell><ns0:cell>Range</ns0:cell><ns0:cell>Target value</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Sensitivity TP/(TP+FN)</ns0:cell><ns0:cell>[0, 1]</ns0:cell><ns0:cell>∼ 1</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Specificity TN/(FP+TN)</ns0:cell><ns0:cell>[0, 1]</ns0:cell><ns0:cell>∼ 1</ns0:cell></ns0:row><ns0:row><ns0:cell>LR+</ns0:cell><ns0:cell cols='3'>sensitivity / (1-specificity) [0, +∞) > 10</ns0:cell></ns0:row><ns0:row><ns0:cell>LR-</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>(1-sensitivity) / specificity [0, +∞) < 0.1 DOR (TP/FN)/(FP/TN) [0, +∞) > 1 F 1 -score TP / (TP + 1/2 (FP + FN)) [0, 1] ∼ 1 AUROC Area under the ROC curve [0, 1] ∼ 1 8/25 PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59682:2:1:NEW 13 Jul 2021) Manuscript to be reviewed Computer Science the disease than samples without the disease, and thus, it is a good indicator for ruling-in diagnosis. Good diagnostic tests usually have an LR+ larger than 10 (93). Similarly, LR-measures how much less likely it is to get a negative test result in samples with the disease when compared to samples without the disease, being used as an indicator for ruling-out the diagnosis. A good diagnostic test should have an LR-smaller than 0.1 (93). Another global metric for the comparison of diagnostic tests is the diagnostic odds ratio (DOR). relevant applications, ranging from classification of types of plants and animals to the identification of different diseases prognoses, such as cancer (42; 44; 43; 52), H1N1 Flu (27), Dengue (109), and COVID-19 (Table5). The use of these algorithms in the context of hemogram data from COVID-19 patients is summarized in Table5.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Studies that use ML algorithms on COVID-19 hemogram data (in alphabetical order by the surname of the first author).</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Source</ns0:cell><ns0:cell>Data</ns0:cell><ns0:cell>Algorithms</ns0:cell></ns0:row><ns0:row><ns0:cell>AlJame et al. (2)</ns0:cell><ns0:cell>CBC, Albert Einstein Hospital, Brazil</ns0:cell><ns0:cell>XGBoost</ns0:cell></ns0:row><ns0:row><ns0:cell>Alves et al. (3)</ns0:cell><ns0:cell>CBC, Albert Einstein Hospital, Brazil</ns0:cell><ns0:cell>Random Forest, Decision Tree, Criteria Graphs</ns0:cell></ns0:row><ns0:row><ns0:cell>Assaf et al. (7)</ns0:cell><ns0:cell>Clinical and CBC profile, Sheba Medical Center, Israel</ns0:cell><ns0:cell>MLP, Random Forest, Decision Tree</ns0:cell></ns0:row><ns0:row><ns0:cell>Avila et al. (8)</ns0:cell><ns0:cell>CBC, Albert Einstein Hospital, Brazil</ns0:cell><ns0:cell>Naïve-Bayes</ns0:cell></ns0:row><ns0:row><ns0:cell>Banerjee et al. (9)</ns0:cell><ns0:cell>CBC, Albert Einstein Hospital, Brazil</ns0:cell><ns0:cell>MLP, Random Forest, Logistic Regression</ns0:cell></ns0:row><ns0:row><ns0:cell>Bao et al. (10)</ns0:cell><ns0:cell cols='2'>CBC, Wuhan Union Hosp; Kunshan People's Hosp, China Random Forest, SVM</ns0:cell></ns0:row><ns0:row><ns0:cell>Bhandari et al. (15)</ns0:cell><ns0:cell>Clinical and CBC profile of (non) survivors, India</ns0:cell><ns0:cell>Logistic Regression</ns0:cell></ns0:row><ns0:row><ns0:cell>Brinati et al. (21)</ns0:cell><ns0:cell>CBC, San Raffaele Hospital, Italy</ns0:cell><ns0:cell>Random Forest, Naïve-Bayes, Logistic Regression, SVM, kNN</ns0:cell></ns0:row><ns0:row><ns0:cell>Cabitza et al. (23)</ns0:cell><ns0:cell>CBC, San Raffaele Hospital, Italy</ns0:cell><ns0:cell>Random Forest, Naïve-Bayes, Logistic Regression, SVM, kNN</ns0:cell></ns0:row><ns0:row><ns0:cell>Delafiori et al. (36)</ns0:cell><ns0:cell>Mass spectrometry, COVID-19, plasma samples, Brazil</ns0:cell><ns0:cell>Tree Boosting, Random Forest</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>de Freitas Barbosa et al. (35) CBC, Albert Einstein Hospital, Brazil</ns0:cell><ns0:cell>MLP, SVM, Random Forest, Naïve-Bayes</ns0:cell></ns0:row><ns0:row><ns0:cell>Joshi et al. (67)</ns0:cell><ns0:cell>CBC of patients from USA and South Korea</ns0:cell><ns0:cell>Logistic Regression</ns0:cell></ns0:row><ns0:row><ns0:cell>Silveira (92)</ns0:cell><ns0:cell>CBC, Albert Einstein Hospital, Brazil</ns0:cell><ns0:cell>XGBoost</ns0:cell></ns0:row><ns0:row><ns0:cell>Shaban et al. (90)</ns0:cell><ns0:cell>CBC, San Raffaele Hospital, Italy</ns0:cell><ns0:cell>Fuzzy inference engine, Deep Neural Network</ns0:cell></ns0:row><ns0:row><ns0:cell>Soares et al. (94)</ns0:cell><ns0:cell>CBC, Albert Einstein Hospital, Brazil</ns0:cell><ns0:cell>SVM, SMOTEBoost, kNN</ns0:cell></ns0:row><ns0:row><ns0:cell>Yan et al. (105)</ns0:cell><ns0:cell>Laboratory test results and mortality outcome, Wuhan</ns0:cell><ns0:cell>XGBoost</ns0:cell></ns0:row><ns0:row><ns0:cell>Zhou et al. (110)</ns0:cell><ns0:cell>CBC, Tongji Hospital, China</ns0:cell><ns0:cell>Logistic Regression</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Hyperparameter ranges used in our analyses.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Classifier</ns0:cell><ns0:cell>Parameters</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Naive Bayes</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Support Vector Machines kernel:</ns0:cell><ns0:cell>rbf; linear</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>gama:</ns0:cell><ns0:cell>0.0001 -0.001</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>c:</ns0:cell><ns0:cell>1 -1000</ns0:cell></ns0:row><ns0:row><ns0:cell>Random Forest</ns0:cell><ns0:cell>n-estimators:</ns0:cell><ns0:cell>50; 100; 200</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>criterion:</ns0:cell><ns0:cell>gini; entropy</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>max depth:</ns0:cell><ns0:cell>3 -10</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>min samples split:</ns0:cell><ns0:cell>0.1 -0.9</ns0:cell></ns0:row><ns0:row><ns0:cell>XgBoost</ns0:cell><ns0:cell>n-estimators:</ns0:cell><ns0:cell>50; 100; 200</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>max depth:</ns0:cell><ns0:cell>3 -10</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>learning rate:</ns0:cell><ns0:cell>0.0001 -0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>Decision Tree</ns0:cell><ns0:cell>criterion:</ns0:cell><ns0:cell>gini; entropy</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>max depth:</ns0:cell><ns0:cell>3 -10</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>min samples split:</ns0:cell><ns0:cell>0.1 -1.0</ns0:cell></ns0:row><ns0:row><ns0:cell>K-Nearest Neighbors</ns0:cell><ns0:cell>n neighbors:</ns0:cell><ns0:cell>3; 5; 7; 10; 15; 50</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>weights:</ns0:cell><ns0:cell>uniform; distance</ns0:cell></ns0:row><ns0:row><ns0:cell>Logistic Regression</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Multi Layer Perceptron</ns0:cell><ns0:cell>activation:</ns0:cell><ns0:cell>logistic; tanh; relu</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>solver:</ns0:cell><ns0:cell>sgd; adam</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>alpha:</ns0:cell><ns0:cell>0.0001; 0.001; 0.01</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>learning rate init:</ns0:cell><ns0:cell>0.0001; 0.001; 0.01</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>early stopping:</ns0:cell><ns0:cell>True; False</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>batch size:</ns0:cell><ns0:cell>16; 64; 128</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>hidden layer sizes: (10, 10, 2);</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='1'>https://www.fleury.com.br 2 https://www.einstein.br 3 https://www.hospitalsiriolibanes.org.br 3/25 PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59682:2:1:NEW 13 Jul 2021)Manuscript to be reviewed</ns0:note>
<ns0:note place='foot' n='19'>/25 PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59682:2:1:NEW 13 Jul 2021)</ns0:note>
</ns0:body>
" | "Response letter regarding the manuscript 'Comparison of machine
learning techniques to handle imbalanced COVID-19 CBC datasets'
(#CS-2021:03:59682:1:1:REVIEW)
We thank the editors and reviewers for their attentive revisions and careful reading of
the manuscript, leading to an improved version of the text. Below we list all the comments
raised and our replies in blue. In the manuscript, new information is colored blue.
Editor comments (Helder Nakaya)
Please address the comments of Reviewer 1
Answer: We thank the Editor and the PeerJ Staff for their comments and
contributions. Our answers to the reviewers questions are listed below.
Reviewer 1 (Anonymous)
The authors unfortunately did not improve the quality of the article regarding any of
my three main concerns:
Answer: While we believe the reviewer’s concerns were addressed in our previous
response letter, we also understand that we give an extra effort in communicating our
arguments and improving the manuscript with the requested information. In this sense, we
thank the reviewer for the suggestions and hope the modifications and additions in the text
improved the quality of the article.
1 - There are known issues regarding the quality of the dataset, that have been
noticed by other groups that have given up on using this dataset. For example, the positive
class for covid-19 is not expected to be 40 times more prevalent than the negative (as in the
case of HSL). The authors need to need to make this clearer throughout the manuscript,
especially in the conclusion.
Answer: We thank the reviewer for the consideration. In our previous response letter
we highlighted a few segments of the text that already provided the explanation for the use
of the HSL data in the manuscript. To make this even clearer, the following text was added:
● Section 1.1, 6th ¶: “Datasets with an apparent biased disease prevalence,
as is the case with the Sírio-Libanês Hospital data (in reality the positive class
for COVID-19 is not expected to be 40 times more prevalent), should be
discarded from biological analysis. These datasets are being critically
evaluated and used only as examples for this research.”
● Conclusions, 5th ¶: “If the data is clearly biased, like the HSL data, the
dataset should be discarded.”
2- The authors should perform statistical analysis to compare whether the variables
have similar distributions for the three centers, regardless of the outcome. All that the
authors have provided in the manuscript are visual comparisons according to the classes.
Answer: We thank the reviewer for the consideration. We added this information in
the last columns of Table 2.
● Table 2: “The last two columns show the Kruskal–Wallis H test (KW) together
with its p-value, to compare the variables distributions for the three centers,
regardless of the outcome. In this case, results rejecting the Null Hypothesis
that data belongs to the same distribution are in bold.”
● Section 1.1, 6th ¶: “The last columns of Table 2 also show that, in general,
the variables do not belong to the same distribution for the three centers,
regardless of the classes.”
3- The authors need to use the area under the ROC curve (AUROC) the main metric,
as it is the standard in machine learning studies. It is fine to add the other ones for specific
analyzes, but the AUROC should be the main metric to allow for comparisons with other
studies.
Answer: We thank the reviewer for the suggestion. We added the AUROC metric as
requested and discussed it in the following segments of the manuscript.
● Table 4.
● 1.2 Evaluation Metrics, 6th ¶: “The last metrics used in this work are commonly
used to evaluate machine learning classification results. The F1-score, also
known as F-measure, ranges from zero to one and is the harmonic mean of the
precision and recall (97). The area under the receiver operating characteristic
(AUROC) describes the model’s ability to discriminate between positive and
negative examples measuring the trade-off between the true positive rate and the
false positive rate across different thresholds.”
● Figure 5 (g).
● Figure 6 (g).
● Figure 7 (g).
Reviewer 3 (Anonymous)
The authors have addressed all points raised by this reviewer (as well as those
raised by two other reviewers). I consider the revised manuscript adequate for publication.
Answer: We thank the reviewer for the kind comments.
" | Here is a paper. Please give your review comments after reading it. |
206 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Airborne laser scanning (ALS) has gained importance over recent decades for multiple uses related to the cartography of landscapes. Processing ALS data over large areas for forest resource estimation and ecological assessments requires efficient algorithms to filter out some points from the raw data and remove human-made structures that would otherwise be mistaken for natural objects. In this paper, we describe an algorithm developed for the segmentation and cleaning of electrical network facilities in low density (2.5 to 13 points/m²) ALS point clouds. The algorithm was designed to identify transmission towers, conductor wires and earth wires from high-voltage power lines in natural landscapes. The method is based on two priors i.e. (1) the availability of a map of the highvoltage power lines across the area of interest and (2) knowledge of the type of transmission towers that hold the conductors along a given power line. It was tested on a network totalling 200 km of wires supported by 415 transmission towers with diverse topographies and topologies with an accuracy of 98.6%. This work will help further the automated detection capacity of power line structures, which had previously been limited to high density point clouds in small, urbanised areas. The method is open-source and available online.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Airborne laser scanning (ALS) has gained importance in recent decades for multiple uses related to the cartography of the earth's surface. Active airborne LiDAR systems directly capture the 3D information of surrounding objects and generate highly accurate georeferenced 3D point clouds that describe the structure of the land. While ALS first provided a way to build extremely accurate digital terrain models <ns0:ref type='bibr' target='#b25'>(Nelson, 2013)</ns0:ref>, several other applications have also been developed for predicting and mapping various characteristics of the vegetation and many other features of interest to disciplines such as forestry, ecology and land management. These include the characterisation of wildlife habitat (e.g. <ns0:ref type='bibr' target='#b6'>Goetz et al., 2007)</ns0:ref>, water bodies (e.g. <ns0:ref type='bibr' target='#b3'>Canaz et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b23'>Morsy, 2017)</ns0:ref> and forest roads (e.g. <ns0:ref type='bibr' target='#b5'>Ferraz et al., 2016)</ns0:ref>, among many others. In forestry, a substantial amount of work has focused, based on documented best practices <ns0:ref type='bibr' target='#b40'>(White et al., 2013)</ns0:ref>, on the use of ALS data to map forest resources and help forest practitioners make optimised decisions (e.g. <ns0:ref type='bibr' target='#b15'>Li et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b1'>Bouvier et al., 2015a;</ns0:ref><ns0:ref type='bibr' target='#b0'>Blanchette et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b37'>Tompalski et al., 2016)</ns0:ref>.</ns0:p><ns0:p>Processing ALS data over areas covering hundreds of thousands of square kilometres for forest resource estimation and ecological assessments requires algorithms to filter out some points from the raw data and remove human-made structures that would otherwise be mistaken for natural objects. Resource estimation in broad coverage is generally performed using the area-based approach (ABA) <ns0:ref type='bibr' target='#b27'>(Nilsson, 1996;</ns0:ref><ns0:ref type='bibr' target='#b28'>Naesset, 1997;</ns0:ref><ns0:ref type='bibr' target='#b40'>White et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b19'>Luther et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b2'>Bouvier et al., 2015b)</ns0:ref>. Under ABA, all points are treated equally to derive metrics summarizing their distribution within individual pixels (typically 20 × 20 meters) <ns0:ref type='bibr' target='#b37'>(Tompalski et al., 2016)</ns0:ref>. The typical metrics used to capture and summarize the vertical distribu-PeerJ Comput. Sci. reviewing PDF | (CS-2021:02:57982:1:1:NEW 11 Jun 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science tion of points are height-derived, with the premise that higher points belong to taller trees. In the absence of recorded attributes allowing for selective omission of some points from the analysis, those belonging to human-made structures may be incorporated unknowingly as if they were part of the vegetation. Being tall structures commonly found in natural landscapes, the occurrence of transmission towers and wire conductors may thus generate a significant source of error in ABA models. First, transmission towers may be misinterpreted as very large trees, which could introduce bias to biomass estimation models, for example. Second, electrical wires may statistically 'hide' vegetation underneath that may be of ecological interest. 
In such cases, buffering out the known X and Y positions of the electrical distribution network is not a suitable solution. Classification methods are thus required to facilitate analyses describing the vegetation located either directly underneath or in the vicinity of power lines.</ns0:p><ns0:p>Existing methods to discriminate points belonging to power supply structures can be classified into two main types i.e. line-shape-based detection, and machine learning, with some instances of overlap between the two methods. Line-shape-based detection consists of analysing the structure of the point cloud in a tight neighbourhood around each point <ns0:ref type='bibr' target='#b20'>(Matikainen et al., 2016)</ns0:ref> to estimate the local shape of objects. Such methods are often based on an eigenvalue decomposition to segment linear features <ns0:ref type='bibr' target='#b21'>(McLaughlin, 2006;</ns0:ref><ns0:ref type='bibr' target='#b11'>Jwa et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b12'>Kim and Sohn, 2010;</ns0:ref><ns0:ref type='bibr' target='#b26'>Ni et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b30'>Qin et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b4'>Chen et al., 2018)</ns0:ref>. For a given point and its neighbourhood, and supposing the computed eigenvalues are defined as λ 0 , λ 1 , and λ 2 , linear features can be identified in a point cloud using the ratio of λ 0 to λ 1 and λ 2 <ns0:ref type='bibr' target='#b11'>(Jwa et al., 2009)</ns0:ref>. Other methods based on a Hough transform have also been reported <ns0:ref type='bibr' target='#b22'>(Melzer and Briese, 2004;</ns0:ref><ns0:ref type='bibr' target='#b18'>Liu et al., 2009)</ns0:ref>. Alternatively, <ns0:ref type='bibr' target='#b17'>Liang et al. (2011)</ns0:ref> selected initial seeds manually and then used a region growing method to segment power lines. These line-shape-based methods have been successfully applied to electrical wire detection, with reported accuracy ranging from 87% to 97%. However, they can not be applied to nonlinear features such as transmission towers.</ns0:p><ns0:p>There has been a growing interest in recent research inthe use of machine learning approaches to classify point cloud scenes, including the use of random forest classifiers <ns0:ref type='bibr'>(Kim and</ns0:ref><ns0:ref type='bibr'>Sohn, 2010, 2012;</ns0:ref><ns0:ref type='bibr' target='#b26'>Ni et al., 2017)</ns0:ref>, convolutional neural networks <ns0:ref type='bibr' target='#b42'>Zhang et al. (2019)</ns0:ref>, graph convolutional neural networks <ns0:ref type='bibr' target='#b16'>Li et al. (2020)</ns0:ref>; <ns0:ref type='bibr' target='#b39'>Wen et al. (2021)</ns0:ref>, Latent Dirichlet Allocation <ns0:ref type='bibr' target='#b41'>Yang and Kang (2018)</ns0:ref>, or a hierarchical unsupervised method <ns0:ref type='bibr' target='#b38'>Wang et al. (2017)</ns0:ref>. Applications are not restricted to power supply infrastructure, but for transmission towers and power lines high levels of accuracy were achieved, with accuracy ranging from 80% to 99%. Line-shape-based methods are simpler to implement and appear to perform very well in high density point clouds. For example, we have used a simple eigenvalue decomposition method to produce Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> where a perfect segmentation of the wire conductors was obtained with minimal programming effort and without any prior information or supervision on the location of the tower or the orientation of the wires. 
In practice, however, such a method has important limitations, especially in broad, non-urban and low-density coverage. The method assumes wires are continuous structures, but this assumption cannot hold in low-density point clouds, which are typically acquired over vast landscapes. In such datasets, power lines present several gaps with sparsely distributed points (Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>), thus making analyses of the local geometry unfeasible.</ns0:p><ns0:p>Machine learning methods, on the other hand, may be more robust. However, most reported studies were focused on urban scenes, with very high point densities and no evidence that they could be applied to low-density point clouds resulting from sparse structure sampling, such as those reported in Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>.</ns0:p><ns0:p>Even if applicable to such situations, existing models will at least require training with new data. Despite machine learning appearing capable of providing highly accurate semantic segmentation, and even instance segmentation, in high-density point clouds where objects are well-defined and densely sampled, there remain doubts as to the performance of such methods in some sparsely sampled areas of low-density point clouds where transmission towers may be reduced to insulators and wires to sparse outliers, as shown in Figure <ns0:ref type='figure' target='#fig_1'>2d</ns0:ref>. segmentation and monitoring using very high point densities (from 25 to 60 points/m 2 ), and a low flight path often parallel to the power lines (e.g. <ns0:ref type='bibr' target='#b21'>McLaughlin, 2006;</ns0:ref><ns0:ref type='bibr' target='#b14'>Kim and Sohn, 2013;</ns0:ref><ns0:ref type='bibr' target='#b4'>Chen et al., 2018)</ns0:ref>), or even using a dedicated system (e.g. <ns0:ref type='bibr' target='#b30'>Qin et al., 2017)</ns0:ref>. Some reported line-shape-based methods are, however, likely more robust to this issue. For example, <ns0:ref type='bibr' target='#b10'>Jwa and Sohn (2012);</ns0:ref><ns0:ref type='bibr' target='#b21'>McLaughlin (2006)</ns0:ref> used a three-parameter hyperbolic cosine equation that approximates a catenary curve to regress the points belonging to a wire, which was consequently more robust to gaps and sparse sampling. <ns0:ref type='bibr' target='#b21'>McLaughlin (2006)</ns0:ref> achieved an accuracy of ≈87% on wire segmentation in a 2.5 points/m 2 point-cloud while <ns0:ref type='bibr' target='#b10'>Jwa and Sohn (2012)</ns0:ref> reported 75% but with 24 points/m 2 and in more complex scenes.</ns0:p><ns0:p>Despite the diversity and high accuracy of power line segmentation methods described in the literature, none applied to the context of this study. In addition to differences in point cloud densities and the urban vs forested context, previous studies have been designed and tested on much smaller datasets (640 m of power lines for <ns0:ref type='bibr' target='#b10'>Jwa and Sohn (2012)</ns0:ref>, 2 km for <ns0:ref type='bibr' target='#b11'>Jwa et al. (2009)</ns0:ref> and <ns0:ref type='bibr' target='#b24'>Munir et al. (2019)</ns0:ref>, 750 m for <ns0:ref type='bibr' target='#b36'>Sohn et al. (2012)</ns0:ref>, 4 km for <ns0:ref type='bibr' target='#b7'>Guo et al. 
(2019)</ns0:ref>, 1 km to 2 km for <ns0:ref type='bibr' target='#b22'>Melzer and Briese (2004)</ns0:ref> and <ns0:ref type='bibr' target='#b43'>Zhu and Hyyppä (2014)</ns0:ref>, 10 km for Yang and Kang (2018)), often with linear network topologies with no network forking and few deflections. Finally, none of the existing studies we have reviewed provided access to software for users to implement the method or at least access the source code. Several studies were also described in short proceedings papers with insufficient detail to derive our own implementation of the methods.</ns0:p><ns0:p>According to <ns0:ref type='bibr' target='#b20'>Matikainen et al. (2016)</ns0:ref>, multiple patents have been registered but so far the availability of such tools has eluded the research community.</ns0:p><ns0:p>In this context, there is a need for ready-to-use software that allows forest researchers or practitioners to accurately classify and remove power supply structures in ALS point clouds. We present a method tested on ≈200 km of power lines, across various terrains, tower types, and topologies such as linear networks, but also on the deflections and forks that characterise the structure of transmission towers, and in low-density point clouds with extremely partial sampling of the features of interest, as shown in Figure <ns0:ref type='figure' target='#fig_1'>2d</ns0:ref>. Our method, which was inspired from the previous efforts mentioned above <ns0:ref type='bibr'>(especially McLaughlin (2006)</ns0:ref>), takes advantage of an existing map of the power distribution network as prior information. Unlike small power supply facilities in towns that carry energy to end customers, maps of high-voltage power lines are necessarily available and maintained by power supply companies. We also take advantage of the fact that information about the type of transmission towers that hold the conductor wires along a given power line is also available, and is usually invariant along hundreds of kilometres. Our proposed method is fully open-source, reproducible, documented and ready-to-use. It is implemented in R (R Core Team, 2021) and takes advantages of the lidR package <ns0:ref type='bibr' target='#b34'>(Roussel and Auty, 2021;</ns0:ref><ns0:ref type='bibr' target='#b35'>Roussel et al., 2020)</ns0:ref> to facilitate processing over broad areas. In all figures, the classification is perfect (no false positives or false negatives) but in (a) conductors are linear and continuous structures evenly sampled with no gaps so they could easily be segmented based on geometric analysis. In (b) conductors are linear continuous structures with a few discontinuities (gaps). In (c) conductors are linear discontinuous structures partially sampled, while in (d) conductors are sparsely sampled with more missing parts than sampled parts, and even the towers are extremely sparsely sampled so it is only possible to distinguish hanging insulators.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>METHODS</ns0:head><ns0:p>Our classification method relies on two priors: (1) a map of the high-voltage power lines across the area of interest, and (2) knowledge of the type of transmission towers along a given power line. The first prior is trivial as it simply helps focus on areas of interest where the electrical distribution network is located. The second prior implies querying a database in which the specifications of the towers are used to anticipate the sizes and shapes of the towers we are looking for in a given region of interest, as well as the number of wire conductors, how the wires are handled and arranged, etc. Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref> shows, for two tower types, how a tower is described in our software. Then, the point cloud segmentation method consists of four main steps:</ns0:p><ns0:p>(1) tower tracking to find and map the positions of the transmission towers, (2) topology reconstruction to match consecutive, interconnected towers, (3) tower segmentation in which groups of points are identified as belonging to a transmission tower, and (4) wire segmentation in which groups of points are identified as belonging to an electrical wire. Each of these steps is described in further detail in the following sections.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1'>Step 1: Tower tracking</ns0:head><ns0:p>The first step consists of mapping the positions of the transmission towers. Starting from the map of the network in the form of spatial lines, we decompose the network into linear sections and then each line is buffered to encompass the power lines (Figure <ns0:ref type='figure' target='#fig_3'>4a</ns0:ref>). This allows for working iteratively on linear sections with a known orientation and is helpful for reducing scene complexity. This way, the algorithm can focus on smaller numbers of points instead of screening the whole scene, which in our case usually covered an area greater than 1 km² and was otherwise free of human structures.</ns0:p><ns0:p>In each region of interest we applied a point-cloud-based local maximum filter (LMF) to find the locally highest points. The local maximum filter we used is analogous to that normally used to detect individual trees <ns0:ref type='bibr' target='#b29'>(Popescu and Wynne, 2004)</ns0:ref>, but specifically uses narrow but very long rectangular search windows oriented parallel to the wires to reduce the number of false positives. By design, the LMF usually finds all the towers but tends to also produce several false positives corresponding to high vegetation or to sparse earth wire points that are locally higher than neighbouring points (Figure <ns0:ref type='figure' target='#fig_3'>4b</ns0:ref>). In addition, the true positive towers are usually not very accurately positioned because the LMF finds the single highest point of each tower, i.e. the 'ears' of the towers instead of the centre (see the waist type in Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_3'>4b</ns0:ref>). The algorithm then fixes these two issues by removing locations that do not actually correspond to a tower position and by centring the position of the tower top for the remaining true positives.</ns0:p><ns0:p>Because the success of steps 2, 3 and 4 is highly dependent on the accuracy of step 1, this correction task is critical. If a single tower is missing, or a single false positive tower remains, it will invalidate subsequent analyses. Consequently, a very robust trimming of false positives is key to the success of the segmentation. To achieve this, we extract the surrounding point cloud at the position of each candidate tower (Figure <ns0:ref type='figure' target='#fig_3'>4b</ns0:ref>). The subset is then analysed to confirm whether or not it actually contains a tower, based on an analysis of the vertical and horizontal distribution of the points. Interested readers may consult the publicly available source code for further details of how this task is performed. Here, we opted to provide limited detail because this step is likely to be modified and improved in the future, which could in turn invalidate the description provided in this paper.</ns0:p><ns0:p>Using the subset of points surrounding the position of the tower, its position is realigned by averaging the X and Y coordinates of the points located 5 to 10 metres below the location of the highest point (Figure <ns0:ref type='figure' target='#fig_3'>4b</ns0:ref>). Because towers are vertically elongated structures, this task is easy to implement and provides good results. The xyz position of each tower is then set to its centred top.</ns0:p>
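<ns0:p>A minimal sketch of this realignment is shown below, assuming pts is a data.frame of X, Y, Z coordinates extracted around one candidate tower; the function name and thresholds mirror the description above but are illustrative only.</ns0:p>
# Realign a candidate tower by averaging the XY coordinates of the points
# located 5 to 10 m below its highest point (a hedged sketch, not the
# package implementation).
realign_tower <- function(pts) {
  zmax  <- max(pts$Z)
  below <- pts[pts$Z <= zmax - 5 & pts$Z >= zmax - 10, ]
  c(X = mean(below$X), Y = mean(below$Y), Z = zmax)  # centred xyz position
}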
<ns0:p>With the map of the network and the prior information about the tower types, we can also assign the general orientation of the power lines as well as the bounding box of each tower (Figure <ns0:ref type='figure' target='#fig_3'>4c</ns0:ref>). In Figure <ns0:ref type='figure' target='#fig_3'>4c</ns0:ref>, the towers drawn in red were found twice, once per linear section, because the network forks. These correspond to 'deflection towers', which have wires with different orientations on each side. In Figure <ns0:ref type='figure' target='#fig_3'>4c</ns0:ref>, we can see that two deflection towers were detected, but one of them was a false positive. This is not an issue, as will be made evident in the next section.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>Step 2: Topology reconstruction</ns0:head><ns0:p>Once transmission towers are accurately identified and positioned, step 2 consists of retrieving the network topology by determining how towers are interconnected. For this we use the known orientation of the power line and send 500-m beams in the direction of the previous and next towers, with a range of angles corresponding approximately to the width of one transmission tower. The topology of the network is built progressively by identifying the beams that intercept the exact location of the next tower. The 500-m limit was used to avoid scenarios whereby implausibly distant towers are connected by mistake. Figure <ns0:ref type='figure' target='#fig_3'>4d</ns0:ref> provides an example of connections in which we retrieved three rows of power lines, with one being deflected in a different direction. We can see that the false positive deflection tower presented in Figure <ns0:ref type='figure' target='#fig_3'>4c</ns0:ref> was not an issue because it could not connect with any other tower in the direction of the deflection.</ns0:p><ns0:p>In this step we also needed to introduce the concept of 'virtual towers'. Because two towers are needed to compute a connection, we necessarily ran into trouble at the edges of our datasets, where the last towers could not be connected. Virtual towers are thus added when a tower cannot be matched with another tower. While this was always the case at the edges of the studied area, the concept was also useful in cases where towers were missed (false negatives). In such cases, two consecutive towers may be too far apart to be matched, thus creating a gap in the network. The addition of virtual towers helped make the topology reconstruction more robust. The exact location of missing towers was not known, but finding it was not our purpose. The objectives pursued with the creation of virtual towers were twofold: (1) to ensure the topological validity of the networks, and (2) to reduce the effects of missing towers on the detection of wires. The latter will be made clearer after section 2.4. Although they are an approximation, virtual towers contribute to reducing the effects of errors made in step 1. There is no missing tower in Figure <ns0:ref type='figure' target='#fig_3'>4d</ns0:ref>, but we can see the prolongation of the network in lighter colours beyond the last towers at the edges of the area.</ns0:p></ns0:div>
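<ns0:p>A minimal sketch of the beam-based matching follows, assuming towers is a data.frame of tower positions (X, Y) and bearing is the known line orientation in radians; the function name and angular tolerance are illustrative, not the exact package implementation.</ns0:p>
# Find the tower intercepted by a 500-m beam sent from one tower along the
# known line orientation (hedged sketch; returns NA when no tower is
# intercepted, i.e. when a virtual tower would be added).
match_next_tower <- function(tower, towers, bearing, max_dist = 500, tol = atan(10 / 500)) {
  dx   <- towers$X - tower$X
  dy   <- towers$Y - tower$Y
  dist <- sqrt(dx^2 + dy^2)
  ang  <- atan2(dy, dx)
  dang <- atan2(sin(ang - bearing), cos(ang - bearing))  # wrapped angular difference
  ok   <- dist > 0 & dist <= max_dist & abs(dang) <= tol
  if (!any(ok)) return(NA_integer_)
  which(ok)[which.min(dist[ok])]  # index of the nearest intercepted tower
}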
<ns0:div><ns0:head n='2.3'>Step 3: Tower classification</ns0:head><ns0:p>At this stage we know the positions of the towers, their height, their orientation, their type and their interconnections in the network. However, the point cloud remains to be classified. As we know the tower type, we also know the xy dimensions of each tower. In this step, we assign all points located within the bounding boxes of the towers, down to the ground level, as belonging to a transmission tower (Figure <ns0:ref type='figure' target='#fig_5'>5a and b</ns0:ref>).</ns0:p></ns0:div>
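<ns0:p>This amounts to a simple spatial query, sketched below on a plain data.frame of points; the ASPRS LAS class code 15 (transmission tower) is used for illustration, and the function name is hypothetical.</ns0:p>
# Classify every point falling inside a tower's xy bounding box, from the
# top down to the ground; `bbox` = c(xmin, xmax, ymin, ymax).
classify_tower_points <- function(pts, bbox) {
  in_box <- pts$X >= bbox[1] & pts$X <= bbox[2] &
            pts$Y >= bbox[3] & pts$Y <= bbox[4]
  pts$Classification[in_box] <- 15L  # ASPRS LAS code 15 = transmission tower
  pts
}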
<ns0:div><ns0:head n='2.4'>Step 4: Wire classification</ns0:head><ns0:p>In this step, we use a well-known phenomenon from physics and geometry to classify points belonging to the conducting wires. An idealized hanging chain or cable supported only at its ends assumes, under its own weight, a type of curve called a catenary. The general equation describing the catenary between two points is <ns0:ref type='bibr' target='#b9'>(Hatibovic, 2014)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_0'>y(x) = 2c\,\big(A(x) - B(x)\big) + h_1<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>with</ns0:p><ns0:formula xml:id='formula_1'>A(x) = \sinh^2\left[\frac{1}{2c}\left(x - \frac{S}{2} + c \cdot \operatorname{arcsinh}\frac{h_2 - h_1}{2c\,\sinh\left(\frac{S}{2c}\right)}\right)\right]<ns0:label>(2)</ns0:label></ns0:formula><ns0:formula xml:id='formula_2'>B(x) = \sinh^2\left[\frac{1}{2}\left(\frac{S}{2c} - \operatorname{arcsinh}\frac{h_2 - h_1}{2c\,\sinh\left(\frac{S}{2c}\right)}\right)\right]<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>where (x_1, h_1) and (x_2, h_2) are the positions and elevations of the cable ends, c is a constant that (roughly) corresponds to the tension in the wire, and S is the span length, i.e. |x_2 − x_1|.</ns0:p><ns0:p>Despite the apparent complexity of this equation, it has a single free parameter c, which is part of the description of the tower, assuming that for a given tower type the tension is always the same. We can thus reconstruct the wire curve from the tops of the towers (Figure <ns0:ref type='figure' target='#fig_5'>5c</ns0:ref>). This curve is positioned above the wire, but since we know the tower type we also know the typical distance between the top of the tower and the wires (Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>). We then move this curve below the wires (Figure <ns0:ref type='figure' target='#fig_5'>5d</ns0:ref>) and classify every point above the curve that is not already classified as a tower, as a wire. This includes the earth wires that, in practice, look like noise or outliers but are actually wires (Figure <ns0:ref type='figure' target='#fig_5'>5e</ns0:ref>).</ns0:p><ns0:p>The final classification for the dataset used to describe the method is shown in Figure <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>. Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> was also produced using this method.</ns0:p></ns0:div>
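<ns0:p>A direct R transcription of Eqs. (1)-(3) can be written as follows, assuming the wire spans from x = 0 (elevation h1) to x = S (elevation h2) with catenary constant c; asinh() is base R's arcsinh.</ns0:p>
# Elevation of the catenary at abscissa x between two cable ends
# (a sketch transcribing Eqs. (1)-(3), not the package implementation).
catenary <- function(x, h1, h2, S, c) {
  q <- asinh((h2 - h1) / (2 * c * sinh(S / (2 * c))))
  A <- sinh((x - S / 2 + c * q) / (2 * c))^2
  B <- sinh((S / (2 * c) - q) / 2)^2
  2 * c * (A - B) + h1
}
# Sanity check: the curve passes through both cable ends.
catenary(c(0, 200), h1 = 30, h2 = 35, S = 200, c = 1000)  # returns ~c(30, 35)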
<ns0:div><ns0:head n='2.5'>Datasets</ns0:head><ns0:p>To validate the algorithm, we selected 130 1-km² tiles containing wires from a 30000-km² dataset located in the Côte-Nord region of eastern Quebec, Canada (Figure <ns0:ref type='figure' target='#fig_7'>7</ns0:ref>). Although the tile selection was blind (i.e. made without looking at the point cloud), it was not fully randomised. Tiles were grouped into 28 regions of interest, which were selected manually to include difficult cases, including deflections, forks, gaps, single rows, multiple rows, multiple tower types, etc. A fully randomised selection would very likely have produced only linear sections with waist-type transmission towers, which are the most common in the network. Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> shows some small linear subsets from some of the chosen regions of interest. Our validation dataset consisted of 28 regions of interest, with conductor lengths ranging from 2 km to 12 km in areas ranging from 4 to 8 km². Each section contains an electrical network in single (Figure <ns0:ref type='figure' target='#fig_1'>2a</ns0:ref>) or multiple rows (Figure <ns0:ref type='figure' target='#fig_1'>2b</ns0:ref>). To estimate the network length, we multiplied the length of the power lines by the number of rows. For example, in Figure <ns0:ref type='figure' target='#fig_1'>2b</ns0:ref> there are 3 times 900 m of wire conductors, i.e. 2.7 km.</ns0:p><ns0:p>The point density averaged 7 points/m² over the full dataset and ranged locally from 2.6 to 13.8 points/m². The full 30000-km² dataset was not sampled in a single contract but was acquired over several years. Thus, the densities tended to change among regions. Because of the multiple contractors, the acquisition devices also varied. For these reasons, the sampling design specifications are not reported here. Our proposed method should be seen as working independently of the acquisition device and point density.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.6'>Validation</ns0:head><ns0:p>To evaluate the accuracy of the method, we manually assessed the results obtained in our 28 selected regions of interest. In the absence of reference data on the positions of the transmission towers, we manually assessed the tower segmentation and counted the numbers of true positives, false negatives and false positives. This relatively trivial task was performed on a total of 415 towers.</ns0:p><ns0:p>Assessing the accuracy of the wire segmentation was more challenging. In the absence of a perfect segmentation upstream of the method, we could not directly and automatically estimate the percentage of points that were correctly or incorrectly classified. The key to the successful application and development of our method was to identify situations in which points tend to be misclassified. To this end, counting the percentage of correctly classified points did not bear much relevance. For example, we could miss all points belonging to conductor wires in dataset #15 (for which a subset is presented in Figure <ns0:ref type='figure' target='#fig_1'>2d</ns0:ref>) without significantly affecting the overall percentage of correct classification, because this dataset is so sparsely populated that it actually accounts for a very small proportion of the points included in our analysis. We thus chose the lengths of true positive, false positive and false negative sections of power supply facilities as a quantitative measure of segmentation accuracy. This way, the same weight was attributed to each dataset independently of its sampling density. By design, the misclassification errors output by our algorithm are grouped into continuous clusters. The results were thus free of isolated and randomly-located misclassified points, which ensures the relevance of the chosen metric. This approach also allowed us to derive some key statistics to assess model performance, such as precision, recall and F-score, which were interpreted in kilometres of power lines instead of percentages of points.</ns0:p></ns0:div>
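<ns0:p>The derived statistics follow the usual definitions, applied to section lengths rather than point counts. A minimal sketch, where tp, fp and fn are the summed lengths (in km) of true positive, false positive and false negative wire sections:</ns0:p>
# Length-based precision, recall and F-score (hedged sketch).
segmentation_metrics <- function(tp, fp, fn) {
  precision <- tp / (tp + fp)
  recall    <- tp / (tp + fn)
  c(precision = precision,
    recall    = recall,
    fscore    = 2 * precision * recall / (precision + recall))
}
# e.g. dataset #1 in Table 1:
segmentation_metrics(tp = 11.4, fp = 0, fn = 0.74)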
<ns0:div><ns0:head n='3'>RESULTS</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.1'>Accuracy</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref> shows, for the 28 evaluation datasets, the dataset IDs (so we can refer to them in the text), the number of power line rows, the number and types of transmission towers supporting the wires, as well as the number of true positives, false positives and false negatives. Regarding the wires, the table also presents their total length (i.e. the length of the network multiplied by the number of wire rows) as well as the total lengths of true positive, false positive and false negative sections. Overall, 21 of the 28 trials were perfectly classified, with 0 false positives and 0 false negatives. Figure <ns0:ref type='figure' target='#fig_6'>6</ns0:ref> provides an example of such a perfect classification, obtained in dataset #13. The seven remaining datasets contained various degrees and types of classification errors. The classification errors of datasets #1, #15 and #17 are presented in further detail in the next sections to highlight some limitations of the method. The NA values associated with datasets #10 and #22 correspond to areas where we considered that the algorithm had failed. In both cases, largely incorrect results were produced whereby false positives and false negatives were no longer clustered in continuous sections.</ns0:p><ns0:p>For power lines supported by waist-type transmission towers, a total of 1.7 km of structures were misclassified (mostly false negatives), which represents 1.4% of the total studied length. If all power supply structures had been equally densely sampled, this would correspond to an accuracy of 98.6%, a precision of 99.7%, a recall of 98.9%, and an F-score of 0.993 at the point level. For power lines supported by double-circuit transmission towers, a total of 740 m of structures were misclassified (false negatives only), which represents 1.4% of the total studied length and corresponds to an accuracy of 98.6%, a precision of 100%, a recall of 98.6%, and an F-score of 0.993. For power lines supported by small waist-type towers, however, the results were largely unusable, with several occurrences of false positives at the tower detection stage. Tall trees at random locations were often classified as towers in such situations, which subsequently led to vegetation being classified as wires and the actual wires being missed. Overall, ≈75% of the studied length was unusable in such situations.</ns0:p><ns0:p>It was not possible to provide an overall estimate of accuracy for the application of the algorithm to the entire network because the relative proportions of these three tower types were unknown. However, it was evident from our samples that waist-type towers are dominant in the studied region, with double-circuit towers also being relatively frequent and small waist-type towers relatively rare. One key factor associated with the poor results obtained for small waist-type towers was that their height was sometimes lower than that of some of the neighbouring trees.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Imperfect classification</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_8'>8a</ns0:ref> shows a subset of region #1 in which three towers are missing where the conductors split and join back. Here, the 'split towers' are of a different type to the others on the line, which explains why they were trimmed during the candidate tower identification step. We can observe that where the two side-by-side towers were missed, the section was automatically removed. This left a continuous cluster of unclassified points, but none were actually misclassified. This is because the span of the connected towers that were correctly identified on each side was so large that the theoretical wire profile would pass below ground according to the tension normally applied to this type of tower. Facing this impossible scenario, the algorithm automatically removed the entire section. Where only one of the two side-by-side towers is missing, as seen on the right hand side of Figure <ns0:ref type='figure' target='#fig_8'>8a</ns0:ref>, the topology remains valid. In this case, only one part of the split was correctly classified while the other remained unclassified, once again without any false positive classification.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_8'>8b</ns0:ref> shows a subset of region #15 that focuses on the missing tower. Consequently, one tower could not be matched to any other and the topology of the network was interrupted. Despite this, large sections of the wires were still correctly segmented, again avoiding false positives. This was attributable to the addition of a virtual tower, as explained previously, which left only a small portion of the wires unclassified with no false positives.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_8'>8c</ns0:ref> shows the entire region #17, in which the segmentation missed a lot of features. This scene corresponds to an extreme case where facilities are so sparsely sampled that we can only see the insulators of the transmission towers in the point cloud. Consequently, several towers were not found, and thus all subsequent steps failed in these sections.</ns0:p></ns0:div><ns0:div><ns0:head n='4'>DISCUSSION</ns0:head></ns0:div><ns0:div><ns0:head n='4.1'>Accuracy</ns0:head><ns0:p>Our results are not directly comparable to other methods. First, unlike previous methods cited in the introduction, the application of our method is dependent on the availability of a priori information. Second, our method applies to a totally different type of data than is analysed using most of the other existing methods. As highlighted in the introduction, line-shape-based methods would have likely failed at segmenting our data, at least where the point density is low. Conversely, our method would also fail to segment several of the cases reported in the literature. The accuracy of our results should therefore be interpreted within the scope of our study, i.e. the percentage of high voltage facilities that can reliably and accurately be removed from low density point clouds to facilitate the subsequent study of the neighbouring vegetation. This method was never intended to be used for monitoring the state of the electrical distribution network. We found <ns0:ref type='bibr' target='#b21'>McLaughlin (2006)</ns0:ref> and <ns0:ref type='bibr' target='#b43'>Zhu and Hyyppä (2014)</ns0:ref> to be the two closest comparable studies to this one. Zhu and Hyyppä (2014) studied power line segmentation in a forestry context and achieved an average correct classification rate of 93% with 55 pts/m² on approximately 2.5 km of power lines.
<ns0:ref type='bibr' target='#b21'>McLaughlin (2006)</ns0:ref> worked with a low density point cloud (2.5 points/m²) in a forested area, testing their method on a relatively larger network (14 km), and obtained an accuracy of 82%.</ns0:p><ns0:p>Further improvements could be implemented relatively easily by refining the tower detection method in step 1, which, for the two most frequent tower types in our study area (waist-type and double-circuit), represents a very small fraction of the source code (50 lines among more than 1000). The tower detection step is indeed the most critical in the process. When tower detection reaches an accuracy of 100%, the overall segmentation will also generally reach an accuracy of 100%. However, if a single tower is missing, or worse, a false positive tower is detected, the algorithm will start failing.</ns0:p><ns0:p>An important feature of the method is that, by design, the errors are grouped in contiguous clusters. This is highly advantageous for safely cleaning the point cloud. When false positives or false negatives can be randomly spread over the study area, even a 99% accuracy can lead to a high percentage of pixels in which the underlying vegetation is statistically hidden by the remaining outliers, as explained in the introduction. Our method ensures that the vast majority of pixels will be fully cleaned and that only a minority of contiguous pixels may contain measurement artefacts.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Imperfect classification and unsupported cases</ns0:head><ns0:p>Most of the regions tested were perfectly segmented according to our manual inspection; however, some errors still occurred when towers were incorrectly detected. Because our method relies fully on the location of the towers to find the wires, a single error in the tower tracking step may have cascading effects. We observed four cases:</ns0:p><ns0:p>1. Figure <ns0:ref type='figure' target='#fig_8'>8b</ns0:ref> shows a very simple case where a single tower that apparently looks easy to find was missed for an unknown reason. This case will probably be better handled as a result of the continuous improvement of the tower candidate correction step, but we think it highlights that some single towers might be missed. In such cases, the introduction of a virtual tower can limit the effect of a single missing tower on the wire detection. Our example shows that the wires were mostly correctly segmented and only a small proportion were missed. Virtual towers can be seen as safeguards providing greater robustness to the method.</ns0:p><ns0:p>2. Figure <ns0:ref type='figure' target='#fig_8'>8a</ns0:ref> shows a case that was simply not handled by our method. We did not anticipate the existence of such topologies and we discovered this case at the validation stage. This case will be harder to process correctly, but it remains a rare occurrence over the full network. Limiting errors in such cases will require an in-depth redesign of the topology reconstruction step, as well as an improvement of the tower candidate correction step.</ns0:p><ns0:p>3. Figure <ns0:ref type='figure' target='#fig_8'>8c</ns0:ref> shows a very complex scene where the electrical facilities are only sparsely sampled. Towers are often reduced to 10 to 30 points at their very top and we actually see only the suspension insulators. Despite the limitations of the method in very low density point clouds, we found a smaller proportion of false classifications than expected and the segmentation accuracy still reached 75%. In the portions that were incorrectly classified, the human eye could not even distinguish the suspension insulators of the towers.</ns0:p><ns0:p>4. Regions #10, #23 and #24 presented cases that were badly handled, where small transmission towers were situated adjacent to tall trees, which were wrongly interpreted as towers. Improving this result will require further refinement of the candidate tower correction step, which represents a very small portion of the code that drives the entire method.</ns0:p><ns0:p>In case 4, two situations may occur: (1) a tall tree is close to a tower and the local maximum filter detects the tree instead of the tower, leading to one false negative and one false positive tower; (2) several tall trees are detected as towers, and even if the correction step is robust, a few false positive towers may remain because trees and towers are more similar than expected. While this has resulted in very poor outcomes, our results suggest that producing a more robust tower detection method in step 1 would be sufficient to improve the segmentation accuracy. No modifications would likely be required for the other three steps.</ns0:p><ns0:p>Case 3 should be put in perspective with case 4.
The first step of our method could be made more robust by adding tests aiming to determine whether the distribution of points matches the expected morphology of a transmission tower rather than another object, such as a tree. Such tests were not included in our method because of situations where the point density is so low that even the human eye is unable to recognize the features of transmission towers in the data. The reality of our sampling implied an important trade-off between the capacity of our algorithm to robustly make the distinction between a tree and a tower, and its ability to robustly recognize towers that are so sparsely sampled that their morphological features are unrecognizable. Our method was made relatively robust to sparse sampling, but this comes at the expense of the detectability of small towers. The threshold density at which power supplies are no longer recognizable is hard to estimate. Dataset #17 was sampled with a nominal density of 2.6 points/m², but what matters is the sampling density of the target structures, which may also be affected by the direction of the moving sensor relative to the structure. In this case it was both a low-density point cloud and a sensor moving perpendicular to the power lines.</ns0:p><ns0:p>There are obviously some other limitations of our algorithm whose consequences were not observed in our dataset. Our inability to fully anticipate all possible features of high-tension power lines implies that we cannot provide an exhaustive list, but the following limitations are worth mentioning.</ns0:p></ns0:div>
<ns0:div><ns0:p>Our method does not apply to networks supported by different types of transmission towers along the same wires. A typical case can be found in <ns0:ref type='bibr' target='#b4'>Chen et al. (2018)</ns0:ref>, figures 10 and 11. Similarly, it does not apply to networks with very close side-by-side power supply facilities of different types, e.g. one row of waist-type towers adjacent to another row of double-circuit towers. Also, any overly-complex scenes, such as networks located in the vicinity of generators containing several wires and towers, would likely make the method inapplicable.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3'>Priors and limits of application</ns0:head><ns0:p>Our method is assisted by prior knowledge of the network, including wire orientation and tower types. This is an obvious limitation of the method if such prior information is not available. However, high voltage power lines are usually mapped. In more complex contexts, such as in urban regions where lower voltage conductors are present, having this kind of prior knowledge may not be possible. The segmentation of urban electrical facilities requires different methods and point clouds sampled with higher densities. Our method would not be applicable in such contexts.</ns0:p><ns0:p>Nevertheless, our method still uses a limited set of priors. For example, it does not require prior knowledge of the number of power line rows. Yet, we observed that it performs well on one to four rows (four rows not shown) and there is no hard upper limit. Our method also does not require any prior knowledge of the positions of the towers along the network, since it is capable of detecting them. In practice, power supply companies typically maintain maps of high-voltage transmission towers, and using them may lead to even more accurate results. The method also includes safeguards to protect against irrelevant false positive classifications. This is what we showed in Figure <ns0:ref type='figure' target='#fig_8'>8a</ns0:ref>, where the ground was initially classified as wire but was automatically corrected, and in Figure <ns0:ref type='figure' target='#fig_8'>8b</ns0:ref>, where a virtual tower prevented too many wire points from being missed. Also, unlike previous methods presented in the literature, we showed that our method works not only on linear sections, but also supports branching and deflections of networks. It works with partial objects in low-density point clouds and does not need any special flying pattern (e.g. following the power line). We believe it is thus applicable to any natural landscape in which broad ALS coverage is available.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.4'>Implementation and reproducible work</ns0:head><ns0:p>Broad ALS coverage is normally split into multiple files. We presented the method on single blocks of data loaded in memory, but in practice it would not be possible to load thousands of tiles at once. Unlike methods presented in the literature that require pre-processing to fit each segment into a single file (e.g. <ns0:ref type='bibr' target='#b7'>Guo et al., 2019)</ns0:ref>, our method works independently of the tiling pattern. Each tile is loaded with a buffer, thus ensuring that the virtual towers that are added at its edges are actually out of the processed region and do not affect the classification. This is made possible through the file collection processing engine provided with the lidR package <ns0:ref type='bibr' target='#b34'>(Roussel and Auty, 2021;</ns0:ref><ns0:ref type='bibr' target='#b35'>Roussel et al., 2020)</ns0:ref>. The method is therefore not only workable on small samples but it is also fully integrated into a framework dedicated to processing large areas seamlessly.</ns0:p><ns0:p>The text presented in this paper would not allow the methods to be fully reproduced. We have presented the overall concept, but for the sake of concision several details were omitted about candidate tower cleaning, virtual towers and deflection towers, for example. To ensure reproducibility, the source code has been made publicly available, so that any interested user can implement the method and analyse it in detail. The implementation is made of four functions corresponding to the four steps described in the materials and methods section. Any step can thus be replaced easily by another method to match users' needs without affecting the other steps.</ns0:p><ns0:p>The method is available within the R language (R Core Team, 2020) and is implemented in the lidRplugins package available at https://github.com/Jean-Romain/lidRplugins. lidRplugins is an extension of the lidR package <ns0:ref type='bibr' target='#b34'>(Roussel and Auty, 2021;</ns0:ref><ns0:ref type='bibr' target='#b35'>Roussel et al., 2020)</ns0:ref> and contains experimental methods. Depending on its future improvements and success when applied to other datasets, this algorithm might at some stage be made directly available in the lidR package.</ns0:p></ns0:div>
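<ns0:p>A hedged usage sketch of this processing engine is shown below. readLAScatalog() and opt_chunk_buffer() are actual lidR functions, whereas the step functions and their arguments are illustrative placeholders rather than the exact exported API of lidRplugins.</ns0:p>
library(lidR)
# library(lidRplugins)  # https://github.com/Jean-Romain/lidRplugins

ctg <- readLAScatalog("path/to/als/tiles/")
opt_chunk_buffer(ctg) <- 200  # buffer each tile so the virtual towers added
                              # at its edges fall outside the processed region

# Hypothetical step functions mirroring the four steps of the method:
# towers <- find_towers(ctg, network_map, tower = waist_type)  # step 1
# topo   <- reconstruct_topology(towers)                       # step 2
# ctg    <- classify_network(ctg, topo)                        # steps 3 and 4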
<ns0:div><ns0:p>The package does not record every existing transmission tower type in a database. Instead, users can define and use their own tower types, which makes the method reproducible in jurisdictions where power lines are supported by tower types different to those presented here.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>CONCLUSIONS</ns0:head><ns0:p>We created an algorithm dedicated to segmenting electrical network facilities in ALS data over vast natural landscapes. The method is designed to segment transmission towers, conductor wires and earth wires from high-voltage power lines. The method was developed to clean point clouds in an attempt to facilitate ecology- and forestry-oriented studies that use ALS data. Our approach relies on prior information about the network and is thus not fully unsupervised. It is, however, applicable to vast areas containing a wide range of specific cases not explicitly addressed in previous studies, such as wire splitting, wire forking, complex topography, complex topology, earth wires, partial wire sampling, partial tower sampling and non-optimized sampling for electrical network surveys. We studied the limitations of the method through an application in northeastern Quebec, Canada.</ns0:p><ns0:p>From this analysis we conclude that limiting false positives and false negatives at the tower detection stage (step 1) is key to the success of the segmentation. We highlighted the limitations of our method, but over the whole network tower detection errors remained a rare occurrence. It may be possible to refine the tower candidate correction step with minor effort and without redesigning the whole algorithm, which has proven to be robust. Further development efforts could be dedicated to the automatic recognition of tower types to support some cases not covered by our method, such as power lines supported by various tower types. However, this would only be possible for datasets in which the point density is sufficient to make the morphological features of transmission towers recognizable. Finally, to ensure reproducibility and allow further development, we also provided a ready-to-use open-source tool implemented in a framework dedicated to processing vast areas.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Wire conductor segmentation using an eigenvalue decomposition of the k-nearest neighbourhood of each point to evaluate the elongation of the local point structure. The method works very well without any prior information on the network, as long as the wires are sampled homogeneously and continuously.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. 3D rendering of classified point clouds on a digital terrain model (to scale) representative of the reality of broad natural landscape coverage. Towers (blue) and wires (red) for four selected subsets are shown. In all figures, the classification is perfect (no false positives or false negatives) but in (a)conductors are linear and continuous structures evenly sampled with no gaps so they could easily be segmented based on geometric analysis. In (b) conductors are linear continuous structures with a few discontinuities (gaps). In (c) conductors are linear discontinuous structures partially sampled, while in (d) conductors are sparsely sampled with more missing parts than sampled parts, and even the towers are extremely sparsely sampled so it is only possible to distinguish hanging insulators.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Description of two types of towers in the database. All measurements are given within a range of validity because towers of the same type may differ (mainly, but not only, in height). The two types are not drawn to the same scale; the double circuit type is usually taller.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Main steps of the transmission tower tracking and topology reconstruction displayed on a 4-km² region of interest for a difficult case where the network forks. (a) original map of the network split into two linear sections in red and black and the associated buffers with dotted lines; (b) tower candidate correction step with a tower incorrectly positioned, and a false positive because an earth wire point was detected as a local maximum; (c) map of the transmission towers found including their bounding boxes and the orientation of the wires. Red towers are those found twice, once per linear section processed; (d) topology reconstruction to retrieve how towers are connected. Light colours correspond to a network reconstructed with virtual towers because there are no towers beyond the limits of the dataset.</ns0:figDesc><ns0:graphic coords='8,178.44,63.78,340.16,345.23' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Main classification steps. (a) Large red points are the transmission tower positions found in step 1 and are known to be connected following the application of step 2. (b) Using the bounding box of each tower centred on the tower position, we classify all points from top to ground as belonging to a tower (in blue). (c) Reconstruction of the catenary curve in purple. (d) Moving this curve below the wires. (e) Classify the points above the curve as wires, including the earth wires that typically look like outliers.</ns0:figDesc><ns0:graphic coords='9,164.27,63.78,368.52,347.63' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. 3D rendering of points classified as part of the electrical network on a 4-km², true-scale digital terrain model.</ns0:figDesc><ns0:graphic coords='10,164.27,63.78,368.49,233.83' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Location of the 30000-km² dataset in the Côte-Nord region of Quebec, Canada. The 28 regions shown in red were used to validate the algorithm.</ns0:figDesc><ns0:graphic coords='10,178.44,345.40,340.15,166.61' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure8. Subsets of datasets #1, #15 and #17 that focus on classification errors. To make the image interpretable we chose to display only the structures belonging to the electrical network. Red points were classified as wire and blue points as transmission tower. Grey points show towers or wires that were not classified. Arrows point to the most important errors: (a) missing towers led to unclassified wires and towers, (b) a missing tower led to unclassified wires and towers, but the addition of a virtual tower led to a correct classification of the majority of the wires anyway, and (c) the sampling of the towers is so sparse that tower detection is almost impossible, and thus subsequent steps failed.</ns0:figDesc><ns0:graphic coords='13,141.73,63.78,425.16,213.54' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Summary</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>Towers (unitless)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Wire (km)</ns0:cell></ns0:row><ns0:row><ns0:cell>ID</ns0:cell><ns0:cell cols='3'>Rows Type n</ns0:cell><ns0:cell cols='5'>TP FP FN Length TP</ns0:cell><ns0:cell>FP</ns0:cell><ns0:cell>FN</ns0:cell></ns0:row><ns0:row><ns0:cell>#1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>dc</ns0:cell><ns0:cell cols='3'>32 29 0</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>12.14</ns0:cell><ns0:cell cols='2'>11.4 0</ns0:cell><ns0:cell>0.74</ns0:cell></ns0:row><ns0:row><ns0:cell>#2</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>dc</ns0:cell><ns0:cell cols='3'>32 32 0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>12.4</ns0:cell><ns0:cell cols='2'>12.4 0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>#3</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>wt</ns0:cell><ns0:cell cols='3'>13 13 0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>#4</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>wt</ns0:cell><ns0:cell cols='3'>12 12 0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>#5</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>wt</ns0:cell><ns0:cell cols='3'>24 24 0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>#6</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>wt</ns0:cell><ns0:cell cols='3'>26 26 0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>11.2</ns0:cell><ns0:cell cols='2'>11.2 0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>#7</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>wt</ns0:cell><ns0:cell cols='3'>11 10 0</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>5.7</ns0:cell><ns0:cell cols='2'>5.55 0</ns0:cell><ns0:cell>0.15</ns0:cell></ns0:row><ns0:row><ns0:cell>#8</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>wt</ns0:cell><ns0:cell cols='3'>39 39 0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>17.2</ns0:cell><ns0:cell cols='2'>17.2 0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>#9</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>wt</ns0:cell><ns0:cell cols='3'>32 32 0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>#10 1</ns0:cell><ns0:cell>wts</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>4.4</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>NA</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>#11 1</ns0:cell><ns0:cell>wt</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>2.3</ns0:cell><ns0:cell>2.3</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>#12 1</ns0:cell><ns0:cell>wt</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>2.3</ns0:cell><ns0:cell>2.3</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>#13 3</ns0:cell><ns0:cell>wt</ns0:cell><ns0:cell cols='3'>14 14 
0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>6.3</ns0:cell><ns0:cell>6.3</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>#14 2</ns0:cell><ns0:cell>wt</ns0:cell><ns0:cell cols='3'>10 10 0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>4.6</ns0:cell><ns0:cell>4.6</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>#15 2</ns0:cell><ns0:cell>wt</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>4.6</ns0:cell><ns0:cell>4.4</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.2</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>#16 3</ns0:cell><ns0:cell>wt</ns0:cell><ns0:cell cols='3'>16 16 0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>6.9</ns0:cell><ns0:cell>6.9</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>#17 2</ns0:cell><ns0:cell>wt</ns0:cell><ns0:cell cols='2'>10 7</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>5.2</ns0:cell><ns0:cell cols='3'>3.85 0.35 1</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>#18 2</ns0:cell><ns0:cell>wt</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>#19 3</ns0:cell><ns0:cell>wt</ns0:cell><ns0:cell cols='3'>14 14 0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>6.9</ns0:cell><ns0:cell>6.9</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>#20 2</ns0:cell><ns0:cell>wt</ns0:cell><ns0:cell cols='3'>11 11 0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>5.6</ns0:cell><ns0:cell>5.6</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>#21 1</ns0:cell><ns0:cell>wts</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2.6</ns0:cell><ns0:cell cols='3'>2.45 0.15 0</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>#22 1</ns0:cell><ns0:cell>wts</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>NA</ns0:cell><ns0:cell>NA</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>#23 2</ns0:cell><ns0:cell>dc</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>#24 2</ns0:cell><ns0:cell>dc</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>#25 2</ns0:cell><ns0:cell>dc</ns0:cell><ns0:cell cols='3'>12 12 0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>#26 2</ns0:cell><ns0:cell>dc</ns0:cell><ns0:cell cols='3'>12 12 0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>#27 2</ns0:cell><ns0:cell>dc</ns0:cell><ns0:cell cols='3'>10 10 0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>#28 2</ns0:cell><ns0:cell>dc</ns0:cell><ns0:cell cols='3'>14 14 
0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>4.8</ns0:cell><ns0:cell>4.8</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row></ns0:table><ns0:note>Table 1. Summary of the results for our 28 evaluation datasets containing from 2 to 12 km of wires. [dc] double circuit, [wt] waist-type, [wts] waist-type-small, which looks similar to waist-type but with smaller towers. Accuracy for tower segmentation is measured as the number of towers missed or falsely detected. Accuracy for wire segmentation is measured as the length of wire missed or falsely detected.</ns0:note></ns0:figure>
</ns0:body>
" | "1
1 Reviewer 1
Basic reporting
Comment 1: The literature is not sufficient. For example, lines 53-57 and 65-66 mention "some studies". Which studies? What are the gaps that motivated you to conduct this research?
Answer 1: The introduction has been entirely rewritten (except the first paragraph). We addressed the three reviewers' comments about the introduction, which all converge. We have included more references and we carefully explained what motivated us to conduct this study, as well as the expected outcomes.
Change made: Introduction entirely re-written.
Comment 2: Introduction is a bit messy – reorganise it. The authors have tried to provide gaps to highlight the motivation, but the discussion is too divergent. Why is solving this problem really important? There are a few recent studies that you have missed.
Answer 2: See comment 1
Comment 3: What outcome is actually expected?
Answer 3: See comment 1
1.1 Experimental design
Comment 4: The research gaps are not well presented. What the authors want to achieve is not well stated. Section 2 provides all details in one section, briefly. For example, if you discuss the datasets, why not provide more details about them (especially reliability concerns about the dataset)? Validation, which should be one of the main focuses, is presented briefly.
Answer 4: The research gap and what we want to achieve are now better presented in the introduction. Following the suggestions of the reviewers, the validation has been entirely modified and strengthened. We provide an improved explanation of why a validation based on the percentage of correctly or incorrectly classified points would not be relevant in our study, and we have chosen to validate our method based on the length of correctly and incorrectly classified wire sections. This is now explained in the materials and methods. While the method is not standard in the literature, we think the explanations will be convincing. We wish to thank the reviewer for highlighting this issue with our validation. In line with this suggestion, we now report accuracy, precision, recall, and F-score from our validation.
Change made: Introduction entirely re-written; new validation method and metrics.
Comment 5: Results and discussion are okay but they are not well connected with the previous and subsequent sections. Elaborate each section and make a link between them.
Answer 5: The introduction, results and discussion sections were reworked substantially.
Change made: Substantial amendments made to the discussion, mainly in the first two sections.
Comment 6: Conclusion needs to be modified – what is the key take-away message? Do your findings have applicability in industry, and how could they provide value?
Answer 6: With the new introduction and discussion, we believe part of this issue with the take-away message will be fixed. One important element was to focus more clearly on our main goal, which, unlike most previous studies, was not network monitoring but instead removing the points to clean the scene and facilitate the study of the vegetation. In addition to this, we modified the conclusion as recommended.
Change made: Conclusion modified to include a clear take-away message on the importance of the tower detection step.
1.2 Validity of the findings
Comment 7: The validation is provided but there are many threats to validity, which are not addressed. Provide details about the different threats and mention how you actually addressed them (if you have) or failed to address them. Can your study be treated as valid and accurate? The discussion is not sufficient to build this trust. You must explicitly mention it in detail.
Answer 7: We believe that our answer to comment 4 addressed this comment.
The validation approach has been entirely modified in line with the suggestions. We
also provided a better justification for the chosen approach.
1.3 Comments for the Author
Comment 8: I hope that incorporating my feedback and restructuring your article will help to improve your research submission.
Answer 8: It did for sure. Thank you for your constructive review and comments.
2 Reviewer 2
2.1 Basic reporting
Comment 9: The paper has deficiencies in English expression, sentence structure,
and tense usage. Please have the article proofread by an expert.
Answer 9: The paper has been reviewed by a native English (British) speaker
who is very familiar with scientific writing.
Comment 10: Literature is not systematically reviewed and presented in line with current research. Please add a more in-depth introduction and literature review by studying more recent publications, and explain what this research conveys differently.
Answer 10: The introduction has been entirely rewritten (except the first paragraph). We addressed the three reviewers’ comments about the introduction, which
all converge, so we now think the manuscript is much more coherent. We included
more references to better position our work with regards to the most up-to-date
knowledge on this topic, and carefully explained the specificity of our study. Through
this process, we believe we were able to better present the knowledge gap, and thus
the main contribution and novelty of our work.
Change made: Introduction entirely re-written
2.2 Experimental design
Comment 11: The authors are advised to include main contributions or research
questions to better highlight the novelty of the research.
Answer 11: See answer to comment 10
Comment 12: The motivation to carry out this work is missing. The introduction section should include a discussion on this aspect.
Answer 12: See answer to comment 10
Comment 13: In the present form it is very difficult to understand the various
equations. The authors must elaborate the different variables or parameters in each
equation before writing it.
Answer 13: There is a single equation. We do not understand which 'various' equations the reviewer is referring to. We do agree that the catenary equation
is relatively complex. This is why we split it into 3 pieces so each piece fits a single
line. This way, we believe it is a little easier to understand than the form presented
in the cited paper. A key point here is that understanding the equation is not necessary to understand the paper. We added it because it is a key part of the algorithm.
One advantage is that it saves interested readers the need to access and read the
full paper of Hatibovic 2014, which contains several similar complex equations that
are often displayed over several lines.
Comment 14: The authors must include a discussion on why the eigenvalue
decomposition method is used. Are there other methods available? Can they be
used? Why or why not?
Answer 14: We did not use eigenvalue decomposition. It is a commonly used
approach for wire detection. We did explain how its use may be considered in high-density point clouds but not in low-density ones.
Change made: Newly presented introduction that includes a better description of the two main families of methods with their reported accuracy, and of the
shortcomings in the context of this study (L77-94).
Comment 15: Para line 37-44: the authors need to establish a strong background for this claim. Some recent work along with the challenges should be included. Is it one of these challenges? Establish this with the support of similar work
references in this regard.
Answer 15: This paragraph has been rewritten to explain how ALS data are
processed in forestry and ecology and, consequently, how human-made structures can
affect local biomass predictions.
Change made: Lines 37-44 were entirely re-written in line with these comments.
Comment 16: Para line 46-51: Similarly, authors need to support this claim
with recent work.
Answer 16: We cannot support this claim with the literature. Masking spatial
data with a buffer is a very common procedure when processing geospatial data.
Authors usually do not report this step as it is just common sense. Biomass predictions are locally (wildly) erroneous because of power lines. In the absence of other
solutions, they buffer out the data using geospatial tools from GIS software.
Change made: We removed this paragraph that did not provide much additional information. Instead, we improved the description of the knowledge gap and
of the reason for developing this algorithm, which is to allow the study of the underlying and neighbouring vegetation.
Comment 17: Para line 53-57: References required.
Answer 17: We agree that these were unsupported facts. We believe the new
introduction now provides better context.
Change made: In the process of re-writing the introduction, this paragraph
was removed as it did not bring much useful information.
2.3 Validity of the findings
Comment 18: Authors have included an elaborative discussion on validation of
their work.
Answer 18: See comment 19
Comment 19: The missing aspect is: Compare this research’s findings with
recent state-of-the-art and explain the benefits of the approach
Answer 19: We added some important information in our revised manuscript to
put the work in the context of previous studies. A new validation has been produced
(see comment 4) and consequently some paragraphs were changed/moved/added/deleted
in the results and discussion. We believe the new version addresses this comment.
Change made: Introduction re-written, validation done in a different way and
discussion amended. See in particular the second paragraph of the new discussion
L77-94.
2.4 Comments for the Author
Comment 20: The authors are advised to include main contributions or research
questions to better highlight the novelty of the research.
Answer 20: See comment 10
Comment 21: The motivation to carry out this work is missing. The introduction section should include a discussion on this aspect.
Answer 21: See comment 10
Comment 22: In the present form it is very difficult to understand the various
equations. The authors must elaborate the different variables or parameters in each
equation before writing it.
Answer 22: See comment 13
Comment 23: The authors must include a discussion on why the eigenvalue
decomposition method is used. Are there other methods available? Can they be
used? Why or why not?
Answer 23: See comment 14
Comment 24: Para line 37-44: the authors need to establish a strong background for this claim. Some recent work along with the challenges should be included. Is it one of these challenges? Establish this with the support of similar work
references in this regard.
Answer 24: See comment 15
Comment 25: Para line 46-51: Similarly, authors need to support this claim
with recent work.
Answer 25: See comment 16
Comment 26: Para line 53-57: References required.
Answer 26: See comment 17
Comment 27: In the present form, the literature reported is very poor. Authors
are advised to include a full dedicated section on the latest literature. They should also
mention which research gaps they are addressing.
Answer 27: See comment 10
Comment 28: The paper has deficiencies in English expression, sentence structure, and tense usage. Please have the article proofread by an expert.
Answer 28: See comment 9
Comment 29: Literature is not systematically reviewed and presented in line
with current research. Please add a more in-depth introduction and literature review by studying more recent publications, and explain what this research conveys
differently.
Answer 29: See comment 10
Comment 30: Compare this research’s findings with recent state-of-the-art and
explain the benefits of the approach
Answer 30: See comments 4, 10 and 20
3 Reviewer 3
3.1 Basic reporting
Comment 31: This manuscript developed an algorithm for the segmentation of
electrical network facilities from ALS point clouds. The English language should be
improved to ensure that others can clearly understand your text. So, you should
use more short sentences instead of inserting too many long ones. The literature
review in Section 1 Introduction needs to be more logical. Thank you for providing
the source code and tools to allow this method to be reproduced easily.
Answer 31: The paper has been reviewed by a native English (British) speaker
who is very familiar with scientific writing. The introduction has been entirely rewritten. Many thanks for highlighting the importance of sharing the source code.
We believe this is a key benefit of this study. See also our answer to comment 1.
Change made: Introduction entirely re-written
3.2 Experimental design
Comment 32: This manuscript should clearly define the research question, which
must be relevant and meaningful. The knowledge gap being investigated should
be identified. The designed experiment is somewhat simple, and more data and
results in the experimental area are not presented. The results of power line structure
classification based on ALS mainly carry out qualitative analysis, which lacks the
rigor of quantitative analysis. In addition, this method also has the defect of relying
more on prior knowledge and manual judgment.
Answer 32: See comment 4
Change made: Introduction entirely re-written. Also, we have now provided
a much more detailed, quantitative analysis of the results. We believe this rigorous
quantitative analysis strongly improves the manuscript. Many thanks for this comment.
3.3 Validity of the findings
Comment 33: The experimental data should be robust, statistically sound, and
controlled. The conclusions should be appropriately stated, should be connected to
the original question investigated, and should be limited to those supported by the
results. The method and its meaning in this study should be more clear.
Answer 33: Introduction was re-written to better state the problem we are
trying to solve, the validation was reworked (see comment 4), the discussion was
improved to better focus and better connect with the introduction and results.
Through all these major changes, we believe that we have addressed this comment.
Change made: Introduction entirely re-written, validation was re-done in a
different way. The metric used in the validation is also better justified.
3.4 Comments for the Author
Some issues or questions could be addressed:
Comment 34: 1) How to distinguish between ’good’ and ’poor’ classification
results? What are the quantitative criteria?
Answer 34: The new validation is entirely quantitative, so this comment no
longer applies.
Comment 35: 2) Line 190, the sentence of ’An idealised hanging chain ...
assumed ... follows ...’ should be clear.
Answer 35: We are not sure we understand the problem. The sentence was
reviewed and modified by a native English speaker.
Change made: Sentence changed
Comment 36: 3) Line 287, ‘range form ...’ should be ‘range from ...’.
Change made: Fixed, thank you.
Comment 37: It is recommended to check the language spelling through the
whole manuscript.
Answer 37: See comment 31
" | Here is a paper. Please give your review comments after reading it. |
207 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Wireless sensor networks (WSN) have been among the most prevalent wireless innovations over the years, enabling exciting new Internet of Things (IoT) applications. IoT-based WSN integrated with Internet Protocol (IP) allows any physical object with sensors to be connected ubiquitously and to send real-time data to a server connected to the Internet gateway. Security in WSN remains an ongoing research trend that falls under the IoT paradigm. WSN nodes deployed in a hostile environment are exposed to security attacks such as the Sybil attack due to their distributed architecture and the network contention implemented in the routing protocol. In a Sybil attack, an adversary illegally advertises several false identities, or a single identity that may occur at several locations, called Sybil nodes. Therefore, in this paper, we give a survey of the most promising methods to date for defending against the Sybil attack. Sybil attack countermeasures include encryption, trust, received signal strength indicator (RSSI) and swarm intelligence. Specifically, we survey the different methods, along with their advantages and disadvantages, for mitigating the Sybil attack. We also discuss the lessons learned, future avenues of study and open issues in WSN security analysis.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The Internet of Things (IoT) has gained universal acceptance due to its many applications for personal use and for the community. IoT represents a collection of 'Things', or embedded devices, connected using various wireless technologies over private and public networks <ns0:ref type='bibr' target='#b16'>(Atzori, Iera, & Morabito, 2010)</ns0:ref>. Based on the application domain, IoT applications can be classified into five groups, for example, health care <ns0:ref type='bibr' target='#b131'>(Zeb, Islam, Zareei, Mamoon, & Mansoor, 2016)</ns0:ref>; <ns0:ref type='bibr' target='#b10'>Ambarkar & Shekokar, 2020)</ns0:ref>, environmental <ns0:ref type='bibr' target='#b70'>(Kumari & Sahana, 2019;</ns0:ref><ns0:ref type='bibr' target='#b19'>Behera et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b142'>Zhuang et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b56'>Jawad, Nordin, & Gharghan, 2017)</ns0:ref>, smart city <ns0:ref type='bibr' target='#b107'>(Santos, Jimenez, & Espinosa, 2019;</ns0:ref><ns0:ref type='bibr' target='#b81'>Luo, 2019)</ns0:ref>, commercial (G. <ns0:ref type='bibr' target='#b76'>Li et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b61'>Khanna & Tomar, 2016)</ns0:ref>, IoT-based robotics (Roy <ns0:ref type='bibr' target='#b106'>Chowdhury, 2017)</ns0:ref> and industry (M. <ns0:ref type='bibr' target='#b138'>Zhu et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Wireless sensor networks (WSNs) are essential subsets of the Internet of Things (IoT) that have emerged as a core technology for a variety of data-centric applications. Almost all IoT network concepts are derived from WSNs. The two terms can be confusing at times, as there are many similarities and differences between IoT and WSN <ns0:ref type='bibr' target='#b99'>(Pundir, Wazid, & Singh, 2020)</ns0:ref>. IoT-based WSN integrated with Internet Protocol (IP) allows any physical object with sensors to be connected ubiquitously and to send real-time data to a server connected to the Internet gateway. Sensor data is relayed to the base station and saved in the cloud for future access (Ala' <ns0:ref type='bibr'>Anzy & Othman, 2019;</ns0:ref><ns0:ref type='bibr' target='#b110'>Sheron, Sridhar, Baskar, & Shakeel, 2020)</ns0:ref>. IoT-based WSN devices are powered by batteries that must later be replaced, which poses a significant challenge to application designers. To address these constraints in an IoT-based WSN, significant research has been conducted on managing network power consumption, with most existing research focusing on extending the IoT network lifetime. The purpose of a WSN is to gather data from sensor nodes at predetermined or random locations and transmit the sensed data back to the base station.</ns0:p><ns0:p>The cumulative number of confirmed COVID-19 cases between 22 January and 12 October 2020 reached 38,789,204, resulting in 1,095,097 deaths globally, as reported by <ns0:ref type='bibr'>WHO (2020)</ns0:ref>. As a result, monitoring systems are in great demand. The health of COVID-19 patients must be monitored continuously in isolation rooms, and six per cent of patients need to be warded in the Intensive Care Unit (ICU) to save their lives, as reported by <ns0:ref type='bibr' target='#b38'>El-Rashidy et al. (2020)</ns0:ref>. <ns0:ref type='bibr' target='#b49'>Gupta et al. 
(2020)</ns0:ref> foresee that smart sensors, actuators, devices and data-driven applications can enable smart connected communities to strengthen nations' health and economic postures to combat the current COVID-19 and future pandemics efficiently. Flying drones enforce quarantine and the wearing of masks through public surveillance. Indoor isolation is made easier by robots and digital assistants. With the help of aware IoT devices, it is possible to track the origins of epidemics and ensure that patients follow important medical advice, as highlighted by <ns0:ref type='bibr' target='#b43'>Fedele & Merenda (2020)</ns0:ref>. However, from a security perspective, IoT networks are prone to sensor-based attacks, according to a recent survey conducted by <ns0:ref type='bibr' target='#b112'>Sikder et al. (2018)</ns0:ref>. The authors also addressed IoT devices' vulnerability to sensor-based threats due to the lack of sufficient protection mechanisms for monitoring the use of sensors by applications. An attack can be launched against an IoT-based health application used to monitor COVID-19 patients. Such a security attack can put patients' lives in danger, as the attacker can manipulate the medical IoT devices. Attackers can also carry out local-scale attacks on individual critical devices, potentially endangering human life, such as the 2011 Stuxnet attack <ns0:ref type='bibr'>(Kushner, 2013)</ns0:ref>, the late 2015 power-grid blackout of Ukraine (Dvorkin Yury, 2020), the 2015 Jeep Cherokee attack <ns0:ref type='bibr'>(Schneider David, 2015)</ns0:ref>, the 2017 Brickerbot attack <ns0:ref type='bibr'>(Radware, 2017)</ns0:ref> and the 2018 Philips lightbulb attack demonstration. In a world where every device is connected to the IoT, these attacks have shown how catastrophic and diversified cybercrimes can be. Therefore, it is vital to detect Sybil attackers in WSN to prevent their malicious activities. In other words, Sybil attacks present a significant challenge for WSN, and improved defence mechanisms are required. We believe that this survey will help researchers working on Sybil countermeasures in WSN.</ns0:p><ns0:p>Recent surveys cover the existing countermeasures for mitigating WSN and IoT security attacks <ns0:ref type='bibr' target='#b23'>(Bhushan & Sahoo, 2017)</ns0:ref>. Literature reviews focusing on Sybil attack countermeasures have been presented by <ns0:ref type='bibr'>Vasudeva & Sood (2018)</ns0:ref>, <ns0:ref type='bibr'>Benkhelifa et al. (2018)</ns0:ref> and <ns0:ref type='bibr'>Gunturu (2015)</ns0:ref>, and their comparison is shown in Table <ns0:ref type='table'>1</ns0:ref>. A reader interested in Sybil countermeasures in online networks can read the following surveys <ns0:ref type='bibr'>(Al-Qurishi et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b8'>Alharbi, Zohdy, Debnath, Olawoyin, & Corser, 2018)</ns0:ref>. However, to the best of our knowledge, no previous literature covers Sybil countermeasures based on swarm intelligence. This paper provides a general review of up-to-date countermeasures used to mitigate the Sybil attack. The advantages and limitations of each method, and whether the existing proposed method is IoT-ready, are also discussed.</ns0:p><ns0:p>The remainder of this paper is organized as follows. The 'Survey Methodology' section illustrates the approach and methodology used in this literature review on the Sybil attack. In 'Security Attack', we give a general overview of the Sybil attack. 
Next, we present the existing Sybil attack countermeasures in 'Sybil Attack Countermeasures'. In 'Discussion', we compare the Sybil countermeasures in WSN and IoT. Finally, in 'Conclusions', we conclude the survey by summarizing the paper and outlining future research directions.</ns0:p></ns0:div>
<ns0:div><ns0:head>SURVEY METHODOLOGY</ns0:head><ns0:p>A systematic literature review (SLR) was carried out, following the <ns0:ref type='bibr'>Kitchenham (2004)</ns0:ref> benchmark, to examine the countermeasures suggested by previous research studies to thwart the Sybil attack. This research approach originated in the medical field to provide adequate knowledge for a repeatable study method <ns0:ref type='bibr'>(Charband & Jafari Navimipour, 2016;</ns0:ref><ns0:ref type='bibr'>Jafari Navimipour & Charband, 2016;</ns0:ref><ns0:ref type='bibr'>Kupiainen, Mäntylä, & Itkonen, 2015)</ns0:ref>. To guide the reader on why we need to focus on the Sybil attack, and to discuss the Sybil attack's critical principles and countermeasures as formalized in the following subsections, we chose four research questions.</ns0:p><ns0:p>• RQ1: What is a Sybil attack?</ns0:p><ns0:p>• RQ2: Why focus on the Sybil attack in an IoT-based WSN environment? These two questions establish the motivation for Sybil attack countermeasures.</ns0:p><ns0:p>• RQ3: Where should new researchers concentrate when developing other methods of tackling the Sybil attack? This question aims to help researchers set the direction of proposed methods.</ns0:p><ns0:p>• RQ4: How can Sybil countermeasures achieve more robust algorithms to counteract those attacks?</ns0:p><ns0:p>This research question aims to explain how countermeasures are used to thwart the Sybil attack, identifying the challenges and techniques that lead to better algorithms. The study needs are established after conducting a search query using suitable keywords, formulating research questions, identifying the selection criteria, identifying the data retrieval and conducting the quality evaluation. The aim of the survey is to offer ready answers and enlightenment for new researchers.</ns0:p></ns0:div>
<ns0:div><ns0:head>Survey Plan and Organization</ns0:head><ns0:p>The articles in this survey were acquired from the most respected academic journals and selected according to the checklist provided in <ns0:ref type='bibr' target='#b65'>(Kitchenham et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b122'>Vasudeva & Sood, 2018b)</ns0:ref> for the quality evaluation. The research articles were acquired from IEEE, Elsevier, Springer, ACM, Wiley and MDPI, as these provide in-depth analyses. We started filtering articles by analysing the titles and abstracts. The entire research article was reviewed when the detailed information was not in the abstract. Hence, articles are selected in this analysis based on a detailed inquiry into the nature of their material and documents. This in-depth work enables us to have a consistent and thorough understanding of the countermeasures for the Sybil attack in the IoT-based environment.</ns0:p><ns0:p>The paper analysis for the first filtering was undertaken between late January 2015 and July 2020. Boolean functions (OR, AND) and specific keywords detailed by synonyms and alternative spellings were used to investigate hundreds of papers in this area further.</ns0:p><ns0:p>('Sybil') and ('attack' or 'attacks')</ns0:p><ns0:p>Next, papers were filtered again to acquire papers more accurately related to the review context. The filtering process guarantees that no papers were overlooked in our review, using the keyword search below:</ns0:p><ns0:p>('Sybil') and ('attack' or 'attacks') and (IoT OR 'internet of things') and (WSN OR 'wireless sensor networks') -'book' -'conference'</ns0:p></ns0:div>
<ns0:div><ns0:head>Eligibility Criteria</ns0:head><ns0:p>Articles were evaluated based on the Quality Assessment Checklist (QAC) for selection in our survey review list <ns0:ref type='bibr' target='#b65'>(Kitchenham et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b122'>Vasudeva & Sood, 2018b)</ns0:ref>. The articles in this review that matched the research aim and objectives were selected according to the following criteria:</ns0:p><ns0:p>• Does the research paper identify Sybil attack countermeasure methods?</ns0:p><ns0:p>• Is the methodology listed in the research paper?</ns0:p><ns0:p>• Do the testing methodologies use resources available for re-implementation (simulation or real system)?</ns0:p><ns0:p>• Does the research paper focus on WSN?</ns0:p><ns0:p>• Is the evaluation analysis done appropriately?</ns0:p><ns0:p>If 'yes', the papers are chosen after the following conditions have been met:</ns0:p><ns0:p>• Any article that meets the criteria provided and matches the keywords is selected.</ns0:p><ns0:p>• Each article is filtered after its abstract has been reviewed and is then recorded in the final list.</ns0:p><ns0:p>• Articles related to countermeasures for the Sybil attack are included.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data Filtration and Quality Evaluation</ns0:head><ns0:p>The Google Scholar search engine was used to locate the primary studies through an automated search. The search led to the discovery of 28,800 articles that were considered significant for the study. The citation data, abstracts and keywords of all articles were further analysed in an Excel sheet, and the articles resulting from the initial search were filtered phase by phase. In this segment, we searched for keywords automatically and found 372 journal articles and conference papers. Restricting the year range to 2010-2020 reduced this to 333 articles. We then chose six well-known publishers, leaving 186 articles. Next, we checked whether each research paper satisfied the criteria or should be ignored. When the abstract was found to be inadequate, the entire article was checked, considering the requirements for inclusion or exclusion given above <ns0:ref type='bibr' target='#b65'>(Kitchenham et al., 2009)</ns0:ref>. Finally, according to publication time, 28 articles were selected and analysed.</ns0:p></ns0:div>
<ns0:div><ns0:head>Security Attack</ns0:head><ns0:p>Generally, security attacks can be classified by the attacker's objective and by the layer on which the attack is carried out. The attack methods reviewed in the previous literature are shown in Fig. <ns0:ref type='figure'>2</ns0:ref>. Firstly, objective-based security attacks can be divided into passive and active attacks. A passive attack can undermine the network through eavesdropping to collect personal information, node destruction and node malfunction. In an active attack, the attacker's objective is to render the targeted network useless. Active attacks can be further classified into flooding, jamming, Denial-of-Service (DoS), black hole, sinkhole, Sybil and wormhole attacks. Secondly, the various passive and active attacks in WSN and IoT can be categorised according to OSI layers <ns0:ref type='bibr' target='#b25'>(Butun, Osterberg, & Song, 2020;</ns0:ref><ns0:ref type='bibr' target='#b40'>Farjamnia, Gasimov, & Kazimov, 2019)</ns0:ref>. The different types of attack in the IoT environment are described in <ns0:ref type='bibr' target='#b2'>(Ahmad & Salah, 2017)</ns0:ref>. <ns0:ref type='bibr' target='#b119'>Usman & Gutierrez (2018)</ns0:ref> focused on the wormhole attack, and other attacks are reviewed in <ns0:ref type='bibr' target='#b40'>(Farjamnia et al., 2019)</ns0:ref>. Finally, attacks can be categorized according to the attack method and how the malicious node achieves its objective. The authors also highlighted mitigation strategies against security attacks in pervasive and mobile computing. Sybil, DoS, hello flood and sinkhole are network-layer attacks in WSN that are still relevant in IoT environments <ns0:ref type='bibr' target='#b17'>(Aufner, 2019)</ns0:ref>. Thus, they are applicable to any IoT device that uses the communication layer to communicate.</ns0:p><ns0:p>Based on the earlier discussion, the countermeasures against security attacks consist of prevention, detection and mitigation. Firstly, the prevention method's main objective is to hinder the malicious attack from taking place in the first place. Secondly, detection countermeasures are able to detect when there is a security breach in the network; such a countermeasure can identify the type of attack and launch a mitigation solution to reduce the damage done by the malicious activity, as highlighted in Fig. <ns0:ref type='figure'>3</ns0:ref>. Hence, the mitigation method comprises the steps taken to reduce the after-effects of a security attack. These three components form a complete protection framework and cannot be considered separately in defending WSNs and IoT against different types of attacks, as highlighted in <ns0:ref type='bibr' target='#b25'>(Butun et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Such security attacks introduce serious vulnerabilities into the routing of the underlying network. Some attacks are less extreme and others more severe <ns0:ref type='bibr' target='#b88'>(Md Zin, Badrul Anuar, Mat Kiah, & Ahmedy, 2015)</ns0:ref>. One of the first attacks in the WSN environment is the Sybil attack, which can lead to further security attacks such as the black hole and wormhole attacks, as highlighted by <ns0:ref type='bibr' target='#b93'>Murali & Jamalipour (2020)</ns0:ref>. These attacks can prevent the WSN, whose nodes act as IoT devices, from collecting sensor data to be stored in the cloud. 
Later, such disruption can cause an IoT application, like a smart building housing many companies, to go haywire. Hence, this review paper focuses on countermeasure methods for the Sybil attack, which are discussed in the next section.</ns0:p></ns0:div>
<ns0:div><ns0:head>Sybil Attack</ns0:head><ns0:p>A Sybil attack, as defined by <ns0:ref type='bibr'>Newsome et al. (2002)</ns0:ref>, occurs when a malicious node fakes its own identity during an attack or steals the identity of valid working nodes. The Sybil attack utilises fake identities to send false information, as highlighted by Romadhani et al. (2017) and <ns0:ref type='bibr' target='#b100'>Zhang et al. (2005)</ns0:ref>. Sybil attacks that utilise multiple fake identities in ad-hoc networks are discussed in <ns0:ref type='bibr' target='#b82'>Lv et al. (2008)</ns0:ref>. In geographic routing, fake identities existing in the network with faked locations are explored by <ns0:ref type='bibr' target='#b108'>Sha et al. (2013)</ns0:ref> and <ns0:ref type='bibr' target='#b45'>García-Otero et al. (2010)</ns0:ref>, as shown in Fig. <ns0:ref type='figure'>4</ns0:ref>. Alternatively, a high-resource Sybil attacker can participate in the selection process by listening and transmitting its fake location during the protocol handshakes. <ns0:ref type='bibr' target='#b95'>Newsome et al. (2004)</ns0:ref> highlighted countermeasures for the Sybil attack, namely radio testing and random key pre-distribution. However, the authors did not mention any limitations of the sensor network for the listed methods. <ns0:ref type='bibr' target='#b47'>Goyal et al. (2015)</ns0:ref> and <ns0:ref type='bibr' target='#b58'>John et al. (2015)</ns0:ref> classified Sybil attack countermeasures into a few categories.</ns0:p><ns0:p>In this attack, an attacker tries to attract more attention from nearby nodes in order to intercept data packets. Attacks that affect the packet delivery process are called routing attacks. The simplest routing attack type is the altering attack, in which the attacker modifies the routing information by creating routing loops or fake error messages, as highlighted in <ns0:ref type='bibr' target='#b89'>Mosenia & Jha (2017)</ns0:ref>.</ns0:p><ns0:p>WSN security strategies can be broken down into two categories: prevention-based and detection-based. Due to restricted resources and a broadcast medium, mitigation methods such as encryption are challenging for WSNs. Moreover, the suggested cryptographic solutions do not protect against all possible attacks: an attacker can easily obtain the symmetric key, and the whole network is then compromised, since the attacker can decrypt all encrypted data using that key. The second line of defence, called the Intrusion Detection System (IDS), is important for detecting malicious parties that try to exploit weaknesses in the security and potential insecurities, and for detecting attacks that have not been detected before <ns0:ref type='bibr'>(Hidoussi et al., 2015)</ns0:ref>.</ns0:p><ns0:p>WSN nodes are deployed in the environment without any supervision. Given the unattended nature of WSNs, adversaries can readily mount such black hole attacks. Severely compromised nodes and DoS attacks can interfere with the standard data delivery between sensor nodes and the sink, or even partition the topology, as Shu et al. (2010) highlighted. In the next section, we address the Sybil attack countermeasures suggested by previous researchers.</ns0:p></ns0:div>
<ns0:div><ns0:head>Sybil Attack Countermeasures</ns0:head><ns0:p>In this section, we review the countermeasures used to mitigate the Sybil attack, which can jeopardize human lives by manipulating IoT medical devices or other critical IoT applications. Radio resource testing (RTT) is a countermeasure that can distinguish direct forms of the Sybil attack <ns0:ref type='bibr' target='#b18'>Balachandran & Sanyal (2012)</ns0:ref>. <ns0:ref type='bibr' target='#b95'>Newsome et al. (2004)</ns0:ref> stated that resource testing is a popular countermeasure for lowering the probability of being attacked rather than eliminating Sybil attacks for good. <ns0:ref type='bibr' target='#b113'>Ssu, Wang, & Chang (2009)</ns0:ref> proposed an RTT mechanism that assumed nodes in the network cannot transmit and receive simultaneously. Also, <ns0:ref type='bibr' target='#b37'>Douceur (2007)</ns0:ref> has proven that a trusted certification method using a central authority can eliminate the Sybil attack. Some researchers proposed key management <ns0:ref type='bibr' target='#b96'>(Paul, Sinha, & Pal, 2013;</ns0:ref><ns0:ref type='bibr' target='#b100'>Zhang et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b39'>Eschenauer & Gligor, 2002)</ns0:ref> or encryption-based authentication using asymmetric key cryptography, which is not suitable due to its higher overhead and lack of scalability <ns0:ref type='bibr' target='#b24'>(Boneh & Franklin, 2001;</ns0:ref><ns0:ref type='bibr' target='#b140'>Zhu et al., 2006)</ns0:ref>. Other methods detect the Sybil attacker by verifying neighbouring nodes' sets, which causes higher communication overhead <ns0:ref type='bibr' target='#b113'>(Ssu et al., 2009)</ns0:ref>. Software-based attestation is a method in which the verifier issues various software or hardware challenges to its neighbouring nodes <ns0:ref type='bibr' target='#b116'>(Steiner & Lupu, 2016)</ns0:ref>.</ns0:p><ns0:p>The radio signal is susceptible to interference and signal attenuation caused by the surroundings, which influences the precision of detecting a malicious node using RSSI-based and Time Difference of Arrival (TDOA)-based countermeasures. <ns0:ref type='bibr' target='#b26'>Chan et al. (1994)</ns0:ref> proposed two localisation methods, namely TDOA estimation and hyperbolic position solving. <ns0:ref type='bibr' target='#b127'>Wen et al. (2008)</ns0:ref> related the TDOA ratio to the sender's identity: a Sybil node is detected once the beacon nodes calculate the same TDOA ratio for two different identities. These countermeasures are no longer in trend, as many researchers are currently moving towards the proposed methods shown in Fig. <ns0:ref type='figure'>5</ns0:ref>. In most of the current literature, researchers focus on encryption and RSSI mechanisms, which each account for 29% of the solutions provided for Sybil countermeasures. Trust, artificial intelligence and encryption hybrids account for 14%, 14% and 7% of the remaining solutions, respectively. Lastly, rule-based anomaly and multi-kernel methods account for 3% and 4%, respectively.</ns0:p><ns0:p>Li & Cheffena (2019) proposed a multi-kernel based expectation-maximization (MKEM) countermeasure for Sybil attacks. This innovative countermeasure analyses the radio resources of the sensor node to produce channel vectors. These channel vectors comprise the power gain and delay spread of the channel impulse response extracted from the packets received from the sensor node. 
In addition, a gap statistical analysis method is used to validate the detection results, and the expectation-maximization (EM) method to summarize them.</ns0:p></ns0:div>
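The core intuition behind such channel-based schemes can be shown with a short sketch. The following Python fragment is a minimal, hypothetical illustration, not the authors' implementation: identities whose channel fingerprints (power gain and delay spread) are almost indistinguishable are assumed to share one physical radio and are flagged as a potential Sybil group. The threshold and data layout are assumptions made only for this example.

```python
# Illustrative sketch only: flag identities whose channel fingerprints
# (power gain, delay spread) are nearly identical, since packets from
# Sybil identities originate from the same physical radio.
from itertools import combinations
import math

def suspect_pairs(channel_vectors, threshold=0.05):
    """channel_vectors: {identity: (power_gain, delay_spread)} per received packet.
    Returns pairs of identities whose fingerprints are suspiciously close."""
    pairs = []
    for (id_a, va), (id_b, vb) in combinations(channel_vectors.items(), 2):
        if math.dist(va, vb) < threshold:   # same radio -> likely Sybil pair
            pairs.append((id_a, id_b))
    return pairs

# Toy readings: n1 and n2 claim different identities but share one channel.
print(suspect_pairs({"n1": (0.82, 1.10), "n2": (0.81, 1.11), "n3": (0.40, 2.30)}))
```

A full scheme such as MKEM replaces the fixed threshold with statistical clustering (gap statistics and EM), but the comparison of per-identity channel vectors is the same underlying test.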
<ns0:div><ns0:head>Cryptography</ns0:head><ns0:p>Cryptography was a popular research area for WSN even before IoT became the technological trend. Although cryptography requires a great deal of processing power, it is still an ongoing research area. <ns0:ref type='bibr' target='#b69'>Kouicem et al. (2018)</ns0:ref> highlighted two fundamental key distribution approaches: deterministic and probabilistic key distribution. In deterministic approaches, each entity can make a secure link to the others to provide maximum secure connection coverage. However, the key management protocol becomes defenceless when under a security attack. Also, <ns0:ref type='bibr' target='#b96'>Paul et al. (2013)</ns0:ref> highlighted three types of cryptographic technique: creating and managing, distributing, and validating keys for identities. Firstly, symmetric key distribution is a method in which the same key is utilised for encryption and decryption of the messages, as in the Advanced Encryption Standard (AES), Rivest Cipher 4 (RC4) and Triple Data Encryption Standard (3DES). However, key management problems and scalability issues are the main disadvantages of symmetric keys. WSN nodes are battery-powered and thus not suitable for implementing public-key cryptography, due to the high processing power and high network load required for generating, distributing and maintaining keys. Devices equipped with cryptography are also more likely to be exposed to brute-force attacks. Asymmetric key distribution utilises a public key for encryption and a private key for decryption. <ns0:ref type='bibr' target='#b52'>Jain et al. (2020)</ns0:ref> proposed a node authentication method for wireless sensor nodes to avoid security attacks and provide secure communication channels. The base station is responsible for generating a random value and a secret value to distribute among the sensor nodes, and each node is responsible for storing its secret and random values. <ns0:ref type='bibr' target='#b133'>Zhang & Zhou (2010)</ns0:ref> proposed using the Merkle hash tree, trust values and message authentication codes for a location verification algorithm. This approach works well with networks that are organized in a tree or hierarchical structure; the method falls into the encryption-hybrid category of the taxonomy shown in the pie chart above. <ns0:ref type='bibr' target='#b27'>Claycomb & Shin (2011)</ns0:ref> proposed a method based on a security policy that utilized key establishment to combine a group-based distribution model and identity-based encryption. <ns0:ref type='bibr'>He et al. (2011)</ns0:ref> proposed combining the merits of both public and symmetric cryptographic methods for key management in WSNs, in which each node is configured with a public key system to establish end-to-end symmetric keys with other nodes, as in EDDK. <ns0:ref type='bibr' target='#b35'>Dong & Liu (2012)</ns0:ref> proposed a scheme that deploys auxiliary nodes to assist key establishment between sensor nodes. This method utilizes a secured fuzzy clustering algorithm to determine the nodes that securely join the cluster. The cluster head oversees routing based on the criteria of trust value and energy. 
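To make the symmetric-authentication idea concrete, here is a minimal sketch assuming a pre-distributed shared secret, in the spirit of the schemes above; it illustrates generic challenge-response authentication, not a reconstruction of any cited protocol. The base station checks that a joining node possesses the shared secret via a keyed hash over a fresh random challenge.

```python
# Minimal challenge-response sketch with a pre-distributed symmetric secret.
# Illustrative only: real WSN schemes must also handle key storage,
# replay windows and node revocation.
import hmac, hashlib, os

SHARED_SECRET = os.urandom(16)   # assumed pre-distributed before deployment

def node_respond(challenge: bytes, secret: bytes) -> bytes:
    # The node proves possession of the secret without revealing it.
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def base_station_verify(challenge: bytes, response: bytes, secret: bytes) -> bool:
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)   # constant-time compare

challenge = os.urandom(16)   # fresh nonce, so old responses cannot be replayed
response = node_respond(challenge, SHARED_SECRET)
print(base_station_verify(challenge, response, SHARED_SECRET))   # True for a valid node
```

A Sybil identity without the secret cannot produce a valid response, which is why such schemes block fabricated identities but fail once a single node, and hence its key, is physically captured.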
<ns0:ref type='bibr' target='#b62'>Kim & Kim (2013)</ns0:ref> proposed a scalable and robust hierarchical key establishment scheme that enhances resilience against node capture, traffic analysis and acknowledgement spoofing attacks. In addition, this scheme provides periodic key updates without communication costs for key transport. <ns0:ref type='bibr' target='#b103'>Razaque & Rizvi (2017)</ns0:ref> proposed a method to combat the Sybil attack which comprises two novel algorithms. The first algorithm fragments the data to avoid detection by the malicious node. The second algorithm aims to provide authentication for nodes joining the network through encryption.</ns0:p></ns0:div><ns0:div><ns0:head>RSSI</ns0:head><ns0:p>There are many different RSSI-based approaches to countering the attack, and these vary between researchers. <ns0:ref type='bibr' target='#b32'>Demirbas et al. (2006)</ns0:ref> proposed a countermeasure for the Sybil attack using only two receivers. To improve accuracy, <ns0:ref type='bibr' target='#b125'>Wang et al. (2007)</ns0:ref> came up with a countermeasure using RSSI from multiple neighbours instead of two neighbour nodes. Also, the status message can be used to validate the location in a hierarchical network that utilises the Jakes channel model. <ns0:ref type='bibr' target='#b135'>Zhong et al. (2004)</ns0:ref> proposed location verification based on the RSSI signal, using four or more detector nodes to detect the signals and verify a node's location. <ns0:ref type='bibr' target='#b82'>Lv et al. (2008)</ns0:ref> proposed a method for stationary wireless sensor networks called Cooperative Received Signal Strength (RSS) based Sybil Detection (CRSD) to estimate the distance between two identities and to find the correlation between the locations of the unique identities of multiple neighbouring nodes. <ns0:ref type='bibr' target='#b72'>Lazos et al. (2005)</ns0:ref> proposed a method that utilises the target node to determine its position using beacon information transmitted by both benevolent and malicious anchor nodes.</ns0:p><ns0:p>García-Otero et al. (2010) proposed innovative and lightweight location verification methods to detect and isolate the Sybil attack. The distributed trust model is integrated with the routing protocol mainly to defend against routing attacks. <ns0:ref type='bibr' target='#b0'>Abbas et al. (2013)</ns0:ref> utilised one neighbouring node to detect RSS in mobile environments. Secure and Scalable Geographic Opportunistic Routing with received signal strength (SGOR), proposed by <ns0:ref type='bibr' target='#b83'>Lyu et al. (2015)</ns0:ref>, is an opportunistic routing protocol. This method combines trust with the calculated difference between the distance implied by beacon messages and the RSSI to detect malicious nodes' fake locations and defend against the grey hole attack. The proposed method can also defend against other attacks such as rushing, wormhole, replay and collusion. However, this method's limitation is that an attacker with higher energy capacity and higher transmission power can easily deceive the sender about its location. <ns0:ref type='bibr'>Kumari et al. (2017)</ns0:ref> provided a framework using authentication and RSSI against the Sybil attack. The RSS values are calculated from the angle of arrival and stored in the database at each node. The RSSI threshold value determines whether nodes fall within the safety zone or the precautionary zone. Also, the ant colony optimisation method was used to determine the optimal route for a packet to travel from source to destination. 
The second category assumes that a node can occur at only one location at a specific time. Raja et al. (2017) suggested another encryption approach using the Fujisaki-Okamoto (FO) algorithm and its implementations. The FO algorithm is an encryption method that offers a good defence against Sybil attacks by using ID-based verification. In the proposed scheme, multiple performance metrics were analysed; in particular, high energy consumption is used as an indicator to sense a Sybil attack in wireless sensor networks. <ns0:ref type='bibr' target='#b129'>Yuan et al. (2018)</ns0:ref> presented a lightweight Approximate Point-in-Triangulation Test (SF-APIT) algorithm that can pinpoint Sybil attacks in a wireless network in a distributed manner, using a range-free and iterative refinement-based method. The algorithm, implemented at each individual node, is based on RSS and does not incur any overhead in the WSN. Based on the node location, the node utilises three beacons in the triangulation method to calculate the possible combinations of overlapped triangle regions, from which the unknown node's location can be estimated. The centroid of the overlapping area is then considered the approximate location of this node. <ns0:ref type='bibr' target='#b46'>Giri et al. (2020)</ns0:ref> proposed a countermeasure that protects the beacon nodes from the Sybil attack by implementing an information-theoretic approach. Any localization algorithm can use this approach to provide protected localization in WSNs against the Sybil attack. <ns0:ref type='bibr'>Liu (2020)</ns0:ref> proposed an improved RSSI-based Sybil attack detection scheme in wireless sensor networks. The proposed method is able to quickly detect malicious nodes with minimum energy consumption.</ns0:p><ns0:p>The hierarchical topology of a clustered network has many advantages in energy efficiency, owing to reduced communication, scalability and simpler routing. One proposed method utilises both RSSI and CSI to protect the hierarchical cluster network from the Sybil attack. <ns0:ref type='bibr' target='#b53'>Jamshidi et al. (2019)</ns0:ref> proposed a lightweight method consisting of two algorithms for detecting Sybil nodes masquerading as cluster heads and cluster members. Sarigiannidis et al. (2015) proposed a secure communication mechanism for clustered WSNs based on elliptic curve cryptography (ECC) that allows end-users to recover the collected data confidentially. The proposed method has a firm reliance on historical records, making this approach less stable and durable. <ns0:ref type='bibr' target='#b12'>Angappan et al. (2020)</ns0:ref> proposed a localized scheme for Sybil node detection called NoSad, using the RSSI value and intra-cluster communication, which can be deployed on the device. However, NoSad is unstable when there are two or more Sybil nodes, and it cannot cater for mobility in WSN. <ns0:ref type='bibr' target='#b55'>Jan et al. (2015)</ns0:ref> proposed an innovative detection countermeasure for the Sybil attack in a centralized clustering-based hierarchical network. Sybil nodes with fake identities are detected before cluster formation to ensure that the usage of resources is optimized. The detection of Sybil nodes is achieved by analysing the received signal strength at any two high-energy nodes. <ns0:ref type='bibr' target='#b124'>Wang et al. 
(2018)</ns0:ref> proposed Sybil attack detection using Channel State Information (CSI) and a self-adaptive multiple signal classification algorithm with RSSI for dynamic and static nodes in a clustered network.</ns0:p></ns0:div>
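A compact sketch helps show why multi-receiver RSSI schemes work. Assuming several detector nodes each record the RSSI of messages per claimed identity (the node names, readings and tolerance below are hypothetical), two identities whose RSSI vectors agree at every detector almost certainly share one transmitter location, which is the signature of a Sybil pair:

```python
# Illustrative sketch: identities whose RSSI vectors, as seen by several
# detector nodes, agree within a small tolerance are flagged as one
# physical transmitter (a Sybil pair). The dB values are made up.
def rssi_sybil_check(observations, tolerance=1.0):
    """observations: {identity: [rssi_at_detector_1, rssi_at_detector_2, ...]}"""
    flagged, ids = [], list(observations)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            diffs = [abs(x - y) for x, y in zip(observations[a], observations[b])]
            if max(diffs) < tolerance:   # same location -> likely Sybil identities
                flagged.append((a, b))
    return flagged

readings = {"nodeA": [-61.2, -70.5, -55.8],
            "nodeB": [-61.0, -70.9, -55.6],   # suspiciously close to nodeA
            "nodeC": [-48.3, -80.1, -66.0]}
print(rssi_sybil_check(readings))   # [('nodeA', 'nodeB')]
```

This check is cheap because RSSI comes for free with every received packet, but, as discussed later, it degrades under interference, multipath and attackers that vary their transmission power.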
<ns0:div><ns0:head>Trust</ns0:head><ns0:p>According to <ns0:ref type='bibr' target='#b51'>Ishmanov et al. (2017)</ns0:ref>, not much research has been done on security attack detection based on unrelated criteria such as packet drop and packet modification. <ns0:ref type='bibr' target='#b87'>Mawgoud et al. (2020)</ns0:ref> highlighted that, in typical IoT scenarios, trust can be set up automatically, without personal interaction, with previously unregistered and unknown peer neighbours. <ns0:ref type='bibr' target='#b59'>Karlof & Wagner (2003)</ns0:ref> highlighted that the trust centre uses a key shared between two nodes for node verification to secure the network. <ns0:ref type='bibr' target='#b132'>Zhan, Shi, & Deng (2012)</ns0:ref> proposed a trust management and encryption method that can detect and predict the future behaviour of a Sybil attacker. During next-hop selection, this trust information is vital for selecting a safe path to the destination. <ns0:ref type='bibr' target='#b132'>Zhan et al. (2012)</ns0:ref> proposed selecting the next hop based on trust and energy criteria. The energy watcher module calculates the energy cost for neighbouring nodes and the node's own energy, and this information is stored in the neighbourhood table. The energy watcher module also approximates the average energy required to route a packet from sender to destination. Alsaedi et al. (2017) proposed a method to detect the Sybil attack based on the name, location and energy information attached each time a new message is routed to the sender. The proposed method also uses a multi-level system in which each rule for recognising a Sybil attacker is assigned to specific agents. Sybil attackers engage in data aggregation at different stages, colluding over the aggregated data to disclose invalid data. Also, these malicious nodes may modify and tamper with the timestamps of a message under multiple identities, which can cause havoc when synchronising local clocks in IoT devices. <ns0:ref type='bibr' target='#b84'>Maddar et al. (2017)</ns0:ref> proposed an innovative detection method for Sybil nodes with fake identities before cluster formation in a centralized clustering-based hierarchical network, to optimize the usage of resources. The detection countermeasure works by analysing the received signal strength of neighbouring nodes. <ns0:ref type='bibr' target='#b57'>Jinhui et al. (2018)</ns0:ref> proposed a method that can effectively predict energy consumption and increase the rate of detection of malicious nodes. <ns0:ref type='bibr' target='#b78'>Liu et al. (2007)</ns0:ref> explained that landmarks are required to be trusted; all routing protocols are tied to their localisation mechanism and cannot be isolated from it. Garcia et al. proposed a lightweight method that consists of localisation and intrusion identification techniques using a distributed trust model to thwart several security attacks. <ns0:ref type='bibr' target='#b97'>Prathusha et al. (2017)</ns0:ref> proposed secure geographic routing (GSR), which was modified from SecuTPGF. GSR's advantage is that it uses low computational power to combat security attacks such as spoofing and the Sybil attack by introducing SHA-3 node and message authentication. <ns0:ref type='bibr' target='#b136'>Zhou et al. (2015)</ns0:ref> proposed a watchdog method that optimises energy consumption while providing just enough security. The validation technique, through a watchdog mechanism, is able to defend against the Sybil attack.</ns0:p></ns0:div>
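The trust-and-energy next-hop idea above can be summarised in a few lines of code. The sketch below is a hypothetical illustration of the general pattern (the weights, trust floor and neighbourhood-table layout are invented for the example, not taken from the cited schemes): neighbours below a trust threshold are excluded, and the remainder are ranked by a weighted mix of trust and residual energy.

```python
# Illustrative trust- and energy-aware next-hop selection over a
# neighbourhood table. Weights and thresholds are assumptions.
def select_next_hop(neighbours, w_trust=0.6, w_energy=0.4, trust_floor=0.3):
    """neighbours: {node_id: {'trust': 0..1, 'energy': 0..1}}"""
    safe = {n: v for n, v in neighbours.items() if v["trust"] >= trust_floor}
    if not safe:
        return None   # no trustworthy neighbour -> hold the packet or re-route
    return max(safe, key=lambda n: w_trust * safe[n]["trust"]
                                 + w_energy * safe[n]["energy"])

table = {"n7": {"trust": 0.9, "energy": 0.4},
         "n3": {"trust": 0.2, "energy": 0.9},   # low trust: excluded outright
         "n5": {"trust": 0.7, "energy": 0.8}}
print(select_next_hop(table))   # 'n5' scores highest on the weighted mix
```

The design choice here mirrors the surveyed schemes: trust filters out suspected Sybil neighbours, while the energy term spreads routing load so that honest high-energy nodes are not mistaken for resource-rich attackers.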
<ns0:div><ns0:head>Artificial Intelligence</ns0:head><ns0:p>Intrusion detection systems are an example of artificial intelligence applications in the cybersecurity field. Cybersecurity solutions can distinguish between legitimate and malicious nodes through detailed traffic analysis. At the beginning of the Internet, cyberattacks were first detected with rule-based systems, which could detect attacks based on their signatures. Swarm Intelligence (SI) is a subdivision of artificial intelligence in which the algorithms mimic the intelligent behaviour of biological swarms in solving and simulating real problems. SI algorithms investigate how simple individuals can display sophisticated and complex swarm optimization behaviours through collaboration, organisation, knowledge exchange and learning between swarm members <ns0:ref type='bibr' target='#b67'>(Kolias, Kambourakis, & Maragoudakis, 2011)</ns0:ref>. These swarm intelligence algorithms can be categorized according to the year in which they were invented. Particle Swarm Optimization and Ant Colony Optimization were invented before the year 2000. Artificial Fish Swarm and Bacterial Foraging Optimization need further enhancement, while Firefly Optimization and Artificial Bee Colony optimization were widely used between 2000 and 2010. Pigeon-inspired optimization, the Grey Wolf optimizer and the Butterfly optimization algorithm require further development.</ns0:p><ns0:p>Prithi & Sumathi (2020) proposed a method combining Learning Dynamic Deterministic Finite Automata (LD2FA) and Particle Swarm Optimization (PSO) for intrusion detection, in which the data is transmitted securely over an optimized path. LD2FA-PSO achieved a 16% increase in throughput over cluster-based IDS and an almost 70% rise in throughput over lightweight IDS; a 6% and 32% increase in network lifetime over PSO and GLBCA, respectively; and an almost 30% and 54% improvement in network lifetime over GA and LDC, respectively. The energy consumed is almost 3% and 6% less than PSO and GA, and 13% higher than LDC. Raghav et al. (2020) used swarm intelligence algorithms based on the bee to provide a secure routing scheme. The proposed routing mechanism utilizes a primary scout bee and a secondary scout bee to carry out the secured and optimized routing. In many scenarios, it improves data efficiency while also providing security against flood, spoof and Sybil attacks. Its disadvantages include that, when the solution is close to the global optimum, it is possible to get stuck in a local optimum, resulting in stagnation.</ns0:p></ns0:div>
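For readers unfamiliar with the swarm update rule these schemes build on, the following is a toy PSO loop. It is a generic textbook sketch, not the LD2FA-PSO or bee-based algorithm above, and the sphere fitness function merely stands in for a real routing or detection cost.

```python
# Toy PSO: particles move under inertia plus attraction to their own best
# and the swarm's best positions. Parameters w, c1, c2 are conventional.
import random

def pso(fitness, dim=2, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]           # each particle's best position so far
    gbest = min(pbest, key=fitness)[:]    # swarm's best position so far
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) < fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

# Stand-in cost: a sphere function whose optimum is the origin.
print(pso(lambda x: sum(v * v for v in x)))
```

In the surveyed schemes, the fitness function encodes route quality (trust, energy, hop count), so the swarm converges towards secure, efficient paths rather than a geometric optimum; the stagnation near local optima noted above is a known weakness of this update rule.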
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>This paper has reviewed the countermeasures used to defend against the Sybil attack. Table <ns0:ref type='table'>3</ns0:ref> provides a comparative summary of the proposed countermeasures against the Sybil attack in terms of their advantages, limitations, scalability readiness and classification as detection or prevention. Besides security, scalability is also essential if deploying many devices under the IoT paradigm is to become a major success <ns0:ref type='bibr' target='#b14'>(Arellanes & Lau, 2020)</ns0:ref>. Security countermeasures should scale to many sensor nodes and intelligent devices <ns0:ref type='bibr' target='#b79'>(Lu & Xu, 2019)</ns0:ref>. Comparing the proposed methods will help future researchers evaluate and identify research gaps, helping them innovate or develop new countermeasures. The proposed methods to combat the Sybil attack are random key pre-distribution, cryptographic methods, radio resource testing, received signal strength indicator (RSSI) localisation techniques, time difference of arrival (TDOA) localisation techniques, neighbouring node information, trust, watchdog, RFID, clustering and geographic routing.</ns0:p><ns0:p>Neighbouring-node and trust countermeasures are among the simpler methods, since they rely on exchanges of control messages between one or more nodes so that the sender can validate the identity of its neighbouring nodes. This information is also used as a criterion in selecting the best route from the sender to the destination nodes. A watchdog is used to monitor the neighbouring nodes in a centralised or decentralised scheme using the physical and data link layers, and this information is used to select the best route for multi-hop routing.</ns0:p><ns0:p>Cryptographic methods and random key pre-distribution are implemented in the application layer, where the encryption and decryption processes consume processing and memory resources. However, authentication using asymmetric key cryptography has a higher overhead and is not scalable. The encryption process also requires high computation and memory resources for the cryptographic method and its attributes. Key pre-distribution has its own limitations: the proposed methods incur high computational overhead, computational delays and a high load of control messages transmitted to nodes, and keys are stored in databases that are vulnerable to attack. One of the significant challenges is developing a lightweight key delivery network for sensor nodes with limited resources, supporting numerous protocols, applications and services at all IoT layers <ns0:ref type='bibr' target='#b48'>(B. B. Gupta & Quamara, 2018)</ns0:ref>.</ns0:p><ns0:p>Radio resource testing, RSSI and TDOA measure physical-layer properties, as described by Almas <ns0:ref type='bibr' target='#b9'>Shehni et al. (2017)</ns0:ref> for the Sybil attack. RSSI and TDOA are two methods of locating a Sybil attack by measuring signal strength and the distance between beacons. The RSSI method uses less energy than other methods and has no special requirements or need for additional details. According to the studies, the distances between nodes can easily be calculated from RSSI information. RSSI-based countermeasure methods are popular among researchers for detecting the Sybil attack <ns0:ref type='bibr' target='#b32'>(Demirbas & Song, 2006)</ns0:ref>. 
However, RSSI has limitations: it is susceptible to interference and environmental factors, it needs beacon nodes, it suffers from receiver system delay and non-line-of-sight transmission, and a malicious node with high transmission power can easily deceive a good node with a fake location and identity. The disadvantage of TDOA implemented in a highly dense network is false detection, whereby an honest node is detected as an attacker; an honest node located at the exact position of the detector node is the leading cause of such false detection. Also, an attacker with a directional antenna can easily avoid detection. These methods are not suitable for mobile IoT devices <ns0:ref type='bibr' target='#b128'>(Wu & Ma, 2019)</ns0:ref>. RSSI also has limitations where there is no line-of-sight communication, owing to obstacles between a beacon node and a dumb node that cause the signal to be reflected from the surroundings <ns0:ref type='bibr' target='#b105'>(Ren et al., 2007)</ns0:ref>. Hence, from the summary of countermeasures proposed by previous researchers, future researchers should use RSSI due to its energy efficiency. To complement the limitations of RSSI, energy-based trust countermeasures, reflecting the energy heterogeneity of IoT devices, should be combined with RSSI to enable the detection of Sybil malicious nodes.</ns0:p><ns0:p>Software attestation is a method in which software routines are transmitted to the neighbouring node for validation. These routines are stored in memory, and the neighbouring nodes are required to respond to the challenge within specific criteria, such as integrity validation in software and hardware, time duration, how the software routine is read in memory, and the interaction method. For example, the radio resource testing technique extracts the battery or energy level from the network devices, and high-energy devices are assumed to be malicious attacker nodes. However, this approach can increase the communication overhead due to the control packets needed for resource verification. Fig. <ns0:ref type='figure'>6</ns0:ref> illustrates the methods that researchers developed from 2010 to 2020. The statistical charts show an increase in the encryption and trust methods proposed by researchers in 2017. In 2020, there was an increase in proposed methods using RSSI and less focus on encryption. Most of the artificial intelligence schemes proposed in 2020 are used to optimize the routing process in complement with security. Hence, as discussed in the following sections, future researchers should try to integrate artificial intelligence to optimize approaches such as cross-layer design, Software-Defined Networks (SDN), cross-platform intrusion detection and blockchain.</ns0:p></ns0:div><ns0:div><ns0:head>Lesson Learned and Future Direction</ns0:head><ns0:p>WSN security is a hot research topic. There are many challenges and issues in WSN security which future researchers can explore and for which they can provide new solutions. Specific requirements and constraints, such as low complexity and reliability, must be imposed on the provided solutions. This section briefly discusses lessons learned from the previously proposed methods and possible future directions for Sybil attack countermeasures.</ns0:p></ns0:div>
<ns0:div><ns0:head>Cross-layer</ns0:head><ns0:p>Lesson learned: An attack could be launched from a different layer during the communication process. Hence, security countermeasures must handle cross-layer attacks and require access to information from multiple layers, notably through joint optimisation of multiple network layers. Besides security, cross-layer information is also beneficial for optimising energy efficiency. Dhivya Devi &amp; Vidya (2019) discussed and explored the cross-layer design approaches that have been used in WSNs. For example, some proposed methods implement a cross-layer design for intrusion detection and routing <ns0:ref type='bibr' target='#b41'>(Fatema &amp; Brad, 2013;</ns0:ref><ns0:ref type='bibr' target='#b117'>Umar et al., 2017)</ns0:ref>. The motivation for cross-layer design is that it can optimise network performance in wireless sensor networks. A cross-layer design allows information to be exchanged easily between layers, which helps the WSN be energy efficient and improves QoS parameters. Among the surveyed Sybil attack countermeasures, few works in the literature focus on cross-layer security. Methods for detecting the Sybil attack should incorporate the cross-layer approach to increase detection accuracy. Future researchers can combine RSSI, which is lightweight and resides at the physical layer, with upper-layer mechanisms such as trust and mobile agents for detecting the Sybil attack. A cross-layer method for detecting the Sybil attack with a mobile agent was proposed by <ns0:ref type='bibr' target='#b44'>Gandhimathi (2016)</ns0:ref>. The cross-layer design shares information between the MAC and network layers to optimise network performance, and a mobile agent can use this information to prevent a security attack. However, the proposed method for preventing the Sybil attack and other kinds of attacks increases the communication overhead.</ns0:p></ns0:div>
<ns0:div><ns0:head>Software-Defined Network</ns0:head><ns0:p>Lessons learned: SDN and SDMN are currently trending research topics in 5G communication security. Exact security methods for SDN and SDMN remain largely unexplored by researchers. With the deployment of SDN and SDMN in 5G communication, innovative techniques are needed in this area.</ns0:p><ns0:p>Apart from novel security solutions for IoT, there is a developing trend towards SDN, which allows reconfiguration of the network and central monitoring with a possibly centralised routing algorithm. This emerging paradigm opens the door for researchers to develop a lightweight security framework running on the central SDN controller <ns0:ref type='bibr' target='#b50'>(Hameed, Khan, &amp; Hameed, 2019)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Cross-Platform Intrusion Detection</ns0:head><ns0:p>Lesson learned: The IoT has evolved from WSNs, where the sensor nodes were assumed to be homogeneous devices with limited resources, to heterogeneous devices with different capabilities that are still limited by energy constraints. <ns0:ref type='bibr' target='#b29'>Colom et al. (2018)</ns0:ref> highlighted in their survey that the current trend in IDS is moving towards universal, cross-platform methods. Proposed methods should be able to handle device heterogeneity, the scalability of IoT networks, and resource limitations.</ns0:p><ns0:p>Security and malware attacks seen on the Internet could also be deployed against the IoT because of the various protocols utilised at every layer of the heterogeneous devices. Interoperability issues and the lack of standards in the IoT pose a security challenge. Many IoT devices launched on the market have security flaws because security was not a top priority and was not considered in the past; earlier IoT devices lack authentication methods and the ability to detect or prevent an attack. Deploying intrusion detection methods in the IoT environment is a big challenge due to the heterogeneity of devices. One example of cross-platform intrusion detection is an innovative home application that must retrieve information from personal healthcare sensing over a secure connection. Therefore, we need quick, efficient and robust intrusion detection countermeasures to provide an undisrupted and continuous connection across multiple IoT platforms. WSN sensor nodes are distributed and placed in extreme and complex environments, so it is crucial to implement secure authentication between sensor nodes in a WSN <ns0:ref type='bibr' target='#b31'>(Cui et al., 2020)</ns0:ref>. Blockchain is suitable for IoT with a hierarchical topology where memory, computation, and energy are limited. Merkle trees were incorporated into blockchain technology to provide efficient and reliable digital timestamps <ns0:ref type='bibr'>(Dorri et al., 2017)</ns0:ref>. Blockchain has been applied in security frameworks, and blockchain-based security countermeasures are still in the experimental phase, which will be a future direction of the research <ns0:ref type='bibr' target='#b91'>(Mubarakali, 2021)</ns0:ref>.</ns0:p></ns0:div>
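<ns0:p>To illustrate the Merkle-tree timestamping mentioned above, the following is a minimal sketch of computing a Merkle root over node records, assuming SHA-256; the record contents are illustrative placeholders, not data from any cited system.</ns0:p>

```python
# Minimal sketch: computing a Merkle root over node records with SHA-256,
# the mechanism blockchains use for efficient, tamper-evident timestamping.
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash pairs of nodes level by level until a single root remains."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Illustrative sensor records; changing any record changes the root.
records = [b"node1:reading=21.5", b"node2:reading=19.8", b"node3:reading=22.1"]
print(merkle_root(records).hex())
```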
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>This paper discussed different countermeasures to defend IoT-based WSNs from the Sybil attack launched in various application domains. We have expanded on the modus operandi, advantages, and limitations of each category of countermeasure. Although researchers have proposed several countermeasures, there is no efficient method that overcomes most attacks with complete geographic routing accuracy. We have also observed that the trust mechanism was the most popular countermeasure for the Sybil attack between 2015 and 2020. New researchers should investigate developing a lightweight framework to secure IoT networks. Developing a secure framework for the IoT, which consists of heterogeneous devices with different wireless technologies, is challenging. The development of a security framework for these IoT devices should consider the IoT's scalability and resource constraints <ns0:ref type='bibr' target='#b101'>(Razacheema, Alsmadi, &amp; Ikki, 2018)</ns0:ref>.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Fig. 1 illustrates the procedure used to choose the articles for review. Researchers and scholars mostly publish their contributions in established journals; hence, conference papers have been excluded from this survey.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Blockchain. Lesson learned: Blockchain is the latest decentralised distributed system technology, first designed by Bayer et al. in 1992. Proof of work, asymmetric cryptography, electronic signatures, and hash functions are all used in blockchain technology (Lazrag, Chehri, Saadane, &amp; Rahmani, 2020).</ns0:figDesc></ns0:figure>
</ns0:body>
" | "
Prof Dr Chakchai So-In
Academic Editor,
PeerJ Computer Science
Dear Chakchai So-In,
Comment from reviewer 1
Response
Location in the revised manuscript
The paper does not have a single focus. I could not know what the paper tries to focus either attack or countermeasure/ Sybil+Black hole or WSN/IoT. In additional, I do not know why the authors select to survey on these two attacks. Why two together, not only one? This can weaken the paper.
Thank you for highlighting the focus of the paper. Based on a discussion with the team, we have decided to focus on the Sybil attack.
IoT is a concept in which many wireless technologies are integrated, and WSN is one of those wireless technologies. These concepts are mentioned in the following research articles:
Xu, G., Shi, Y., Sun, X., & Shen, W. (2019). Internet of things in marine environment monitoring: A review. Sensors (Switzerland), 19(7), 1–21. https://doi.org/10.3390/s19071711
Pundir, S., Wazid, M., & Singh, D. P. (2020). Intrusion Detection Protocols in Wireless Sensor Networks Integrated to Internet of Things Deployment : Survey and Future Challenges. IEEE Access, 8, 3343–3363. https://doi.org/10.1109/ACCESS.2019.2962829
Prasanth, A., & Jayachitra, S. (2020). A novel multi-objective optimization strategy for enhancing quality of service in IoT-enabled WSN applications. Peer-to-Peer Networking and Applications, 13(6), 1905–1920. https://doi.org/10.1007/s12083-020-00945-y
Line 39-58
“in regard to the technical novelty as well as the performance analysis comparatively with the state-of-the-art methods”
We have included additional columns in Table 3 and Table 4 to classify the existing countermeasures into categories such as prevention, detection and mitigation. This classification can assist future researchers in evaluating whether the existing solutions can be improved.
Also, we added a column on IoT readiness based on the results of the simulations and experiments conducted by the authors. This shows the potential of each proposed method in terms of scalability and readiness for IoT deployment.
Line 598 - 605
This is a survey paper, but the authors take too much effort on methodology and research questions while very few in the main knowledge (compared to the methodology). Nevertheless, the authors do not need to focus only articles published in five years. This could be indicated as a critical review article if the survey is more comprehensive with your own discussion. If the authors intend to propose a survey article, the authors need much more works on content gathering and statistics (quantitative analysis). Study from this survey paper https://ieeexplore.ieee.org/document/8792139
Thank you for your recommendation and for highlighting the improvements that should be made. We have expanded the publication window of the articles included in this survey to ten years.
Overall, the body of knowledge (from the Security Attack section) is too limited where the reader almost learn nothing besides list of related literature with summary. For example, Sybil attack in Blockchain and electronic vote is not found. I suggest that the authors convert to a Review Article by focusing on one attack, studying on more literature (not only the last five years) especially on more recent articles, writing a critical discussion coming up with something the authors can use such as a framework, and removing the methodology section.
Thank you for highlighting blockchain and electronic voting. We have included some journal articles on Sybil attack countermeasures using blockchain in the discussion section as a future direction.
We feel that the inclusion of the methodology section is important to highlight the various established sources referred to for the articles.
Thank you for your kind attention and consideration.
Regards,
Akashah Arshad
" | Here is a paper. Please give your review comments after reading it. |
208 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>At present, industrial robotics focuses more on motion control and vision, whereas humanoid service robots (HSRs) are increasingly being investigated and researched in the field of speech interaction. The problem and quality of human-robot interaction (HRI) have become widely debated topics in academia. Especially when HSRs are applied in the hospitality industry, some researchers believe that the current HRI model is not well adapted to the complex social environment. HSRs generally lack the ability to accurately recognize human intentions and understand social scenarios. This study proposes a novel interactive framework suitable for HSRs. The proposed framework is grounded on the novel integration of Trevarthen's (<ns0:ref type='formula'>2001</ns0:ref>) companionship theory and the neural image captioning (NIC) generation algorithm. By integrating image-to-natural-language interactivity generation and communication with the environment, the robot can better interact with the stakeholder, thereby evolving from interaction to a bionic companionship. Compared with previous research, a novel interactive system is developed based on the bionic-companionship framework. The humanoid service robot was integrated with the system to conduct preliminary tests. The results show that the interactive system based on the bionic-companionship framework can help the service humanoid robot effectively respond to changes in the interactive environment, for example, giving different responses to the same person in different scenes.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Humanoid service robots (HSRs) have seen a sharp rise in adoption recently and are seen as one of the major technologies that will drive the service industries in the next decade <ns0:ref type='bibr' target='#b20'>(Harris et al., 2018)</ns0:ref>. An increasing number of researchers are committed to investigating HSRs to help humans complete repetitive or high-risk service and interactive tasks, such as serving patients with infectious diseases and delivering meals. Delivery robots, concierge robots, and chat robots have been increasingly used by travel and hospitality companies <ns0:ref type='bibr' target='#b22'>(Ivanov, 2019)</ns0:ref>. Although these achievements stem mainly from the rapid development of robotics engineering, <ns0:ref type='bibr' target='#b23'>Ivanov et al. (2019)</ns0:ref> indicated that the future research focus will gradually shift from robotics engineering to human-robot interaction (HRI), thus opening up interdisciplinary research directions for researchers.</ns0:p><ns0:p>In the early days, <ns0:ref type='bibr'>Fong et al. (2003)</ns0:ref> proposed that, in order to perform better, a robot needs to be able to use human skills (perception, cognition, etc.) and benefit from human advice and expertise. This means that robots that rely solely on self-determination have limitations in performing tasks. The authors further proposed that collaborative work between humans and robots could break this constraint, and research on human-robot interaction began to emerge. <ns0:ref type='bibr'>Fong et al. (2003)</ns0:ref> believed that to build a collaborative control system and complete human-robot interaction, four key problems must be solved. (1) The robot must be able to detect its limitations (what it can do and what humans can do), determine whether to seek help, and identify when a problem needs to be resolved. (2) The robot must be self-reliant and secure. (3) The system must support dialog; that is, robots and humans need to be able to communicate with each other effectively. However, dialog is restricted at present. Through collaborative control, dialog should be two-way and requires a richer vocabulary. (4) The system must be adaptive. Although most current humanoid service robots already support dialog and can complete simple interactive tasks, as propounded in that research, such dialog at the present time remains limited and 'inhuman.' In the process of interacting with robots, humans always determine the state of the robot (the position of the robot or the action the robot is doing) through vision, and then communicate with the robot through a dialog system. However, HSRs cannot perform this yet, as they do not seem to fully satisfy the two-way nature of dialog. Therefore, this research responds to the current gap and attempts to differ from current HRI research. This research attempts to introduce deep learning into the existing dialog systems of HSRs, thus advancing the field.</ns0:p><ns0:p>With the continuous development of humanoid robots, more and more humanoid robots are used in the service industry, especially the hospitality industry. Human-robot interaction (HRI) has thus become a hot topic among more and more researchers <ns0:ref type='bibr'>(Yang &amp; Chew, 2020)</ns0:ref>. 
However, as research has deepened, some researchers have found that when humans interact with humanoid service robots (HSRs), they hope that HSRs will have the ability and interest to engage with the dynamic thoughts and enthusiasm of a partner relationship, and that HSRs can recognize the environment, blend in what others consider meaningful, and express sympathy through emotion <ns0:ref type='bibr'>(Yang &amp; Chew, 2020)</ns0:ref>. This coincides with Trevarthen's companionship theory <ns0:ref type='bibr' target='#b42'>(Trevarthen, 2001)</ns0:ref>, so the concept of the human-robot companion (HRC) is proposed in this research. An earlier concept of the robot companion was mentioned by <ns0:ref type='bibr' target='#b9'>Dautenhahn et al. (2005)</ns0:ref>: HSRs need to have a high degree of awareness of and sensitivity to the social environment. Based on the review of the above literature, this study proposes to establish an interactive and companion framework for HSRs using deep learning and neural image caption generation, thus advancing the current field of HSRs to tackle the bionic-interactive tasks of the service industry and further evolving from conventional HRI to the human-robot companion (HRC) (see Tab. 1).</ns0:p><ns0:p>This study proposes that introducing visual data into the current HRI model of HSRs enables HSRs to have a high level of sensitivity to the dynamic social environment while interacting with humans, thereby elevating the current HRI model to HRC. With the continuous development of deep learning, some researchers have recently realized the transformation of static pictures or videos from conventional camera input into text descriptions <ns0:ref type='bibr' target='#b26'>(Li et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b21'>Hu et al., 2020;</ns0:ref><ns0:ref type='bibr'>Luo, 2019)</ns0:ref>. This deep learning algorithm model is called neural image captioning (NIC). This research attempts to adapt and integrate NIC into HSRs and proposes a novel framework (the bionic-companionship framework) to enhance the traditional HRI experience. This framework aims to elevate the current HRI interaction mode in the field of HSRs to the higher level of HRC <ns0:ref type='bibr'>(Yang &amp; Chew, 2021)</ns0:ref>. Bionics in this research refers to the humanoid service robot imitating all the tastes of life, trying to adapt to the seven emotions of ancient human nature (joy, anger, sadness, fear, love, disgust, liking) and the six biological wills (life, death, eyes, ears, mouth, nose) <ns0:ref type='bibr' target='#b6'>(Chew et al., 2021)</ns0:ref>. The system proposed in this study combines visual intelligence and speech intelligence and imitates human behavior in social activities, which is in line with the concept of robot bionics proposed by researchers such as <ns0:ref type='bibr' target='#b6'>Chew et al. (2021)</ns0:ref>. Therefore, this study considers the proposed system a bionic system.</ns0:p></ns0:div>
<ns0:div><ns0:head>Related works</ns0:head><ns0:p>With the continuous development of HRI research, industrial robots have become able to interact with humans accurately and self-adaptively. Advanced control systems <ns0:ref type='bibr' target='#b55'>(Zhang et al., 2020)</ns0:ref> and algorithms <ns0:ref type='bibr' target='#b39'>(Tang et al., 2020)</ns0:ref> have been proposed, and industrial robots provide reliable support for completing interactive tasks in an industrial environment. However, as HSRs began to enter the service industry, some research cases revealed that there are still problems with the interaction of HSRs in the social environment. Caleb-Solly (2018) believed that users can also help robots when robots help users; meanwhile, users can give feedback to optimize the system. The feedback reflects not only the optimization of the robot system but also the satisfaction of customers. <ns0:ref type='bibr' target='#b8'>Chung's (2018)</ns0:ref> study indicated that hotels in the hospitality industry want to collect customer feedback in real time to immediately disseminate positive feedback and respond to unsatisfied customers while they are still on the scene. Guests want to report their experience without compromising their privacy. Stakeholders in the hospitality industry hope that intelligent robots can interact more with users. In addition, Rodriguez (2015) concluded that the optimal distance between users and robots is 69.58 cm. Specifically, interaction with a certain greeting mode can attract users to maintain a longer interaction time, and robots with active search are more attractive to participants and sustain longer interaction times than passively searching robots. This suggests that robots should be designed to keep a certain distance from humans and should be given the ability to actively identify customers and attract them.</ns0:p><ns0:p>Research suggests that the current interactive systems used by HSRs lack the ability to process and adapt to dynamic social environments. The dynamic social environment here refers to the fact that the same human behavior and language often express different meanings in different social situations; for example, in different situations, a handshake may require two completely different interactive messages in response. Therefore, this research proposes the concept of HRC to develop a new interactive mode to solve the current problems faced by HRI in the hospitality industry. For a more detailed comparison of HRI and HRC, please refer to the video in the appendix link (https://youtu.be/fZmV4MKeYtQ).</ns0:p></ns0:div>
<ns0:div><ns0:head>Review of Neural Image Captioning</ns0:head><ns0:p>The challenge of generating natural language descriptions from visual data has been extensively researched in the field of computer vision. However, early research mainly focused on generating natural language descriptions from video-type visual data <ns0:ref type='bibr' target='#b17'>(Gerber, 1996;</ns0:ref><ns0:ref type='bibr' target='#b27'>Mitchell et al, 2012)</ns0:ref>. These systems convert complex visual data into natural language using rule-based systems. However, because the rules are artificially designed, these systems are insufficiently robust and bionic, and they have been shown to be beneficial only in limited applications such as traffic scenarios <ns0:ref type='bibr' target='#b46'>(Vinyals et al., 2015)</ns0:ref>. In the past decade, various researchers, inspired by the successful use of sequence-to-sequence training with neural networks for machine translation, proposed methods for generating image descriptions based on recurrent neural networks (RNNs) <ns0:ref type='bibr' target='#b7'>(Cho et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b38'>Sutskever et al., 2014)</ns0:ref>. In fact, this method of replacing the encoder of the machine-translation encoder-decoder framework with image features transforms the originally complex task of generating captions for image data into the simpler process of 'translating' an image into a sentence <ns0:ref type='bibr' target='#b7'>(Cho et al., 2014)</ns0:ref>. Furthermore, <ns0:ref type='bibr' target='#b13'>Donahue et al. (2014)</ns0:ref> used long short-term memory (LSTM) for end-to-end large-scale visual learning processes. In addition to images, <ns0:ref type='bibr' target='#b13'>Donahue et al. (2014)</ns0:ref> also applied LSTM to videos, allowing their models to generate video descriptions. <ns0:ref type='bibr' target='#b46'>Vinyals et al. (2015)</ns0:ref> and <ns0:ref type='bibr' target='#b33'>Kiros et al. (2014)</ns0:ref> initially proposed the structure of the currently popular neural image caption generation algorithm, based on the combination of a convolutional neural network (CNN) image recognition model and a natural language processing (NLP) structured model. Moreover, neural image captioning algorithms based on the attention mechanism have also attracted extensive attention in the field of computer vision. <ns0:ref type='bibr' target='#b11'>Denil et al. (2012)</ns0:ref> proposed a real-time target tracking and attention recognition model driven by sight data. <ns0:ref type='bibr' target='#b53'>Tang et al. (2014)</ns0:ref> proposed an attention-generation model based on deep learning; from the perspective of visual neuroscience, the model requires object-centric data collection for model generation. Subsequently, <ns0:ref type='bibr' target='#b45'>Mnih et al. (2014)</ns0:ref> proposed a new recurrent neural network model that can adaptively select specific areas or locations from which to extract information in images or videos and process the selected area at high resolution. 
As the algorithms have increasingly matured, applications in related fields have recently made breakthroughs, such as caption generation for car images <ns0:ref type='bibr' target='#b5'>(Chen et al., 2017)</ns0:ref>, description generation for facial expressions <ns0:ref type='bibr' target='#b32'>(Kuznetsova et al., 2014)</ns0:ref>, and educational NAO robots driven by image caption generation for video Q&amp;A games for children's education <ns0:ref type='bibr' target='#b24'>(Kim, 2015)</ns0:ref>. Recent research on image caption generation also shows that the accuracy and reliability of the technology have increased <ns0:ref type='bibr' target='#b12'>(Ding et al., 2019)</ns0:ref>. In addition, reinforcement learning for automatically correcting image caption generation networks has also been proposed <ns0:ref type='bibr' target='#b14'>(Fidler, 2017)</ns0:ref>. These deep learning-based studies have undoubtedly laid a foundation for the possible integration of NIC with HSRs as proposed in this study. The novel integration creates the possibility for humanoid robots to interact with humans while recognizing the social environment in real time, thereby improving the interactive service quality of HSRs.</ns0:p></ns0:div>
<ns0:div><ns0:head>Neural Image Caption Generation Algorithm 'Crash Into' Robot</ns0:head><ns0:p>An increasing number of studies have combined HRI with image caption generation algorithms. <ns0:ref type='bibr' target='#b24'>Kim et al. (2015)</ns0:ref> used the structure of a convolutional neural network (CNN) combined with an RNN and deep concept hierarchies (DCH) to design and develop an educational intelligent humanoid robot system for playing video games with children. In this study, the CNN was used to extract and pre-process cartoons with educational features, and the RNN and DCH were used to convert the collected video features into Q&amp;A about the cartoons. During the game, after watching the same cartoon, the child and the robot ask and answer questions based on the content of the cartoon. The research results show that such a system can interact effectively with children. However, for HRI, such simple and limited structured Q&amp;A conditions cannot satisfy all the interaction scenarios required. <ns0:ref type='bibr' target='#b3'>Cascianelli et al. (2018)</ns0:ref> used a gated recurrent unit (GRU) encoder-decoder architecture to develop a human-robot interface that provides interactive services for service robots. This research addresses a problem called natural language video description (NLVD). The authors also compared the performance of LSTM and GRU as two different algorithms for this problem and demonstrated that the GRU algorithm runs faster and consumes less memory; this type of model may be more suitable for HSRs. Although the research model is competitive on public datasets, the experimental results on the authors' own dataset show that the model suffers from significant overfitting. This proves that, in the actual model training process, a specific training dataset for HSR interaction should be established, and other methods (such as transfer learning) should be considered to improve the generalization ability of the model for interactive tasks. <ns0:ref type='bibr' target='#b25'>Luo et al. (2019, June)</ns0:ref> created a description template to add various image features collected by the robot, such as face recognition and expression, to the generated description. Compared with the previous models, their interaction is slightly more natural and closer to a human description. However, Luo et al. used the model to provide limited services to industry managers; it is hard to generalize and was not intended for developing an entire HRI framework.</ns0:p><ns0:p>Like the research on robot vision language, research on robot vision action is in its infancy. <ns0:ref type='bibr' target='#b54'>Yamada et al. (2016)</ns0:ref> used RNNs to enable robots to learn commands online from humans and respond with corresponding behaviors. This research furthermore provides a reference and direction for humanoid robots to use deep learning to obtain online learning capabilities for human commands. Inspired by the above study, the rationale and hypothesis proposed in the present research are that the descriptions generated by neural image captioning can drive HSRs to perform appropriate behaviors, and that HSRs can even obtain online learning capabilities for interacting with surrounding people by studying and analyzing social environments. <ns0:ref type='bibr' target='#b40'>Tremblay et al. (2018)</ns0:ref> and <ns0:ref type='bibr' target='#b28'>Nguyen et al. (2018)</ns0:ref> believe that non-experts often lack rationality in task descriptions when issuing instructions to robots. 
They used deep learning to allow robots to automatically generate human-readable instruction descriptions according to the surrounding social environment. In addition, <ns0:ref type='bibr' target='#b28'>Nguyen et al. (2018)</ns0:ref> also used visual data to make humanoid robots imitate and learn human actions under the corresponding commands, so that a robot can learn how to complete the corresponding tasks through visual data alone; however, social robots cannot yet achieve precise motion control when imitating movements from visual data.</ns0:p></ns0:div>
<ns0:div><ns0:head>Contribution to the Knowledge: The Bionic-Companionship Framework with NIC for HSRs</ns0:head><ns0:p>The contribution of the present study is the novel investigation and design of the bionic-companionship framework for HSRs, adapting and integrating neural image caption generation algorithms and bionic humanoid robots, to be validated in a lab-controlled environment and explored in real life. The new HRC framework is anticipated to enhance HRI to reach a new state, making it possible for HSRs to become bionic companions of humans.</ns0:p><ns0:p>This study proposes adapting and integrating deep learning techniques into one of the world's most advanced HSRs so that the robot can autonomously and promptly convert pictures or data captured by its vision and sensors into texts or sentences in order to respond and communicate more naturally with humans. The conceptual model of the proposed system consists of various modules, as shown in Figures <ns0:ref type='figure'>1, 2</ns0:ref>. The contributions of this research are summarized as follows:</ns0:p><ns0:p>(1) In order to solve the current problems of HSRs in the hospitality industry, a new interactive concept, HRC, is proposed.</ns0:p><ns0:p>(2) A novel bionic interaction framework is designed based on the proposed HRC.</ns0:p></ns0:div>
<ns0:div><ns0:p>(3) A system that can be used on HSRs is developed based on the bionic interaction framework, and the system has been tested and verified. The preliminary results prove that the system can enable HSRs to handle dynamic social environments.</ns0:p></ns0:div>
<ns0:div><ns0:head>Humanoid service robot used in research</ns0:head><ns0:p>The design and investigation of this HRC framework involves using the Canbot U05E humanoid robot (see Fig. <ns0:ref type='figure'>1</ns0:ref> for the high-level design, Figs 2-5 for further details) <ns0:ref type='bibr'>(Canbot, 2021)</ns0:ref>. The robot's 22-degree-of-freedom motion joints enable it to perform a variety of simulated movements, such as raising the head, turning the head, raising the arm, shaking the crank, shaking hands, leaning back, walking, and turning, and based on the proposed framework, it can acquire natural human behaviors and, as a result, efficiently interact with humans. In addition, Canbot U05E's advanced vision system and sensors can collect more complete environmental data for the proposed design and make the novel framework more robust. The robot is designed to imitate the human's seven senses, providing strong support for the concept and implementation of the bionic partner designed in this study.</ns0:p></ns0:div>
<ns0:div><ns0:head>Bionic-Companionship Framework</ns0:head><ns0:p>In this study, we review the previous works on this topic and the research gaps in the literature, and we describe a novel humanoid service robot and human interaction framework with neural image captioning at its core (details are shown in Fig. <ns0:ref type='figure'>2</ns0:ref>). The framework uses the structure of the NIC algorithm to advance the interaction of HSRs from HRI towards bionic companionship. According to the initial descriptions of robot companions, as in the studies by <ns0:ref type='bibr' target='#b43'>Turkle (2006)</ns0:ref> and <ns0:ref type='bibr' target='#b24'>(Kim et al., 2015)</ns0:ref>, the proposed framework should provide HSRs with more natural interactions and a more sensitive understanding of the environment; hence, the design of the framework is divided into two subsystems (see the dotted red boxes).</ns0:p></ns0:div>
<ns0:div><ns0:head>Image/Video Description Generation System</ns0:head><ns0:p>This subsystem is one of the core modules of the entire interactive framework. HSRs collect visual data about the surrounding environment through the equipped visual sensors (such as HD or 3D cameras) and other sensors (such as tactile sensors and radar). The type of visual data collected depends on the complexity of the interactive task to be completed by the HSRs; it is generally considered that more complex interactive tasks require continuous images or real-time video. The system uses the latest neural image caption generation structure: a CNN performs feature extraction on the pictures and video data of the surrounding social environment and converts the data into feature vector sequences that can be used by an RNN. Finally, the RNN completes the process of generating an interactive description from the visual data. HSRs use a speech synthesis system that converts these descriptions into voice to communicate with humans. This process differs from the past mode in which HSRs used human-sensing sensors with fixed, preset interactive feedback; the innovation of this system is that HSRs generate interactive feedback automatically and naturally. This means that a change in the scene during the interaction will cause a continuous change in the interaction feedback, and this change is not preset by humans. In addition, in further conversational interactions, the human voice response and the social environment data will be coordinated by the HSRs to produce continuous conversational interaction behavior.</ns0:p></ns0:div>
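<ns0:p>A minimal sketch of the capture-caption-speech loop described above follows; the interfaces capture_frame, generate_caption, and speak are hypothetical stand-ins for the robot camera API, the trained NIC model, and a TTS engine, not names from the Canbot SDK or any published API.</ns0:p>

```python
# Minimal sketch of the subsystem's capture -> caption -> speech loop.
# `robot` and `nic_model` are hypothetical objects supplied by the caller.
import time

def interaction_loop(robot, nic_model, period_s: float = 2.0):
    while True:
        frame = robot.capture_frame()                 # visual data from the HD camera
        caption = nic_model.generate_caption(frame)   # NIC turns the scene into text
        robot.speak(caption)                          # speech synthesis replies to humans
        time.sleep(period_s)                          # re-sample the changing environment
```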
<ns0:div><ns0:head>Command-Robot Behavior System</ns0:head><ns0:p>For HSRs, simple conversational interaction is insufficient. HSRs should also generate corresponding motions based on visual and human behavior data. For example, when a human waves to a robot, the robot should actively respond. The hypothesis of this study is to classify or cluster the description text generated from visual data and use these classified description texts to control the motions of HSRs in response to complex interactive tasks. For example, when the description generated by neural image captioning is 'Hello', the HSR will automatically determine whether 'Hello' matches a category that requires an interactive motion and perform the corresponding motion, such as waving.</ns0:p></ns0:div>
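<ns0:p>A minimal sketch of this caption-to-motion mapping follows; the keyword table and motion names are illustrative assumptions for this sketch, not the actual controller or its categories.</ns0:p>

```python
# Minimal sketch: classify a generated caption by keyword and map it to a
# motion label the robot controller could play. Keywords and motion names
# are illustrative assumptions, not the Canbot API.
ACTION_RULES = {
    ("hello", "waving", "wave"): "wave_arm",
    ("handshake", "shaking hands"): "extend_hand",
    ("walking", "approaching"): "turn_towards_person",
}

def caption_to_action(caption: str) -> str | None:
    text = caption.lower()
    for keywords, action in ACTION_RULES.items():
        if any(k in text for k in keywords):
            return action
    return None  # no interactive motion required for this caption

print(caption_to_action("a man is waving at the camera"))  # -> "wave_arm"
```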
<ns0:div><ns0:head>Pilot testing, Preliminary Results, and Discussion</ns0:head><ns0:p>In the present study, we designed and integrated a classic NIC model on the HSR and performed a preliminary evaluation.</ns0:p></ns0:div>
<ns0:div><ns0:head>Introduction to HSR-NIC Model</ns0:head><ns0:p>The structure of the HSR-NIC algorithm used in this study was adapted and enhanced from the model structure proposed by Mao et al. (<ns0:ref type='formula'>2014</ns0:ref>), which uses a classic encoder-decoder structure. In this study, the encoder uses the Xception pre-trained CNN to convert the input image into a feature vector. The word sequence is input into the LSTM after a word embedding layer, and finally, an add operation is performed on the word features output by the LSTM and the image features extracted by the pre-trained CNN. These are then input into a decoder composed of a single fully connected layer, which generates the probability distribution of the next word using a softmax layer. The LSTM introduced into the model can solve the long-term dependency problem of the traditional RNN, thereby improving the accuracy of the model. The dense representation of word embedding reduces the amount of calculation involved in the model; it also enables the model to capture similarity relationships between words. In addition, the model used in this study introduces a dropout layer with a probability of 50% to increase the robustness of the model. The teacher forcing mechanism was used during model training to accelerate the training process. The optimizer used in the research is Adam, which has the advantages of making the model converge more quickly and automatically adjusting the learning rate during learning. The variables of the model are updated through back-propagation by minimizing the cross-entropy loss between the probability distribution of the predicted result and the probability distribution of the true result. The model structure diagram is shown in Fig. <ns0:ref type='figure'>3</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Model Forward Propagation Process</ns0:head><ns0:p>The training process of the image captioning task can be described as follows: For a picture in the training set, its corresponding description is a sequence $S = (S_0, S_1, \ldots, S_N)$ that represents the words in the sentence. For model $\theta$, given input image $I$ from the HSR's vision, the probability of the model generating sequence $S$ is expressed as</ns0:p><ns0:formula xml:id='formula_0'>P(S \mid I; \theta) = \prod_{t=0}^{N} P(S_t \mid S_0, S_1, \ldots, S_{t-1}, I; \theta) \quad (1)</ns0:formula><ns0:p>The logarithm of the likelihood function is used to obtain the log-likelihood function:</ns0:p><ns0:formula xml:id='formula_1'>\log P(S \mid I; \theta) = \sum_{t=0}^{N} \log P(S_t \mid S_0, S_1, \ldots, S_{t-1}, I; \theta) \quad (2)</ns0:formula><ns0:p>The training objective of the model is to maximize the sum of the log-likelihoods over all training samples:</ns0:p><ns0:formula>\theta^{*} = \arg\max_{\theta} \sum_{(I,S)} \log P(S \mid I; \theta) \quad (3)</ns0:formula><ns0:p>where $(I, S)$ is a training sample. This method of maximum likelihood estimation is equivalent to empirical risk minimization using the log-loss function. Therefore, in the forward propagation process of this research model, the image feature vector $I_v$ is extracted from the image using the CNN, and a two-dimensional vector of shape (batch size, 2048) is the output:</ns0:p><ns0:formula xml:id='formula_2'>I_v = \mathrm{CNN}_{\theta_c}(I) \quad (4)</ns0:formula><ns0:p>The extracted image features are encoded by a fully connected layer into a context feature vector $C$ that can be matched with word features. The word feature vector $O_t$ is the output of the LSTM over the time steps. Each input word of the LSTM passes through a word-embedding layer to generate a dense vector representation $W(s)$:</ns0:p><ns0:formula>C = W_{\theta}(I_v), \qquad O_t = \mathrm{LSTM}_{\theta}(W(s)) \quad (5)</ns0:formula></ns0:div>
<ns0:div><ns0:p>Finally, the word feature $O_t$ and the context feature $C$ are input together into a decoder composed of a single fully connected layer, after which a softmax calculation generates the probability distribution of the next word $P(S_i \mid I; \theta)$:</ns0:p><ns0:formula xml:id='formula_3'>P(S_i \mid I; \theta) = \mathrm{softmax}(W_{\theta}(C + O_t)) \quad (6)</ns0:formula><ns0:p>The loss function is expressed as</ns0:p><ns0:formula xml:id='formula_4'>L = -\sum_{t=1}^{T} \left[ y^{(t)} \log p^{(t)} + (1 - y^{(t)}) \log (1 - p^{(t)}) \right] \quad (7)</ns0:formula></ns0:div>
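<ns0:p>A minimal PyTorch sketch of Eqs. (4)-(6) follows, assuming PyTorch is available; the layer sizes and vocabulary size are illustrative, and the Xception encoder is represented only by its 2048-dimensional output rather than the full network.</ns0:p>

```python
# Minimal PyTorch sketch of Eqs. (4)-(6): context C from CNN features, word
# features O_t from an embedding + LSTM, add fusion, then a decoder whose
# logits are normalized by the softmax inside the cross-entropy loss.
import torch
import torch.nn as nn

class HSRNICDecoder(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 256, feat_dim: int = 2048):
        super().__init__()
        self.img_fc = nn.Linear(feat_dim, embed_dim)    # C = W_theta(I_v), Eq. (5)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, embed_dim, batch_first=True)
        self.dropout = nn.Dropout(0.5)                  # 50% dropout for robustness
        self.out_fc = nn.Linear(embed_dim, vocab_size)  # single FC decoder, Eq. (6)

    def forward(self, img_feats: torch.Tensor, word_ids: torch.Tensor) -> torch.Tensor:
        c = self.img_fc(self.dropout(img_feats))              # (batch, embed_dim)
        o, _ = self.lstm(self.embed(word_ids))                # O_t, (batch, T, embed_dim)
        return self.out_fc(self.dropout(c.unsqueeze(1) + o))  # add fusion -> logits

# Teacher forcing: ground-truth words go in; the loss compares predicted logits
# against the (shifted) target words. Random tensors stand in for real data.
model = HSRNICDecoder(vocab_size=5000)
logits = model(torch.randn(2, 2048), torch.randint(0, 5000, (2, 12)))
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 5000), torch.randint(0, 5000, (24,)))
```

<ns0:p>In practice, an Adam optimizer would update these parameters from the loss, and teacher forcing corresponds to feeding the ground-truth word sequence as word_ids during training.</ns0:p>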
<ns0:div><ns0:head>Training Dataset</ns0:head><ns0:p>For the present study, we use Flickr 8k <ns0:ref type='bibr' target='#b36'>(Rashtchian et al., 2010)</ns0:ref> as the training dataset. This is a new benchmark collection for sentence-based image descriptions and searches. It consists of 8,000 images. Each image was paired with five different captions. These captions provide content descriptions of the objects and events in the picture. The images do not contain any well-known people or locations but depict random scenes and situations. Examples of datasets are shown in Fig. <ns0:ref type='figure'>4</ns0:ref>. The Flickr 8k dataset not only contains images of animals and objects, but also of some social scenes. These data can help robots to better understand natural, day-to-day scenes.</ns0:p></ns0:div>
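<ns0:p>A minimal sketch of pairing the Flickr 8k images with their five captions follows, assuming the commonly distributed Flickr8k.token.txt format, in which each line reads "image.jpg#k" followed by a tab and the caption; the file name is an assumption about the local setup.</ns0:p>

```python
# Minimal sketch: build an {image -> [captions]} map from the Flickr 8k
# token file, assuming the common "name.jpg#0..4<TAB>caption" line format.
from collections import defaultdict

def load_captions(token_path: str) -> dict[str, list[str]]:
    captions = defaultdict(list)
    with open(token_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if "\t" not in line:
                continue                         # skip blank or malformed lines
            image_id, caption = line.split("\t", 1)
            captions[image_id.split("#")[0]].append(caption.lower())
    return dict(captions)

caps = load_captions("Flickr8k.token.txt")
print(len(caps))  # roughly 8,000 images, each with five captions
```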
<ns0:div><ns0:head>The Process of Humanoid Service Robot Generating Image Captions</ns0:head><ns0:p>To explore the feasibility of the bionic-companionship framework, preliminary tests were conducted on a real humanoid service robot (Canbot U05E). The process by which the humanoid service robot generates image captions is divided into four steps, as shown in Fig. <ns0:ref type='figure'>5</ns0:ref>. Step 1. The HSR-NIC API is responsible for controlling the robot to call the high-definition camera to collect surrounding environment information (the data collection in this study focuses on images captured by the HSR). The collected data are sent to the localhost service program through the HTTP protocol, and the HSR waits for a response.</ns0:p><ns0:p>Step 2. The HSR-NIC localhost server program receives the data and requests, performs preliminary processing and cleaning of the data (image), and sends the data (image) to the HSR-NIC model server program to wait for the calculation result (the generated caption description).</ns0:p><ns0:p>Step 3. The HSR-NIC model server program analyzes the image data according to the previously saved training parameters, generates the descriptive caption, and returns it to the local server.</ns0:p>
<ns0:div><ns0:p>Step 4. The HSR-NIC local server program sends the caption description to the robot application through the HTTP protocol, and the robot application controls the robot to respond according to the caption description, such as through speech synthesis and motion control.</ns0:p></ns0:div>
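<ns0:p>A minimal sketch of Steps 1-4 as an HTTP service follows, assuming Flask and Pillow; the route name, port, and model wrapper are illustrative stand-ins, not the actual HSR-NIC server implementation.</ns0:p>

```python
# Minimal sketch of the Step 1-4 pipeline as an HTTP endpoint (Flask assumed).
import io
from flask import Flask, request, jsonify
from PIL import Image

class _StubNIC:                                   # hypothetical model wrapper
    def generate_caption(self, image) -> str:
        return "a man in a black shirt is sitting on a bench"

nic_model = _StubNIC()
app = Flask(__name__)

@app.route("/caption", methods=["POST"])
def caption():
    # Step 2: receive and clean the image uploaded by the robot application.
    image = Image.open(io.BytesIO(request.files["image"].read())).convert("RGB")
    image = image.resize((299, 299))              # Xception's expected input size
    # Step 3: the model server turns the image into a descriptive caption.
    text = nic_model.generate_caption(image)
    # Step 4: return the caption; the robot then speaks it or plays a motion.
    return jsonify({"caption": text})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)            # illustrative port choice
```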
<ns0:div><ns0:head>Preliminary Test Results and Limitation</ns0:head><ns0:p>In this study, we conducted a preliminary test on a humanoid service robot integrated with the NIC algorithm. The results of the preliminary test were found to be promising.</ns0:p><ns0:p>Following the discussion in the previous chapter, this research integrates the NIC into the HSR so that the HSR can take advantage of changes in the surrounding environment to interact with humans better. Therefore, this research combines qualitative and quantitative analysis to initially validate the performance of the proposed system.</ns0:p><ns0:p>This study introduces the cross-entropy loss curve of the last 50 epochs of the model as the evaluation metric for quantitative analysis. As shown in Fig. <ns0:ref type='figure'>6</ns0:ref>, the model finally converges to a minimum loss value of 2.65 on the training set and 2.71 on the validation set, which indicates that the model is neither over-fitting nor under-fitting and has generalization ability. Since the loss value is calculated from the sum of the differences between the probability value of each predicted word in the predicted description and the true value, the loss value is affected by the sentence length of the predicted description. In related work, researchers <ns0:ref type='bibr' target='#b26'>(Li et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b21'>Hu et al., 2020)</ns0:ref> used more reliable evaluation methods to evaluate the performance of the model, including BLEU-4 <ns0:ref type='bibr' target='#b30'>(Papineni et al., 2002)</ns0:ref> and CIDEr <ns0:ref type='bibr'>(Vedantam et al., 2015)</ns0:ref>. These evaluation metrics are usually used in the field of machine translation instead of manual evaluation. Since the task handled by the NIC model can be regarded as translating images/scenes into English, these evaluation metrics can also be applied to the evaluation of NIC. This study uses qualitative analysis in place of quantitative analysis with metrics such as BLEU-4 and CIDEr, so as to further evaluate the preliminary performance of the HSR after integrating the NIC model.</ns0:p><ns0:p>As shown in Fig. <ns0:ref type='figure'>7</ns0:ref> and Fig. <ns0:ref type='figure'>8</ns0:ref>, the researcher conducted two sets of tests in three different scenarios with the HSR. In the first set of tests, the researcher wore a hat and changed scenarios. In the second set of tests, the researcher did not wear a hat, and the scene switching method was the same as in the first set. The experimental results show that the humanoid robot can perceive scene switching through this algorithm and generate a rough description of the scene. In the first set of tests, most of the content described was accurate. The robot equipped with the NIC algorithm can effectively identify 'man', 'black shirt', and 'sitting on a bench'. However, in the second group of tests, there were many errors in the recognition results. This could be attributed to the researcher's long hair; interestingly, researchers with long hair are easily identified as women or children. This indicates that the accuracy of the NIC algorithm still has room for improvement.</ns0:p></ns0:div>
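<ns0:p>For reference, a minimal sketch of computing a BLEU-4 score for generated captions follows, assuming NLTK is installed; the reference and hypothesis sentences are illustrative placeholders, not outputs from the tested system.</ns0:p>

```python
# Minimal sketch: scoring generated captions with BLEU-4 (Papineni et al.,
# 2002) via NLTK; smoothing avoids zero scores on short sentences.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

references = [[["a", "man", "in", "a", "black", "shirt", "sits", "on", "a", "bench"]]]
hypotheses = [["a", "man", "is", "sitting", "on", "a", "bench"]]
score = corpus_bleu(references, hypotheses,
                    weights=(0.25, 0.25, 0.25, 0.25),
                    smoothing_function=SmoothingFunction().method1)
print(f"BLEU-4: {score:.3f}")
```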
<ns0:div><ns0:p>In addition, in order to test the performance of the system in a dynamic environment, the researcher conducted tests in real environments (see Fig. <ns0:ref type='figure'>8</ns0:ref>). The researcher selected six real environments as the test data and let the robot generate interactive information. Among the six real interactive environments, three scenes could be recognized relatively accurately by the robot, producing corresponding descriptions. The description information corresponds to the test environment, and the corresponding parts of the descriptions are highlighted with the same color in Fig. <ns0:ref type='figure'>8</ns0:ref>. Some of the objects, facilities, and human movements in these scenes can be accurately predicted, such as the sidewalk, traffic, bench, and building. However, in the other three environments, the robot did not give an accurate description. The researchers believe that this may be because the training set does not contain the objects in these three environments, causing the model to fail to learn how to express the 'unfamiliar environment'.</ns0:p><ns0:p>In general, the results of the two experimental sets prove that the robot equipped with the NIC algorithm can capture changes in the surrounding environment and generate different feedback according to those changes. The results also demonstrate the feasibility of the proposed bionic-companionship framework. Although there is still a gap between the prediction results of the algorithm and real communication scenes, the researcher believes that dedicated data collection for specific interaction scenarios, and model training on these specific data, can be effective in addressing this gap. Future research directions will mainly focus on improving the accuracy of the algorithms and achieving more human-like interactions (the detailed process is shown in the HSR-NIC demo video). In addition, the researcher believes that scene understanding of static images is the basis for dealing with dynamic environments. Some studies have mentioned that introducing object detection algorithms into NIC makes it possible to identify and generate descriptions of scenes in dynamic environments. This is the current limitation of this research and a research challenge to be faced in the future.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>This study presents a review of neural image caption generation algorithms and application cases in the field of robotics, and proposes a novel humanoid service robot and human interaction framework based on the bionic-companionship theory. The subsystems of the bionic-companionship framework are designed and introduced in detail. Preliminary tests also initially proved that the framework can increase the sensitivity of HSRs to changes in the surrounding environment. The proposed framework will contribute to the further development from HRI to HRC. Future work will focus on implementing each of the subsystems in the framework and applying the framework to HSRs to verify its performance.</ns0:p><ns0:p>Fragments of Tab. 1 (HRI versus HRC comparison): 2) More enthusiastic and bionic interaction capabilities (automatically detecting whether customers are regulars and greeting them enthusiastically); 3) The robot in HRC remembers customers' past orders and provides meal recommendations for their well-being. Scenario example: When you enter a hotel, you see a reception area dominated by robots. When you approach the reception area, the HSR says, 'Welcome to Hotel XYZ, please follow the instructions to check in on my display screen.' After completing the check-in, the robot tells you the room number and issues you a room card; you go to your room, change, and prepare to go downstairs to eat. When you go back to the reception, the robot says, 'Welcome, please follow the instructions to place an order on my display.' You choose a few dishes that look good on the robot's screen, but when the food arrives you do not seem to be satisfied with the taste…</ns0:p></ns0:div>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,178.87,525.00,363.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,178.87,525.00,276.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,178.87,525.00,336.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,178.87,525.00,301.50' type='bitmap' /></ns0:figure>
</ns0:body>
" | "Responses to Editor’s Comments:
We really appreciate your helpful comments. Here are our responses to your valuable comments. In the following, [Q] indicates questions or comments from the reviewers and [A] indicates the corresponding answers.
[Q1] Based on the reviewers' comments, a major revision is needed to improve the current manuscript where more experimental results should be added and the reference list should be enriched. Generally speaking, the paper is well-written with publishable contents however careful proofreading is necessary as an essential part of revision.
[A] Thanks for the editor's comments. Based on the editor's comments, we have proofread the article (see the certificate in Fig. 1), added more analysis of the research results, and enriched the reference list.
Fig. 1. Certificate of the Elsevier language editing service
According to the editor's comments, we have addressed all the comments raised by the reviewers and indicated the revised lines in the manuscript. We provide a detailed answer to each comment starting on the next page.
Responses to Reviewer 1’s Comments:
We really appreciate your helpful comments. Here are our responses to your valuable comments. In the following, [Q] indicates questions or comments from the reviewers and [A] indicates the corresponding answers.
[Q1] More newest references should be cited.
[A] Thanks for the reviewer's comments. Based on the reviewer's comments, we have added some of the latest literature and research as references.
[Q2] The English writing should be polished.
[A] Thanks for the reviewer's comments. Based on the reviewer's comments, we have used a professional editing service for our article (the certificate is shown in Fig. 1 and is submitted as an attachment).
Responses to Reviewer 2’s Comments:
We really appreciate your helpful comments. Here are our responses to your valuable comments. In the following, [Q] indicates questions or comments from the reviewers and [A] indicates the corresponding answers.
[Q] As the NIC model is the critical component in the proposed framework, more details related to the NIC should be described, e.g. Optimizer, training protocol, fusion strategy etc.
[A] Thanks for the reviewer's comments. Based on the reviewer's comments, we have strengthened the description of the NIC model as follows:
The structure of the HSR-NIC algorithm used in this study was adapted and enhanced from the model structure proposed by Mao et al. (2014), which uses a classic encoder-decoder structure. In this study, the encoder uses the Xception pre-trained CNN to convert the input image into a feature vector. The word sequence is input into the LSTM after a word embedding layer, and finally, an add operation is performed on the word features output by the LSTM and the image features extracted by the pre-trained CNN. These are then input into a decoder composed of a single fully connected layer, which generates the probability distribution of the next word using a softmax layer. The LSTM introduced into the model can solve the long-term dependency problem of the traditional RNN, thereby improving the accuracy of the model. The dense representation of word embedding reduces the amount of calculation involved in the model; it also enables the model to capture similarity relationships between words. In addition, the model used in this study introduces a dropout layer with a probability of 50% to increase the robustness of the model. The teacher forcing mechanism was used during model training to accelerate the training process. The optimizer used in the research is Adam, which has the advantages of making the model converge more quickly and automatically adjusting the learning rate during learning. The variables of the model are updated through back-propagation by minimizing the cross-entropy loss between the probability distribution of the predicted result and the probability distribution of the true result. The model structure diagram is shown in Fig. 3:
[Q] Although the authors have presented some preliminary tests in this study, the design of testing cases needs to be described and justified. Meanwhile, both quantitative and qualitative analyses are desirable to validate the framework. The NIC model could be validated separately before deploying to the entire system.
[A] Thanks for the reviewer's comments. Based on the reviewer's comments, we have strengthened the description of the qualitative analysis in the 'Preliminary Test Results and Limitation' section as follows:
Following the discussion in the previous chapter, this research integrates the NIC into the HSRs so that the HSRs can take advantage of changes in the surrounding environment to interact with humans better. Therefore, this research combines qualitative and quantitative analysis to initially validate the performance of the proposed system.
This study introduces the cross-entropy loss curve of the last 50 epochs of the model as the evaluation metric for quantitative analysis. As shown in Fig. 6, the model finally converges to a minimum loss value of 2.65 on the training set and 2.71 on the validation set, which indicates that the model is neither over-fitting nor under-fitting and has generalization ability. Since the loss value is calculated from the sum of the differences between the probability value of each predicted word in the predicted description and the true value, the loss value is affected by the sentence length of the predicted description. In related work, researchers (Li et al., 2020; Hu et al., 2020) used more reliable evaluation methods to evaluate the performance of the model, including BLEU-4 (Papineni et al., 2002) and CIDEr (Vedantam et al., 2015). These evaluation metrics are usually used in the field of machine translation instead of manual evaluation. Since the task handled by the NIC model can be regarded as translating images/scenes into English, these evaluation metrics can also be applied to the evaluation of NIC. This study uses qualitative analysis in place of quantitative analysis with metrics such as BLEU-4 and CIDEr, so as to further evaluate the preliminary performance of the HSR after integrating the NIC model.
Fig 6. Loss curve of NIC model on training set and validation set
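For reference, should metrics such as BLEU-4 be adopted in later quantitative evaluation, the following is a minimal sketch of computing BLEU-4 with NLTK; the reference and candidate captions are invented for illustration and are not taken from the experiments.

# BLEU-4: geometric mean of 1- to 4-gram precisions (uniform weights);
# smoothing avoids zero scores on short sentences.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [['a', 'man', 'in', 'a', 'black', 'shirt', 'is',
              'sitting', 'on', 'a', 'bench']]
candidate = ['a', 'man', 'sitting', 'on', 'a', 'bench']

score = sentence_bleu(reference, candidate,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=SmoothingFunction().method1)
print(f'BLEU-4: {score:.3f}')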
As shown in Fig. 7 and Fig. 8, the researcher conducted two sets of tests in three different scenarios with the HSR. In the first set of tests, the researcher wore a hat and changed scenarios. In the second set, the researcher did not wear a hat, and the scenes were switched in the same way as in the first set. The experimental results show that the humanoid robot can perceive scene switching through this algorithm and generate a rough description of the scene. In the first set of tests, most of the described content was accurate: the robot equipped with the NIC algorithm could effectively identify “man”, “black shirt”, and “sitting on a bench”. However, in the second set of tests there were many errors in the recognition results, which could be attributed to the researcher’s long hair. Interestingly, a researcher with long hair is easily identified as a woman or a child. This indicates that the accuracy of the NIC algorithm still has room for improvement.
In addition, to test the performance of the system in a dynamic environment, the researcher conducted a test in a real environment (see Fig. 8). The researcher selected six real environments as test data and let the robot generate interactive information. Of the six real interactive environments, three scenes were recognized fairly accurately by the robot, which produced corresponding descriptions. The descriptions correspond to the test environment, and the corresponding parts are highlighted with the same color in Fig. 8. Some of the objects, facilities, and human movements in these scenes were accurately predicted, such as sidewalk, traffic, bench, and building. However, in the other three environments, the robot did not give an accurate description. The researchers believe this may be because the training set does not contain objects from these three environments, so the model failed to learn how to express the ‘unfamiliar environment’.
Fig 8. Real social environment test examples
In general, the results of the two experimental sets show that the robot equipped with the NIC algorithm can capture changes in the surrounding environment and generate different feedback according to those changes. The results also demonstrate the feasibility of the proposed bionic-companionship framework. Although there is still a gap between the prediction results of the algorithm and real communication scenes, the researcher believes that collecting data for specific interaction scenarios and training the model on those data can effectively address this gap. Future research directions will mainly focus on improving the accuracy of the algorithms and achieving more human-like interactions. (The detailed process is shown in the HSR-NIC demo video.) In addition, the researcher believes that scene understanding of static images is the basis for dealing with dynamic environments. Some studies have mentioned that introducing object detection algorithms into NIC enables the identification and description of scenes in dynamic environments. This is a current limitation of this research and a challenge to be faced in the future.
[Q] The abbreviation NRC presents in line 65 should be clarified, Human-Robot Collaboration (HRC)?
[A] Thanks for the reviewer’s comments. Accordingly, we have strengthened the description of HRC in the introduction as follows:
With the continuous development of humanoid robots, more and more of them are used in the service industry, especially the hospitality industry, and Human-Robot Interaction (HRI) has become a hot topic among researchers (Yang & Chew, 2020). However, as research has deepened, some researchers have found that when humans interact with humanoid service robots (HSRs), they expect the HSRs to have the ability and interest to engage with a partner's dynamic thoughts and enthusiasm, to recognize the environment, and to blend what others consider meaningful with the emotions needed to express sympathy (Yang & Chew, 2020). This coincides with Trevarthen's Companionship Theory (Trevarthen, 2001), so the concept of the human-robot companion (HRC) is proposed in this research. An earlier concept of the robot companion was mentioned by Dautenhahn et al. (2005): HSRs need a high degree of awareness of and sensitivity to the social environment. Based on the review of the above literature, we propose to establish an interactive and companion framework for HSRs using deep learning and neural image caption generation, thus advancing the current field of HSRs to tackle bionic-interactive tasks in the service industry and to evolve from conventional HRI to the Human-Robot Companion (HRC) (see Table 1).
Table 1. Scenario Based Comparison for HRI and HRC
Responses to Reviewer 3’s Comments:
We really appreciate your helpful comments. Here are our responses to your valuable comments. Below, [Q] denotes questions or comments from the reviewer and [A] denotes the corresponding answers.
[Q]Major concerns:
[Q] A major concern is about the statement of 'bionic-companionship'. As to me, deep learning or called deep neural learning is not a fully bionic pathway to Artificial Intelligence. Here the term 'bionic' may mislead authors. To my understanding, bionics, more or less, should draw some inspirations from biological systems for some embodiments in engineering systems and technology. I did not see very clear evidence for this point. How would your service robot behave, or interact with humans resembling a biological entity?
[A] Thanks for the reviewer’s comments. Accordingly, we have enhanced the explanation of the concept of bionics in the introduction as follows:
The bionics in this research refers to the humanoid service robot imitating all aspects of human life, trying to adapt to the seven emotions of ancient human nature (joy, anger, sadness, fear, love, disgust, liking) and the six biological wills (life, death, eyes, ears, mouth, nose) (Chew et al., 2021). The system proposed in this study combines visual intelligence and speech intelligence and imitates human behavior in social activities, which is in line with the concept of robot bionics proposed by researchers such as Chew et al. (2021). Therefore, this study considers the proposed system a bionic system.
[Q] Figure presentation needs to be SIGNIFICANTLY changed and improved. Even when I zoom in, I cannot see clearly the embedded information (Fig. 5). Could you also replace Fig. 3 with a more structural model illustration on NIC ? Fig. 1 could be separated into two sub-figures.
[A] Thanks for the reviewer’s comments. Accordingly, we have modified the figure presentation:
Fig 3. Neural image captioning model structure for HSR
Fig 5. The Infrastructure of the Humanoid Service Robot Generating Neural Image Captions as part of the Bionic Companionship Framework
[Q] Related work is insufficient. Considering convincing readers from wider communities, it is better to see some additional points:
(1) Research milestones in HSRs
(2) Differences (including SoTA systems, and challenges) between the research focusing on industrial robotics and HSRs which have been stressed in the Abstract
(3) Emergence of challenges and methods when bridging HRI to HSR
[A] Thanks for the reviewer’s comments. Accordingly, we have strengthened the related work section and explained the differences and challenges relative to SoTA systems, as follows.
With the continuous development of HRI research, industrial robots have become able to interact with humans accurately and self-adaptively. Advanced control systems (Zhang et al., 2020) and algorithms (Tang et al., 2020) have been proposed, giving industrial robots reliable support for completing interactive tasks in industrial environments. However, as HSRs began to enter the service industry, some research cases revealed remaining problems with the interaction of HSRs in social environments. Caleb-Solly (2018) believed that users can also help robots while robots help users; meanwhile, users can give feedback to optimize the system. The feedback reflects not only the optimization of the robot system but also the satisfaction of customers. Chung’s (2018) study indicated that hotels in the hospitality industry want to collect customer feedback in real time so they can immediately disseminate positive feedback and respond to unsatisfied customers while they are still on the scene. Guests want to report their experience without compromising their privacy. Stakeholders in the hospitality industry hope that intelligent robots can interact more with users. In addition, Rodriguez (2015) concluded that the optimal distance between users and robots is 69.58 cm. Specifically, interaction with a certain greeting mode can attract users to maintain a longer interaction time, and robots that actively search for users are more attractive to participants, yielding longer interaction times than passively searching robots. This suggests that robots should be designed to keep a certain distance from humans and should be given the ability to actively identify and attract customers.
Research suggests that the current interactive systems used by HSRs lack the ability to process and adapt to dynamic social environments. A dynamic social environment here means that the same human behavior or language often expresses different meanings in different social situations; for example, in different situations a handshake may call for two completely different interactive responses. Therefore, this research proposes the concept of HRC to develop a new interactive mode that solves the current problems faced by HRI in the hospitality industry. For a more detailed comparison of HRI and HRC, please refer to the video in the appendix link (https://youtu.be/fZmV4MKeYtQ).
[Q] Experiments - It is good to see the interesting, preliminary testing results in Fig. 6. However, how does this result reflect your statement of dealing with social dynamic environment?
[A] Thanks for the reviewer’s comments. Accordingly, we have enhanced the description of the experimental part and added a description of how the system addresses the dynamic social environment.
Following the discussion in the last chapter, this research integrates the NIC into the HSRs so that the HSRs can take advantage of changes in the surrounding environment to interact better with humans. Therefore, the system proposed by this research combines qualitative and quantitative analysis to initially validate its performance.
This study uses the cross-entropy loss curve of the last 50 epochs of the model as the evaluation metric for quantitative analysis. As shown in Fig. 6, the model finally converges to a minimum loss value of 2.65 on the training set and 2.71 on the validation set, indicating that the model exhibits neither over-fitting nor under-fitting and has generalization ability. Since the loss value is calculated as the sum, over each predicted word in the generated description, of the difference between the predicted probability and the true value, the loss value is affected by the sentence length of the predicted description. In related work, researchers (Li et al., 2020; Hu et al., 2020) used more reliable evaluation methods to assess model performance, including BLEU-4 (Papineni et al., 2002) and CIDEr (Vedantam et al., 2015). These evaluation metrics are usually used in the field of machine translation in place of manual evaluation. Since the task handled by the NIC model can be regarded as translating images/scenes into English, these metrics can also be applied to the evaluation of NIC. This study uses qualitative analysis in place of quantitative metrics such as BLEU-4 and CIDEr to further evaluate the preliminary performance of the HSR after integrating the NIC model.
Fig 6. Loss curve of NIC model on training set and validation set
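As a numeric illustration of the length dependence noted above: the summed cross-entropy adds one -log p term per predicted token, so a longer caption of the same per-token quality accrues a proportionally larger loss. The token probabilities below are invented for demonstration.

import numpy as np

def summed_cross_entropy(token_probs):
    """Sum of -log p over the predicted tokens of one caption."""
    return float(-np.sum(np.log(token_probs)))

short_caption = [0.4, 0.5, 0.3]                 # 3 predicted tokens
long_caption = [0.4, 0.5, 0.3, 0.4, 0.5, 0.3]   # 6 tokens, same quality

print(summed_cross_entropy(short_caption))  # ~2.81
print(summed_cross_entropy(long_caption))   # ~5.63, doubled by length alone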
As shown in Fig. 7 and Fig. 8, the researcher conducted two sets of tests in three different scenarios with the HSR. In the first set of tests, the researcher wore a hat and changed scenarios. In the second set, the researcher did not wear a hat, and the scenes were switched in the same way as in the first set. The experimental results show that the humanoid robot can perceive scene switching through this algorithm and generate a rough description of the scene. In the first set of tests, most of the described content was accurate: the robot equipped with the NIC algorithm could effectively identify “man”, “black shirt”, and “sitting on a bench”. However, in the second set of tests there were many errors in the recognition results, which could be attributed to the researcher’s long hair. Interestingly, a researcher with long hair is easily identified as a woman or a child. This indicates that the accuracy of the NIC algorithm still has room for improvement.
In addition, to test the performance of the system in a dynamic environment, the researcher conducted a test in a real environment (see Fig. 8). The researcher selected six real environments as test data and let the robot generate interactive information. Of the six real interactive environments, three scenes were recognized fairly accurately by the robot, which produced corresponding descriptions. The descriptions correspond to the test environment, and the corresponding parts are highlighted with the same color in Fig. 8. Some of the objects, facilities, and human movements in these scenes were accurately predicted, such as sidewalk, traffic, bench, and building. However, in the other three environments, the robot did not give an accurate description. The researchers believe this may be because the training set does not contain objects from these three environments, so the model failed to learn how to express the ‘unfamiliar environment’.
Fig 8. Real social environment test examples
In general, the results of the two experimental sets show that the robot equipped with the NIC algorithm can capture changes in the surrounding environment and generate different feedback according to those changes. The results also demonstrate the feasibility of the proposed bionic-companionship framework. Although there is still a gap between the prediction results of the algorithm and real communication scenes, the researcher believes that collecting data for specific interaction scenarios and training the model on those data can effectively address this gap. Future research directions will mainly focus on improving the accuracy of the algorithms and achieving more human-like interactions. (The detailed process is shown in the HSR-NIC demo video.) In addition, the researcher believes that scene understanding of static images is the basis for dealing with dynamic environments. Some studies have mentioned that introducing object detection algorithms into NIC enables the identification and description of scenes in dynamic environments. This is a current limitation of this research and a challenge to be faced in the future.
[Q] Clarifying the significance - I found some repetitions with the authors' recent conference paper. What is the main extension of this journal paper? Either deeper/more systematic investigation or new proposed method?
[A] The contributions of this research are summarized and highlighted in the article as follows:
(1) To solve the current problems of HSRs in the hospitality industry, a new interactive concept, HRC, is proposed.
(2) A novel bionic interaction framework is designed based on the proposed HRC.
(3) A system that can be used on HSRs is developed based on the bionic interaction framework, and the system has been tested and verified. The preliminary results prove that the system can enable HSRs to handle dynamic social environments.
[Q] Small recommendations:
1. Abstract:
(1) 'The problem and quality of human-robot interaction (HRI) has become a widely debated topic in academia.' - Could you extend it a bit more to say what exactly the major problem is?
(2) Could you emphasise more the main contribution DIFFERENTLY to previous works?
(3) 'changes in the interactive environment' - ambiguous, what kinds of changes?
[A] Thanks for the reviewer’s comments. Accordingly, we have enhanced the abstract as follows.
At present, industrial robotics focuses more on motion control and vision, whereas humanoid service robots (HSRs) are increasingly being investigated in the field of speech interaction. The problem and quality of human-robot interaction (HRI) has become a widely debated topic in academia. Especially when HSRs are applied in the hospitality industry, some researchers believe that the current HRI model is not well adapted to the complex social environment: HSRs generally lack the ability to accurately recognize human intentions and understand social scenarios. This study proposes a novel interactive framework suitable for HSRs, grounded in the novel integration of Trevarthen's (2001) companionship theory and the neural image captioning (NIC) generation algorithm. By integrating image-to-natural-language interactivity generation, the robot communicates with the environment to better interact with the stakeholder, thereby moving from mere interaction to bionic-companionship. In contrast to previous research, a novel interactive system is developed based on the bionic-companionship framework, and the humanoid service robot was integrated with this system to conduct preliminary tests. The results show that the interactive system based on the bionic-companionship framework can help the humanoid service robot respond effectively to changes in the interactive environment, for example, giving different responses to the same person in different scenes.
[Q] Introduction:
(1) 'high-risk service and interactive tasks' - for some examples?
(2) Maybe I missed some info, what is the 'HRC' herein short for?
(3) How do the authors define the 'social dynamic environment', empirically? or with any standard? This is important as to be addressed by the proposed system.
(4) I would like to see the organization of the rest of this paper at the end of Introduction.
[A] Thanks for the reviewer’s comments. Accordingly, we have enhanced the introduction as follows.
Humanoid service robots (HSRs) have seen a sharp rise in adoption recently and are regarded as one of the major technologies that will drive the service industries in the next decade (Harris et al., 2018). An increasing number of researchers are committed to investigating HSRs to help humans complete repetitive or high-risk service and interactive tasks, such as serving patients with infectious diseases and delivering meals. Delivery robots, concierge robots, and chat robots have been increasingly used by travel and hospitality companies (Ivanov, 2019). Although these achievements mainly stem from the rapid development of robotics engineering, Ivanov et al. (2019) indicated that the future research focus will gradually shift from robotics engineering to human-robot interaction (HRI), thus opening up interdisciplinary research directions.
In the early days, Fong et al. (2003) proposed that in order for robots to perform better, a robot needs to be able to use human skills (perception, cognition, etc.) and benefit from human advice and expertise. This means that robots relying solely on self-determination have limitations in performing tasks. The authors further proposed that collaborative work between humans and robots can break this constraint, and research on human-robot interaction began to emerge. Fong et al. (2003) believe that to build a collaborative control system and achieve human-robot interaction, four key problems must be solved: (1) The robot must be able to detect its limitations (what it can do and what humans can do), determine whether to seek help, and identify when problems need to be resolved. (2) The robot must be self-reliant and secure. (3) The system must support dialog; that is, robots and humans need to be able to communicate with each other effectively. However, dialog is currently restricted; through collaborative control, dialog should be two-way and requires a richer vocabulary. (4) The system must be adaptive. Although most current humanoid service robots already support dialog and can complete simple interactive tasks, as propounded in that research, such dialog remains limited and “inhuman.” In the process of interacting with robots, humans always determine the state of the robot (its position or the action it is performing) through vision, and then communicate with it through a dialog system. However, HSRs cannot yet do the same, as they do not fully satisfy the two-way nature of dialog. Therefore, this research responds to the current gap and attempts to differ from current HRI research by introducing deep learning into the existing dialog system of HSRs, thus advancing the field.
With the continuous development of humanoid robots, more and more of them are used in the service industry, especially the hospitality industry, and Human-Robot Interaction (HRI) has become a hot topic among researchers (Yang & Chew, 2020). However, as research has deepened, some researchers have found that when humans interact with humanoid service robots (HSRs), they expect the HSRs to have the ability and interest to engage with a partner's dynamic thoughts and enthusiasm, to recognize the environment, and to blend what others consider meaningful with the emotions needed to express sympathy (Yang & Chew, 2020). This coincides with Trevarthen's Companionship Theory (Trevarthen, 2001), so the concept of the human-robot companion (HRC) is proposed in this research. An earlier concept of the robot companion was mentioned by Dautenhahn et al. (2005): HSRs need a high degree of awareness of and sensitivity to the social environment. Based on the review of the above literature, we propose to establish an interactive and companion framework for HSRs using deep learning and neural image caption generation, thus advancing the current field of HSRs to tackle bionic-interactive tasks in the service industry and to evolve from conventional HRI to the Human-Robot Companion (HRC) (see Table 1).
This study proposes that introducing visual data into the current HRI model of HSRs enables HSRs to maintain a high level of sensitivity to the dynamic social environment while interacting with humans, thereby enhancing the current HRI model to HRC. With the continuous development of deep learning, some researchers have recently realized the transformation of static pictures or conventional camera video input into text descriptions (Li et al., 2020; Hu et al., 2020; Luo, 2019). This deep learning model is called neural image captioning (NIC). This research attempts to adapt and integrate NIC into HSRs and proposes a novel framework (the bionic-companionship framework) to enhance the traditional HRI experience. The framework aims to raise the current HRI interaction mode in the field of HSRs to the higher level of HRC (Yang & Chew, 2021). The bionics in this research refers to the humanoid service robot imitating all aspects of human life, trying to adapt to the seven emotions of ancient human nature (joy, anger, sadness, fear, love, disgust, liking) and the six biological wills (life, death, eyes, ears, mouth, nose) (Chew et al., 2021). The system proposed in this study combines visual intelligence and speech intelligence and imitates human behavior in social activities, which is in line with the concept of robot bionics proposed by researchers such as Chew et al. (2021). Therefore, this study considers the proposed system a bionic system.
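As a sketch of how such a NIC model could produce a description at inference time, the greedy decoding loop below predicts one word at a time and feeds each prediction back in. The tokenizer, the trained model, and the 'startseq'/'endseq' markers are assumptions about the implementation, not details confirmed in this study.

import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences

def generate_caption(model, tokenizer, photo_features, max_length):
    """Greedy decoding; photo_features is a (1, 2048) Xception vector."""
    caption = 'startseq'
    for _ in range(max_length):
        seq = tokenizer.texts_to_sequences([caption])[0]
        seq = pad_sequences([seq], maxlen=max_length)
        probs = model.predict([photo_features, seq], verbose=0)
        word = tokenizer.index_word.get(int(np.argmax(probs)))
        if word is None or word == 'endseq':
            break
        caption += ' ' + word
    return caption.replace('startseq ', '', 1)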
" | Here is a paper. Please give your review comments after reading it. |
209 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>In supervised machine learning, specifically in classification tasks, selecting and analyzing the feature vector to achieve better results is one of the most important tasks. Traditional methods such as comparing the features' cosine similarity and exploring the datasets manually to check which feature vector is suitable is relatively time consuming. Many classification tasks failed to achieve better classification results because of poor feature vector selection and sparseness of data. In this paper, we proposed a novel framework topic2features (T2F) to deal with short and sparse data using the topic distributions of hidden topics gathered from dataset and converting into feature vectors to build supervised classifier. For this we leveraged unsupervised topic modelling LDA (Latent dirichlet allocation) approach to retrieve the topic distributions employed in supervised learning algorithms. We make use of labelled data and topic distributions of hidden topics that were generated from that data. We explore how the representation based on topics affect the classification performance by applying supervised classification algorithms.</ns0:p><ns0:p>Additionally, we did careful evaluation on two types of datasets and compared them with baseline approaches without topic distributions and other comparable methods. The results show that our framework performs significantly better in terms of classification performance compared to the baseline(without T2F) approaches and also yields improvement in terms of F1 score compared to other compared approaches.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Learning to classify the short text, social media data, and large web collections have been extensively studied in the past decade. Many text classification methods with different set of features have been developed to improve the performance of classifiers and achieved satisfactory results <ns0:ref type='bibr'>( Škrlj et al., 2021)</ns0:ref>.</ns0:p><ns0:p>With rapid growth of online businesses, communication and publishing applications. Textual data available in variety of forms, such as customer reviews, movie reviews, chats, and news feeds etc. Dissimilar from normal documents, these type of texts have noisy data, much shorter, and consists of few sentences, so it poses a lot of challenges in classifying and clustering. Text classification methods typically fail to achieve desire performance due to sparseness in data. Generally, text classification is a task to classify the document into one or more categories based on content and some features <ns0:ref type='bibr' target='#b7'>(Dilawar et al., 2018)</ns0:ref>.</ns0:p><ns0:p>Given a set of documents, a classifier is expected to learn a pattern of words that are appeared in the documents to classify the document into categories. Many deep learning techniques achieve the state of art results and have become a norm in text classification tasks <ns0:ref type='bibr' target='#b5'>(Devlin et al., 2018)</ns0:ref>, showing good results on variety of tasks including the classification of social media data <ns0:ref type='bibr' target='#b46'>(Tomašev et al., 2015)</ns0:ref> and news data categorization <ns0:ref type='bibr' target='#b22'>(Kusner et al., 2015)</ns0:ref>. In spite of achieving the satisfactory results on various classification tasks, deep learning is not yet optimized for different context such as where the number of documents in the training data is low, or document contains very short and noisy text <ns0:ref type='bibr' target='#b40'>(Rangel et al., 2016)</ns0:ref>. In order to classify the data, we need different set of features along with the data so that better PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59166:1:1:NEW 8 Jun 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science classification performance can be achieved. And for the classification to be successful, enough data with different features must be available to train a successful classifier <ns0:ref type='bibr' target='#b39'>(Pavlinek and Podgorelec, 2017)</ns0:ref>. Large datasets with multiple features and labeled data do not just assure better generalization of algorithm, but also provide satisfactory performance. However, in reality we do not have a large number of features along with the content, and sometimes we also have few labelled instances. This norm is typical in many fields such as speech recognition, classical text mining, social media data classification <ns0:ref type='bibr' target='#b10'>(Fiok et al., 2021)</ns0:ref>. Of course we can do feature engineering and labeling manually but labeling is considered to be difficult and time consuming and selection of features is unavailable when you do not have a lot of features associated with datasets <ns0:ref type='bibr' target='#b32'>(Meng et al., 2020)</ns0:ref>. Many semi supervised learning methods of text classification are based on less labelled data and important feature selection and focus on similarities between dependent and independent variables. 
Since many methods are based on analyzing the similarity measures of label and unlabeled data, the representation of content and its features is important <ns0:ref type='bibr' target='#b39'>(Pavlinek and Podgorelec, 2017)</ns0:ref>. As a matter of fact, the representation of unstructured content and features is more important than choosing the right machine learning algorithm <ns0:ref type='bibr' target='#b21'>(Kurnia et al., 2020)</ns0:ref>. While you can represent the structured content in a uniform way with feature vectors, unstructured content you can represent in various ways. In text classification, some researchers leveraged vector space models representation, where features are based on words as independent units and values extracted from different vector weighting schemes such as term frequency-inverse document frequency (TF-IDF) <ns0:ref type='bibr' target='#b39'>(Pavlinek and Podgorelec, 2017)</ns0:ref>. But in these representations word orders and semantic meanings are ignored <ns0:ref type='bibr' target='#b45'>(Sriurai, 2011)</ns0:ref> that ultimately impact the classifier performance. In addition these word vectors are sparse and high dimensional, so it is impossible to use just any machine learning algorithm on them seamlessly <ns0:ref type='bibr' target='#b1'>(Andoni et al., 2018)</ns0:ref>. For features vector representations, different techniques, such as most common ones are TF-IDF, bag of words and word embeddings are utilized to fine tune their classifiers, but sparseness still remains in the representation. In this situation we can use topic models. When we have low number of features, topic models consider context and compact the representation of content <ns0:ref type='bibr' target='#b4'>(Colace et al., 2014)</ns0:ref>.</ns0:p><ns0:p>In this way we can represent each document in latent topic distribution space instead of word space or document space. So inspired by the idea of, contexts in which we do not have many features, in which we have sparseness and noisiness in data and also the semi supervised approaches in which we have less labelled data, we present a novel framework for text classification of various datasets of relatively same nature with hidden topics distributions retrieved from those datasets that can deal successfully with large, short, sparse and noisy social media and customer reviews datasets. The underlying approach is that we have collected datasets of different natures and then trained a classification algorithm based on labeled training dataset and discovered topics retrieved from those datasets. The framework mainly based on combining the unsupervised LDA topic modelling approach and powerful machine learning text classification classifiers such as MaxEnt (MaxEntropy) and SVM (Support Vector machine). This research has the following contributions:</ns0:p><ns0:p>1. We propose a novel T2F model that leverages LDA topics distributions to represent features instead of using traditional features to build classifiers. The proposed model represents features in a way to capture the context of data.</ns0:p><ns0:p>2. We have reached the promising results and give the new way to solve the feature selection problem in order to achieve best classification results.</ns0:p><ns0:p>3. Each and every aspect of model variation with different parameters analyzed in results and discussion.</ns0:p></ns0:div>
<ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>Different studies applied various feature engineering techniques to improve the performance of classifiers, such as researchers in <ns0:ref type='bibr' target='#b29'>(Masood and Abbasi, 2021)</ns0:ref> used graph embeddings to classify the twitter users into different categories of rebel users, while researchers in <ns0:ref type='bibr' target='#b13'>(Go et al., 2009)</ns0:ref> used emoticons along with pos, unigram and bigram features classify the tweet sentiments and in another research work researchers compute the emojis sentiments <ns0:ref type='bibr' target='#b19'>(Kralj Novak et al., 2015)</ns0:ref>. While when you see the famous topic modelling technique people has leveraged this for variety of tasks such as event detection during disasters <ns0:ref type='bibr' target='#b43'>(Sokolova et al., 2016)</ns0:ref>. To extract high quality topics from short and sparse text, researchers proposed VAETM (Variational auto encoder topic model) approach <ns0:ref type='bibr' target='#b52'>(Zhao et al., 2021)</ns0:ref> by combining the word vectors and entity vectors representations. Researchers in <ns0:ref type='bibr' target='#b51'>(Yun and Geum, 2020)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>representation as an input to support vector machine classifier to classify the patents. Most of the time, topic modelling is mostly used for the purpose of extracting topics and analyzing those topics to aid the organization in decision making <ns0:ref type='bibr' target='#b34'>(Mutanga and Abayomi, 2020)</ns0:ref>. Recent study by <ns0:ref type='bibr' target='#b27'>(Liu et al., 2020)</ns0:ref> explored the topic embeddings generated from LDA to classify the email data, specifically they improved the email text classification with LDA topic modelling by converting email text into topic features. In the medical domain, researchers in <ns0:ref type='bibr' target='#b44'>(Spina et al., 2021)</ns0:ref> proposed a method that extract nigh time features from multisensory data by using LDA and classify COPD(chronic obstructive and pulmonary disease) disease patients, they regard LDA topic distributions as powerful predictors in classifying the data. In another approach, researchers <ns0:ref type='bibr' target='#b26'>(Li and Suzuki, 2021)</ns0:ref> used LDA based topic modelling document representation to fine grained the word sense disambiguation, they proposed a Bag of sense model in which a document is a multiset of word senses and LDA topics word distributions mapped into senses. Using the text summarization techniques to label the topics generated from LDA topic distributions is also one of the attempts made by researchers <ns0:ref type='bibr' target='#b3'>(Cano Basave et al., 2014)</ns0:ref>. Recent work has applied summarization methods to generate topic labels. <ns0:ref type='bibr' target='#b48'>(Wan and Wang, 2016)</ns0:ref> Proposed a novel method for topic labeling that runs summarization algorithms over documents relating to a topic. Four summarization algorithms are tested: TopicLexRank, MEAD, Submodular and Summary label. Some various vector based methods have been also applied to label the topics. Researchers in <ns0:ref type='bibr' target='#b0'>(Alokaili et al., 2020)</ns0:ref> developed a tool to measure the semantic distance between phrase and a topic model. They proposed a sequence to sequence neural based approach to name topics using distant supervision. It represents phrase labels as word distributions and approaches the label problem as an optimization problem. Recent studies have shown that similarity measures of features are more efficient when based on topic models techniques than they are based on bag of words and TF-IDF <ns0:ref type='bibr' target='#b50'>(Xie and Xing, 2013)</ns0:ref>. In this context, the semantic similarity between two documents was also investigated <ns0:ref type='bibr' target='#b36'>(Niraula et al., 2013)</ns0:ref>. The most related work to our context is probably the use of topic modelling features to improve the word sense disambiguation by <ns0:ref type='bibr' target='#b26'>(Li and Suzuki, 2021)</ns0:ref> and also the work in <ns0:ref type='bibr' target='#b39'>(Pavlinek and Podgorelec, 2017)</ns0:ref> in which they present features representation with semi supervised approach using self-training learning. As our ultimate motivation is to classify the text with good performance, so for classification of text a lot of methods and frameworks have been developed.</ns0:p><ns0:p>If we look at aspect of feature engineering techniques, then there are a lot of mechanism used in different studies to tune the feature for better text classification. 
In this way, Researchers in <ns0:ref type='bibr' target='#b35'>(Nam et al., 2015)</ns0:ref> used the social media hashtags for sentiment classification of texts, they collected the data with the hashtags and make use of hashtags to classify the sentiments in positive and negative categories. Before the topic modelling techniques, graph embeddings were also used with n gram features to better classify the text, <ns0:ref type='bibr' target='#b41'>(Rousseau et al., 2015)</ns0:ref> analyzed the text categorization problem as graph classification problem, and they represent the textual documents as graph of words. They used a combination of n grams and graph word representation to increase the performance of text classifiers. Researchers in <ns0:ref type='bibr' target='#b28'>(Luo, 2021)</ns0:ref> leveraged the word frequency, question marks, full stops, initial word and final word of document. While use of word taxonomies as means for constructing new semantic features that may improve the performance of the learned classifiers was explored by <ns0:ref type='bibr'>( Škrlj et al., 2021)</ns0:ref>. In text mining, <ns0:ref type='bibr' target='#b9'>(Elhadad et al., 2018)</ns0:ref> present an ontology-based web document classifier, while <ns0:ref type='bibr' target='#b17'>(Kim et al., 2018)</ns0:ref> propose a clustering-based algorithm for document classification that also benefits from knowledge stored in the underlying ontologies.</ns0:p></ns0:div>
<ns0:div><ns0:head>TOPIC MODELLING</ns0:head><ns0:p>Latent Dirichlet Allocation (LDA) first introduced by <ns0:ref type='bibr' target='#b2'>(Blei et al., 2003)</ns0:ref>, it is a probabilistic generative model that can be used to estimate the multinomial observations by unsupervised learning approach. With respect to model the topics, it is a method to perform latent semantic analysis (LSA). The main idea behind LSA is to extract latent structure of topics or concepts from the given documents. Actually the term latent semantic coined by <ns0:ref type='bibr' target='#b18'>(Kim et al., 2020)</ns0:ref> who showed that the co-occurrence of words in the documents can be used to show the semantic structure of document and ultimately find the concept or topic. With LDA each document is represented as multinomial distribution of topics where topic can be seen as high level concepts to documents. The assumption on which it is based is that document is a collection created from topics, where each topic is presented with mixture of words.</ns0:p></ns0:div>
<ns0:div><ns0:head>3/26</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:03:59166:1:1:NEW 8 Jun 2021)</ns0:p><ns0:p>Manuscript to be reviewed For each document sample mεM topic proportions θ m from the alpha dirichlet distribution, Then for each word placeholder n in the document m, we:</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>1. We randomly Choose a topic Z m ,n in accordance with proportions of sample topic 2. We randomly choose a word W m ,n from the set of multinomial distributions φ k of already chosen topic.</ns0:p><ns0:p>In the generalization process of LDA, the α and β are hyper vector parameters that determine the dirichlet prior on θ m is a collection of topic distributions for all the documents and on parameter φ , they determine the word distributions per topic <ns0:ref type='bibr' target='#b39'>(Pavlinek and Podgorelec, 2017)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Parameters and variables:</ns0:head><ns0:p>M: total no of documents N: total no of words K: is number of topics φ k :word distributions of topic K Z m ,n: a document topic over words W m ,n: topic words of specific document α: Hyper vector parameter β : hyper vector parameter θ m: topic distribution of document Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The figure <ns0:ref type='figure'>2</ns0:ref> shows the abstract model, that depicts the generic framework, and the detailed framework in the figure <ns0:ref type='figure' target='#fig_1'>3</ns0:ref> depicted that we aim to build and train text classifiers with the use of hidden semantic topics.</ns0:p><ns0:p>The framework consist of following tasks:</ns0:p><ns0:p>1. Collect the appropriate dataset from any domain, we choose amazon product reviews dataset and social media dataset of different disasters.</ns0:p><ns0:p>2. Apply the LDA topic modelling with different parameters on dataset and generate the hidden semantic topics with weights and select the appropriate LDA model.</ns0:p><ns0:p>3. Create the topic distributions for every review/tweet/document using the LDA model and convert into feature vectors to feed in supervised algorithms.</ns0:p><ns0:p>4. Build supervised learning classifier and get F1, Accuracy, Precision and Recall score to check the classification performance of proposed model.</ns0:p><ns0:p>5. Also did experiment on unseen data, by applying the LDA topic distributions of current data and investigate if it generalizes.</ns0:p><ns0:p>The first step is more important choosing the appropriate dataset, the dataset must be large enough and rich enough to cover variety of topics that suited to classification problems. This means that nature of data should be discriminative enough to be observed by humans. The second step explains that we apply the famous topic modelling approach such as LDA (latent dirichlet allocation) for creating topics from datasets. There are a lot of topic modelling approaches for topic modelling such as pLSA or LDA.</ns0:p><ns0:p>We choose LDA because it has more concrete document generation. LDA briefly discussed in LDA topic modelling section. 3) As topic modelling gives the number of topics per documents, we developed different LDA models with different setting such as with 10, 15, 20 topics also with the lemmatized data and using bigrams and trigrams, and also with different iterations. We observed the topic distributions outputs were impressive and satisfy our supervised learning classifiers, then we grab the topic distributions.</ns0:p><ns0:p>4) We build the classifiers by using the topic distributions as feature vectors, we choose supervised learning classifiers such as MaxEntropy, Max entropy with stochastic gradient descent (sgd) optimizer and mostly used support vector machine (SVM). 5) This is an additional step to test the framework on unseen data, we run the classifiers on unseen data by creating topic distributions from current LDA model and see if it generalizes or not. The extensive detail of each step will be discuss further in relevant section. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>Datasets</ns0:head><ns0:p>Selecting the appropriate dataset is more important because the topics generated from these datasets directly impacts the classifier results and performance of classifiers. Therefore to make these things in mind, we choose two large datasets of various nature. One dataset is amazon review datasets about products and people sentiments about the products. The total reviews were data span a period of more than 10 years, including all 500,000 reviews up to October 2012 <ns0:ref type='bibr' target='#b30'>(McAuley and Leskovec, 2013)</ns0:ref>. Another dataset was actually a collection various disaster related social media datasets, and we collected it from <ns0:ref type='bibr' target='#b38'>(Olteanu et al., 2014)</ns0:ref>, <ns0:ref type='bibr' target='#b15'>(Imran et al., 2013)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>All the pre-processing steps are shown in figure <ns0:ref type='figure'>4</ns0:ref>, like removing punctuations, transforming to lowercase letters, and make into lists, the detail of remaining steps is in following section.</ns0:p></ns0:div>
<ns0:div><ns0:head>Tokenization and lemmatization</ns0:head><ns0:p>Tokenization is the process of breaking the document or tweets into words called tokens. A token is basically an individual part of sentence having some semantic values. Like Sentence 'hurricane is coming' would be tokenized into 'hurricane', 'is', 'coming'. We have utilized the Spacy function with core English language model for tokenization and lemmatization 1 . The beauty of this Spacy function is that it gives you part of speech detail of every sentence, and you can chose from that which part of speech you need for further processing in the specific context. Spacy is capable enough to also give sentence dependencies in case you need them while performing graph embedding's. After tokenization, we need to see which part of sentence we need and also need to extract the words into their original forms. Both lemmatization process and stemming process use for this purpose. Many typical text classification techniques uses stemming with the help of port stemmer, and snowball stemmer, such as words 'compute', 'computer', 'computing', 'computed' would be reduced into word 'comput', a little draw back with stemming is that it reduces the word into its root form without looking into it the word is found in dictionary of that specific language or not, as you can see 'comput' is not a dictionary word. There comes the lemmatization, with Spacy we performed the lemmatization, lemmatization also reduced the word into its root form but by keeping mind the dictionary database. With lemmatization the above examples of words <ns0:ref type='bibr'>('compute','computer','computed','computing')</ns0:ref> would be reduced to root form as <ns0:ref type='bibr'>('compute','computer','computed','computing')</ns0:ref> respectively by keeping in mind the dictionary.</ns0:p><ns0:p>While implementing the Lemmatization part we keep the sentence with words having only the 'nouns', 'adjectives', and 'verbs' , that is useful if you need to be more specified about the LDA topics and in this way your topic distributions make more sense.</ns0:p></ns0:div>
<ns0:div><ns0:head>Bigrams and trigrams</ns0:head><ns0:p>Sometime in large and sparse texts we see the nouns or adjectives that make of multiple words, so to make the semantic context of words into sentences we need bigrams and even trigrams so that it will not break into single separate unigram tokens and lost the meaning and semantic of sentence. Bigrams is an approach to make words that are of two tokens to remains into its semantic shape so that sentence contextual meaning would not be lost. We achieved this through Genism's phrases class 2 that allows us to group semantically related phrases into one token for LDA implementation. Such as ice cream, new york listed as single tokens. The output of genism's phrases bigram mod class is a list of lists where each one represents reviews, documents or tweets and strings in each list is a mix of unigrams and bigrams. In the same way, for the sake of uniformity of three phrases words tokens, we applied trigrams through genism's phrases class to group semantically related phrases into single tokens for LDA implementation. This normally mostly applies on country names such as united states of america, or people's republic of china, etc. The output of genism's phrases trigrams is a list of list where each list represents review, document or tweets and strings in the list is a mix of, unigrams, bigrams and trigrams.</ns0:p><ns0:p>To make the LDA model more comprehensive and specific we applied the bigrams and trigrams.</ns0:p></ns0:div>
<ns0:div><ns0:head>Sparse vector for LDA model</ns0:head><ns0:p>Once you have the list of lists of different bigrams, and trigrams then after that you pass into genism's dictionary class. This will give the representation in the form of word frequency count of each word in strings in the list. As Genism's LDA implementation needs text as sparse vector for LDA model. We have used the Genism's library doc2bow 2 simply counts the occurrences of each word in documents and create and return the spare vectors of our text reviews to feed into LDA model. The sparse vector [(0, 1),</ns0:p><ns0:p>(1, 1)] therefore reads: in the document 'Human computer interaction', the words computer (id 0) and human (id 1) appear once; the other ten dictionary words appear (implicitly) zero times 2 .</ns0:p></ns0:div>
<ns0:div><ns0:head>LDA Model</ns0:head></ns0:div>
<ns0:div><ns0:head>Apply LDA Model</ns0:head><ns0:p>To apply the LDA model there is a specific representation of the content that we need in the form of corpus and along with the corpus we need the dictionary that assist that corpus. For different LDA models, we Manuscript to be reviewed</ns0:p><ns0:p>Computer Science create different type of corpus, with unigrams, bigrams and trigrams. LDA model is specifically described in detail in figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>LDA Model selection</ns0:head><ns0:p>LDA model selection was most difficult task, as it can ultimately impacts the results of supervised classifiers. So to choose the best LDA model with how many number of topics in the model was time consuming task. To find the exact number of topics for suited for better LDA model was the main focus of previous studies <ns0:ref type='bibr' target='#b14'>(Greene et al., 2014)</ns0:ref>. The first technique was manual, that is choose the different number of range of topics and check and investigate the results if it makes any sense. The second one was analyzing the coherence score metrics of LDA models more coherence increase means better model.</ns0:p><ns0:p>Then we also explored the models by giving number of various topics and every topic distributions results</ns0:p><ns0:p>and vectors feed into supervised algorithms and check which one gives the better results in terms of F1 score. Approach one was very time consuming, the second one was to see the coherence score but that just check the topic identifications has not a large impact on supervised algorithms results, the third approach seems suitable in our context but our main purpose was to classify the documents/reviews with best results. Genism's also provides Hierarchical dirichlet process (HDP) 3 class that used to seek the correct number of topics for different type of datasets. You do not need to type the number of topics in HDP class and automatically seeks the no of topics based on data. You just need to run this for few times, and if it gives you the same results with same number of topics again and again then those number of topics are perfect learning topics for your type of data.</ns0:p></ns0:div>
<ns0:div><ns0:head>Hierarchical dirichlet process</ns0:head><ns0:p>According to Genism's documents, hierarchical dirichlet process (HDP) based on stick breaking construction that is an analogy used in Chinese restaurant process 3 . For example, in the figure <ns0:ref type='figure' target='#fig_2'>5</ns0:ref>, we need to assign 8 to any of the topic C1, C2, C3. There is a 3/8 probability that 8 will land in topic C1 topic, 4/8 probability that 8 will land in C2 topic, there is 1/3 probability that 8 will land in topic C3. HDP discovered the topics in the coherent way, like bigger the cluster is the more likely for the word to join that cluster of topics. It is good way to choose a fixed set of topics for LDA model <ns0:ref type='bibr' target='#b49'>(Wang et al., 2011)</ns0:ref>.</ns0:p><ns0:p>While implementing HDP on our datasets, we also test our third approach as well which was manually give the topic number and check the classifier results, to ensure the consistency we built around 15 LDA models with different parameters, and compared with HDP results and choose the best ones, that has high influence on classifiers results and that gives best classifier results. In the end, the best LDA models were with lemmatized texts (with nouns, adjectives and verbs), with 100 iterations and with 10, 15 and 20 topics. ), and new customer 8 needs to be assigned to any of the table, so 3/8 probability that customer will be assign to C1, 4/8 probability 8 will be assign to C2 and 1/8 probability that customer will be assign to C3.</ns0:p></ns0:div>
<ns0:div><ns0:head>Train Classifiers with topic distributions</ns0:head><ns0:p>Train the classifiers with topic distributions contains these steps: first, we choose the text classification algorithm from different learning methods; second we incorporate the topic distributions with some manu- Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>ally engineered features into the train data, test data and future unseen data with specified representation that classifier needed. Then in the end, we train the classifier and get the F1 measure scores.</ns0:p></ns0:div>
<ns0:div><ns0:head>Choose classical text classification Learning Methods</ns0:head><ns0:p>We have chosen the logistic regression classifier aka MaxEnt (MaxEntropy) and Support vector machines to evaluate our framework. The reason for choosing these classifiers are that our implementation of topic distribution works with data represented as dense or sparse arrays of floating point values for the feature vectors. So these models fit this type of implementation context and can handle the sparse and dense type of feature arrays with floating values. Also MaxEnt makes no independence assumptions for its features like unigrams, bigrams and trigrams. It implies that we can add features and phrases to MaxEnt without the fear of feature overlapping <ns0:ref type='bibr' target='#b13'>(Go et al., 2009)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Integrate topic distributions into dataset</ns0:head><ns0:p>After implementing the LDA model on data and getting topics from the data, we created the topic distributions and incorporate the topic distribution, original document and one manually coded feature which is frequency of document, into the classifier in a way that resulting vector representation would be according to the machine learning classifier format. This algorithm consists of two main components, first it creates the topic distribution in the form of probability and second one is convert those topic probability distributions along with length of each document to create topic oriented feature embeddings. As presented in algorithm we intended to learn topic based feature embeddings to be used in classifiers. We begin by creating topic distributions for all the documents in the dataset, which is nothing but a word distributions along with their weights, subsequently we convert these topics into format to create a feature vectors that ultimately used in classifiers. For this it utilized get document topic function to be applied on extracted topic words (line 7), that gives output in the form of integer and float values of each topic, after that algorithm learns the topic distributions float values and mapped into feature vectors embeddings to be used in classifier (line 9 to 11).</ns0:p><ns0:p>After we doing topic inference through LDA, we will integrate the topic distributions tdm = tdm, 1. . . .., tdm,2. . . .., tdm,k and original document di= dim,1 . . . . dim,2. . . .. , dim,n in order that resulting vector is suitable for the chosen learning technique. Because our classifier only can take discrete feature attributes, so we need to convert our topic distributions into the form of discrete values. Here we describe how we integrate the topic distribution into documents to get resultant feature vector to be used into classifiers.</ns0:p><ns0:p>Because our classifiers requires discrete feature attributes so it is necessary to discretize probability values in tdm to obtain topic names. The topic name appears once or several times depending on its probability.</ns0:p><ns0:p>For example, topic with probability 0.016671937 appear 6 times will be denoted as 0.016671937:6. Here <ns0:ref type='bibr'>tdm = tdm, 1. . . .., tdm,2. . . .., tdm,k;</ns0:ref> where tdm= topic distribution of a single document di= dim,1 . . . . dim,2. . . .. , dim,n; where di= documents/reviews/tweets from dataset Where each Topic distribution (t dm ,k) is computed as follows:</ns0:p><ns0:formula xml:id='formula_0'>t dm k = n k m ∑ k j=1 n j m (1)</ns0:formula><ns0:p>where We built multiple LDA with bigrams, trigrams, with different range of topics, and ultimately we estimated the best LDA model with best hyper-parameters settings and that yields better results when feed into supervised algorithms. The best one is 20 topics, with 100 iterations, and with bigrams. For this, we use the LDA get document topic function from Genism's library 4 on our topic distributions and get the topic distributions in the form of discretized probability values. Extracted LDA topics makes the data more related, these are nothing but the probability distributions of words from documents, we built multiple topic models with various settings. 
We are especially interested in whether the hidden semantic topic structure can be converted into features for a supervised algorithm and whether it improves the performance of supervised classification.</ns0:p></ns0:div>
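To make the feature-construction step concrete, the following is a minimal Python sketch of the topic-distribution embedding described above, written against Gensim's get_document_topics API; the names lda_model, dictionary, and docs are illustrative assumptions, not the authors' exact code.

import numpy as np

def topic_feature_vectors(lda_model, dictionary, docs, num_topics):
    # Map each tokenized document to [p(topic_1), ..., p(topic_K), doc_length].
    features = []
    for tokens in docs:
        bow = dictionary.doc2bow(tokens)
        vec = np.zeros(num_topics + 1)
        # get_document_topics yields (topic_id, probability) pairs per document
        for topic_id, prob in lda_model.get_document_topics(bow, minimum_probability=0.0):
            vec[topic_id] = prob
        vec[-1] = len(tokens)  # manually coded feature: document length
        features.append(vec)
    return np.vstack(features)

The resulting float matrix can then be fed directly to the MaxEnt and SVM classifiers described below.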
<ns0:div><ns0:head>Train classifiers</ns0:head><ns0:p>We trained support vector machine and MaxEnt classifiers with stochastic gradient descent (SGD) optimization, which gives good results in terms of both speed and performance. We found that the Amazon review dataset has a disproportionate ratio of classes, so on that dataset we set the class_weight parameter to 'balanced' to handle the class imbalance; this reweights classes inversely to their frequencies, approximating the effect of resampling. Apart from this, all classifiers were applied with the same parameters. One point worth noting: while implementing these classifiers we noticed the modified Huber loss option in the stochastic gradient implementation; this loss is more tolerant to outliers and also yields probability estimates 5 . We therefore used it to reduce misclassifications caused by outliers.</ns0:p></ns0:div>
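As a rough illustration of these settings, a scikit-learn sketch might look as follows; the hyper-parameter values are assumptions based on the text, not the authors' exact configuration.

from sklearn.linear_model import SGDClassifier
from sklearn.svm import LinearSVC

# MaxEnt-style linear classifier trained with SGD; the modified Huber loss
# mentioned above is tolerant to outliers and supports probability estimates.
maxent_sgd = SGDClassifier(loss='modified_huber', class_weight='balanced',
                           max_iter=1000, random_state=42)

# Linear SVM; class_weight='balanced' reweights classes inversely to their
# frequencies to counter the class imbalance in the Amazon dataset.
svm = LinearSVC(class_weight='balanced', random_state=42)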
<ns0:div><ns0:head>EVALUATION</ns0:head><ns0:p>To verify our proposed framework we performed two classification tasks on datasets of two different natures. The first task was to classify Amazon reviews into sentiment categories; the second was to classify social media data into categories of relatedness (on topic, off topic, relevant, irrelevant) during natural crises and disasters. Tweet documents are very short in comparison to Amazon reviews; each includes a tweet id, tweet text, tweet time, and label. Amazon reviews are somewhat longer: each contains several sentences and describes particular features of various products. Both datasets are sparse, short-text, noisy, and hard enough to verify our framework.</ns0:p></ns0:div>
<ns0:div><ns0:head>Evaluation measures</ns0:head><ns0:p>Classification performance is typically measured with accuracy, F1, precision, and recall. Accuracy measures the proportion of correctly classified instances across all categories. Precision measures how many of the predicted positives are truly positive, recall measures how many of the actual positives are correctly identified, and F1 is the harmonic mean of precision and recall (Elhadad et al., 2018) 6 . To evaluate the performance of our proposed framework we used F1, precision, and recall scores because, during preprocessing, we found that our datasets have imbalanced classes, and the F1 score is a better metric for datasets with imbalanced classes. Besides F1, precision, and recall, we also measured the statistical significance of our model: we ran k-fold cross-validation tests on our proposed model and on the baseline approaches, determined the classification accuracy of each fold, and compared the average classification accuracy of our proposal against the baselines to check the effectiveness of our model. The major advantage of k-fold cross-validation is that every observation in the data has a chance of appearing in both the training and testing sets. The higher the mean performance, the better the model; hence mean accuracy under k-fold cross-validation and average F1 score are our dominant evaluation measures.</ns0:p></ns0:div>
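A sketch of this evaluation protocol in scikit-learn, reusing the maxent_sgd sketch above and assuming X is the topic-feature matrix and y the labels (both hypothetical names):

from sklearn.model_selection import cross_val_score

# 5-fold cross-validation: every observation appears in training and testing
f1_scores = cross_val_score(maxent_sgd, X, y, cv=5, scoring='f1_macro')
accuracies = cross_val_score(maxent_sgd, X, y, cv=5, scoring='accuracy')
print('mean F1: %.3f, mean accuracy: %.3f' % (f1_scores.mean(), accuracies.mean()))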
<ns0:div><ns0:head>Statistical Validity test</ns0:head><ns0:p>To establish the statistical validity of our model and compare it with the baselines for any significant difference in performance, we ran the 5x2cv paired t-test on our datasets. Although many statistical tests exist, we chose this one because it is a paired test: in machine learning, this means the baseline and the trained model are evaluated on the same test data, which holds in our context because we used the same Amazon and social media datasets for the baseline and the proposed model. As its name implies, the test splits the dataset into two parts (50% training and 50% testing) and repeats the splitting 5 times <ns0:ref type='bibr' target='#b6'>(Dietterich, 1998)</ns0:ref>. In each of the 5 iterations, we fit classifiers A and B on the training split and evaluate their performance (p_A and p_B) on the test split; the train and test halves are then swapped and performance is computed again, which yields two performance differences:</ns0:p><ns0:formula xml:id='formula_2'>p^{(1)} = p_A^{(1)} - p_B^{(1)} (2)</ns0:formula><ns0:formula xml:id='formula_4'>p^{(2)} = p_A^{(2)} - p_B^{(2)} (3)</ns0:formula><ns0:p>The mean of the differences is then estimated as:</ns0:p><ns0:formula xml:id='formula_5'>\bar{p} = \frac{p^{(1)} + p^{(2)}}{2} (4)</ns0:formula><ns0:p>and the variance as:</ns0:p><ns0:formula xml:id='formula_6'>s^2 = (p^{(1)} - \bar{p})^2 + (p^{(2)} - \bar{p})^2 (5)</ns0:formula><ns0:p>The t-statistic for this test is computed as:</ns0:p><ns0:formula xml:id='formula_7'>t = \frac{p_1^{(1)}}{\sqrt{\frac{1}{5}\sum_{i=1}^{5} s_i^2}} (6)</ns0:formula><ns0:p>where p_1^{(1)} is the p^{(1)} from the very first iteration and s_i^2 is the variance estimate from iteration i.</ns0:p></ns0:div>
<ns0:div><ns0:p>Our threshold significance level is α = 0.05 for rejecting the null hypothesis that both classifiers have the same performance on the dataset. Under the null hypothesis, the t-statistic approximately follows a t-distribution with 5 degrees of freedom, so its value should remain within the confidence interval, which is 2.571 at the 5% threshold; a value inside this interval indicates that both classifiers perform equally, and if the t-statistic exceeds it we can reject the null hypothesis. The 5x2-fold cv paired t-test can be implemented from scratch, but the MLxtend package implements it and returns the t-value and p-value for two models 7 ; in its parameters we passed the model names, with mean accuracy as the scoring mode.</ns0:p></ns0:div>
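A sketch of this MLxtend call, with clf_baseline and clf_t2f as assumed names for the two model configurations being compared:

from mlxtend.evaluate import paired_ttest_5x2cv

t, p = paired_ttest_5x2cv(estimator1=clf_baseline, estimator2=clf_t2f,
                          X=X, y=y, scoring='accuracy', random_seed=1)
# reject the null hypothesis of equal performance when p < 0.05
if p < 0.05:
    print('t = %.3f, p = %.4f: performances differ significantly' % (t, p))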
<ns0:div><ns0:head>Amazon reviews sentiment classification</ns0:head><ns0:p>Sentiment analysis is a typical classification problem used in various ways; some researchers apply sentiment analysis to movie reviews <ns0:ref type='bibr' target='#b42'>(Shen et al., 2020)</ns0:ref>, and many deep learning and natural language processing techniques have been proposed for it <ns0:ref type='bibr' target='#b47'>(Ullah et al., 2020)</ns0:ref>. For sentiment classification in this article we considered a public dataset collected from the Kaggle data repository. The dataset, already described in the dataset section, is a collection of customer reviews about Amazon products. The reviews are assigned to two categories, positive and negative, and each review is classified into one of the two classes. We used 1/5 of the total reviews as test data and the remainder as training data. We retrieved bigrams, trigrams, and lemmatized text from the dataset and applied the LDA model with different parameters: 10, 15, and 20 topics. Note that we lemmatized the data and kept only nouns, adjectives, and verbs, so as to capture the actual contextual meaning of the reviews before applying the LDA model. After the lemmatization process, 378,123 reviews remain. As seen in figure <ns0:ref type='figure'>6</ns0:ref>, the two algorithms with bigram features and 20 topic distributions are slightly better than the others. In figure <ns0:ref type='figure' target='#fig_6'>7</ns0:ref>, a noise factor can be seen, as MaxEnt and MaxEnt SGD underperform in terms of recall scores. The model with bigrams and 20 topics achieved the highest F1, precision, and recall scores of 91% with the support vector machine classifier, while with trigrams fed into the LDA model, the MaxEnt classifier achieved the best result. This implies that for the long texts of the Amazon review dataset, fewer topics work well with trigrams while more topics work well with bigrams.</ns0:p></ns0:div>
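A minimal Gensim sketch of the best-performing configuration described above (20 topics, 100 iterations, bigram phrases); tokenized_reviews is an assumed name for the lemmatized, tokenized corpus, and the phrase-detection thresholds are assumptions.

from gensim.corpora import Dictionary
from gensim.models import LdaModel
from gensim.models.phrases import Phrases, Phraser

# Detect salient bigram phrases and apply them to each tokenized review
bigram = Phraser(Phrases(tokenized_reviews, min_count=5, threshold=10))
docs = [bigram[tokens] for tokens in tokenized_reviews]

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(tokens) for tokens in docs]
lda_model = LdaModel(corpus=corpus, id2word=dictionary,
                     num_topics=20, iterations=100, random_state=1)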
<ns0:div><ns0:head>Result and analysis of Amazon dataset</ns0:head></ns0:div>
<ns0:div><ns0:head>Comparison with baseline approaches amazon dataset</ns0:head><ns0:p>In table 3 we provide results without the LDA model: we compare against baseline approaches that classify the data without leveraging topic distributions. As a baseline we implemented the most commonly used TFIDF feature vectors with different classifiers on our Amazon review dataset. When the classifiers use TFIDF feature-vector representations, F1 scores decrease by about 9% and 17% with the support vector machine and multinomial naive Bayes classifiers respectively, compared to T2F with the support vector machine. This suggests that topic distributions give better results because they semantically capture the words within the documents and their distributions, increasing classification performance; our proposed framework thus achieves higher classification results than the baseline approaches. While TFIDF has been popular in its own regard, it leaves a void where understanding the context of a word is concerned; this is where word-embedding techniques such as doc2vec can be utilized <ns0:ref type='bibr' target='#b23'>Le and Mikolov (2014)</ns0:ref>. We therefore also implemented doc2vec with a logistic regression classifier as one of our baseline approaches to analyze whether it increases performance: its F1 score reaches 86%, compared to 82% for the TFIDF approach with the SVM classifier, but both remain below the 91% F1 score we obtain by applying LDA topic distributions as feature vectors.</ns0:p></ns0:div>
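For reference, the TFIDF baselines can be sketched as scikit-learn pipelines; train_texts and train_labels are assumed names, and the n-gram range is an assumption rather than the authors' exact setting.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

tfidf_svm = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
tfidf_nb = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
tfidf_svm.fit(train_texts, train_labels)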
<ns0:div><ns0:p>Five-fold cross-validation splits the data into training and testing sets in iterations. We ran 5-fold CV experiments on the baseline approaches and on the proposed model to measure classification accuracy; the detailed accuracies are also shown in table <ns0:ref type='table' target='#tab_12'>3</ns0:ref>.</ns0:p><ns0:p>The accuracy comparison is shown in figure <ns0:ref type='figure' target='#fig_7'>9</ns0:ref>: on fold 1 our approach performs below the baseline, but from fold 2 onward it performs better on every fold, and the overall average classification accuracy is also higher. The last two columns show the average classification accuracy, which improves from 79.3% to 81%, i.e., the classification error is reduced from 20.7% to 19%. This means that, within a dataset where a certain degree of words is shared among the documents, our framework is able to reduce the classification error and increase the mean classification accuracy.</ns0:p><ns0:p>To compare the proposed model with the applied baseline approaches and check which approach has more statistical significance on the same Amazon dataset, we ran the 5x2 CV paired test and compared the applied baselines with the proposed MaxEnt SGD model; we chose MaxEnt SGD because, as table 3 shows, its mean accuracy is higher than that of the other proposed models. We computed the 5x2cv paired t-test's t-value and p-value for the models and compare them in the following table.</ns0:p></ns0:div>
<ns0:div><ns0:head>Social media data classification</ns0:head><ns0:p>To find out how well our method works with other kinds of data and in different domains, we used social media datasets from the disaster domain. We performed tweet-classification experiments with the categories 'on topic', 'off topic', 'support government', and 'criticize government'; for simplicity we take only the off-topic and on-topic categories. On-topic means the tweet is related to, and within the context of, a specific disaster; off-topic means the tweet text is not about the disaster. There are numerous applications of classification in the context of natural disasters or pandemics, such as classifying situational information from Twitter during pandemics <ns0:ref type='bibr' target='#b24'>Li et al. (2020)</ns0:ref>. Some researchers have utilized topic modelling techniques to analyze topics during disasters by leveraging Twitter data <ns0:ref type='bibr' target='#b16'>Karami et al. (2020)</ns0:ref>. Because social media is one of the main, user-oriented sources of text data, we chose these datasets to check the efficiency of our framework. After the pre-processing steps, 61,220 tweets remain. MaxEnt handles noisy unstructured text well (Go et al., 2009), thus having a higher coverage, and social media is the same kind of noisy unstructured text data. It also works well in a two-class scenario because of the binary nature of the target class, and the target class in our social media dataset is binary. Interestingly, the support vector machine outperformed the other algorithms in terms of recall, which implies that it can also, to some extent, handle the noisy data of social media. The social media dataset is short and noisy (it contains slang, etc.); therefore the highest F1 score under any parameter setting reaches 78%, compared to 91% on the Amazon dataset, which is still satisfactory in the context of social media data with unsupervised topic modelling. Compared with the long texts of the Amazon dataset, the social media dataset gives more satisfactory results with fewer topics; because of the short length of tweets, an LDA topic model with fewer topics gives better results than one with more topics.</ns0:p></ns0:div>
<ns0:div><ns0:head>Comparison with baseline approaches on social media dataset</ns0:head><ns0:p>Compared with the baseline approaches that we implemented with different word-embedding techniques, namely TFIDF and doc2vec, the best baseline on the social media dataset reaches 75% F1 with doc2vec embeddings on the logistic regression classifier, still below the overall highest 78% F1 achieved with topic distributions as feature vectors, which shows how accurately topic distributions capture contextual meaning and classify the data. An interesting aspect is that the F1, precision, and recall scores all increase when doc2vec embeddings are used, which indicates that among the baseline approaches doc2vec performs best. As with the Amazon dataset, we ran 5-fold cross-validation (CV) on the social media dataset to determine classification significance by comparing mean accuracy. Although the mean classification accuracy drops compared to the Amazon dataset, it still reaches 73% with the proposed T2F approach on the MaxEnt SGD classifier; the baselines fall to at most 69%, achieved with doc2vec features on the MaxEnt classifier, which is the highest among the baseline approaches but still lower than the proposed T2F approach. The per-fold mean-accuracy comparison of the best baseline and the best proposed model is given in figure <ns0:ref type='figure' target='#fig_9'>13</ns0:ref>: it runs from fold 1 to fold 5, with the last two bars showing the mean accuracy, and depicts how the classifiers fed with topic-distribution features classify the data significantly better than the baseline approaches, including the widely used NLP deep-learning baseline doc2vec. To compare the proposed model with the applied baselines and check which approach has more statistical significance on the same social media dataset, we ran the 5x2 CV paired test and compared the applied baselines with the proposed MaxEnt SGD model; again we chose MaxEnt SGD because, as table 5 shows, its mean accuracy is higher than that of the other proposed models on the social media dataset as well. We computed the 5x2cv paired t-test's t-value and p-value for the models and compare them in table 6, listing the mean results over every fold (2 folds) of each iteration (5 iterations). As table 6 shows, the comparison of the proposed MaxEnt SGD against every baseline model yields a p-value below the threshold α = 0.05 and a t-statistic above the threshold value; thus we can reject the null hypothesis, accept that the two models perform significantly differently, and conclude that the proposed MaxEnt SGD (T2F), with its better mean accuracy, performs significantly better than the applied baselines.</ns0:p></ns0:div>
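The doc2vec baseline can be sketched as follows with the Gensim 4 API; the hyper-parameters, and the names docs and labels, are assumptions rather than the exact values used.

from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.linear_model import LogisticRegression

tagged = [TaggedDocument(words=tokens, tags=[i]) for i, tokens in enumerate(docs)]
d2v = Doc2Vec(tagged, vector_size=100, window=5, min_count=2, epochs=20)

# One learned vector per document, fed to a logistic regression classifier
X_d2v = [d2v.dv[i] for i in range(len(tagged))]
clf = LogisticRegression(max_iter=1000).fit(X_d2v, labels)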
<ns0:div><ns0:head>Result and analysis of unseen data</ns0:head><ns0:p>To further investigate the efficiency of our framework, we validated the LDA model on completely unseen data. For this we chose the Amazon dataset, which contains reviews on a yearly basis: we trained the LDA model on the 2011 data and used the same model to derive topic-distribution features from the 2012 data, which the trained LDA model had never seen.</ns0:p></ns0:div>
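A sketch of this unseen-data step, reusing the topic_feature_vectors helper sketched earlier; lda_2011, dictionary_2011, and docs_2012 are assumed names for the model, dictionary, and tokenized 2012 reviews.

# Topic features for 2012 reviews from the LDA model trained only on 2011 data
X_unseen = topic_feature_vectors(lda_2011, dictionary_2011, docs_2012, num_topics=20)
predictions = svm.predict(X_unseen)  # classifier previously fitted on 2011 features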
<ns0:div><ns0:head>OVERALL RESULTS ANALYSIS</ns0:head><ns0:p>To examine the performance of our framework, and ultimately of the classifiers based on it, we ran 5-fold CV so that in each run 1/5 of the reviews and tweets is held out as validation data and the remainder is used as training data. This setting is repeated for every fold, after which we checked the F1, precision, and recall scores of the classifiers. The detailed measure scores, compared against the baseline methods, are shown separately in the results-and-analysis sections for the Amazon and social media datasets. For the Amazon dataset, the most appropriate model used 20 topics with bigram vectors, possibly because the Amazon dataset contains relatively long texts and more information. When we change the parameters, keeping bigrams but decreasing the number of topics, the average F1 and recall scores of the classifiers suffer: the F1 score drops drastically from a top of 91% to a low of 64%, and recall similarly drops from a high of 91% to 65%. We compared our approach with baselines that use no topic-distribution feature vectors, implementing typical text classifiers with different embedding schemes such as TFIDF and doc2vec, and overall our approach performed fairly well in classifying the text.</ns0:p><ns0:p>In table 8, the overall highest scores achieved by the baseline methods are given in comparison with the T2F approach. Doc2vec applied to the Amazon dataset performs best among the baselines, but it is still 5% lower than the proposed method in terms of F1 score: T2F with the SVM classifier achieved the best outcome, a 91% F1 score, and with the MaxEnt SGD classifier an 88% F1 score, still 2% higher than the best implemented baseline score of 86%. To analyze further from the perspective of the different embedding schemes proposed in other studies, we compared our approach with results reported by prior studies that classify textual data using different feature-vector schemes; we picked these studies because their goal was likewise to classify data using different feature representations. Researchers in <ns0:ref type='bibr' target='#b29'>(Masood and Abbasi, 2021)</ns0:ref> used graph-embedding features to classify social media textual data (284k tweets) into 3 categories, with an overall highest F1 score of 87% compared to the 91% F1 score of our framework; see the results in table <ns0:ref type='table' target='#tab_25'>9</ns0:ref>. For their dataset they manually collected tweets, manually labelled them into categories of rebel users, and classified them. Another study used feature-vector embeddings combining initial-letter, paragraph, and frequency features to classify English documents (174 documents) of 4 different categories; their F1 scores fall short by 7% and 5% when using the MaxEnt and support vector machine algorithms respectively <ns0:ref type='bibr' target='#b28'>(Luo, 2021)</ns0:ref>.
Graph-of-words and subgraph feature representations were tried instead of typical bag-of-words features by <ns0:ref type='bibr' target='#b41'>(Rousseau et al., 2015)</ns0:ref>, and the maximum F1 score reached 79% on an Amazon dataset (16,000 user reviews). Their dataset is related to our Amazon review dataset to some extent, although they used only a portion of the Amazon reviews, covering specific product categories such as kitchen, DVDs, books, and electronics, whereas we use Amazon reviews about all products.</ns0:p></ns0:div></ns0:div>
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>We have presented a novel framework that integrates topic-distribution features from an unsupervised topic modelling approach, with attention to feature selection. It deals with sparse, user-oriented, short, and slang-laden data from different domains. Extracting relevant features to increase classifier performance is the main purpose of this framework: we focus on the semantic, unsupervisedly generated structure of the words occurring in the texts to classify user reviews or tweets, and on how it can assist supervised classification. Beyond classifying user reviews or tweets, recent studies have concatenated the recently evolved doc2vec with LDA topics. Researchers in <ns0:ref type='bibr' target='#b33'>(Mitroi et al., 2020)</ns0:ref> proposed a topicdoc2vec model for classifying sentiment from textual data: they applied doc2vec to vectorize the textual content and LDA to detect topics, then combined the doc2vec vector representation with the document's best LDA topic, naming the result topicdoc2vec. Their approach is claimed to add the topic's context to the classification process. Although combining embeddings is an effective way to construct the context of a document, this can also be achieved by converting LDA topics and their probability distributions directly into feature vectors, as we do in our framework, which is easy to use and flexible. Other researchers leverage LDA in combination with further signals to create joint topic-oriented embeddings for a specific context: one study builds a joint topical model through LDA that associates topics with a mixture of distributions over words, hashtags, and geotags to create topical embeddings specifically for the location context, with the co-occurrence and location contexts specified by a hashtag vector and a geotag context vector respectively (Geetha T V, 2020). It is indeed an interesting approach that explores the LDA topic model further for creating embeddings, but it is specialized to, and restricted to, geo-located textual data.</ns0:p><ns0:p>Our framework gives comparatively better results than prior-study approaches that used typical features such as TFIDF, graph embeddings, and graph-of-words features (see figure <ns0:ref type='figure' target='#fig_0'>16</ns0:ref>) and than the baseline approaches (see figure <ns0:ref type='figure' target='#fig_6'>17</ns0:ref>). In this framework we did not feed the classifier with classic TFIDF representations or doc2vec word embeddings; instead we fed the classifiers with novel topic-distribution features obtained after extracting topics from the dataset. The approach can be seen as semi-supervised in that it feeds feature vectors from an unsupervised topic modelling approach into supervised classification algorithms.</ns0:p><ns0:p>While building the LDA models we observed that careful text preprocessing is essential for extracting the most relevant topic distributions, as it ultimately impacts model performance. In this regard we used lemmatization instead of stemming in our preprocessing, because it reduces words to their root forms while preserving contextual meaning <ns0:ref type='bibr' target='#b47'>(Ullah et al., 2020)</ns0:ref>. 
This framework is flexible in that it only requires the text content and the categories into which you want to classify the data, and it can be applied in different domains, such as opinion mining, social media sentiment classification, user-review classification, and customer-complaint classification in government organizations.</ns0:p><ns0:p>From the results we find that for long texts, such as documents and long reviews, an LDA model with more topics is more suitable, whereas for sparse, short, slang-heavy texts an LDA model with fewer topics is more feasible.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSION AND FUTURE WORK</ns0:head><ns0:p>The proposed framework combines an LDA topic model with text classifiers, enabling text classification that leverages the hidden topics retrieved from datasets. The method was tested on two datasets from two different domains containing noisy values, sparse data, and imbalanced class ratios, and our proposed method handles these while classifying the text. The results show that our method outperforms the baseline approaches and comparable methods by a reasonably good margin in terms of average F1 scores. We measured the validity of our model through 5-fold cross validation, which yields 81% classification accuracy, and through the 5x2-fold CV paired t-test and McNemar's test statistics. In addition, we applied our model to unseen data, using the topic distributions learned from one year's data on the completely unseen data of a later year, and here too it gives good results on the evaluation measures. Compared to the baseline and prior-study approaches, the results show improvement when using T2F representations, with a highest average F1 score of 91% for the SVM classifier with bigrams and a highest mean accuracy of 81%. Moreover, we searched for the best combination of parameters based on the evaluation measures: the best combination on the Amazon dataset is the SVM classifier with bigrams, which yields the highest average F1 score; on the social media dataset, the MaxEnt classifier with trigrams and either 15 or 10 topics gives the highest average F1 score; and under 5-fold cross-validation the MaxEnt SGD classifier with bigrams and 20 topics gives the best mean accuracy. We find that our T2F model outperforms the other baselines and prior-study approaches on average F1 score and mean accuracy; overall, the evaluation results indicate that topic-oriented features can be leveraged as a feature representation technique for classifying texts. With these findings we have prepared a model that paves the way for creating topic-oriented feature (T2F) representations of content for classification, applicable in any text-classification context. Furthermore, we have demonstrated that text representations based on LDA topic modelling carry more semantic meaning and can improve classification performance when used in a semi-supervised manner. Many improvements remain possible; for instance, the method could be applied to medical-domain datasets. In future work we will extend our framework to automatic labeling of data to prepare labeled datasets for supervised algorithms: we will gather the topic distributions, apply ranking algorithms, analyze the topics by weight, and label the documents, reviews, or tweets accordingly, which will reduce the cost of human labeling and remove the need to gather labelled datasets, since not every public dataset has labels. We will also explore deep-learning classifiers such as those used by <ns0:ref type='bibr' target='#b38'>(Olteanu et al., 2014)</ns0:ref> and investigate their impact on classification performance.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Generalization of LDA topic modelling model</ns0:figDesc><ns0:graphic coords='5,141.73,63.78,413.58,170.43' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Proposed framework in detail, showing each step involved</ns0:figDesc><ns0:graphic coords='6,141.73,482.74,413.57,223.57' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Chinese restaurant analogy: the HDP process illustrated with the Chinese restaurant analogy. C1, C2, and C3 are tables with customers (1, 2, ..., 7) seated around them; a new customer, 8, must be assigned to a table, with probability 3/8 of being assigned to C1, 4/8 of being assigned to C2, and 1/8 of being assigned to C3.</ns0:figDesc><ns0:graphic coords='9,224.45,477.60,248.14,125.33' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Algorithm excerpt: for all topic distributions do, line 11: topic distributions ← feature_vector(topic distribution, length of document); line 12: feature vectors ← array(topic distribution and length of document).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>A simple example of integrating the topic distribution into the document's bag-of-words vector to obtain the resultant vector.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Worked example: di_m = [confection, century, light, pillow, citrus, gelatin, nuts, case, filbert, chewy, flavorful, yummy, brother, sister] and td_m = [0.18338655 (td1), 0.18334754 (td2), ..., 0.016671937 (tdn), ...]. Applying the discretization intervals, di_m ∪ td_m = rv, where rv (the resultant vector to be used in the classifier) = [[confection, century, light, pillow, citrus, gelatin, nuts, case, filbert, chewy, flavorful, yummy, brother, sister], Topic1, Topic2, Topic3, Topic3, Topic4, Topic3, Topic3, Topic3, Topic8, Topic9, Topic10], with topic names repeated according to their discretized probabilities.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Comparison of amazon dataset precision scores with different parameters</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Classification accuracy comparison between the baseline and proposed approaches on the Amazon dataset: the figure shows the 5-fold cross-validation scores of the best-performing baseline classifier and the best-performing proposed classifier, giving each fold's result for the classifiers that achieved the highest average classification accuracy.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 10 .Figure 11 .Figure 12 .</ns0:head><ns0:label>101112</ns0:label><ns0:figDesc>Figure 10. Comparison of recall scores of social media dataset with different parameters</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 13 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Figure 13. Classification accuracy comparison between the baseline and proposed approaches on the social media dataset: the figure shows the 5-fold cross-validation scores of the best-performing baseline classifier and the best-performing proposed classifier, giving each fold's result for the classifiers that achieved the highest average classification accuracy.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='7,141.73,558.19,413.56,148.11' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Latent Dirichlet Allocation (LDA) generalization process model.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>PROPOSED FRAMEWORK</ns0:head><ns0:figDesc>Figure 2. Abstract model explanation of the proposed framework</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2.</ns0:head><ns0:figDesc>Table 2. Dataset statistics in detail. The social media dataset contains tweets from various disasters annotated on the basis of relatedness: the tweets were collected during 7 crises that occurred during 2012 and 2013 and a human-made crisis or natural disaster that occurred in 2016, giving 70k tweets in total, with relatedness categories such as relevant, irrelevant, on topic, and off topic. To check the effectiveness of our LDA models and classifiers in various domains, we chose datasets of different natures: the tweet dataset consists mostly of short, noisy texts, while the Amazon review dataset contains longer, compact detail about products in the form of reviews.</ns0:figDesc><ns0:table>
Dataset | Description
Amazon user Reviews | Total 568,454 reviews: 256,059 users; 74,528 products; 260 users with 50 reviews; target categories: positive, negative; the dataset includes summary, text, sentiment score, and product ID.
Social media Dataset | Total 70k tweets with different categories of relatedness: 7 crisis-related datasets, each containing 10k tweets; on topic (related to the crisis) and off topic (not related to the crisis); tweets include tweet id, tweet content, time, and tweet relatedness.
</ns0:table><ns0:note>Data pre-processing. Figure 4. Pre-processing steps involved in data pre-processing.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Figure 6.</ns0:head><ns0:figDesc>Figure 6. Comparison of Amazon dataset F1 measure scores with different parameters (support vector machine, MaxEnt SGD, and MaxEnt across LDA models with 10/15/20 topics and bigram/trigram features); chart residue removed.</ns0:figDesc><ns0:note>7 http://rasbt.github.io/mlxtend/user_guide/evaluate/paired_ttest_5x2cv/</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:figDesc>Chart residue of Figure 7 (precision scores on the Amazon dataset for support vector machine, MaxEnt SGD, and MaxEnt across LDA models with various parameters) removed.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Figure 8.</ns0:head><ns0:figDesc>Figure 8. Comparison of Amazon dataset recall scores with different parameters (support vector machine, MaxEnt SGD, and MaxEnt); chart residue removed.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 3.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Amazon dataset F1, precision, recall, and mean accuracy: comparison with the baseline approaches' evaluation measures.</ns0:figDesc><ns0:table>
Algorithms | F1 score | Precision | Recall | Mean Accuracy
SVM (TFIDF) | 82% | 83% | 80% | 74%
Multinomial Naive Bayes (TFIDF) | 74% | 76% | 75% | 71%
MaxEnt (TFIDF) | 71% | 72% | 68% | 73%
MaxEnt (doc2vec) | 86% | 77% | 90% | 79%
MaxEnt SGD (proposed T2F) | 88% | 91% | 88% | 81%
MaxEnt (proposed T2F) | 77% | 83% | 77% | 73%
SVM (proposed T2F) | 91% | 87% | 91% | 77%
</ns0:table><ns0:note>Chart residue of the 5-fold CV evaluation on the Amazon dataset (per-fold accuracies for the baseline, doc2vec, and the proposed T2F MaxEnt with SGD, with averages 79.30% and 81.00%) removed; see Figure 9.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_13'><ns0:figDesc>We computed every fold (2 folds) of each iteration (5 iterations) and list the mean results in the table. As table 4 shows, the comparison of the proposed MaxEnt SGD against every baseline model yields a p-value below the threshold α = 0.05 and a t-statistic above the threshold value; thus we can reject the null hypothesis, accept that the two models perform significantly differently, and conclude that T2F with MaxEnt SGD, with its better mean accuracy, performs significantly better than the applied baselines.</ns0:figDesc><ns0:table>
Algorithms | MaxEnt SGD (T2F) t-statistic | MaxEnt SGD (T2F) p-value
SVM (TFIDF) | 3.248 | 0.0437
Multinomial Naive Bayes (TFIDF) | 4.784 | 0.0079
MaxEnt (TFIDF) | 4.562 | 0.0060
MaxEnt (doc2vec) | 2.932 | 0.0362
</ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_14'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Comparison of each baseline model with the proposed MaxEnt SGD (T2F) on the same Amazon dataset; t-values and p-values are listed.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_15'><ns0:head>Result and analysis of Social media disaster dataset</ns0:head><ns0:figDesc>Chart residue (scores on the social media dataset across LDA models with different topic and n-gram parameters) removed.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_16'><ns0:figDesc>Chart residue (recall scores on the social media dataset for support vector machine, MaxEnt SGD, and MaxEnt across LDA models with different parameters) removed.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_17'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Social media dataset F1, Precision,Recall and Average accuracy statistics: comparison with baseline approaches evaluation measures.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='4'>F1 score Precision Recall Mean Accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM (TFIDF)</ns0:cell><ns0:cell>68%</ns0:cell><ns0:cell>71%</ns0:cell><ns0:cell>70%</ns0:cell><ns0:cell>67%</ns0:cell></ns0:row><ns0:row><ns0:cell>Multinomial Naive Bayes (TFIDF)</ns0:cell><ns0:cell>73%</ns0:cell><ns0:cell>76%</ns0:cell><ns0:cell>74%</ns0:cell><ns0:cell>68%</ns0:cell></ns0:row><ns0:row><ns0:cell>MaxEnt (TFIDF)</ns0:cell><ns0:cell>52%</ns0:cell><ns0:cell>60%</ns0:cell><ns0:cell>54%</ns0:cell><ns0:cell>65%</ns0:cell></ns0:row><ns0:row><ns0:cell>MaxEnt (doc2vec)</ns0:cell><ns0:cell>77%</ns0:cell><ns0:cell>75%</ns0:cell><ns0:cell>74%</ns0:cell><ns0:cell>69%</ns0:cell></ns0:row><ns0:row><ns0:cell>MaxEnt Sgd (proposed T2F)</ns0:cell><ns0:cell>78%</ns0:cell><ns0:cell>76%</ns0:cell><ns0:cell>76%</ns0:cell><ns0:cell>73%</ns0:cell></ns0:row><ns0:row><ns0:cell>MaxEnt (proposed T2F)</ns0:cell><ns0:cell>77%</ns0:cell><ns0:cell>76%</ns0:cell><ns0:cell>76%</ns0:cell><ns0:cell>69%</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM (proposed T2F)</ns0:cell><ns0:cell>70%</ns0:cell><ns0:cell>65%</ns0:cell><ns0:cell>77%</ns0:cell><ns0:cell>68%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_19'><ns0:head>Table 6.</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Comparison of each baseline model with the proposed MaxEnt SGD (T2F) model on the same social media dataset; t-values and p-values are listed.</ns0:figDesc><ns0:note>When we derive the test vectors from the 2012 data and re-run the classifiers, the results are reasonably good: as figure 14 shows, the support vector machine classifier achieves an 87% F1 score and 87% precision, the MaxEnt classifier a 79% F1 score, and the MaxEnt SGD classifier an 81% F1 score. The 5-fold CV test was also applied to this data; in classification accuracy, the SVM classifier gives the best performance, with 83% mean accuracy. Each fold's unseen-data results are shown in figure 15 for all three classifiers. This implies that even though the model never saw this data, it classified it with good classification accuracy and F1 scores, indicating that the framework also works well on unseen data from the same context. We performed validity tests through 5-fold cross-validation and through McNemar's test, using the model trained on the 2011 data and testing it on the unseen 2012 data.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_20'><ns0:head>Figure 14.</ns0:head><ns0:figDesc>Figure 14. Comparative results of evaluation-measure statistics on unseen data for the support vector machine, MaxEnt, and MaxEnt SGD classifiers; chart residue (per-fold 5-fold CV accuracies with averages of 78.00% and 82.40%) removed.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_21'><ns0:head>Figure 15.</ns0:head><ns0:figDesc>Figure 15. 5-fold CV evaluation on unseen data (SVM, MaxEnt, MaxEnt SGD): the figure shows every fold's result when the models trained on the earlier data are applied to the unseen data, checking the validity of models trained on previous data.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_22'><ns0:head>McNemar's statistical test on unseen data</ns0:head><ns0:figDesc>We also ran a hypothesis test, McNemar's test, to check whether these classifiers differ in a statistically significant way on the unseen data. In machine learning, McNemar's test can be used to compare the accuracy of two models <ns0:ref type='bibr' target='#b31'>(McNemar, 1947)</ns0:ref> 8 . The test operates on the contingency-table values shown in table 7.</ns0:figDesc><ns0:table>
correctly classified by both A and B (n00) | correctly classified by A but not by B (n01)
correctly classified by B but not by A (n10) | correctly classified by neither A nor B (n11)
</ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_23'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Contingency table for McNemar's test statistics. We ran the chi-squared McNemar's test with a threshold p-value of 0.05. As shown in figure 15, SVM and MaxEnt with SGD have higher average accuracy than MaxEnt, so we ran McNemar's test on these two classifiers; the computed p-value was 0.005667, which indicates that the two classifiers are different, that SVM performs significantly better than MaxEnt SGD, and that the final result is statistically significant.</ns0:figDesc><ns0:table>
A = SVM classifier
B = MaxEnt SGD classifier
The McNemar's test statistic (with continuity correction) is computed as:
\chi^2 = \frac{(|n_{01} - n_{10}| - 1)^2}{n_{01} + n_{10}}
</ns0:table></ns0:figure>
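The test can be computed with the MLxtend helpers referenced in footnote 8; y_true, y_svm, and y_maxent_sgd are assumed names for the gold labels and the two classifiers' predictions on the unseen data.

from mlxtend.evaluate import mcnemar, mcnemar_table

tb = mcnemar_table(y_target=y_true, y_model1=y_svm, y_model2=y_maxent_sgd)
chi2, p = mcnemar(ary=tb, corrected=True)  # continuity-corrected chi-squared
print('chi-squared = %.3f, p = %.6f' % (chi2, p))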
<ns0:figure type='table' xml:id='tab_24'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Comparative results of the best average F1, precision, and recall scores against the baseline approaches. All the comparisons with the baseline approaches, and with approaches proposed in other research studies, imply that our framework performs fairly better. They also indicate that topic modelling with more topics should be applied when the texts contain long sentences. The approach can be applied to other classification problems such as online complaints, document classification, news classification, and medical text classification.</ns0:figDesc><ns0:table>
Methods | F1 Score | Precision | Recall
TFIDF SVM SGD | 82% | 83% | 80%
TFIDF Multinomial NB | 74% | 76% | 75%
TFIDF MaxEnt | 71% | 72% | 68%
doc2vec MaxEnt | 86% | 77% | 92%
T2F (SVM) | 91% | 87% | 91%
T2F (MaxEnt) | 81% | 83% | 78%
T2F (MaxEnt sgd) | 88% | 91% | 88%
</ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_25'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Comparative results of the best average F1, precision, and recall scores against prior studies, from the perspective of different feature-representation embedding schemes. A spilled table row reads: SRI (profile, content + graph), Masood and Abbasi (2021), 87%; two further cells read 91% and 90%.</ns0:figDesc><ns0:note>Figure 16. Comparative results of evaluation measures in comparison with prior studies' approaches.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_26'><ns0:figDesc>Figure 17. Comparative results of evaluation measures in comparison with the baseline approaches; chart residue (F1, precision, and recall bars for the T2F and baseline methods) removed.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_27'><ns0:head /><ns0:label /><ns0:figDesc>that yields 81% classification accuracy, 5x2-fold CV paired t-test and McNemar's test statistics.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:note place='foot' n='8'>http://rasbt.github.io/mlxtend/user_guide/evaluate/mcnemar/</ns0:note>
</ns0:body>
" | "
School of Information Engineering, School of Software,
Zhengzhou University, 450001
Zhengzhou, China
http://www.zzu.edu.cn
May 28th, 2021
Dear Editor,
We thank the reviewers for their generous comments and have edited the manuscript to address their concerns.
We believe that the manuscript is now suitable for publication in the PeerJ Computer Science journal.
Lei Shi,
Professor, School of Software,
Zhengzhou University.
On Behalf of all authors.
Reviewer 1
Basic reporting
The language of the manuscript is clear and unambiguous. The paper is structured well, including figures, tables and shared raw data.
Experimental design
The research questions are well defined, meaningful and relevant.
Validity of the findings
The experimental results are well stated.
Comments for the author
In this paper, a novel classification scheme based on LDA topic modelling has been presented to classify noisy and sparse textual data. In general, the paper is structured well and contributes to the literature. However, a number of issues listed below should be taken into consideration to improve the content of the paper:
1. The language of manuscript should be enhanced.
Response: This was a very useful suggestion. We have thoroughly read the full manuscript, removed grammatical mistakes, and enhanced the language of the manuscript throughout.
2. The manuscript lacks of discussing recent studies (2020-2021 papers) on LDA2vec
Response: We have now cited four recent (2020-2021) papers related to LDA2vec in the Related Work section on pages 2-3 of our manuscript. In the discussion section (lines 601-621, page 22 of the modified manuscript) we have also added a discussion of two recent studies that shows how, and in which contexts, they leveraged the LDA model to extract feature representations for classification tasks.
3. The empirical results should include statistical validity tests.
Response: To establish the statistical validity of our model and compare it with the applied baselines, we have implemented and included two statistical tests on both of our datasets: the 5x2-fold CV (cross-validation) paired t-test and McNemar's test. The 5x2-fold CV paired t-test is used to compare the performance of two models on the same dataset, while McNemar's test is used to check the validity of a trained model on unseen data. The 5x2-fold CV test is described in the 'Statistical validity test' section on page 11 of the manuscript, and McNemar's test is described in the 'McNemar's statistical test on unseen data' section on page 21. The results of the 5x2-fold CV paired t-test on the two datasets are given separately in 'Comparison with baseline approaches: Amazon dataset' and 'Comparison with baseline approaches: social media dataset', and the results of McNemar's test are given in the 'McNemar's statistical test on unseen data' section on page 21 of the updated manuscript file.
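For reference, McNemar's test can be computed with the mlxtend library linked in the manuscript footnote; the following is only a minimal sketch, in which the label and prediction arrays are placeholders rather than our actual results:

    import numpy as np
    from mlxtend.evaluate import mcnemar_table, mcnemar

    # Placeholder unseen-data labels and predictions (illustrative only)
    y_true   = np.array([0, 1, 1, 0, 1, 0, 1, 1])
    y_model1 = np.array([0, 1, 1, 0, 0, 0, 1, 1])  # e.g., the T2F classifier
    y_model2 = np.array([0, 1, 0, 0, 1, 1, 0, 1])  # e.g., a baseline

    # 2x2 contingency table of the two models' agreements and disagreements
    tb = mcnemar_table(y_target=y_true, y_model1=y_model1, y_model2=y_model2)
    chi2, p = mcnemar(ary=tb, corrected=True)
    print(chi2, p)  # a small p-value indicates the classifiers differ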
Reviewer 2
Basic reporting
The authors present a text classification scheme enriched with LDA-based features.
Overall, the manuscript was ambiguous at some sections (see attached PDF), however, the overall idea is clear.
The references are clearly provided, however, please re-check that the citation style is consistent (sometimes you have just numbers, e.g., (14)).
Response: We have re-checked, and we agree that there was a mistake in the 'Result analysis' section on page 22 of the manuscript, where we gave only the number and did not cite the reference properly. This has been corrected and the reference is now cited properly; the corrected content is on page 22, line 585 of the updated manuscript file. We have also re-checked all reference citation styles, and they are now consistent.
Article structure is OK, with the exception of some low-res images.
Response: We have now replaced all the low-resolution images with high-resolution images.
Reviewer 2 stated that the manuscript was ambiguous in some sections and asked us to see the attached PDF. We are very thankful to the reviewer for pointing out these ambiguous structural points; we have gone through the attached PDF and corrected all of them, which we hope makes the manuscript unambiguous and enhances its quality. The details of the modified points from the attached PDF are as follows:
In the abstract, the reviewer pointed out some grammatical mistakes, which we have corrected, and also some ambiguous terms, such as in the sentence:
• 'We leveraged unsupervised topic modelling LDA to retrieve topic inference' — the reviewer asked what is meant by 'inferences'; is it predictions?
Response: We agree that the term was somewhat ambiguous, so we corrected the sentence to read: 'we leveraged the topic modelling LDA approach to retrieve the topic distributions'. By topic distributions we mean the probability distributions over topics produced by LDA.
In another comment on the abstract section, 'sentence somewhat unclear, what are some labelled?', the reviewer referred to the following sentence:
• We make use of some labelled data and topic distributions of hidden topics that were generated from the data.
Response: Agreed; we have deleted the word 'some', which was ambiguous. The sentence now reads: 'we make use of labelled data and topic distributions of hidden topics that were generated from the data.' By labelled data we mean our labelled datasets.
In the Introduction, the reviewer also pointed out some grammatical mistakes, such as removing an 's', adding '-ing', and inserting or removing 'the', 'and', and 'but' at some points.
Response: We have corrected all these grammatical mistakes and modified the Introduction section of the manuscript.
However, Reviewer 2 asked a question about the sentence on page 2, line 70:
• 'For dimensionality reduction, different techniques such as cosine similarity, chi squared are utilized to fine tune the classifiers.' The reviewer asked: 'What exactly is this? top 2 closest ones?'
Response: We agree that this part of the sentence was ambiguous and not relevant, so we deleted the first part, rewrote the sentence, and detailed it unambiguously. The new sentence on page 2, line 70 now reads:
• For feature vector representations, different techniques, of which the most common are TF-IDF, bag of words, and word embeddings, are utilized to fine-tune their classifiers.
On page 2, lines 94-95 of the annotated PDF file, Reviewer 2 asked us to cite the relevant reference.
Response: We added the reference to our reference list and also cited it in the text.
Reviewer 2 also pointed out grammatical mistakes in some portions of the Related Work section.
Response: We have corrected these mistakes in the final manuscript file.
Regarding Figure 1 on page 4 of the annotated manuscript file, Reviewer 2 suggested that we define all parameters of the figure.
Response: We have added details of every parameter used in Figure 1. The changes can be seen on page 4 of the modified manuscript file.
There were some grammatical mistakes on lines 145 and 153 of page 4 of the annotated PDF file.
Response: Changes have been made and can be seen on lines 156 and 164 of page 5 in the modified manuscript file.
Regarding Figure 3 in the proposed framework section, the reviewer asked about the unseen data: does this work without unseen data; is this a necessary input?
Response: Yes, the model can work without it; it is not a necessary input but an additional one, used to check the validity of the model on unseen data. Further details about the unseen-data result analysis are given in the 'Result and analysis on unseen data' section of the modified manuscript (page 18, from line 522).
On page 6, lines 192-193 of the annotated PDF file, the reviewer suggested that some words be replaced.
Response: We have replaced the words as the reviewer suggested; the changes can be seen on page 7, lines 203-204 of the modified manuscript file.
On page 8, line 247, Reviewer 2 suggested that we add a citation, and also that we add a description of Figure 5.
Response: We have added the relevant reference and a description of Figure 5; the changes can be seen on page 8, line 259 and in the description of Figure 5.
On page 9, line 293, the reviewer asked us to describe the k superscripts in the formula.
Response: We have updated this; the k superscripts are now defined on line 321 of page 10 of the updated manuscript file.
On page 10, the reviewer pointed out the poor resolution of Figure 6 in the annotated manuscript file, and suggested that we perhaps use a bar plot instead.
Response: We agree that the resolution of Figure 6 was poor. Since its content can be described without the figure, we have deleted it; the figure content is now described in Algorithm 1 (lines 1-4) on page 9, and in the text on lines 309-310 of page 9 of the modified manuscript file.
On page 16, line 447, the reviewer asked about the Amazon dataset from the cited paper, stating: 'is this the same dataset?'
Response: First, we have corrected the citation: it was wrongly cited before as (Luo 2021), and we now cite the correct reference (Rousseau et al. 2015) in that line. Regarding the reviewer's question: the source is the same as that of our Amazon user reviews dataset, but the Amazon dataset used by Rousseau et al. (2015), discussed on line 590 of page 22 of the modified manuscript, consists of product reviews over different sub-collections (products only) classified as positive and negative.
In Table 5, on page 17 of the annotated manuscript file, the reviewer commented: 'You mention tax2vec in the paragraph above, but it's not in the table?'
Response: The tax2vec approach the reviewer refers to is the citation (Škrlj et al., 2021) on line 446, page 16 of the annotated manuscript file, which was mistakenly wrong. We have now cited the correct reference, (Luo, 2021), on line 589 of page 22 of the modified manuscript file, and we mention this approach correctly as IPF SVM (Luo, 2021) in Table 8 on page 22 of the modified manuscript file.
Also in Table 5, on page 17 of the annotated manuscript file, the reviewer asked us to add the datasets of the prior studies.
Response: In the overall result analysis section on page 21 of the modified manuscript, we have added details of the prior studies' work, the reasons we compare those approaches with ours, and details of each of the datasets used by those prior studies.
In the Conclusion and Future Work sections of the manuscript, the reviewer suggested that we 'Comment on possible further comparisons to other types of approaches.'
Response: We have added comments on the comparison with the baselines as well as on the comparison with other prior-study approaches. See the Conclusion and Future Work section, page 24, lines 628 to 638 of the modified manuscript file.
Experimental design
The authors evaluate their method first by exploring different hyperparameter settings that offer sufficient performance. In the second step, they compare to existing baselines. The first part is done adequately, however, I have concerns related to the second part of the evaluation.
Here, the authors compare their method to 'baselines', however it remains unclear whether they actually ran the experiments against the baseline methods or just picked up performances from the literature. In the latter case, please additionally clarify why the results are comparable. Furthermore, I would suggest using at least one of the following baselines other researchers are familiar with, so it is clear that your method actually works:
1.) doc2vec + LR (scikit-learn)
2.) BERT (end-to-end)
3.) BERT (sentence-bert) + LR (scikit-learn)
Comparison against these could shed additional light on whether the proposed method performs well (and when). Finally, the number of data sets could be larger, you are only exploring the reviews, however I would be willing to believe there is potential if the proper comparisons are conducted.
Response: Regarding the baselines: we implemented the baseline approaches ourselves, without topic distributions, on both datasets, and compared the results with our proposed T2F model (with topic-distribution features). The detailed comparison with the baseline approaches on the Amazon dataset is on pages 14-16, and the comparison on the social media dataset is on pages 18-19 of the modified manuscript file. The reviewer also suggested implementing at least one more baseline from the given list; we have implemented doc2vec + logistic regression (MaxEnt) on both datasets and added its evaluation results to the comparison sections on the pages mentioned above. We have thus explored both types of datasets with the proposed and baseline approaches, and the result comparisons are given separately. Finally, regarding the reviewer's willingness to believe in the potential of the current datasets if proper comparisons are conducted: we have conducted further comparisons with the baseline approaches by applying 5-fold cross-validation and the 5x2-fold paired t-test on both datasets (reviews and social media), a comparison among the proposed model's classifiers on unseen data through 5-fold cross-validation and McNemar's test statistic (see pages 20-21), and, at the end, an overall comparison with the applied baseline and prior-study approaches (see pages 22-23).
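For clarity, the added doc2vec + logistic regression baseline follows the usual gensim and scikit-learn pattern; the sketch below is only illustrative, with placeholder tokens and hyperparameters rather than our exact settings:

    from gensim.models.doc2vec import Doc2Vec, TaggedDocument
    from sklearn.linear_model import LogisticRegression

    # Placeholder tokenized documents and labels (illustrative only)
    train_tokens = [["short", "noisy", "text"], ["another", "sparse", "post"]]
    train_labels = [0, 1]

    corpus = [TaggedDocument(words=toks, tags=[i])
              for i, toks in enumerate(train_tokens)]
    d2v = Doc2Vec(corpus, vector_size=100, min_count=1, epochs=40)

    # Infer document vectors, then fit the MaxEnt (logistic regression) model
    X = [d2v.infer_vector(toks) for toks in train_tokens]
    maxent = LogisticRegression(max_iter=1000).fit(X, train_labels)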
One additional point needs to be mentioned and clarified: the baseline approaches are the approaches we implemented ourselves without topic-distribution representations of the text, whereas the prior-study approaches come from the literature, and we took the reported performance results of those prior approaches. We compare against these studies from the perspective of utilizing different feature embedding schemes, because they also propose different types of feature vector representations, such as graph embeddings and doc-embedding features (Masood and Abbasi, 2021), graph-of-words features (Rousseau et al., 2015), text-frequency feature embeddings (Luo, 2021), and a structured feature vector composed of weighted pairs of words (Colace et al., 2014), to improve classifier performance. We wanted to see how the scores of the different embeddings from prior studies affect the classification results; in the end, the purpose is to accurately classify textual data by proposing a new feature vector representation scheme.
Validity of the findings
The findings are in alignment with the empirical evaluation as it stands -> the proposed method is superior to others. One problem I have with the claim is statistical significance claim: You claim that the results are significantly better. I saw no statistical tests being conducted to verify/prove this claim.
Response: We completely agree, and we have now run the 5x2-fold paired t-test separately on both of our datasets, with the baseline and proposed methods, to support this claim statistically. This test is used to compare the performance of classifiers and to validate whether one classifier is significantly better than another. See the comparison with baseline approaches sections (pages 14-16 and 18-19) for details.
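As an illustration only, the 5x2-fold CV paired t-test can be run with mlxtend as in the minimal sketch below, where the dataset and the two classifiers are placeholders for our actual pipelines:

    from mlxtend.evaluate import paired_ttest_5x2cv
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import LinearSVC

    # Placeholder feature matrix and labels standing in for one of our datasets
    X, y = make_classification(n_samples=500, n_features=20, random_state=1)

    # Compare two classifiers on the same data; returns t-statistic and p-value
    t, p = paired_ttest_5x2cv(estimator1=LinearSVC(),
                              estimator2=LogisticRegression(max_iter=1000),
                              X=X, y=y, random_seed=1)
    print(t, p)  # p < 0.05 suggests a statistically significant difference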
Conclusions could be longer (see attached PDF).
Response: We have modified the conclusion accordingly; see the Conclusion and Future Work section on pages 23-24 of the modified manuscript file.
Comments for the author
I've left numerous comments also in the attached PDF, which will hopefully improve the manuscript's quality.
Response: We are very thankful to the reviewer for the comments and suggestions made to improve our manuscript's quality. These comments and suggestions were very useful, and we have modified the manuscript according to each comment and suggestion given in the annotated PDF file.
Reviewer 3
Basic reporting
The article is a proposal of a framework called Topic2features. Generally, it is well written in terms of paper organization and writing style. The framework has been described in detail, has been tested with the Amazon dataset, and has been compared and evaluated against a few algorithms and benchmarks.
I have no major issues to accept the paper as it is.
Experimental design
acceptable
Validity of the findings
acceptable
Comments for the author
Very good paper, and well written.
Response: We are very thankful to Reviewer 3 for all the comments and for accepting the paper as it is.
Reviewer 4
Basic reporting
The paper proposed a novel framework topic2features (T2F) to deal with short and
sparse data using the topic distributions of hidden topics gathered from the dataset and converting them into features to build a supervised classifier.
It seems that the merit of the study is very interesting, and it contains a novel approach of representation that is based on topics instead of a bag of words.
The authors fail to give a clear idea of the main approach of this study. The approach is described very vaguely, and it is hard to see how the topics were used in a vector space.
The study needs to be improved dramatically, with more description of the algorithm using pseudocode or clearer workflows and figures.
I will not be able to judge the content of the study without a clear understanding of the novelty of topic representations. This section should be improved.
Also, I would suggest to the author to send the article for English proof before resubmitting it.
Response: Agreed. We have added pseudocode in Algorithm 1 to describe how the topic distributions are mapped into the feature vector space that is fed into the classifier, and we have given a further description in the form of an equation and its explanation; see the 'Integrating topic distributions into dataset' section on pages 9-10 of the modified manuscript file.
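To complement Algorithm 1, the following minimal sketch (not our exact pipeline; the documents, labels, and parameters are illustrative) shows the general idea of turning LDA topic distributions into feature vectors for a supervised classifier:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.svm import LinearSVC

    docs = ["short noisy text one", "another sparse post", "more labelled text"]
    labels = [0, 1, 0]  # placeholder labels

    counts = CountVectorizer().fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=10, random_state=0)
    X_topics = lda.fit_transform(counts)  # each row is a document's topic distribution

    clf = LinearSVC().fit(X_topics, labels)  # topic distributions as features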
Experimental design
The study shows satisfactory experiments
Validity of the findings
I will not be able to judge the content of the study without a clear understanding of the novelty of topic representations. This section should be improved.
Comments for the author
Line 287: might me should be 'might be'
Response: The new algorithm with pseudocode is now given, and the sentence in its description containing the words 'might me' was not needed, so it has been deleted.
Figure 6 should be replaced with a more clear image or just replace with a regular table.
Figure 7 should be replaced with a more clear image or just replace with a regular table.
Response: Figures 6 and 7 contained only screenshots of the topic distributions. After carefully re-reading the manuscript, we concluded that these screenshot figures are not needed, so we have deleted them; the topic distributions are instead described in the Algorithm 1 description section, see pages 9-10 of the modified manuscript file.
" | Here is a paper. Please give your review comments after reading it. |
210 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Clawpack is a software package designed to solve nonlinear hyperbolic partial differential equations using high-resolution finite volume methods based on Riemann solvers and limiters. The package includes a number of variants aimed at different applications and user communities. Clawpack has been actively developed as an open source project for over 20 years. The latest major release, Clawpack 5, introduces a number of new features and changes to the code base and a new development model based on GitHub and Git submodules. This article provides a summary of the most significant changes, the rationale behind some of these changes, and a description of our current development model.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>Introduction</ns0:head><ns0:p>The Clawpack software suite <ns0:ref type='bibr' target='#b15'>[14]</ns0:ref> is designed for the solution of nonlinear conservation laws, balance laws, and other first-order hyperbolic partial differential equations not necessarily in conservation form.</ns0:p><ns0:p>The underlying solvers are based on the wave propagation algorithms described by LeVeque in <ns0:ref type='bibr' target='#b43'>[39]</ns0:ref>, and are designed for logically Cartesian uniform or mapped grids or an adaptive hierarchy of such grids.</ns0:p><ns0:p>The original Clawpack was first released as a software package in 1994 and since then has made major strides in both capability and interface. More recently a major refactoring of the code and a move to GitHub for development has resulted in the release of Clawpack 5.0 in January, 2014. Beyond enabling a distributed and better managed development process a number of user-facing improvements were made including a new user interface and visualization tools, incorporation of high-order accurate algorithms, parallelization through MPI and OpenMP, and other enhancements.</ns0:p><ns0:p>Because scientific software has become central to many advances made in science, engineering, resource management, natural hazards modeling and other fields, it is increasingly important to describe and document changes made to widely used packages. Such documentation efforts serve to orient new and existing users to the strategies taken by developers of the software, place the software package in the context of other packages, document major code changes, and provide a concrete, citable reference for users of the software.</ns0:p><ns0:p>With this in mind, the goals of this paper are to:</ns0:p><ns0:p>• Summarize the development history of Clawpack,</ns0:p><ns0:p>• Summarize some of the major changes made between the early Clawpack 4.x versions and the most recent version, Clawpack 5.3,</ns0:p><ns0:p>• Summarize the development model we have adopted, for managing open source scientific software projects with many contributors, and</ns0:p><ns0:p>• Identify how users can contribute to the Clawpack suite of tools.</ns0:p><ns0:p>This paper provides a brief history of Clawpack in Section 1.1, a background of the mathematical concerns in Section 1.2, the modern development approach now being used in Section 2, the major feature additions in the Clawpack 5.x major release up until Version 5.3 in Section 3. Some concluding thoughts and future plans for Clawpack are mentioned in Section 4.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.1'>History of Clawpack</ns0:head><ns0:p>The first version of Clawpack was released by LeVeque in 1994 <ns0:ref type='bibr' target='#b41'>[37]</ns0:ref> and consisted of Fortran code for solving problems on a single, uniform Cartesian grid in one or two space dimensions, together with some Matlab <ns0:ref type='bibr' target='#b49'>[45]</ns0:ref> scripts for plotting solutions. The wave-propagation method implemented in this code provided a general way to apply recently developed high-resolution shock capturing methods to general hyperbolic systems and required only that the user provide a 'Riemann solver' to specify a new hyperbolic problem. Collaboration with Berger <ns0:ref type='bibr' target='#b10'>[9]</ns0:ref> soon led to the incorporation of adaptive mesh refinement (AMR) in two space dimensions, and work with Langseth <ns0:ref type='bibr' target='#b40'>[36,</ns0:ref><ns0:ref type='bibr' target='#b39'>35]</ns0:ref> led to three-dimensional versions of the wave-propagation algorithm and the software, with three-dimensional AMR then added by Berger. Version 4.3 of Clawpack contained a number of other improvements to the code and formed the basis for the examples presented in a textbook <ns0:ref type='bibr' target='#b43'>[39]</ns0:ref> published in 2003. That text not only provided a complete description of the wave propagation algorithm, developed by LeVeque, but is also notable in that the codes used to produce virtually all of the figures in the text were made available online <ns0:ref type='bibr' target='#b43'>[39]</ns0:ref>.</ns0:p><ns0:p>In 2009, Clawpack Version 4.4 was released with a major change from Matlab to Python as the recommended visualization tool, and the development of a Python user interface for specifying the input data. Finally, in January of 2013, the 4.x versions of Clawpack ended with the release of 4.6.3.</ns0:p><ns0:p>Version 5 of Clawpack introduces both user-exposed features and a number of modern approaches to code development, interfacing with other codes, and adding new capabilities. The move to git version control also allowed a more complete open source model. These changes are the subject of the rest of this paper.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.2'>Hyperbolic problems</ns0:head><ns0:p>In one space dimension, the hyperbolic systems solved with Clawpack typically take the form of conservation laws $q_t(x,t) + f(q(x,t))_x = 0$ <ns0:ref type='bibr' target='#b0'>(1)</ns0:ref> or non-conservative linear systems $q_t(x,t) + A(x)q(x,t)_x = 0$, <ns0:ref type='bibr' target='#b2'>(2)</ns0:ref> where subscripts denote partial derivatives and q(x, t) is a vector with m ≥ 1 components. Here the components of q represent conserved quantities, while the function f represents the flux (transport) of q. Equation (1) generalizes in a natural way to higher space dimensions; see the examples below.</ns0:p><ns0:p>The coefficient matrix A in <ns0:ref type='bibr' target='#b2'>(2)</ns0:ref> or the Jacobian matrix f'(q) in (<ns0:ref type='formula'>1</ns0:ref>) is assumed to be diagonalizable with real eigenvalues for all relevant values of q, x, and t. This condition guarantees that the system is hyperbolic, with solutions that are wave-like. The eigenvectors of the system determine the relation between the different components of the system, or waves, and the eigenvalues determine the speeds at which these waves travel. The right hand side of these equations could be replaced by a 'source term' ψ(q, x, t) to give a non-homogeneous equation that is sometimes called a 'balance law' rather than a conservation law. Spatially-varying flux functions f (q, x) in (1) can also be handled using the f-wave approach <ns0:ref type='bibr' target='#b6'>[5]</ns0:ref>.</ns0:p><ns0:p>Examples of equations solved by Clawpack include:</ns0:p><ns0:p>• Advection equation(s) for one or more tracers; in the simplest, one-dimensional case we have:</ns0:p><ns0:formula xml:id='formula_0'>$q_t + (u(x,t)q)_x = 0.$</ns0:formula><ns0:p>The velocity field u(x, t) is typically prescribed from the solution to another fluid flow problem, such as wind. Typical applications include transport of heat, energy, pollution, smoke, or another passively-advected quantity that does not influence the velocity field.</ns0:p><ns0:p>• The shallow water equations, describing the velocity (u, v) and surface height h of a fluid whose depth is small relative to typical wavelengths:</ns0:p><ns0:formula xml:id='formula_1'>$h_t + (hu)_x + (hv)_y = 0 \qquad (3)$ $(hu)_t + \left(hu^2 + \tfrac{1}{2}gh^2\right)_x + (huv)_y = -ghb_x \qquad (4)$</ns0:formula><ns0:formula xml:id='formula_2'>$(hv)_t + \left(hv^2 + \tfrac{1}{2}gh^2\right)_y + (huv)_x = -ghb_y \qquad (5)$</ns0:formula><ns0:p>Here g is a constant related to the gravitational force and b(x, y) is the bathymetry, or bottom surface height. Notice that the bathymetry enters the equations through a source term; additional terms could be added to model the effect of bottom friction. These equations are used, for instance, to model inundation caused by tsunamis and dam breaks, as well as to model atmospheric flows.</ns0:p><ns0:p>• The Euler equations of compressible, inviscid fluid dynamics consist of conservation laws for mass, momentum, and energy. The wave speeds depend on the local fluid velocity and the acoustic wave velocity (sound speed). Source terms can be added to include the effect of gravity, viscosity or heat transfer. These systems have important applications in aerodynamics, climate and weather modeling, and astrophysics.</ns0:p><ns0:p>• Elastic wave equations, used to model compressional and shear waves in solid materials.
Here even linear models can be complex due to varying material properties on multiple scales that affect the wave speeds and eigenvectors.</ns0:p><ns0:p>Discontinuities (shock waves) can arise in the solution of nonlinear hyperbolic equations, causing difficulties for traditional numerical methods based on discretizing derivatives directly. Modern shock capturing methods are often based on solutions to the Riemann problem that consists of equations <ns0:ref type='bibr' target='#b0'>(1)</ns0:ref> or <ns0:ref type='bibr' target='#b2'>(2)</ns0:ref> together with piecewise constant initial data with a single jump discontinuity. The solution to the Riemann problem is a similarity solution (a function of x/t only), typically consisting of m waves (for a system of m equations) propagating at constant speed. This is true even for nonlinear problems, where the waves may be shocks or rarefaction waves (through which the solution varies continuously in a self-similar manner).</ns0:p><ns0:p>The main theoretical and numerical difficulties of hyperbolic problems involve the prescription of physically correct weak solutions and understanding the behavior of the solution at discontinuities. The Riemann solver is an algorithm that encodes the specifics of the hyperbolic system to be solved, and it is the only routine (other than problem-specific setup such as initial conditions) that needs to be changed in order to apply the code to different hyperbolic systems. In some cases, the Riemann solver may also be designed to enforce physical properties like positivity (e.g., for the water depth in GeoClaw) or to account for forces (like that of gravity) that may be balanced by flux terms.</ns0:p><ns0:p>Clawpack is based on Godunov-type finite volume methods in which the solution is represented by cell averages. Riemann problems between the cell averages in neighboring states are used as the fundamental building block of the algorithm. The wave-propagation algorithm originally implemented in Clawpack (and still used in much of the code) is based on using the waves resulting from each Riemann solution together with limiter functions to achieve second-order accuracy where the solution is smooth, together with sharp resolution of discontinuities without spurious numerical oscillations (see <ns0:ref type='bibr' target='#b43'>[39]</ns0:ref> for a detailed description of the algorithms). Higher-order WENO methods have also been developed relying on the same Riemann solvers. These methods can be found in PyClaw.</ns0:p><ns0:p>Adaptive mesh refinement (AMR) is essential for many problems and has been available in two space dimensions since 1995, when Marsha Berger joined the project team and her AMR code for the Euler equations of compressible flow was generalized to fit into the software which became AMRClaw <ns0:ref type='bibr' target='#b11'>[10]</ns0:ref>, another package included in the Clawpack ecosystem. AMRClaw was carried over to three space dimensions using the unsplit algorithms introduced in <ns0:ref type='bibr' target='#b40'>[36]</ns0:ref>. Starting in Version 5.3.0, dimensional splitting is also supported in AMRClaw, which can be particularly useful in three space dimensions where the unsplit algorithms are much more expensive. Other recent improvements to AMRClaw are discussed in Section 3.4.</ns0:p><ns0:p>There are several other open source software projects that provide adaptive mesh refinement for hyperbolic PDEs.
The interested reader may want to investigate AMROC <ns0:ref type='bibr' target='#b17'>[16]</ns0:ref>, BoxLib 2 , Chombo <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>,</ns0:p><ns0:p>Gerris <ns0:ref type='bibr' target='#b54'>[50]</ns0:ref>, OpenFOAM <ns0:ref type='bibr' target='#b50'>[46]</ns0:ref>, or SAMRAI <ns0:ref type='bibr' target='#b3'>[3]</ns0:ref>, for example.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>Development Approach</ns0:head><ns0:p>Clawpack's development model is driven by the needs of its developer community. The Clawpack project consists of several interdependent projects: core solver functionality, a visualization suite, a general adaptive mesh refinement code, a specialized geophysical flow code, and a massively parallel Python framework. Changes to the core solvers and visualization suite have a downstream effect on the other codes, and the developers largely work in an independent, asynchronous manner across continents and time zones.</ns0:p><ns0:p>The core Clawpack software repositories are:</ns0:p><ns0:p>• clawpack -responsible for installation and coordination of other repositories,</ns0:p><ns0:p>• riemann -Riemann solvers used by all the other projects,</ns0:p><ns0:p>• visclaw -a visualization suite used by all the other projects,</ns0:p><ns0:p>• clawutil -utility functions used by most other projects,</ns0:p><ns0:p>• classic -the original single grid methods in 1, 2, and 3 space dimensions,</ns0:p><ns0:p>• amrclaw -the general adaptive mesh refinement framework in 2 and 3 dimensions,</ns0:p><ns0:p>• geoclaw -solvers for depth-averaged geophysical flows which employs the framework in amrclaw, and</ns0:p><ns0:p>• pyclaw -a Python implementation and interface to the Clawpack algorithms including highorder methods and massively parallel capabilities.</ns0:p><ns0:p>A release of Clawpack downloaded by users contains all of the above. The repositories riemann, visclaw, and clawutil are sometimes referred to as upstream projects, since their changes affect all the remaining projects in the above list, commonly referred to as downstream projects. There are some variations on this, for instance AMRClaw is upstream of GeoClaw, which uses many of the algorithms and software base from AMRClaw. To coordinate this the clawpack repository points to the most recent known-compatible version of each repository.</ns0:p><ns0:p>Beyond the major core code repositories, additional repositories contain documentation and extended examples for using the packages:</ns0:p><ns0:p>• doc -the primary documentation source files. These files are written in the markup language reStructured Text 3 , and are then converted to html files using Sphinx 4 . Other documentation such as drafts of this paper are also found in this repository.</ns0:p><ns0:p>• clawpack.github.com -the html files created by Sphinx in the doc repository are pushed to this repository, and are then automatically served on the web. These appear at http://www. clawpack.org, which is configured to point to http://clawpack.github.com. The name of this repository follows GitHub convention for use with GitHub Pages 5 .</ns0:p><ns0:p>• apps -applications contributed by developers and users that go beyond the introductory examples included in the core repositories.</ns0:p><ns0:p>The Clawpack 4.x code is also available in the repository clawpack-4.x but is no longer under development.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1'>Version Control</ns0:head><ns0:p>The Clawpack team uses the Git distributed version control system to coordinate development of each major project. The repositories are publicly coordinated under the Clawpack organization on GitHub 6 with the top-level clawpack super-repository responsible for hosting build and installation tools, as well as providing a synchronization point for the other repositories. The remaining 'core Clawpack repositories' listed above are subrepositories of the main clawpack organization. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>GitHub itself is a free provider of public Git repositories. In addition to repository hosting, the Clawpack team uses GitHub for issue tracking, code review, automated continuous integration via Travis CI 7 , and test coverage tracking via Coveralls 8 for the Python-based modules. The issue tracker on GitHub supports cross-repository references, simplifying communication between Clawpack developer sub-teams. The Travis CI service, which provides free continuous integration for publicly developed repositories on GitHub, runs Clawpack's test suites through nose 9 on proposed changes to the code base, and through a connection to the Coveralls service, reports on any test failures as well as changes to test coverage.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>Submodules</ns0:head><ns0:p>The clawpack 'super-repository' serves two purposes. First, it contains installation utilities for each of the sub-projects. Second, it serves as a synchronization point for the project repositories. The remainder of this section provides more details on how Git submodules enable this synchronization.</ns0:p><ns0:p>Whenever possible, teams of software developers coordinate their development in a single unified repository. In situations where this isn't possible, one option provided by Git is the submodule, which allows a super-repository (in this case, clawpack), to nest sub-repositories as directories, with the ability to capture changes to sub-repository revisions as new revisions in the super-repository. Under the hood, the super-repository maintains pointers to the location of each submodule and its current revision. The submodule directories contain normal Git repositories, all of the coordination happens in the super-repository.</ns0:p><ns0:p>Each of the other core Clawpack repositories listed above is a submodule of the clawpack repository. Every commit that creates a new revision to the clawpack repository describes top-level installation code as well as the revisions of each of the submodules. In this way, Git submodules allow</ns0:p><ns0:p>Clawpack team members to work asynchronously on independent projects while reusing and maintaining common software infrastructure.</ns0:p><ns0:p>Typically the Clawpack developers advance the master development branch of the top-level clawpack repository any time a major feature is added or a bug is fixed in one of the upstream projects that might affect code in other repositories. By checking out a particular revision in the clawpack repository and performing a git submodule update, all repositories can be updated to versions that are intended to be consistent and functional.</ns0:p><ns0:p>In particular, when Travis CI runs the regression tests in any project repository (performed automatically for any pull request), it starts by installing Clawpack on a virtual machine and the current head of the clawpack/master branch indicates the commit from each of the other projects that must be checked out before performing the tests. If the clawpack repository has not been properly updated following changes in other upstream projects, these tests may fail.</ns0:p><ns0:p>Any new release of Clawpack is a snapshot of one particular revision of clawpack and the related revisions of all submodules. These particular revisions are also tagged for future reference with consistent names, such as v5.3.1. (Git tags simply provide a descriptive name for a particular revision rather than having to refer to a Git hash code.)</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3'>Contributing</ns0:head><ns0:p>Scientists who program are often discouraged from sharing code due to existing reward mechanisms and the fear of being 'scooped'. However, recent studies indicate that scientific communities that openly share and develop code have an advantage because each researcher can leverage the work of many others <ns0:ref type='bibr' target='#b57'>[53]</ns0:ref>, and that paper citation rates can be increased by sharing code <ns0:ref type='bibr' target='#b58'>[54]</ns0:ref> and/or data <ns0:ref type='bibr' target='#b53'>[49]</ns0:ref>.</ns0:p><ns0:p>Moreover, journals and funding agencies are increasingly requiring investigators to share code used to obtain published results. One of the goals of the Clawpack project is to facilitate code sharing by users, by providing an easy mechanism to refer to a specific version of the Clawpack software and ensuring that past versions of the software remain available on a stable and citable platform.</ns0:p><ns0:p>On the development side, we expect that the open source development model with important discussions conducted in public will lead to further growth of the developer community and additional contributions from users. Over the past twenty years, many users have written code extending Clawpack with new Riemann solvers, algorithms, or domain-specific problem tools. Unfortunately, much of this code did not make it back into the core software for others to use. Many of the development changes in Clawpack 5.x were done to encourage contributions from a broader community. We have begun to see an increase in contributions from outside the developers' groups, and hope to encourage more of this in the future.</ns0:p><ns0:p>The primary development model is typical for GitHub projects: a contributor forks the repository on GitHub, then develops improvements in a branch that is pushed to her own fork. She issues a 'pull request' (PR) when the branch is ready to be merged into the main repository. Increasingly, contributors are also using PRs as a way to conveniently post preliminary or prototype code for discussion prior to further development, often marked WIP for 'work in progress' to signal that it is not ready to merge.</ns0:p><ns0:p>After a PR is issued, other developers, including one or more of the maintainers for the corresponding project, review the code. The Travis CI server also automatically runs the tests on the proposed new code. The test results are visible on the GitHub page for the PR. Usually there is some iteration as developers suggest improvements or discuss implementation choices in the code. Once the tests are passing and it is agreed that the code is acceptable, a maintainer merges it.</ns0:p><ns0:p>An additional benefit of using the GitHub platform is that any version of the code is accessible either through the command line git interface, through the GitHub website, or a number of available applications on all widely used platforms. More important however is the ability to tag a particular version of a repository with a digital object identifier (DOI) via GitHub and Zonodo 10 . The combination of these abilities provides the capability for Clawpack to not only be accessible at any version but also allows for the citability of versions of the code used for particular results within the scientific literature.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.4'>Releases</ns0:head><ns0:p>Although Clawpack is continuously developed, it is convenient for users to be able to install stable versions of the software. The Clawpack developers provide these releases through two distribution channels: GitHub and the Python Package Index (PyPI). Full source releases are available on GitHub.</ns0:p><ns0:p>Alternatively, the PyClaw subproject and its dependencies can be installed automatically using a PyPI client such as pip.</ns0:p><ns0:p>Clawpack does not follow a calendar release cycle. Instead, releases emerge when the developer community feels enough changes have accumulated since the last release to justify the cost of switching to a new release. For the most part, Clawpack releases are versioned using an M.m.p triplet, representing the major (M), minor (m), and patch (p) versions respectively. In the broader software engineering community, this is often referred to as semantic versioning. Small changes that fix bugs and cosmetic issues result in increments to the patch-level. Backwards-compatible changes result in an increase to the minor version. The introduction of backwards-incompatible changes require that the major version be incremented. In addition, the implementation of significant new algorithms or capability will also justify the increment of major release number, and is often an impetus for providing another release to the public. In practice, the Clawpack software has frequently included changes in minor version releases that were not entirely backwards compatible, but these have been relatively minor and documented in the release notes. Major version numbers have changed infrequently and related to major refactoring of the code as in going from 4.x to 5.0.</ns0:p><ns0:p>Staring with Version 5.3.1, the tarfiles for Clawpack releases will also be archived on Zenodo Manuscript to be reviewed</ns0:p><ns0:p>Computer Science permanent link <ns0:ref type='bibr' target='#b56'>[52]</ns0:ref> that does not depend on the long-term existence of GitHub.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.5'>Dependencies</ns0:head><ns0:p>Running any part of Clawpack requires a Python interpreter and the common Python packages numpy <ns0:ref type='bibr' target='#b34'>[30]</ns0:ref>, f2py <ns0:ref type='bibr' target='#b52'>[48]</ns0:ref>, matplotlib <ns0:ref type='bibr' target='#b30'>[27]</ns0:ref>, as well as (except for the pure-Python 1D code) GNU make and a Fortran compiler. Other dependencies are optional, depending on which parts of Clawpack are to be used:</ns0:p><ns0:p>• IPython/Jupyter if using the notebook interfaces <ns0:ref type='bibr' target='#b51'>[47]</ns0:ref>.</ns0:p><ns0:p>• PETSc <ns0:ref type='bibr' target='#b4'>[4]</ns0:ref>, if using distributed parallelism in PyClaw.</ns0:p><ns0:p>• OpenMP, if using shared-memory parallelism in AMRClaw or GeoClaw.</ns0:p><ns0:p>• MATLAB, if using the legacy visualization tools.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>Advances</ns0:head><ns0:p>This section describes the major changes in each of the code repositories in moving from Clawpack 4.x to the most recent version 5.3. A number of the repositories have seen only minor changes as the bulk of the development is focused on current research interests. There are a number of minor changes not listed here and the interested reader is encouraged to refer to the change logs 12 and the individual Clawpack Git repositories for a more complete list.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Global Changes</ns0:head><ns0:p>Substantial redesign of the Clawpack code base was performed in the move from Clawpack 4.x to 5.x. Major changes that affected all aspects of the code include:</ns0:p><ns0:p>• The interface to the Clawpack Riemann solvers was changed so that one set of solvers can be used for all versions of the code (including PyClaw via f2py 13 ). Rather than appearing in scattered example directories, these Riemann solvers have all been collected into the new riemann repository. Modifications to the calling sequences were made to accommodate this increased generality.</ns0:p><ns0:p>• Calling sequences for a number of other Fortran subroutines were also modified based on experiences with the Clawpack 4.x code. These can also be used as a stand-alone product for those who only want the Riemann solvers.</ns0:p><ns0:p>• Python front-ends were redesigned to more easily specify run-time options for the solver and visualization. The Fortran variants (ClassicClaw, AMRClaw, and GeoClaw) all use a Python script to facilitate setting input variables. These scripts create text files with a rigidly specified format that are then read in when the Fortran code is run. The interface now allows updates to the input parameters while maintaining backwards compatibility.</ns0:p><ns0:p>• The indices of the primary conserved quantities were reordered. In Clawpack 4.x, the mth component of a system of equations in grid cell (i, j) (in two dimensions, for example), was stored in q(i,j,m). In order to improve cache usage and to more easily interface with PETSc <ns0:ref type='bibr' target='#b4'>[4]</ns0:ref>, a global change was made to the ordering so that the component number comes first; i.e. q(m,i,j). A seemingly minor change like this affects a huge number of lines in the code and cannot easily be automated. The use of version control and regression tests was crucial in the successful completion of the project.</ns0:p></ns0:div>
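As a small illustration (ours, not from the original text) of why the component-first ordering pairs well with Fortran-style storage, assuming numpy is available:

    import numpy as np

    num_eqn, mx, my = 3, 50, 50
    # Fortran-contiguous layout, as passed between Python and Fortran via f2py
    q = np.zeros((num_eqn, mx, my), order='F')

    # With q(m,i,j) ordering, the m components of a single grid cell are
    # adjacent in memory, improving cache usage in the Riemann solver loops:
    print(np.isfortran(q))     # True
    print(q[:, 2, 3].strides)  # smallest possible stride over the components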
<ns0:div><ns0:head n='3.2'>Riemann: A Community-Driven Collection of Approximate Riemann Solvers</ns0:head><ns0:p>The methods implemented in Clawpack, and all modern Godunov-type methods for hyperbolic PDEs, are based on the solution of Riemann problems as discussed in Section 1. In the Fortran-based packages (Classic, AMRClaw, and GeoClaw) the Riemann solver is selected at compile-time by modifying a problem-specific Makefile. In PyClaw, the Riemann solver to be used is selected at run-time. This is made possible by compiling all of the Riemann solvers (when PyClaw is installed) and generating Python wrappers with f2py. For PyClaw, riemann also provides metadata (such as the number of equations, the number of waves, and the names of the conserved quantities) for each solver so that setup is made more transparent.</ns0:p></ns0:div>
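As an illustration of run-time Riemann solver selection in PyClaw, here is a minimal 1D acoustics sketch (the problem data and grid sizes are arbitrary choices; it assumes a standard Clawpack 5 installation):

    import numpy as np
    from clawpack import pyclaw, riemann

    solver = pyclaw.ClawSolver1D(riemann.acoustics_1D)  # chosen at run time
    solver.bc_lower[0] = pyclaw.BC.extrap
    solver.bc_upper[0] = pyclaw.BC.extrap

    domain = pyclaw.Domain([-1.0], [1.0], [200])
    state = pyclaw.State(domain, solver.num_eqn)

    rho, bulk = 1.0, 4.0  # material parameters used by the Riemann solver
    state.problem_data['rho'] = rho
    state.problem_data['bulk'] = bulk
    state.problem_data['zz'] = np.sqrt(rho * bulk)  # impedance
    state.problem_data['cc'] = np.sqrt(bulk / rho)  # sound speed

    xc = state.grid.x.centers
    state.q[0, :] = np.exp(-80.0 * xc ** 2)  # initial pressure pulse
    state.q[1, :] = 0.0                      # initial velocity

    claw = pyclaw.Controller()
    claw.solution = pyclaw.Solution(state, domain)
    claw.solver = solver
    claw.tfinal = 1.0
    claw.run()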
<ns0:div><ns0:head n='3.3'>ClassicClaw</ns0:head><ns0:p>The classic repository contains code implementing the wave propagation algorithm on a single uniform grid, in much the same form as the original Clawpack 1.0 version of 1994 but with various enhancements added through the years. Following the introduction of Clawpack 4.4 the three-dimensional routines were left out of the Python user interfaces and plotting routines. These have been reintroduced in Clawpack 5. Additionally the OpenMP shared-memory parallelism capabilities have been extended</ns0:p><ns0:p>to the three-dimensional code.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.4'>AMRClaw</ns0:head><ns0:p>Fortran code in the AMRClaw repository performs block-structured adaptive mesh refinement <ns0:ref type='bibr' target='#b7'>[6,</ns0:ref><ns0:ref type='bibr' target='#b8'>7]</ns0:ref> for both Clawpack and GeoClaw applications. The algorithms implemented in AMRClaw are discussed in detail in <ns0:ref type='bibr' target='#b10'>[9,</ns0:ref><ns0:ref type='bibr' target='#b45'>41]</ns0:ref>, but a short description is given here to set the stage for a description of recent changes. This type of refinement solves the PDE on a hierarchy of logically rectangular grids. One (or more) level 1 grids comprise the entire domain, while grids at finer levels are created and destroyed (as opposed to moving these grids) to follow important features in the solution.</ns0:p><ns0:p>AMRClaw includes the functionality for:</ns0:p><ns0:p>• Coordinating the flagging of points where refinement is needed, with a variety of criteria possible for flagging cells that need refinement from each level to the next finer level (including Richardson extrapolation, gradient testing, or user-specified criteria; see http://www.clawpack.org/flag.html),</ns0:p><ns0:p>• Organizing the flagged points into efficient grid patches at the next finer level, using the algorithm of <ns0:ref type='bibr' target='#b12'>[11]</ns0:ref>,</ns0:p><ns0:p>• Interpolating the solution to newly created fine grids and initializing auxiliary data (topography, wind velocity, metric data and so on) on these grids,</ns0:p><ns0:p>• Averaging fine grid solutions to coarser grids,</ns0:p><ns0:p>• Orchestrating the adaptive time stepping (i.e. sub-cycling in time),</ns0:p><ns0:p>• Interpolating coarse grid solutions to fine grid ghost cells, and</ns0:p><ns0:p>• Maintaining conservation at patch boundaries between resolution levels.</ns0:p><ns0:p>AMRClaw now allows users to specify 'regions' in space-time, $[x_1, x_2] \times [y_1, y_2] \times [t_1, t_2]$, in which refinement is forced to be at least at some level $L_1$ and is allowed to be at most $L_2$. This can be useful for constraining refinement, e.g. allowing or ensuring resolution of only a small coastal region in a global tsunami simulation. Previously the user could enforce such conditions by writing a custom flagging routine, but now this is handled in a general manner so that the parameters above can all be specified in the Python problem specification. Multiple regions can be specified, and a simple rule is used to determine the constraints at a grid cell that lies in multiple regions.</ns0:p><ns0:p>Auxiliary arrays are often used in Clawpack to store data that describes the problem, and the routine setaux must then be provided by the user to set these values each time a new grid patch is created. For some applications computing these values can be time-consuming. In Clawpack 5.2, this code was improved to allow reuse of values from previous patches at the same level where possible at each regridding time. This is backward compatible, since no harm is done if previously written routines are used that still compute and overwrite instead of checking a mask.</ns0:p><ns0:p>In Clawpack 5.3 the capability to specify spatially varying boundary conditions was added. For a single grid, it is a simple matter to compute the location of the ghost cells that extend outside the computational domain and set them appropriately.
With AMR however, the boundary condition routine can be called for a grid located anywhere in the domain, and may involve fewer or more ghost cells. For this reason, the boundary condition routines do not assume a fixed number of ghost cells.</ns0:p><ns0:p>Anisotropic refinement is allowed in both two and three dimensions. This means that the spatial and temporal refinement ratios can be specified independently from one another (as long as the temporal refinement satisfies the CFL condition). In addition, capabilities have been added to automatically select the refinement ratio in time on each level based on the CFL condition. This has only been implemented in GeoClaw, where the wave speed in the shallow water equations depends on the local depth. The finest grids are often located only in shallow coastal regions, so a large refinement ratio in space does not lead to a large refinement ratio in time.</ns0:p><ns0:p>AMRClaw has been parallelized using OpenMP directives. The main paradigm in structured AMR is an outer loop over levels of refinement, and an inner loop over all grids at that level, where the same operation is performed on each grid (i.e. taking a time step, finding ghost cells, conservation updates, etc.). This inner loop is parallelized using a parallel for loop construct: one thread is assigned to operate on one grid. Dynamic scheduling is used with a chunk size of one. To help with load balancing, grids at each level are sorted from largest to smallest, using the total number of cells in the grid as an indicator of work. In addition, grids are limited to a maximum of 32 cells in each dimension; otherwise they are bisected until this condition is met. Note that this approach causes a memory bulge.</ns0:p><ns0:p>Each thread must have its own scratch arrays to save the incoming and outgoing waves and fluxes for future conservation fix-ups. The bulge is directly proportional to the number of threads executing. For stack-based memory allocation per thread, use of the environment variable OMP_STACKSIZE to increase the limit may be necessary.</ns0:p><ns0:p>Fig. <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> shows two snapshots of the solution to a three-dimensional shock-bubble interaction problem found in the Clawpack apps repository, illustrating localized phenomena requiring adaptive refinement. In Fig. <ns0:ref type='figure' target='#fig_2'>3</ns0:ref> we show scalability tests and some timings for this example, when run on a 40 core Intel Xeon Haswell machine (E5-2670v3 at 2.3 GHz), using KMP_AFFINITY=compact with one thread per core. For timing purposes, the only modifications made to the input parameters were to turn off check-pointing and graphics output. The plot on the left shows that most of the wall clock time is in the integration routine (stepgrid), which closely tracks the total time. The second chunk of time is in the regridding, which contains algorithms that are not completely scalable. Very little time is in the filling of ghost cells, mostly from other patches but also including those at domain boundaries. The efficiency is above 80% until 24 cores, then drops off dramatically. Note that there are only two grids on level 1, and an average of 22.8 level 2 grids. Most of the work is on level 3 grids, where there are an average of 138.1 grids over all the level 3 timesteps. This is very coarse-grained for large numbers of cores (hence the dropoff in efficiency).
At 40 cores, there are less than 4 grids per core, and the grids are very different sizes.</ns0:p><ns0:p>The target architecture for AMRClaw and GeoClaw are multi-core machines. PyClaw on the other hand scales to tens of thousands of cores using MPI via PETSc <ns0:ref type='bibr' target='#b4'>[4]</ns0:ref> but is not adaptive. Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div>
<ns0:div><ns0:head n='3.5'>GeoClaw</ns0:head><ns0:p>The GeoClaw branch of Clawpack was developed to solve the two-dimensional shallow water equations over topography for modeling tsunami generation, propagation, and inundation. The AMRClaw code formed the starting point but it was necessary to make many modifications to support the requirements of this application, as described briefly below. This code originated with the work of George <ns0:ref type='bibr' target='#b21'>[19,</ns0:ref><ns0:ref type='bibr' target='#b22'>20,</ns0:ref><ns0:ref type='bibr' target='#b23'>21]</ns0:ref> and was initially called TsunamiClaw. Later it became clear that many other geophysical flow applications have similar requirements and the code was generalized as GeoClaw.</ns0:p><ns0:p>One of the major issues is the treatment of wetting and drying of grid cells at the margins of the flow. The handling of dry states in a Riemann solver is difficult to handle robustly, and has gone through several iterations. GeoClaw must also be well-balanced in order to preserve steady states, in particular the 'ocean at rest'. To achieve this, the source terms in the momentum equations arising from variations in topography are incorporated into the Riemann solver rather than using a fractional step splitting approach. This is critical for modeling waves that have very small amplitudes relative to the variations in the depth of the ocean. See <ns0:ref type='bibr' target='#b44'>[40]</ns0:ref> for a general discussion of such methods and <ns0:ref type='bibr' target='#b22'>[20,</ns0:ref><ns0:ref type='bibr' target='#b23'>21]</ns0:ref> for details of the Riemann solver used in GeoClaw. Other features of GeoClaw include the ability to solve the equations in latitude-longitude coordinates on the surface of the sphere, and the incorporation of source terms modeling bottom friction using a Manning formulation. More details about the code and tsunami modeling applications can be found in <ns0:ref type='bibr' target='#b9'>[8,</ns0:ref><ns0:ref type='bibr' target='#b45'>41]</ns0:ref>. In 2011, a significant effort took place to verify and validate GeoClaw against the US National Tsunami Hazard Mitigation Program (NTHMP) benchmarks <ns0:ref type='bibr' target='#b27'>[25]</ns0:ref>. NTHMP approval of the code allows GeoClaw to be used in hazard mapping projects that are funded by this program or other federal and state agencies, e.g. <ns0:ref type='bibr' target='#b25'>[23,</ns0:ref><ns0:ref type='bibr' target='#b26'>24]</ns0:ref>. One such project is illustrated in Fig. <ns0:ref type='figure' target='#fig_14'>4</ns0:ref>.</ns0:p><ns0:p>In addition to a variety of tsunami modeling applications, GeoClaw has been used to solve dam break problems in steep terrain <ns0:ref type='bibr' target='#b19'>[18]</ns0:ref>, storm surge problems <ns0:ref type='bibr' target='#b48'>[44]</ns0:ref> (see Fig. <ns0:ref type='figure'>5</ns0:ref>), and submarine landslides <ns0:ref type='bibr' target='#b38'>[34]</ns0:ref>. The code also formed the basis for solving the multi-layer shallow water equations for storm surge modeling <ns0:ref type='bibr' target='#b46'>[42,</ns0:ref><ns0:ref type='bibr' target='#b47'>43]</ns0:ref>, and is currently being extended further to handle debris flow modeling in the packages D-Claw <ns0:ref type='bibr' target='#b31'>[28,</ns0:ref><ns0:ref type='bibr'>22]</ns0:ref> (see Figs. 
<ns0:ref type='figure' target='#fig_16'>6 and 7</ns0:ref>).</ns0:p><ns0:p>Nearly one quarter of the files in the AMRClaw source library have to be modified for GeoClaw.</ns0:p><ns0:p>There are currently 113 files in the AMRClaw 2D library, of which 26 are replaced by a GeoClawspecific files of the same name in the GeoClaw 2D library. For example, to preserve a flat sea surface when interpolating, it is necessary to interpolate the surface elevation (topography plus water depth) rather than simply interpolating the depth component of the solution vector as would normally be done in AMRClaw. An additional 24 files in the GeoClaw shallow water equations library handle other Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>complications introduced by the need to model tsunamis and storm surge.</ns0:p><ns0:p>Several other substantial improvements in the algorithms implemented in GeoClaw have been made between versions 4.6 and 5.3.0, including:</ns0:p><ns0:p>• In depth-averaged flow, the wave speed and therefore the CFL condition depends on the depth.</ns0:p><ns0:p>As a result, flows in shallow water that have been refined spatially may not need to be refined in time. This 'variable-time-stepping' was easily added along with the anisotropic capabilities that were added to AMRClaw.</ns0:p><ns0:p>• The ability to specify topography via a set of topo files that may cover overlapping regions at different resolutions has been added. The finite volume method requires cell averages of topography, computed by integrating a piecewise bilinear function constructed from the input topo files over each grid cell. In Clawpack 5.1.0, this was improved to allow an arbitrary number of nested topo grids. When adaptive mesh refinement is used, regridding may take place every few time steps. Improvements were made in 5.2.0 so that topography could be copied rather than always being recomputed in regions where there is an existing old grid.</ns0:p><ns0:p>• The user can now provide multiple dtopo files that specify changes to the initial topography at a series of times. This is used to specify sea-floor motion during a tsunamigenic earthquake, but can also be used to specify submarine landslide motion or a failing dam, for example.</ns0:p><ns0:p>• A number of new Python modules has been developed to assist the user in working with topo and dtopo files. These are documented in the Clawpack documentation and several of them are illustrated with Jupyter notebooks found in the Clawpack Gallery.</ns0:p><ns0:p>• New capabilities were added in 5.0.0 to monitor the maximum of various flow quantities over a specified time range of a simulation. This capability is crucial for many applications where the maximum flow depth at each point, maximum current velocities in a harbor, or maximum momentum flux (a measure of the hydrodynamic force that would be exerted by the flow on a structure) is desired. Arrival time of the first wave at each point can also be monitored. Such capabilities were included in the 4.x version of the code, but were more limited and did not always perform properly near the edges of refinement patches. In Version 5.2 these routines were further improved and extended. The user can specify a grid of points on which to monitor values, and the new code is more flexible in allowing one-dimensional grids (e.g. a transect), two-dimensional rectangular grids, or an arbitrary set of points 15 .</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.6'>PyClaw</ns0:head><ns0:p>PyClaw is an object-oriented Python package that provides a convenient way to set up problems and can be used on large distributed-memory parallel machines. For the latter capability, PyClaw relies on PETSc <ns0:ref type='bibr' target='#b4'>[4]</ns0:ref>. Lower-level code (whatever gets executed repeatedly and needs to be fast) from the earlier Fortran Classic and SharpClaw codes is automatically wrapped at install time using f2py.</ns0:p><ns0:p>Recent applications of PyClaw include studies of laser light trapping by moving refractive index perturbations <ns0:ref type='bibr' target='#b55'>[51]</ns0:ref>, instabilities of weakly nonlinear detonation waves <ns0:ref type='bibr' target='#b18'>[17]</ns0:ref>, and effective dispersion of nonlinear waves via diffraction in periodic materials <ns0:ref type='bibr' target='#b37'>[33]</ns0:ref>. Two of these are depicted in Fig. <ns0:ref type='figure' target='#fig_8'>8</ns0:ref>. 15 Described in http://www.clawpack.org/fgmax.html </ns0:p></ns0:div>
<ns0:div><ns0:head n='3.6.1'>Librarization and extensibility</ns0:head><ns0:p>Scientific software is easier to use, extend, and integrate with other tools when it is designed as a library <ns0:ref type='bibr' target='#b13'>[12]</ns0:ref>. Clawpack has always been designed to be extensible, but PyClaw takes this further in several ways. First, it is distributed via a widely-used package management system, pip. Second, the default installation process ('pip install clawpack') provides the user with a fully-compiled code and does not require setting environment variables. Like other Clawpack packages, PyClaw provides several 'hooks' for users to plug in custom routines (for instance, to specify boundary conditions). In PyClaw, these routines -including the Riemann solver itself -are selected at run-time, rather than at compile-time. These routines can be written directly in Python, or (if they are performance-critical) in a compiled language (like Fortran or C) and wrapped with one of the many available tools. Problem setup (including things like initial conditions, algorithm selection, and output specification) is also performed at run-time, which means that researchers can bypass much of the slower code-compile-execute-postprocess cycle. It is intended that PyClaw be easily usable within other packages (without control of main()).</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.6.2'>Python geometry</ns0:head><ns0:p>PyClaw includes Python classes for describing collections of structured grids and data on them. These classes are also used by the other codes and VisClaw, for post-processing. A mesh in Clawpack always consists of a set of (possibly mapped) tensor-product grids (interval, quadrilateral, or hexahedral), also referred to as patches. At present, PyClaw solvers operate only on a single patch, but the geometry and grids already incorporate multi-patch capabilities for visualization in AMRClaw and GeoClaw.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.6.3'>PyClaw solvers</ns0:head><ns0:p>PyClaw includes an interface to both the Classic solvers (already described above) and those of Sharp-Claw <ns0:ref type='bibr' target='#b35'>[31]</ns0:ref>. SharpClaw uses a traditional method-of-lines approach to achieve high-order resolution in transverse shocks that arise from instabilities; see <ns0:ref type='bibr' target='#b18'>[17]</ns0:ref>. Right: Dispersion of waves in a layered medium with matched impedance and periodically-varying sound speed; see <ns0:ref type='bibr' target='#b37'>[33]</ns0:ref>.</ns0:p><ns0:p>than 120 million degrees of freedom and was run on two racks of the Shaheen I BlueGene/P supercomputer. The code has been demonstrated to scale with better than 90% efficiency in even larger tests on tens of thousands of processors on both the Shaheen I (BlueGene/P) and Shaheen II (Cray XC40) supercomputers at KAUST. A hybrid MPI/OpenMP version is already available in a development branch and will be included in future releases.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.7'>VisClaw : Visualizing Clawpack output</ns0:head><ns0:p>A practical way to visualize the results of simulations is essential to any software package for solving PDEs. This is particularly true for simulations making use of adaptive mesh refinement, since gauge data) numerically interpolated from the simulation at fixed spatial locations are most useful when compared graphically to observational data. Finally, to more thoroughly analyze the computational data, simulation data should be made available in formats that can be easily exported to GIS tools such as ArcGIS 18 or the open source alternative QGIS 19 . For exploration of preliminary results or communicating results to non-experts, Google Earth is also helpful.</ns0:p><ns0:p>The latest release of Clawpack includes many specialized VisClaw routines for handling the above issues with plotting geo-spatial data. Topography or bathymetry data that was used in the simulation will be read by the graphing routines, and, using distinct colormaps, both water and land can be viewed on the same plot. Additionally, gauge locations can be added, along with contours of water and land.</ns0:p><ns0:p>One dimensional gauge plots are also created, according to user-customizable routines. In these gauge plotting routines, users can easily include observational data to compare with GeoClaw simulation results.</ns0:p><ns0:p>In for ArcGIS or QGIS, the KML files created for Google Earth can be edited for export, along with associated PNG files to these other GIS applications.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.7.2'>Matlab plotting routines</ns0:head><ns0:p>The Matlab plotting tools available in early versions of Clawpack are still included in VisClaw.</ns0:p><ns0:p>While most of the one and two dimensional capabilities available originally in the Matlab suite have been ported to Python and matplotlib, the original Matlab routines are still available in the Matlab suite of plotting tools. Other plotting capabilities, such as two dimensional manifolds embedded in three dimensional space, or three dimensional plots of fully three-dimensional data are only available in the Matlab routines in a way that interfaces directly with Clawpack. More advanced three-dimensional plotting capabilities are planned for future releases of VisClaw.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>Conclusions</ns0:head><ns0:p>Clawpack has evolved over the past 20 years from its genesis as a small and focused software package Manuscript to be reviewed Scaling Results for AMRClaw Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:note type='other'>Computer Science</ns0:note></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Oso, WA Landslide Simulation showing transverse shocks that arise from instabilities; see <ns0:ref type='bibr' target='#b18'>[17]</ns0:ref>. Right: Dispersion of waves in a layered medium with matched impedance and periodically-varying sound speed; see <ns0:ref type='bibr' target='#b37'>[33]</ns0:ref>.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:02:9313:1:1:NEW 14 May 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1: An illustration showing grid cells on levels one and two, and only grid outlines on levels three and four.</ns0:figDesc><ns0:graphic coords='11,139.68,37.60,311.05,233.28' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2: AMRClaw example demonstrating a shock-bubble interaction in the Euler equations of compressible gas-dynamics at two times, illustrating the need for adaptive refinement to capture localized behavior. There are two 20 × 10 × 10 grids at level 1. They are refined where needed by factors of 4 and then 2 in this 3-level run.</ns0:figDesc><ns0:graphic coords='12,86.88,75.40,200.88,150.66' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3: Left is strong scaling results for the AMRClaw example shown in Fig. 2. Right is plot of efficiency based on total computational time.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>and call the algorithms of Clawpack. It grew from what was initially a set of data structures and file IO routines that are used by the other Clawpack codes and by VisClaw. These routines were released in an early form in later 4.x versions of Clawpack. Those releases also included a fullyfunctional implementation of the 1D classic algorithm in pure Python. That implementation still exists in PyClaw and is useful for understanding the algorithm. The current release of PyClaw includes access to the classic algorithms as well as the high-order algorithms introduced in SharpClaw [32] (i.e., WENO reconstruction and Runge-Kutta integrators)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4 :Figure 5 :</ns0:head><ns0:label>45</ns0:label><ns0:figDesc>Figure 4: Gray's Harbor showing Westport, WA on southern peninsula. (Google map data and image, 2016.) (b) Simulation of a potential magnitude 9 Cascadia Subduction Zone event, 40 minutes after the earthquake. (c) Design for new Ocosta Elementary School in Westport, based in part on GeoClaw simulations [23]. Image courtesy of TCF Architecture.</ns0:figDesc><ns0:graphic coords='15,94.32,207.80,401.76,301.32' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6: (a) Photograph of the 2010 Mt. Meager debris-flow deposit, from [2]. (b) Simulated debris flow, from D. George.</ns0:figDesc><ns0:graphic coords='17,201.10,259.97,188.19,158.40' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7: Observed (yellow line) and computed (blue) landslide at Oso, WA in 2014 [29].</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>3. 6 . 4 Parallelism</ns0:head><ns0:label>64</ns0:label><ns0:figDesc>PyClaw includes a distributed parallel backend that uses PETSc through the Python wrapper petsc4py.The parallel code uses the same low-level routines without modification. In the high-level routines, only a few hundred lines of Python code deal explicitly with parallel communication, in order to transfer ghost cell information between subdomains and to find the global maximum CFL number in order to adapt the time step size. For instance, the computation shown in the right part of Fig.8involved more16 http://github.com/memmett/PyWENO</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8: Left: A two-dimensional detonation wave solution of the reactive Euler equations, showing transverse shocks that arise from instabilities; see<ns0:ref type='bibr' target='#b18'>[17]</ns0:ref>. Right: Dispersion of waves in a layered medium with matched impedance and periodically-varying sound speed; see<ns0:ref type='bibr' target='#b37'>[33]</ns0:ref>.</ns0:figDesc><ns0:graphic coords='18,82.26,75.40,207.30,156.24' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>17 https://github.com/jakevdp/JSAnimation3.7.1 Tools for visualizing geo-spatial data produced by GeoClawThe geo-spatial data generated by GeoClaw has particular visualization requirements. Tsunami or storm surge simulations are most useful when the plots showing inundation or flooding levels are overlaid onto background bathymetry or topography. Supplementary one dimensional time series data (e.g.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>that two core developers could manage without version control. It is now an ecosystem of related projects that share a core philosophy and some common code (notably Riemann solvers and visualization tools), but that are aimed at different user communities and that are developed by overlapping but somewhat distinct groups of developers scattered at many institutions. The adoption of better software engineering practices, in particular the use of Git and GitHub as an open development platform and the use of pull requests to discuss proposed changes, has been instrumental in facilitating the development of many of the new capabilities summarized in this paper. These developer facing improvements of course affect the user as well since better and faster development cycles means better and faster implementation of features. The user facing features already implemented in version 5 have opened up the use of Clawpack to a broader audience.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>1 Figure 1 :Figure 2 :</ns0:head><ns0:label>112</ns0:label><ns0:figDesc>Figure 1: An illustration showing grid cells on levels one and two, and only grid outlines on levels three and four.</ns0:figDesc><ns0:graphic coords='24,42.52,199.12,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 3 (</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3(on next page)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3: Left is strong scaling results for the AMRClaw example shown in Fig. 2. Right is plot of efficiency based on total computational time.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: Gray's Harbor showing Westport, WA on southern peninsula. (Google map data and image, 2016.) (b) Simulation of a potential magnitude 9 Cascadia Subduction Zone event, 40 minutes after the earthquake. (c) Design for new Ocosta Elementary School in Westport, based in part on GeoClaw simulations [23]. Image courtesy of TCF Architecture.</ns0:figDesc><ns0:graphic coords='28,42.52,280.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure 5 (Figure 5 :</ns0:head><ns0:label>55</ns0:label><ns0:figDesc>Figure 5(on next page)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>Figure 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6: (a) Photograph of the 2010 Mt. Meager debris-flow deposit, from [?]. (b) Simulated debris flow, from D. George.</ns0:figDesc><ns0:graphic coords='31,42.52,229.87,525.00,171.75' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head>Figure 7 :Figure 8 :</ns0:head><ns0:label>78</ns0:label><ns0:figDesc>Figure 7: Observed (yellow line) and computed (blue) landslide at Oso, WA in 2014 [29].</ns0:figDesc><ns0:graphic coords='32,42.52,204.37,525.00,441.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='16,72.00,75.40,446.41,199.99' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,280.87,525.00,196.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='30,0.00,0.00,540.00,241.92' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='33,42.52,280.87,525.00,199.50' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>11 , a data repository hosted at CERN that issues DOIs so that the software version can be cited with a</ns0:figDesc><ns0:table /><ns0:note>10 For a guide on creating a DOI to a particular version of software see http://guides.github.com/activities/ citable-code/11 https://zenodo.org 7 PeerJ Comput. Sci. reviewing PDF | (CS-2016:02:9313:1:1:NEW 14 May 2016)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>2. Whereas most existing codes for hyperbolic PDEs use Riemann solvers to compute fluxes, Clawpack Riemann solvers instead compute the waves (or discontinuities) that make up the Riemann solution. In the unsplit algorithm, Clawpack also makes use of transverse Riemann solvers, responsible for computing transport between cells that are only corner (in 2d) or edge (in 3d) adjacent.For nonlinear systems, the exact solution of the Riemann problem is computationally costly and may involve both discontinuities (shocks and contact waves) and rarefactions. It is almost always preferable to employ inexact Riemann solvers that approximate the solution using discontinuities only, with an appropriate entropy condition. The solvers available in Clawpack are all approximate solvers, although one could easily implement their own exact solver and make it available in the format needed by Clawpack routines.A common feature in all packages in the Clawpack suite is the use of a standard interface for Fortran Riemann solver routines. This ensures that new solvers or solver improvements developed for one package can immediately be used by all packages. To further facilitate this sharing and to avoid duplication, Riemann solvers are (with rare exceptions) not maintained under the other packages but are collected in a single repository named riemann. Users who develop new solvers for Clawpack are encouraged to submit them to the Riemann repository.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head /><ns0:label /><ns0:figDesc>potential user base from making use of the Clawpack software. The one and two dimensional plotting routines were converted from Matlab to matplotlib, a popular open source Python package for producing publication quality graphics for one and two dimensional data<ns0:ref type='bibr' target='#b30'>[27]</ns0:ref>.With the development of Clawpack Version 5 and above, Python graphics tools have been collected into the VisClaw repository. The VisClaw tools extend the functionality of the Version 4.x Python routines for creating one and two dimensional plots, and adds several new capabilities. Chief among these are the ability to generate output to webpages, where a series of plots can be viewed individually or as an animated time sequence using the Javascript package17 (which was motivated by code in an earlier version of Clawpack). The VisClaw module Iplotclaw provides interactive plotting capabilities from the Python or IPython prompt. Providing much of the same interactive capabilities as the original Matlab routines, Iplotclaw allows the user to step, interactively, through a time sequence of plots, jump from one frame to another, or interactively explore data from the current time frame.</ns0:figDesc><ns0:table /><ns0:note>most available visualization packages do not have tools that conveniently visualize hierarchical AMR data. VisClaw provides support for all of the main Clawpack submodules, including ClassicClaw, AMRClaw, PyClaw and GeoClaw.From the first release in 1994, Clawpack has included tools for visualizing the output of Clawpack and AMRClaw runs. Up until the release of version Clawpack 4.x, these visualization tools consisted primarily of Matlab routines for creating one, two and three dimensional plots including pseudo-color plots, Schlieren plots, contour plots and scatter plots, including radially or spherically symmetric data.Built-in tools were also available for handling one, two and three-dimensional mapped grids. Starting with version 4.x, however, it was recognized that a reliance on proprietary software for visualization prevented a sizable</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head /><ns0:label /><ns0:figDesc>addition to HTML and Latex formats available for all Clawpack results, VisClaw will now also produce KML and KMZ files suitable for visualizing results in Google Earth. Using the same matplotlib graphics routines, VisClaw creates PNG files that can be used as GroundOverlay features in a KML file. Other features, such as gauges, borders on AMR grids, and user specified regions can also be shown on Google Earth. All KML and PNG files are compressed into a single KMZ file that can be opened directly in Google Earth or made available on-line. While VisClaw does not have any direct support</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:note place='foot' n='1'>Details of these changes can be found at http://depts.washington.edu/clawpack/users-4.6/changes.html. Version 4.x used svn version control and the freely available software (under the BSD license) was distributed via tarballs.</ns0:note>
<ns0:note place='foot' n='2'>https://ccse.lbl.gov/BoxLib/index.html</ns0:note>
<ns0:note place='foot' n='7'>https://travis-ci.org/ 8 http://coveralls.io 9 https://nose.readthedocs.org 6 PeerJ Comput. Sci. reviewing PDF | (CS-2016:02:9313:1:1:NEW 14 May 2016) Manuscript to be reviewed Computer Science</ns0:note>
<ns0:note place='foot' n='12'>http://www.clawpack.org/changes.html 13 http://docs.scipy.org/doc/numpy-dev/f2py</ns0:note>
<ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2016:02:9313:1:1:NEW 14 May 2016)Manuscript to be reviewed</ns0:note>
<ns0:note place='foot' n='18'>http://www.arcgis.com<ns0:ref type='bibr' target='#b21'>19</ns0:ref> http://www.qgis.org</ns0:note>
<ns0:note place='foot' n='20'>http://docs.enthought.com/mayavi/mayavi/ 21 https://visit.llnl.gov 22 http://www.paraview.org/ 23 http://yt-project.org/</ns0:note>
</ns0:body>
" | "Clawpack Developers
May 12, 2016
To the editors and reviewers of “The Clawpack 5.X Software”,
First the authors would like to thank the editor and both reviewers for their
helpful comments. We feel the suggestions have lead to a more thorough presentation as well as a cleaner explanation of many points that are crystal clear to
those who are neck deep and completely opaque to others who have not waded
so far into the deep end.
Editors Comments
• At the moment the title implies the paper is simply a description of the
software, so it needs changing to something more descriptive. Perhaps
something along the lines of “The Clawpack Software: Building an Open
Source Ecosystem for Solving Nonlinear PDEs”. I would be inclined not
to put the version number in the tile.
We agree with the editor, this title is more reflective of the paper’s content
and we have changed the title accordingly.
• Page 5, line 4 of sec 2.2: Git submodules are new to me. Can you add a
brief explanation of what they are?
Additional text and a revision have hopefully provided more detail regarding the use of git submodules.
• Page 6, line 8 of sec 2.3: “ensuring that past versions of the software remain available on a stable and citable platform”. How is that done? I’m
not sure if you say so later.
Details of this including specifics of the use of Zonodo and DOIs through
github have been added.
The following have been corrected as requested:
• Page 1, line -10 of section 1.1: why are both [34], with its URL, and the
explicit URL given?
• Page 1, line -3 of section 1.1: “with many of the changes from 4.3 to 4.6”
appears to be part of an incomplete sentence, or at least one in need of
rewriting.
Reviewer Anders Logg:
• On p. 2: What is q and what is f? And how does f (typically) depend
on q (one or two examples). This is clear for most readers but please add
one or two sentences to give a gentle introduction to non-specialists.
Additional clarification text was added that hopefully illuminates the situation.
• Some of the packages, e.g. PyCLAW on p. 3 and AMRCLAW on p. 4 are
mentioned before it is made clear on p. 4-5 that these are separate packages in the collection that make up CLAWPACK. Please clarify earlier.
The text has been slightly rearranged to avoid this.
• A diagram could be added to explain the relationships between the packages listed on p. 4-5.
We created a draft diagram but decided it was not really all that helpful
given the space it took up. Instead we attempted to clarify in the text the
relationships.
• The documentation repository is hosted on clawpack.github.com whereas
the rest is hosted on github.com/clawpack/. This looks confusing to me.
Clarification on this point was added.
• Line 198: Should it be “Travis CI”, not only “Travis”? Or is Travis the
team member responsible for running the regression tests?
Instead of forcing a poor graduate student to adopt the name “Travis” we
decided it was wiser to give better clarification on our use of the Travis
CI service.
• Figures 2 and 3 are small and very difficult to read.
We have gone back to the original versions and provided significantly better resolution figures.
• Remove FIXME/margin note on p. 14.
Yeah, that was stupid, blame the lead author.
The following have been corrected as requested:
2
• Add a comma on line 163 (following “repositories”) for increased readability.
• Top paragraph on p. 10: Somewhat confusing, seems to contrast AMR
with MPI (“... on the other hand does not include AMR but uses MPI...”).
Please reword.
Reviewer Markus Blatt:
A note regarding the reviewers concern regarding formatting, we were informed
that the editorial staff of PeerJ would assist with formatting issues once the
article was accepted. If there are outstanding formatting issues beyond what
the PeerJ staff would accommodate please let us know.
• The introduction is missing a list of related efforts/software packages that
are comparable to (parts of) ClawPack. This should be added.
A number of similar packages has been added along with appropriate citations.
• Furthermore a section should be added that lists all scientific software
that is used/needed by ClawPack together with references. Currently,
only PETSc and matplotlib are cited. According to the website at least
NumPy is needed/recommend and according to the source matlab seems
to be also used if it is found. These two should be cited.
Additional citations were added as appropriate.
• The text in the pictures of Figure 5 cannot be read on a laptop screen.
Maybe increasing the size of the text or the pictures themselves would fix
this. Some of the pictures seem to be or use third party pictures. The
author should check whether their license allows the distribution under
CC-BY. Some of the pictures (e.g. the maps in Figure 3 and 5 might need
a copyright attribute added.
Figure 5’s resolution has been increased and the text made more clear.
Additionally some of the figures have been removed from figures 3 and 5
or or distribution rights confirmed and/or cited.
• lines 27–28 (Introduction): I think it would be nice to list one/some of
the major changes that will make upgrading to the new user worthwhile
or make ClawPack more unique.
Text added to address this.
3
• lines 563–570 (Conclusion): Similar to above, I am missing a short sentence why upgrading is worthwhile for a user. The current conclusion only
takes into account the view of developers.
Text in the conclusion added to also reiterate this.
• lines 216 - 223: In the abstract it says that clawpack has been developed
as open source for 20 years. Here it reads like the software has been made
open source recently as you expect it to increase the developer community.
Please clarify!
Hopefully this has been clarified via the additional text.
• line 358: “..., where some operation ...” Is it really some (might be different per thread) or the same operation?
This was clarified.
• lines 357 - 362: “The main ... patch at a time” Application of parallel for
is not clear to me. First it seems like parallelization will be over the
patches. Then you say that each iteration corresponds to a grid level
(shouldn’t that be a patch?). Afterwards you say you assign one patch to
each thread. It might also be that my understanding of a grid is different
from the author’s. Please clarify, currently the parallelization strategy is
not clear to me.
Additional text has been added that hopefully clarifies this point.
• lines 370 - 371: 40 cores or 20 cores/40 threads? Which exact processor
model did you use?
Text has been added and clarification about the testing hardware was
added.
• line 376: “The efficiency is above 80% until 24 cores, then drops...” I
would have expected it drop for ¿20 threads as two threads share one
core. Might the preservation of efficiency be due to the ordering of the
patch by size? Do you have an explanation?
The test hardware does not use hyperthreading and does contain 40 cores.
Text to clarify this was added.
• lines 377–380: I don’t understand these sentences about level n grids.
Does two level 1 grids mean that 2 cells on level 0 got refined? Maybe the
4
meaning/definition of a level n grid could/should be stated.
Clarification text was added.
• Figure 3: The text in the left two pictures is hardly readable on a laptop
screen. Maybe making the picture larger would help. If these are 3rd
party pictures, please check whether they can be distributed via CC-BY
License. At least for the map a reference is missing.
Figure 3 contains original content from the authors and has been made
higher resolution.
• Figure 5: Please double check whether you are allowed to distribute the
pictures under CC-BY License and whether they need a reference to be
added.
Distribution rights were checked and addressed.
• lines 480-482: Is the definition of patch here the same as in the description
of the OpenMP parallelization above (lines 357–362)? If this is the case
then the mesh-patch equivalence should be mentioned there.
Clarification text regarding the patches vs. mesh was added.
• Page 14: There is an annotation in the original PDF.
Yeah, again, blame the lead author on that one.
The following have been corrected as requested:
• The referencing style for figures should be used consistently. Figure 1 is
referenced as “Figure 1” while the rest is referenced as “Fig. 2”, etc.
• lines 49–50: Reference [34] contains the URL. Therefore the last sentence
(lines 49-50) is superfluous.
• line 227: Citation of PETSc reference missing.
• line 354: “GEOCLAW. where” has a superfluous dot.
• line 368: “Figure 1” -¿ “Fig. 1” for consistency.
• line 377: “Note that there are only 2 level 1” -¿ “Note that there are only
two level 1”
• line 382: Citation of PetSc missing.
• Mention improvement of AMR discussion
5
– Fixed some of the figures
– Added more explanation on AMR and added figure
– Parallel explanations
• Added some open source details
• Cleaned up parallelism architectures
• Cite PETSc - line 227, all PETSc references are included
• Add text regarding ecosystem
• Diagram of package dependency - added better explanation in text rather
than diagram (tried, did not seem to work well)
Sincerely,
Kyle T. Mandli,
Aron J. Ahmadia,
Donna Calhoun,
David George,
Yiannis Hadjimichael,
David I. Ketcheson,
Grady I. Lemoine, and
Randall J. LeVeque
6
" | Here is a paper. Please give your review comments after reading it. |
211 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Bengali is a low-resource language that lacks tools and resources for various natural language processing (NLP) tasks, such as sentiment analysis and profanity identification.</ns0:p><ns0:p>In Bengali, only translated versions of English sentiment lexicons are available.</ns0:p><ns0:p>Besides, no lexicon exists for detecting profanity in Bengali social media text. In this work, we introduce a Bengali sentiment lexicon, BengSentiLex, and a Bengali swear lexicon, BengSwearLex. To build BengSentiLex, we propose a cross-lingual methodology that utilizes a machine translation system, a review corpus, English sentiment lexicons, pointwise mutual information (PMI), and supervised machine learning (ML) classifiers in different steps. For creating BengSwearLex, we introduce a semi-automatic methodology that leverages an obscene corpus, word embedding, and part-of-speech (POS) taggers. We compare the performance of BengSentiLex with the translated English lexicons on three evaluation datasets. BengSentiLex achieves a 5%-50% improvement over the translated lexicons. For profanity detection, BengSwearLex achieves document-level coverage of around 85% in the evaluation dataset. The experimental results imply that BengSentiLex and BengSwearLex are effective at classifying sentiment and identifying profanity in Bengali social media content, respectively.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>The popularity of e-commerce and social media has surged the availability of user-generated content.</ns0:p><ns0:p>Therefore, text analysis tasks such as sentiment classification and profanity or abusive content identification have received significant attention in recent years. Sentiment analysis identifies emotions, attitudes, and opinions expressed in a text <ns0:ref type='bibr' target='#b35'>(Liu, 2012)</ns0:ref>. Extracting insights from user feedback data has practical implications such as market research, customer service, and result prediction. Profanity indicates the use of taboo or swear words to express emotional feelings and is prevalent in social media data (e.g., online posts, messages, comments) <ns0:ref type='bibr' target='#b61'>(Wang et al., 2014)</ns0:ref> across languages. The occurrences of swearing or vulgar words are often linked with abusive or hateful contexts, sexism, and racism. Hence, identifying swear words has practical connections to understanding and monitoring online content. In this paper, we use profanity, slang, and swearing interchangeably.</ns0:p><ns0:p>A lexicon plays an important role in both sentiment classification and profanity identification. For example, sentiment lexicons help to analyze key subjective properties of texts such as opinions and attitudes <ns0:ref type='bibr' target='#b55'>(Taboada et al., 2011)</ns0:ref>. A sentiment lexicon contains opinion-conveying terms (e.g., words, phrases), labeled with the sentiment polarity (i.e., positive or negative) and the strength of the polarity. Some examples of positive sentiment words are beautiful, wonderful, and amazing, which express desired states or qualities, while negative sentiment words, such as bad, awful, and poor, are used to represent undesired states. The profane list, on the other hand, contains words having foul, filthy, and profane meanings <ns0:ref type='bibr'>(e.g., ass, fuck, bitch)</ns0:ref>. When labeled data are unavailable, sentiment classification methods usually utilize opinion-conveying words and a set of linguistic rules. As this approach relies on the polarity of the individual words, it is crucial to build a comprehensive sentiment lexicon. Similarly, a swear lexicon is instrumental for determining profanity in a text.</ns0:p><ns0:p>As sentiment analysis is a well-studied problem in English, many general-purpose and domain-specific sentiment lexicons are available. Some of the popular general-purpose sentiment lexicons are MPQA <ns0:ref type='bibr' target='#b63'>(Wilson et al., 2005)</ns0:ref>, the opinion lexicon <ns0:ref type='bibr' target='#b27'>(Hu and Liu, 2004)</ns0:ref>, SentiWordNet <ns0:ref type='bibr' target='#b21'>(Esuli and Sebastiani, 2006)</ns0:ref>, and VADER <ns0:ref type='bibr' target='#b29'>(Hutto and Gilbert, 2014)</ns0:ref>. Besides English, other widely used languages such as Chinese, Arabic, and Spanish have their own sentiment lexicons <ns0:ref type='bibr' target='#b66'>(Xu et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b38'>Mohammad et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b44'>Perez-Rosas et al., 2012)</ns0:ref>. The presence of swearing in English social media has been investigated by various researchers <ns0:ref type='bibr' target='#b61'>(Wang et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b39'>Pamungkas et al., 2020)</ns0:ref>. <ns0:ref type='bibr' target='#b61'>Wang et al.
(2014)</ns0:ref> found that the rate of swear word use on English Twitter is 1.15%, almost double its rate in daily conversation (0.5%-0.7%) as observed in previous work <ns0:ref type='bibr' target='#b31'>(Jay, 1992)</ns0:ref>. The work of <ns0:ref type='bibr' target='#b61'>Wang et al. (2014)</ns0:ref> also reported that 7.73% of the tweets in their random sampling collection contain swear words.</ns0:p><ns0:p>Although Bengali is the seventh most spoken language in the world, sentiment analysis and profanity identification in Bengali are still in their infancy. Limited research has been conducted on sentiment analysis in Bengali in the last two decades utilizing supervised machine learning (ML) techniques <ns0:ref type='bibr' target='#b52'>(Sazzed and Jayarathna, 2019;</ns0:ref><ns0:ref type='bibr' target='#b18'>Das and Bandyopadhyay, 2010a;</ns0:ref><ns0:ref type='bibr' target='#b16'>Chowdhury and Chowdhury, 2014;</ns0:ref><ns0:ref type='bibr' target='#b49'>Sarkar and Bhowmick, 2017;</ns0:ref><ns0:ref type='bibr' target='#b46'>Rahman and Kumar Dey, 2018)</ns0:ref>, as these techniques do not require language-specific resources such as a sentiment lexicon, part-of-speech (POS) tagger, or dependency parser. Regarding profanity identification, although a few works addressed abusive content analysis, none of them focused on determining profanity or generating resources for identifying profanity.</ns0:p><ns0:p>There have been a few attempts to develop sentiment lexicons for Bengali by translating various English sentiment dictionaries. <ns0:ref type='bibr' target='#b19'>Das and Bandyopadhyay (2010b)</ns0:ref> utilized a word-level lexical-transfer technique and an English-Bengali dictionary to develop a SentiWordNet for Bengali from the English SentiWordNet. <ns0:ref type='bibr' target='#b7'>Amin et al. (2019)</ns0:ref> translated the VADER <ns0:ref type='bibr' target='#b29'>(Hutto and Gilbert, 2014)</ns0:ref> sentiment lexicon to Bengali for sentiment analysis. However, dictionary-based translation cannot capture the informal language people use in everyday communication or social media. Regarding vulgar or swear words, there exist no resources in Bengali which can identify profanity in social media data. Therefore, in this work, we focus on generating resources for these two essential tasks.</ns0:p><ns0:p>To develop the Bengali sentiment lexicon BengSentiLex, we present a corpus-based cross-lingual methodology. We collected around 50000 Bengali drama reviews from YouTube; among them, we manually annotated around 12000 reviews <ns0:ref type='bibr' target='#b50'>(Sazzed, 2020a)</ns0:ref>. Our proposed methodology consists of three phases, where each phase identifies sentiment words from the corpus and includes them in the lexicon. In phase 1, we identify sentiment words from the Bengali review corpus (both labeled and unlabeled) with the help of two English sentiment lexicons, Bing Liu's opinion lexicon and VADER. In phase 2, utilizing around 12000 annotated reviews and PMI, we identify the top class-correlated (positive or negative) words.</ns0:p><ns0:p>Using the POS tagger, we determine adjectives and verbs, which mainly convey opinions. In the final phase, we make use of the unlabeled reviews to recognize polar words. Utilizing the labeled reviews as training data, we determine the class of the unlabeled reviews. We then follow steps similar to those of phase 2 to identify sentiment words from these pseudo-labeled reviews.
All three phases are followed by a manual validation and synonym generation step. Finally, we show the effectiveness of our developed lexicon on three evaluation datasets. This work is an extended version of our previous work <ns0:ref type='bibr' target='#b51'>(Sazzed, 2020b)</ns0:ref>.</ns0:p><ns0:p>To construct the Bengali swear lexicon, BengSwearLex, we propose a corpus-based semi-automatic approach. However, unlike the methodology used for sentiment lexicon creation, this approach does not use any cross-lingual resources, as machine translation is not capable of translating language-specific swear terms. From an existing Bengali obscene corpus, utilizing word embedding and POS tagging, we create BengSwearLex. To show the efficacy of BengSwearLex for identifying profanity, we annotate a negative drama review corpus into profane and non-profane categories based on the presence of swear terms. We find that BengSwearLex successfully identifies 85.5% of the profane reviews from the corpus.</ns0:p></ns0:div>
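<ns0:p>To make the preceding description concrete, the following is a minimal sketch of how the PMI-based class correlation of phase 2 could be computed over the labeled reviews. The function and variable names are illustrative assumptions, not identifiers from the released code, and the rare-word cutoff is likewise an assumption.</ns0:p>

```python
import math
from collections import Counter

def pmi_sentiment_intensity(reviews, labels, min_count=5):
    """Score each word by SI = PMI(w, pos) - PMI(w, neg), computed from
    document-level presence counts; a large positive SI suggests a
    positive opinion word, a large negative SI a negative one."""
    word_class = Counter()            # (word, class) document counts
    word_total = Counter()            # word document counts
    class_total = Counter(labels)     # documents per class
    n_docs = len(reviews)
    for tokens, label in zip(reviews, labels):
        for w in set(tokens):         # presence per document, not raw frequency
            word_class[(w, label)] += 1
            word_total[w] += 1
    scores = {}
    for w, wc in word_total.items():
        if wc < min_count:            # illustrative rare-word cutoff
            continue
        si = 0.0
        for label, sign in (("pos", 1.0), ("neg", -1.0)):
            joint = word_class[(w, label)]
            if joint:                 # skip classes the word never appears in
                pmi = math.log2((joint / n_docs) /
                                ((wc / n_docs) * (class_total[label] / n_docs)))
                si += sign * pmi
        scores[w] = si
    return scores
```

<ns0:p>The semi-automatic construction of BengSwearLex can similarly be pictured as seed expansion through embedding neighbourhoods trained on the obscene corpus. The sketch below uses gensim's Word2Vec; the hyperparameters and the manually chosen seed list are assumptions rather than the exact published setup, and the returned candidates would still pass through manual validation.</ns0:p>

```python
from gensim.models import Word2Vec

def expand_swear_seeds(tokenized_docs, seed_words, topn=20):
    """Train word vectors on the obscene corpus and propose embedding
    neighbours of a small manual seed list as swear-word candidates."""
    model = Word2Vec(sentences=tokenized_docs, vector_size=100,
                     window=5, min_count=3, epochs=10)
    candidates = set()
    for seed in seed_words:
        if seed in model.wv:
            for word, _sim in model.wv.most_similar(seed, topn=topn):
                candidates.add(word)
    return candidates - set(seed_words)
```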
<ns0:div><ns0:head n='1.1'>Motivation and Challenges</ns0:head><ns0:p>Since the existing Bengali sentiment dictionaries lack words people use in informal and social communication, it is necessary to build such a sentiment lexicon in Bengali. With the rapid growth of user-generated Bengali content on social media and the web, the presence of inappropriate content has become an issue. Content that is not in line with the social norms and expectations of a community needs to be censored, which motivates building a swear lexicon for Bengali. Some of the challenges of developing a sentiment lexicon in Bengali are:</ns0:p><ns0:p>1. One of the popular techniques to create a lexicon is to utilize corpora to extract opinion-conveying words. However, the Bengali language lacks such a corpus. Thus, we have to collect and annotate a large corpus.</ns0:p><ns0:p>2. One of the important tools for identifying opinion words is a sophisticated part-of-speech (POS) tagger; however, no such tagger exists in Bengali. Thus, we leverage an English POS tagger utilizing machine translation (a small sketch of this translate-then-tag step is given after this list).</ns0:p></ns0:div>
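<ns0:p>The translate-then-tag workaround for challenge 2 can be illustrated as follows: each Bengali candidate word is machine-translated, and the English translation is POS-tagged, keeping words whose translations are adjectives or verbs. Here `translate_bn_to_en` is a hypothetical stand-in for whatever MT service is used, and the NLTK tagger assumes its `punkt` and `averaged_perceptron_tagger` resources have been downloaded.</ns0:p>

```python
import nltk  # assumes nltk.download('punkt') and
             # nltk.download('averaged_perceptron_tagger') have been run

def pos_filter_via_translation(bengali_words, translate_bn_to_en):
    """Keep candidate words whose English translations contain an
    adjective or verb, the POS classes that mostly carry opinion."""
    kept = []
    for w in bengali_words:
        en = translate_bn_to_en(w)    # hypothetical MT call
        if not en:
            continue
        tags = nltk.pos_tag(nltk.word_tokenize(en))
        # JJ* = adjectives, VB* = verbs in the Penn Treebank tagset
        if any(tag.startswith(("JJ", "VB")) for _tok, tag in tags):
            kept.append(w)
    return kept
```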
<ns0:div><ns0:head n='1.2'>Contributions</ns0:head><ns0:p>Our main contributions in this paper can be summarized as follows:</ns0:p><ns0:p>• We introduce two lexical resources: BengSentiLex, a Bengali sentiment lexicon consisting of over 1200 opinion words created from a Bengali review corpus, and BengSwearLex, a Bengali swear lexicon comprising about 200 swear words. We have made both lexicons publicly available for researchers.</ns0:p><ns0:p>• We show how a machine translation based cross-lingual approach, labeled and unlabeled reviews, and English sentiment lexicons can be utilized to build a sentiment lexicon in Bengali.</ns0:p><ns0:p>• We present a semi-automatic methodology for developing a swear lexicon utilizing an obscene corpus and various natural language processing tools.</ns0:p><ns0:p>• We demonstrate that BengSentiLex and BengSwearLex are effective at sentiment classification and profane term detection compared to existing tools.</ns0:p><ns0:p>The rest of the paper is structured as follows: section 2 reviews the related literature; section 3 explains the corpus creation and annotation process; section 4 describes the cross-lingual resources used for Bengali lexicon generation and presents the lexicon construction methodology; sections 5 and 6 provide experimental results and discussion; finally, section 7 concludes and provides future directions.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>RELATED WORK OF SENTIMENT LEXICON CREATION</ns0:head><ns0:p><ns0:ref type='bibr' target='#b35'>Liu (2012)</ns0:ref> categorized sentiment lexicon generation methods into three approaches: the manual approach, the dictionary-based approach, and the corpus-based approach. Considerable time and resources are needed for the manual approach, as the annotation is performed by humans; the dictionary-based methods usually start with a set of seed words, which are created manually and then expanded using a dictionary; the corpus-based techniques utilize both manually labeled seed words and corpus data.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1'>Corpus-based lexicon generation in English</ns0:head><ns0:p>Since the proposed sentiment lexicon creation framework is corpus-based, we mention only the works that utilize a corpus for lexicon creation. <ns0:ref type='bibr' target='#b28'>Huang et al. (2014)</ns0:ref> proposed a label propagation-based method for generating a domain-specific sentiment lexicon. In their work, the candidate sentiment terms are extracted by leveraging chunk dependency information and a prior generic sentiment dictionary. They defined pairwise contextual and morphological constraints and incorporated them into the label propagation. Their experimental results suggested that constrained label propagation can improve the automatic construction of a domain-specific sentiment lexicon. <ns0:ref type='bibr' target='#b25'>Han et al. (2018)</ns0:ref> proposed a domain-specific lexicon generation method from an unlabeled corpus utilizing mutual information and part-of-speech (POS) tags. Their lexicon shows satisfactory performance on publicly available datasets. <ns0:ref type='bibr' target='#b57'>Tai and Kao (2013)</ns0:ref> proposed a graph-based label propagation algorithm to generate a domain-specific sentiment lexicon. Their approach considers words as nodes and similarities as weighted edges of a word graph. Using graph-based label propagation, they assigned polarity to unlabeled words. They conducted experiments on a Twitter dataset and achieved better performance than general-purpose sentiment dictionaries.</ns0:p></ns0:div>
<ns0:div><ns0:p><ns0:ref type='bibr' target='#b60'>Wang and Xia (2017)</ns0:ref> developed a neural architecture to train a sentiment-aware word embedding. To enhance the quality of the word embedding as well as the sentiment lexicon, they integrated sentiment supervision at both the document and word levels. They performed experiments on the SemEval 2013-2016 datasets using their sentiment lexicon and obtained the best performance in both supervised and unsupervised sentiment classification tasks. <ns0:ref type='bibr' target='#b24'>Hamilton et al. (2016)</ns0:ref> constructed a domain-sensitive sentiment lexicon using label propagation algorithms and small seed sets. They showed that their corpus-based approach outperformed methods that rely on hand-curated resources such as WordNet. <ns0:ref type='bibr' target='#b64'>Wu et al. (2019)</ns0:ref> presented an automatic method for building a target-specific sentiment lexicon. Their lexicon consists of opinion pairs made from an opinion target and an opinion word. Their unsupervised algorithms first extract high-quality opinion pairs; then, utilizing a general-purpose sentiment lexicon and contextual knowledge, they calculate sentiment scores of the opinion pairs. They applied their method to several product review datasets and found that their lexicon outperformed several general-purpose sentiment lexicons. <ns0:ref type='bibr' target='#b12'>Beigi and Moattar (2021)</ns0:ref> presented an automatic domain-specific sentiment lexicon construction method for unsupervised domain adaptation and sentiment classification. The authors first constructed a sentiment lexicon from the source domain using the labeled data. In the next phase, the weights of the first hidden layer of a multilayer perceptron (MLP) are set to the corresponding polarity score of each word from the developed sentiment lexicon, and then the network is trained. Finally, a domain-independent lexicon (DIL) is introduced that contains words with static positive or negative scores. The experiments on Amazon multi-domain sentiment datasets showed the superiority of their approach over the existing unsupervised domain adaptation methods.</ns0:p></ns0:div>
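<ns0:p>Several of the works above share a graph-propagation core. The following generic sketch conveys only the flavor of that idea (a damped random walk over a word-similarity graph seeded with a few labeled words); it is not any cited author's exact algorithm, and the damping factor and iteration count are arbitrary.</ns0:p>

```python
import numpy as np

def propagate_polarity(similarity, seed_scores, iterations=50, alpha=0.85):
    """Spread +1/-1 seed polarities over a word-similarity graph;
    `similarity` is an n-by-n non-negative weight matrix and
    `seed_scores` holds +1/-1 for seed words and 0 elsewhere."""
    row_sums = similarity.sum(axis=1, keepdims=True)
    norm = similarity / np.maximum(row_sums, 1e-12)  # row-normalize weights
    scores = seed_scores.astype(float).copy()
    for _ in range(iterations):
        # each word takes the weighted average of its neighbours' scores,
        # pulled back toward the fixed seed labels by the damping term
        scores = alpha * (norm @ scores) + (1 - alpha) * seed_scores
    return scores
```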
<ns0:div><ns0:head n='2.2'>Lexicon generation in Bengali and other languages</ns0:head><ns0:p>Al-Moslmi et al. (2018) developed an Arabic sentiment lexicon consisting of 3880 positive and negative synsets annotated with part-of-speech (POS) tags, polarity scores, dialect synsets, and inflected forms.</ns0:p><ns0:p>They performed a word-level translation of the English MPQA lexicon using Google Translate, which was followed by manual inspection for removing inappropriate words. Besides, from two Arabic review corpora, they manually examined a list of opinion words or sentiment words and phrases. In <ns0:ref type='bibr' target='#b44'>Perez-Rosas et al. (2012)</ns0:ref>, the authors presented a framework to derive a sentiment lexicon in Spanish using manually and automatically annotated data from English. To bridge the language gap, they used the multilingual sense-level aligned WordNet structure. In <ns0:ref type='bibr' target='#b38'>Mohammad et al. (2016)</ns0:ref>, the authors introduced several sentiment lexicons in Arabic that were automatically generated using two different methods: (1) by using distant supervision techniques on Arabic tweets, and (2) by translating English sentiment lexicons into Arabic using a freely available statistical machine translation system. They compared the performance of existing and their proposed sentiment lexicons in sentence-level sentiment analysis. <ns0:ref type='bibr' target='#b8'>Asghar et al. (2019)</ns0:ref> presented a word-level translation scheme for creating an Urdu polarity lexicon using a list of English opinion words, SentiWordNet, an English-Urdu bilingual dictionary, and a collection of Urdu modifiers. <ns0:ref type='bibr' target='#b19'>Das and Bandyopadhyay (2010b)</ns0:ref> proposed a computational method for generating an equivalent lexicon of the English SentiWordNet using an English-Bengali bilingual dictionary. Their approach used a word-level translation process, which is followed by an error reduction technique. From the SentiWordNet, they selected a subset of opinion words whose orientation strength is above the heuristically identified threshold of 0.4. They used two Bengali corpora, News and Blog, to show the coverage of their developed lexicon. In <ns0:ref type='bibr' target='#b7'>Amin et al. (2019)</ns0:ref>, the authors compiled a Bengali polarity lexicon from the English VADER lexicon using a translation technique. They modified the functionalities of the VADER lexicon so that it can be directly applied to Bengali sentiment analysis.</ns0:p></ns0:div>
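<ns0:p>The word-level transfer pattern shared by the translated lexicons surveyed above can be summarized in a few lines. The helper below is a hypothetical illustration, not code from any of the cited works; it also makes visible why such lexicons inherit only the vocabulary covered by the bilingual dictionary, missing informal or colloquial terms.</ns0:p>

```python
def translate_lexicon(en_lexicon, bn_dictionary):
    """Word-level transfer of an English polarity lexicon through a
    bilingual dictionary: each Bengali translation inherits the English
    word's polarity score, and entries without a dictionary match are
    simply dropped."""
    bn_lexicon = {}
    for en_word, score in en_lexicon.items():
        for bn_word in bn_dictionary.get(en_word, []):
            bn_lexicon[bn_word] = score
    return bn_lexicon
```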
<ns0:div><ns0:head n='2.3'>Comparison with Existing Sentiment Lexicons</ns0:head><ns0:p>We provide comparisons with the Bengali lexicon-based methods in both the methodological and evaluation phases. Due to differences in language, it is not possible to compare the proposed framework with English lexicon-based methods in the evaluation step. Thus, to show the novelty and originality of the proposed framework, we discuss how it differs from existing sentiment lexicon generation methods in English.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3.1'>Bengali Sentiment lexicons</ns0:head><ns0:p>In contrast to the existing Bengali sentiment lexicons, which are simple word-level translations of English lexicons, BengSentiLex is created from a Bengali review corpus. BengSentiLex also differs in the way it has been developed and in the nature of its content. We use a cross-lingual corpus-based approach utilizing labeled and unlabeled data, while the existing lexicons simply translate an English lexicon to Bengali at the word level. Besides, as we utilize social media review data, BengSentiLex is capable of capturing the opinion words people use in informal communication.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3.2'>English Sentiment Lexicons</ns0:head><ns0:p>In this section, we discuss how the proposed methodology differs from some of the existing lexicon creation methods in English. A number of corpus-based lexicon-generation methods employed label propagation algorithms utilizing seed words <ns0:ref type='bibr' target='#b59'>(Velikovich et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b24'>Hamilton et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b57'>Tai and Kao, 2013)</ns0:ref>, while BengSentiLex does not use any seed word list. Some of the existing works utilized PMI or modified PMI in some phase of the lexicon generation framework <ns0:ref type='bibr' target='#b68'>(Yang et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b58'>Turney and Littman, 2003;</ns0:ref><ns0:ref type='bibr' target='#b67'>Xu et al., 2012)</ns0:ref>; however, other than using PMI, their entire framework differs from the proposed methodology. Besides, they calculated PMI among various features, while the proposed framework computes it between feature and target. The work of <ns0:ref type='bibr' target='#b12'>Beigi and Moattar (2021)</ns0:ref> utilized a word's frequency in positive and negative comments and the vocabulary size of the corpus to determine the polarity score of the corresponding word, while BengSentiLex uses a PMI-based sentiment intensity (SI) score to determine the semantic orientation of a word. Some other works utilized matrix factorization <ns0:ref type='bibr' target='#b43'>(Peng and Park, 2011)</ns0:ref> or distant supervision <ns0:ref type='bibr' target='#b54'>(Severyn and Moschitti, 2015)</ns0:ref> for creating lexicons. A comprehensive literature review of corpus-based lexicon creation methods in English was performed by <ns0:ref type='bibr' target='#b17'>Darwich et al. (2019)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>RELATED WORK OF PROFANITY AND ABUSIVE CONTENT ANALYSIS</ns0:head><ns0:p>Researchers have studied the existence and sociolinguistic characteristics of swearing or cursing in social media. <ns0:ref type='bibr' target='#b61'>Wang et al. (2014)</ns0:ref> investigated cursing activities on Twitter, a social media platform. They studied the ubiquity, utility, and contextual dependency of swearing on Twitter. <ns0:ref type='bibr' target='#b22'>Gauthier et al. (2015)</ns0:ref> analyzed several sociolinguistic aspects of swearing in Twitter text data. Several studies investigated the relationship between social factors such as gender and profanity and discovered that males employ profanity much more often than females <ns0:ref type='bibr' target='#b61'>(Wang et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b53'>Selnow, 1985)</ns0:ref>. Other social factors such as age, religiosity, or social status were found to be related to the rate of using vulgar words <ns0:ref type='bibr' target='#b37'>(McEnery, 2004)</ns0:ref>. <ns0:ref type='bibr' target='#b32'>Jay and Janschewitz (2008)</ns0:ref> noticed that the offensiveness of taboo words depends on their context, and found that the usage of taboo words in conversational contexts is less offensive than in hostile contexts. <ns0:ref type='bibr' target='#b45'>Pinker (2007)</ns0:ref> classified the use of swear words into five categories: dysphemistic, using taboo words to emphasize the unpleasantness of the matter; abusive, using taboo words to abuse or insult someone; idiomatic, using taboo words to arouse the interest of listeners without really referring to the matter; emphatic, to emphasize another word; and cathartic, the use of swear words as a response to stress or pain.</ns0:p><ns0:p>Research related to the identification of swearing or offensive words has been conducted mainly in English; therefore, lexicons comprised of offensive words are available in the English language. Pamungkas et al. <ns0:ref type='bibr' target='#b5'>(2020)</ns0:ref> created SWAD (Swear Words Abusiveness Dataset), a Twitter English corpus where abusive swearing is manually annotated at the word level. Their collection consists of 1,511 unique swear words from 1,320 tweets. <ns0:ref type='bibr' target='#b47'>Razavi et al. (2010)</ns0:ref> manually collected approximately 2,700 dictionary entries including phrases and multi-word expressions, which is one of the earliest works on offensive lexicon creation. Recent work on a lexicon focusing on hate speech was reported by <ns0:ref type='bibr' target='#b23'>(Gitari et al., 2015)</ns0:ref>. Currently, the largest English lexicon of abusive words was provided by <ns0:ref type='bibr' target='#b62'>(Wiegand et al., 2018)</ns0:ref>.</ns0:p><ns0:p>In Bengali, several works investigated the presence of abusive language in social media data by leveraging supervised ML classifiers and labeled data <ns0:ref type='bibr' target='#b30'>(Ishmam and Sharmin, 2019;</ns0:ref><ns0:ref type='bibr' target='#b11'>Banik and Rahman, 2019)</ns0:ref>. <ns0:ref type='bibr' target='#b20'>Emon et al. (2019)</ns0:ref> utilized a linear support vector classifier (LinearSVC), logistic regression (LR), multinomial naïve Bayes (MNB), random forest (RF), an artificial neural network (ANN), and a recurrent neural network (RNN) with long short-term memory (LSTM) to detect multi-type abusive Bengali text. They found that the RNN outperformed the other classifiers, obtaining the highest accuracy of 82.20%. Chakraborty and Seddiqui (2019) employed machine learning and natural language processing techniques to build an automatic system for detecting abusive comments in Bengali. As input, they used Unicode emoticons and Unicode Bengali characters. They applied MNB, SVM, and a Convolutional Neural Network (CNN) with LSTM, and found that SVM performed best with 78% accuracy. Karim et al. (2020) proposed BengFastText, a word embedding model for Bengali, and incorporated it into a Multichannel Convolutional-LSTM (MConv-LSTM) network for predicting different types of hate speech. They compared BengFastText against the Word2Vec and GloVe embeddings by integrating them into several ML classifiers and showed the superiority of BengFastText for hate speech detection.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>CREATION OF SENTIMENT LEXICON</ns0:head></ns0:div>
<ns0:div><ns0:head n='4.1'>Basic Terminology</ns0:head><ns0:p>This section describes some of the concepts used in this paper for sentiment lexicon creation.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1.1'>Supervisory Characteristics</ns0:head><ns0:p>Supervised Learning Supervised learning is one of the most popular approaches to machine learning, defined by its use of annotated data. The labeled data are used to train, or 'supervise', algorithms to classify data accurately. Using annotated inputs and outputs, the model can assess its accuracy and learn over time.</ns0:p><ns0:p>Semi-supervised learning Semi-supervised learning uses both labeled and unlabeled data. It is very useful when a high volume of data is available but the annotation process is very challenging and requires a large amount of time and resources.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1.2'>Cross-lingual approach</ns0:head><ns0:p>The cross-lingual approach leverages resources and tools from a resource-rich language (e.g., English) to a resource-scarce language. Most of the research in sentiment analysis has been performed in English; hence, resources from English can be employed in other languages using various language mapping techniques. The construction of a language-specific sentiment lexicon requires vast resources, tools, and an active research community, which are not available in resource-scarce languages. A feasible approach is to utilize resources from languages where sentiment resources are abundant. In this work, we utilize machine translation to leverage several resources from English.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1.3'>Machine translation</ns0:head><ns0:p>Machine translation (MT) refers to the use of software to translate text or speech from one language to another. Over the decades, machine translation systems have evolved into much more reliable systems, from simple word-level substitution to sophisticated Neural Machine Translation (NMT) <ns0:ref type='bibr' target='#b33'>(Kalchbrenner and Blunsom, 2013;</ns0:ref><ns0:ref type='bibr' target='#b9'>Bahdanau et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b69'>Zhu et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Machine translation has been successfully applied to various sentiment analysis tasks. <ns0:ref type='bibr' target='#b10'>Balahur and Turchi (2014)</ns0:ref> studied the possibility of employing machine translation systems and supervised methods to build models that can detect and classify sentiment in low-resource languages. Their evaluation showed that machine translation systems were rapidly maturing. The authors claimed that with appropriate ML algorithms and carefully chosen features, machine translation could be used to build sentiment analysis systems in resource-poor languages.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1.4'>Pointwise Mutual Information (PMI)</ns0:head><ns0:p>Pointwise Mutual Information (PMI) is a measure of association used in information theory and statistics. The PMI between two variables X and Y is computed as,</ns0:p><ns0:formula xml:id='formula_0'>PMI(X,Y ) = log ( P(X,Y) / (P(X) P(Y)) )<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>Here, P(X, Y) is the probability of observing X and Y together (estimated from their co-occurrence count), while P(X) and P(Y) are the probabilities of observing X and Y individually (estimated from their respective occurrence counts). When X and Y are independent, the PMI between them is 0; PMI is maximized when X and Y are perfectly correlated.</ns0:p></ns0:div>
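<ns0:div><ns0:p>As a minimal sketch, Eq. (1) can be estimated from counts as follows; the relative-frequency estimator and the example numbers are illustrative, not taken from our data.</ns0:p>
```python
import math

def pmi(count_xy, count_x, count_y, n_total):
    """PMI per Eq. (1), with probabilities estimated by relative frequencies:
    P(X,Y) = count_xy/n_total, P(X) = count_x/n_total, P(Y) = count_y/n_total."""
    if count_xy == 0:
        return float("-inf")  # no co-occurrence observed
    p_xy = count_xy / n_total
    p_x = count_x / n_total
    p_y = count_y / n_total
    return math.log(p_xy / (p_x * p_y))

# Illustrative numbers only: a word in 40 of 100 reviews, a class covering 50,
# and 35 reviews where both occur together.
print(pmi(35, 40, 50, 100))  # ~0.56 > 0, i.e., positively associated
```
</ns0:div>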
<ns0:div><ns0:head n='4.2'>Datasets for Lexicon Creation and Evaluation</ns0:head></ns0:div>
<ns0:div><ns0:head n='4.2.1'>Training dataset for sentiment lexicon</ns0:head><ns0:p>We use a drama review dataset (Drama-Train) collected from Youtube to build BengSentiLex <ns0:ref type='bibr' target='#b50'>(Sazzed, 2020a)</ns0:ref>. This corpus consists of around 50000 Bengali reviews, where each review represents a viewer's opinion towards a Bengali drama. Among the 50000 Bengali reviews, around 12000 are annotated (Sazzed, 2020a), while the remaining are unlabeled. Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref> shows examples of drama reviews belonging to Drama-Train.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2.2'>Evaluation datasets for sentiment lexicon</ns0:head><ns0:p>We show the effectiveness of BengSentiLex in three datasets from distinct domains. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> provides the details of the evaluation datasets.</ns0:p><ns0:p>The first evaluation dataset is a drama review dataset (Drama-Eval) consisting of around 1000 annotated reviews. This dataset belongs to the same domain as the training dataset, Drama-Train; however, it was not used for lexicon creation. It is a class-balanced dataset, consisting of 500 positive and 500 negative reviews.</ns0:p><ns0:p>The second dataset is a news dataset (News1) that was collected from <ns0:ref type='bibr'>(Soc, 2020)</ns0:ref>. It consists of 4000 news comments; among them, 2000 are positive and 2000 are negative comments.</ns0:p><ns0:p>The third dataset is also a news comment dataset (News2), collected from two popular Bengali newspapers, Prothom Alo and BBC Bangla <ns0:ref type='bibr' target='#b56'>(Taher et al., 2018)</ns0:ref>. It consists of 5205 positive comments and around 5600 negative comments. For the evaluation, we select a class-balanced subset where each class contains 5205 comments.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3'>Cross-lingual resources</ns0:head></ns0:div>
<ns0:div><ns0:head n='4.3.1'>Sentiment lexicon</ns0:head><ns0:p>To identify whether a Bengali word conveys an opinion, we employ a cross-lingual approach. Leveraging machine translation and English sentiment lexicons, we decide whether an extracted Bengali word bears an opinion. The study by <ns0:ref type='bibr' target='#b52'>Sazzed and Jayarathna (2019)</ns0:ref> showed that though Bengali to English machine</ns0:p></ns0:div>
<ns0:div><ns0:p>translation (i.e., Google Translate) is not perfect, it preserves the semantic orientation in most cases.</ns0:p><ns0:p>We translate all the extracted Bengali words into English and then determine their polarities based on English lexicons. If the translated word exists in an English lexicon, we include the corresponding Bengali word in our Bengali sentiment lexicon. Although we perform word-level translation between Bengali and English, our approach differs from existing works that translate words from English to Bengali and therefore contain only translated Bengali dictionary words rather than the words people use in informal communication.</ns0:p><ns0:p>Our proposed approach supports the inclusion of informal Bengali words, which is not achievable through dictionary-based translation of English lexicons. Furthermore, this approach can identify and include multiple synonymous opinion words instead of one. By translating an English sentiment word, we obtain only a single corresponding Bengali term. However, when words are extracted from the corpus and translated to English, due to the limited coverage of the machine translation system, several synonymous Bengali words can be mapped to the same English polarity word. Thus, it helps to identify and include more opinion words in the Bengali lexicon.</ns0:p><ns0:p>To determine the polarity of the translated words, we utilize the following English sentiment lexicons. The Opinion Lexicon was developed by <ns0:ref type='bibr' target='#b27'>(Hu and Liu, 2004</ns0:ref>) and contains around 6800 English sentiment words (positive or negative). Besides dictionary words, it also includes acronyms, misspelled words, and abbreviations. Liu's opinion lexicon is a binary lexicon, where each word is associated with either a positive (+1) or negative (-1) polarity value.</ns0:p><ns0:p>VADER is a sentiment lexicon especially attuned to social media. VADER contains over 7,500 lexical features with a sentiment polarity of either positive or negative and a sentiment intensity between -4 and +4. VADER includes emoticons such as ':-)', which denotes a smiley face (a positive expression), and sentiment-related initialisms such as 'LOL' and 'WTF'.</ns0:p></ns0:div>
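<ns0:div><ns0:p>A minimal sketch of this polarity check follows; `translate_bn_to_en` stands in for any Bengali-to-English MT call (we use Google Translate), and frequency filtering and manual validation are omitted.</ns0:p>
```python
# Requires nltk.download('opinion_lexicon') and `pip install vaderSentiment`.
from nltk.corpus import opinion_lexicon
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

POS_WORDS = set(opinion_lexicon.positive())       # Bing Liu's positive words
NEG_WORDS = set(opinion_lexicon.negative())       # Bing Liu's negative words
VADER_LEX = SentimentIntensityAnalyzer().lexicon  # token -> mean sentiment rating

def polarity_of_bengali_word(bn_word, translate_bn_to_en):
    # `translate_bn_to_en` is a placeholder for an MT call, not a real API.
    en = translate_bn_to_en(bn_word).lower()
    if en in POS_WORDS or VADER_LEX.get(en, 0.0) > 0:
        return "positive"
    if en in NEG_WORDS or VADER_LEX.get(en, 0.0) < 0:
        return "negative"
    return None  # not an opinion word; excluded from the lexicon
```
</ns0:div>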
<ns0:div><ns0:head n='4.3.2'>Part-of-speech (POS) tagging</ns0:head><ns0:p>A part-of-speech (POS) tagger is a tool that assigns a POS tag (e.g., noun, verb, adjective) to each word present in a text. As adjectives, nouns, and verbs usually convey opinions, a POS tagger can help to identify opinion words. In English, several standard POS taggers are available, such as the NLTK POS tagger <ns0:ref type='bibr' target='#b36'>Loper and Bird (2002)</ns0:ref> and the spaCy POS tagger <ns0:ref type='bibr' target='#b26'>Honnibal and Montani (2017)</ns0:ref>. However, since no sophisticated POS tagger is available in Bengali, we use the machine translation system to convert the probable Bengali opinion words to English. We then use the spaCy POS tagger to determine the POS tags of those English words, which allows us to label the POS tags of the corresponding Bengali words.</ns0:p></ns0:div>
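<ns0:div><ns0:p>A minimal sketch of this POS filter, assuming spaCy's small English pipeline; the helper `translate_bn_to_en` is again a placeholder MT call.</ns0:p>
```python
# Requires: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def is_adjective_or_verb(bn_word, translate_bn_to_en):
    # Tag the English translation and keep only adjective/verb candidates.
    doc = nlp(translate_bn_to_en(bn_word))
    return any(tok.pos_ in ("ADJ", "VERB") for tok in doc)
```
</ns0:div>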
<ns0:div><ns0:head n='4.4'>Methodology</ns0:head><ns0:p>The creation of BengSentiLex involves several phases. We utilize various tools and resources to determine opinion words from the corpus and add them to BengSentiLex, as shown in Fig 2.</ns0:p><ns0:p>• Phase 1: Labeled and unlabeled corpus, machine translation system, English lexicons.</ns0:p><ns0:p>• Phase 2: Labeled corpus, PMI, machine translation system, English POS tagger, English lexicons, Bengali lexicon (constructed in phase 1).</ns0:p><ns0:p>• Phase 3: Unlabeled corpus, ML classifiers, PMI, machine translation system, English POS tagger, English lexicons, Bengali lexicon (constructed in phase 1 and phase 2).</ns0:p><ns0:p>Each phase enlarges BengSentiLex with the newly recognized opinion-conveying words. A manual validation step is included to examine the identified opinion words. Then, we generate synonyms for the validated words, which are added to the lexicon.</ns0:p><ns0:p>For synonym generation, we utilize Google Translate 1 , as no standard Bengali synonym dictionary is available on the web in digital format. We translate Bengali words into multiple languages and then perform back-translation. This approach helps to find synonyms, as sentiments are expressed in different ways across languages.</ns0:p></ns0:div>
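<ns0:div><ns0:p>The back-translation step can be sketched as follows. The unofficial googletrans client is used purely for illustration and can be unstable; any MT API with a comparable translate(text, src, dest) interface would do, and the pivot languages here are arbitrary choices.</ns0:p>
```python
from googletrans import Translator

translator = Translator()

def back_translation_synonyms(bn_word, pivots=("en", "hi", "es")):
    # Translate the Bengali word into a pivot language and back; a different
    # surface form on return suggests a synonym candidate.
    synonyms = set()
    for lang in pivots:
        pivot_text = translator.translate(bn_word, src="bn", dest=lang).text
        back = translator.translate(pivot_text, src=lang, dest="bn").text
        if back != bn_word:
            synonyms.add(back)
    return synonyms
```
</ns0:div>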
<ns0:div><ns0:head n='4.4.1'>Phase-1: Utilizing English Sentiment Lexicons</ns0:head><ns0:p>The development of a sentiment lexicon typically starts with a list of well-defined sentiment words. A well-known approach for identifying the initial list of words (often called seed words) is to use a dictionary. However, dictionary words denote mostly formal expressions and usually do not represent the words people use in social media or informal communication. On the contrary, words extracted from a corpus represent terms people use in regular communication and are hence more useful for sentiment analysis.</ns0:p><ns0:p>We tokenize words from the review corpus (both labeled and unlabeled) using the NLTK tokenizer and calculate their frequency in the corpus. Only the words with a frequency above 5 are added to the candidate pool. However, not all high-frequency words convey sentiments. For example, 'drama' is a high-frequency word in our drama review dataset, but it is not a sentiment word.</ns0:p><ns0:p>As Bengali does not have a sentiment dictionary of its own, we utilize resources from English. Using a machine translation system, we convert all the words from the candidate pool to English. Two English sentiment lexicons, the Opinion Lexicon and VADER, are employed to determine the polarity of the translated words.</ns0:p><ns0:p>The assumption is that if a translated English word exists in an English sentiment lexicon, then it is an opinion-conveying word; therefore, the corresponding Bengali word can be added to the Bengali sentiment dictionary.</ns0:p></ns0:div>
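<ns0:div><ns0:p>A minimal sketch of the candidate-pool construction; text normalization details are omitted.</ns0:p>
```python
# Tokenize the whole corpus with the NLTK tokenizer and keep words occurring
# more than five times, as described above.
from collections import Counter
from nltk.tokenize import word_tokenize

def candidate_pool(reviews, min_count=5):
    counts = Counter()
    for review in reviews:
        counts.update(word_tokenize(review))
    return {word for word, c in counts.items() if c > min_count}
```
</ns0:div>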
<ns0:div><ns0:head n='4.4.2'>Phase 2: Lexicon generation from labeled data</ns0:head><ns0:p>Phase 2 retrieves opinion words from the labeled corpus by leveraging pointwise mutual information (PMI) and a POS tagger, as shown in Figures <ns0:ref type='figure' target='#fig_3'>2 and 3</ns0:ref>.</ns0:p><ns0:p>From the labeled reviews, we derive the terms that are highly correlated with the class labels. Words or terms that already exist in the lexicon (from the earlier phase) are not considered. The remaining words are translated into English using the machine translation system. We utilize the spaCy POS tagger to identify their POS tags. Since adjectives and verbs usually convey opinions, we keep only those and exclude the other POS categories.</ns0:p><ns0:p>The sentiment score of a word, w, is calculated using the formula shown below,</ns0:p><ns0:formula xml:id='formula_1'>SentimentScore(w) = PMI(w, pos) − PMI(w, neg) (2)</ns0:formula><ns0:p>where PMI(w, pos) represents the PMI score of word w corresponding to the positive class and PMI(w, neg) represents the PMI score of word w corresponding to the negative class.</ns0:p><ns0:p>We then calculate the sentiment intensity (SI) of w using the following equation,</ns0:p></ns0:div>
<ns0:div><ns0:head>SI(w) = SentimentScore(w) / (PMI(w, pos) + PMI(w, neg))</ns0:head><ns0:p>(3)</ns0:p><ns0:p>We use the sentiment intensity along with a threshold value to identify opinion-conveying words from the labeled reviews. If the sentiment intensity of a word, w, is above the threshold of 0.5, we consider it a positive word; if the sentiment intensity is below -0.5, we consider it a negative word.</ns0:p><ns0:formula xml:id='formula_2'>Class(w) = Positive, if SI(w) > 0.50</ns0:formula><ns0:p>Negative, if SI(w) < −0.50</ns0:p></ns0:div>
<ns0:div><ns0:head>Unassigned, Otherwise</ns0:head><ns0:p>(4)</ns0:p></ns0:div>
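<ns0:div><ns0:p>Eqs. (2)-(4) can be combined into a single scoring routine, sketched below with frequency-based PMI estimates; edge cases (a word absent from one class, or a zero denominator in Eq. (3)) are glossed over here and require care in practice.</ns0:p>
```python
import math

def _pmi(count_joint, count_w, count_c, n):
    # Relative-frequency estimate; -inf when the word never occurs in the class.
    if count_joint == 0:
        return float("-inf")
    return math.log((count_joint / n) / ((count_w / n) * (count_c / n)))

def classify_word(pos_count, neg_count, pos_total, neg_total, threshold=0.5):
    """pos_count/neg_count: occurrences of w in positive/negative reviews;
    pos_total/neg_total: numbers of positive/negative reviews."""
    n = pos_total + neg_total
    w_total = pos_count + neg_count
    pmi_pos = _pmi(pos_count, w_total, pos_total, n)
    pmi_neg = _pmi(neg_count, w_total, neg_total, n)
    score = pmi_pos - pmi_neg              # Eq. (2)
    si = score / (pmi_pos + pmi_neg)       # Eq. (3)
    if si > threshold:
        return "positive"                  # Eq. (4); 0.7 replaces 0.5 in phase 3
    if si < -threshold:
        return "negative"
    return "unassigned"
```
</ns0:div>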
<ns0:div><ns0:head n='4.4.3'>Phase 3: Lexicon generation from unlabeled data</ns0:head><ns0:p>In addition to annotated reviews, our review corpus consists of a large number of unlabeled reviews. For the labeled reviews, we use PMI to identify the top class-correlated words. However, for the unannotated reviews, a manual class label is not available; thus, automatic labeling is required. To automatically annotate the unlabeled reviews, we compare various automatic approaches and select the approach with the highest accuracy.</ns0:p><ns0:p>Several ML classifiers are applied to the annotated reviews to determine the best performing classifiers. The following ML classifiers are employed:</ns0:p><ns0:p>SVM (Support Vector Machine) is a popular supervised ML algorithm used for classification and regression problems. SVM finds the hyperplane that separates the space into multiple classes while maximizing the margin between data points belonging to different classes.</ns0:p><ns0:p>SGD (Stochastic Gradient Descent) is a method that optimizes an objective function iteratively. It is a stochastic approximation of gradient descent optimization, since it calculates the gradient from a randomly selected subset of the data. For SGD, hinge loss and an l2 penalty with a maximum of 1500 iterations are employed.</ns0:p><ns0:p>LR (Logistic Regression) is a statistical classification method that finds the best fitting model to describe the relationship between the dependent variable and a set of independent variables.</ns0:p><ns0:p>Random Forest (RF) is a decision tree-based ensemble learning classifier. It makes predictions by combining the results from multiple individual decision trees.</ns0:p><ns0:p>The K-nearest neighbors (k-NN) algorithm is a non-parametric method used for classification and regression. In k-NN classification, the class membership of a sample is determined by the plurality vote of its neighbors. Here, we set k=3; the class of a review depends on its three closest neighbors.</ns0:p><ns0:p>We use the scikit-learn <ns0:ref type='bibr'>(Pedregosa et al., 2011)</ns0:ref> implementation of the aforementioned ML classifiers. For all of the classifiers, we use the default parameter settings. Using 10-fold cross-validation, we assess their performances. The purpose of this step is to find reliable classifiers that can be used for automatic class-labeling. Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref> shows the classification accuracy of the various ML classifiers using 10-fold cross-validation.</ns0:p><ns0:p>Among the five classifiers we employ, SGD and SVM show the highest accuracy. Both of them correctly identify around 93% of the reviews, which is close to the accuracy of manual annotation. LR shows a similar accuracy of around 92%. We use these three classifiers to determine the class of the unlabeled reviews.</ns0:p><ns0:p>The following procedures are considered for automatically generating the class labels of the unannotated reviews utilizing the ML classifiers: 1) Use all the labeled reviews as training data and all the unlabeled reviews as testing data. 2) Iteratively utilize a small unlabeled set as testing data. After assigning their labels, we add these pseudo-labeled reviews to the training set and select a new set of unlabeled reviews as testing data.
This procedure continues until all the data are labeled.</ns0:p><ns0:p>To determine the performance of approach (1), we conduct 4-fold cross-validation on the labeled reviews. We use 1 fold as training data and the remaining 3 folds as testing data. The training-testing data ratio is selected based on the ratio of labeled (around 12000) and unlabeled (around 38000) reviews.</ns0:p><ns0:p>For approach (2), similarly to approach (1), we split the 12000 labeled reviews into four subsets. Initially, we use one subset (around 3000 reviews) as the training set. In each iteration, a group of reviews is selected from the other three subsets (from around 9000 reviews) and used as the testing set. The size of the chosen group is equal to 10% of the current training set. After assigning the class of the reviews belonging to the testing set, they are added to the training set. This process continues until all the reviews (9000 reviews) are annotated.</ns0:p><ns0:p>We find that gradually expanding the training set by adding the predicted results of the testing set provides better performance. After applying approach (2) (shown in Figure <ns0:ref type='figure'>4</ns0:ref>), our dataset contains around 38000 pseudo-labeled reviews. We then employ the PMI and POS tagger in a similar way as in phase 2. However, since this phase utilizes pseudo-labeled data instead of true-label data, we set a higher threshold of 0.7 for the class label assignment.</ns0:p></ns0:div>
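<ns0:div><ns0:p>A sketch of approach (2) follows, pairing an SGD classifier with tf-idf features; the tf-idf representation is an assumption made for illustration (the text fixes the SGD settings but not the features used in this step), and the 10% batch growth follows the description above.</ns0:p>
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier

def self_train(labeled_texts, labels, unlabeled_texts):
    vectorizer = TfidfVectorizer()
    vectorizer.fit(list(labeled_texts) + list(unlabeled_texts))
    train_x, train_y = list(labeled_texts), list(labels)
    pool = list(unlabeled_texts)
    while pool:
        batch_size = max(1, len(train_x) // 10)   # 10% of the current training set
        batch, pool = pool[:batch_size], pool[batch_size:]
        clf = SGDClassifier(loss="hinge", penalty="l2", max_iter=1500)
        clf.fit(vectorizer.transform(train_x), train_y)
        preds = clf.predict(vectorizer.transform(batch))
        train_x.extend(batch)                     # pseudo-labeled reviews join training
        train_y.extend(preds)
    return train_x, train_y
```
</ns0:div>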
<ns0:div><ns0:p>We utilize two Bengali corpora: one for creating the swear lexicon, BengSwearLex, which we refer to as the training corpus (SW), and the other for analyzing and evaluating the performance of BengSwearLex, which we refer to as the evaluation corpus (SW).</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1.1'>Training corpus (SW)</ns0:head><ns0:p>We use a Bengali corpus deposited by Abu <ns0:ref type='bibr' target='#b5'>(2020)</ns0:ref> for constructing BengSwearLex. Originally, this corpus consists of 10221 text reviews/comments belonging to different categories, such as toxic, racism, obscene, insult, etc. However, this corpus is noisy: it contains many empty or punctuation-only comments and some erroneous annotations. We manually validate the corpus and exclude comments having the above-mentioned issues. From the modified corpus, we select only the reviews labeled as obscene. After excluding erroneous reviews and reviews belonging to other classes, the corpus consists of 3902 obscene comments. The length of each comment ranges from 1 to 100 words.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>5</ns0:ref> shows some examples of the obscene comments from the lexicon corpus.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1.2'>Evaluation corpus (SW)</ns0:head><ns0:p>The evaluation corpus we utilize is a drama review corpus collected from Youtube (the same corpus that is used for sentiment lexicon creation). This corpus was created and deposited by (Sen, 2019) for sentiment analysis; it consists of 8500 positive and 3307 negative reviews. However, there is no distinction between different types of negative reviews. Therefore, we manually label these 3307 negative reviews into two categories, profane and non-profane. The annotation was conducted by three expert Bengali annotators (A1, A2, A3). The first two annotators (A1 and A2) initially annotated all the reviews; in case of disagreement, the label was resolved by the third annotator (A3). After annotation, this corpus consists of 2643 non-profane negative reviews and 664 profane reviews, as shown in Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>. The kappa statistic for the two raters is 0.81, which shows high inter-annotator agreement.</ns0:p></ns0:div>
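<ns0:div><ns0:p>For reference, the reported agreement can be computed with scikit-learn's Cohen's kappa; the label lists below are illustrative placeholders, not our annotations.</ns0:p>
```python
from sklearn.metrics import cohen_kappa_score

a1 = ["profane", "non-profane", "profane", "non-profane"]  # annotator A1 (placeholder)
a2 = ["profane", "non-profane", "non-profane", "non-profane"]  # annotator A2 (placeholder)
print(cohen_kappa_score(a1, a2))
```
</ns0:div>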
<ns0:div><ns0:head n='5.2'>Text Processing Tools</ns0:head></ns0:div>
<ns0:div><ns0:head n='5.2.1'>POS tagger</ns0:head><ns0:p>Similar to the sentiment lexicon (described in the previous section), here we utilize a POS tagger to identify opinion words 2 3 . However, in Bengali, limited research has been conducted on developing a sophisticated POS tagger; therefore, the existing Bengali POS taggers are not as accurate as their English counterparts. Hence, manual validation is needed to check the correctness of the POS tags assigned by the POS taggers.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.2.2'>Word Embedding</ns0:head><ns0:p>A word embedding is a learned representation for text, where related words have similar representations. Word embeddings provide an efficient way to use dense representations of words. The values of a word's embedding are learned by the model during the training phase.</ns0:p><ns0:p>There exist two main approaches for learning word embeddings: count-based and context-based. Count-based vector space models rely heavily on word frequency and the co-occurrence matrix, with the assumption that words in the same contexts share similar or related semantic meanings. Context-based methods build predictive models that predict a target word given its neighbors; the best vector representation of each word is learned during model training.</ns0:p><ns0:p>The Continuous Bag-of-Words (CBOW) model is a popular context-based method for learning word vectors. It predicts the center word from the surrounding context words.</ns0:p></ns0:div>
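<ns0:div><ns0:p>A minimal gensim sketch of CBOW training and neighbor lookup for lexicon expansion; the vector size, window, and frequency cutoff are assumptions, as the text does not fix them.</ns0:p>
```python
from gensim.models import Word2Vec

def expand_seeds(tokenized_comments, seed_words, topn=10):
    # sg=0 selects the CBOW architecture described above.
    model = Word2Vec(tokenized_comments, vector_size=100, window=5,
                     min_count=5, sg=0)
    candidates = set()
    for seed in seed_words:
        if seed in model.wv:
            candidates.update(w for w, _ in model.wv.most_similar(seed, topn=topn))
    return candidates  # expansion terms, to be manually validated
```
</ns0:div>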
<ns0:div><ns0:head n='5.3'>Lexicon Creation Framework</ns0:head><ns0:p>Lexical resources can help to identify the presence of profanity in Bengali social media. Here, we present a semi-automatic approach for creating a swear lexicon utilizing an annotated corpus, word embedding, and a POS tagger. The complete lexicon development stage consists of three phases, as shown in Figure 6: 1. Seed word selection; 2. Lexicon expansion; 3. Manual validation.</ns0:p><ns0:p>To classify a review with the sentiment lexicon, the polarity scores of its words are summed. If the total polarity score of a review is above 0, we consider it a positive prediction; if the final score is below 0, we consider it negative; when the polarity score is 0, we consider the prediction as wrong.</ns0:p><ns0:p>A polarity score of 0 can result when the word-level weights of a lexicon cannot distinguish positive/negative reviews or when the lexicon lacks coverage of the opinion words present in the review; therefore, it is more appropriate to assign this scenario to misprediction rather than to the positive or negative class.</ns0:p></ns0:div>
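<ns0:div><ns0:p>The review-level decision rule above reduces to summing word polarities, as in the following sketch; the lexicon is assumed to map each word to a signed polarity score.</ns0:p>
```python
def predict_review(review_tokens, lexicon):
    """lexicon: dict mapping a word to its signed polarity score (e.g., +1/-1)."""
    total = sum(lexicon.get(word, 0) for word in review_tokens)
    if total > 0:
        return "positive"
    if total < 0:
        return "negative"
    return "wrong"   # a total of 0 is counted as a misprediction
```
</ns0:div>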
<ns0:div><ns0:head n='6.1.2'>Comparative results</ns0:head><ns0:p>Table <ns0:ref type='table'>4</ns0:ref> shows the comparative performances of the various translated lexicons and BengSentiLex when integrated with BengSentiAn. In the drama review dataset, BengSentiLex classifies 1308 out of 2000 reviews correctly, with an accuracy of around 65%. Among the three translated lexicons, VADER classifies 46.60% of the reviews correctly, while AFINN and the Opinion Lexicon provide 31.65% and 41.95% accuracy, respectively. In the News1 dataset, BengSentiLex exhibits an accuracy of 47.30%, while VADER, the Opinion Lexicon, and AFINN provide accuracies of 40.90%, 35.57%, and 31.65%, respectively. In the News2 dataset, BengSentiLex shows an accuracy of 46.17%, while VADER provides the second-best performance with an accuracy of 43.12%.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.2'>Profanity Identification</ns0:head></ns0:div>
<ns0:div><ns0:head n='6.2.1'>Evaluation metric</ns0:head><ns0:p>To show the effectiveness of BengSwearLex, we utilize document-level coverage. The document-level coverage (or recall) of a lexicon corresponding to a review corpus is calculated as follows: from the corpus, we first count the number of reviews that contain at least one word from the lexicon, which is then divided by the total number of reviews present in the corpus. Finally, it is multiplied by 100.</ns0:p><ns0:p>The following equation is used to calculate the document-level coverage (DCov) of a lexicon: DCov = (number of reviews with at least one identified swear word / total number of reviews in the corpus) * 100.</ns0:p><ns0:p>The purpose of creating BengSwearLex is to identify comments and reviews that contain swear or slang words, not to identify non-profane comments; thus, we show document-level coverage for only the profane reviews. Regarding false positives, as BengSwearLex is manually validated at the final step, it contains only swear words; hence, there is no possibility that it identifies a non-profane comment as profane (false positive).</ns0:p><ns0:p>As no swear lexicon exists in Bengali, we compare the performance of BengSwearLex with several supervised classifiers (that use in-domain labeled data) for profanity detection in the evaluation corpus.</ns0:p><ns0:p>Two popular supervised ML classifiers, Logistic Regression (LR) and Support Vector Machine (SVM), and an optimization method, Stochastic Gradient Descent (SGD), are employed in the evaluation corpus to identify profane reviews. As a feature vector, we use unigram- and bigram-based tf-idf scores. 10-fold cross-validation is performed to assess the performance of the various ML classifiers. For all the classifiers, default parameter settings are used. For SGD, hinge loss and an l2 penalty with a maximum of 1500 iterations are employed.</ns0:p><ns0:p>Furthermore, we employ Deep Neural Network (DNN) based architectures, a Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Bidirectional Long Short-Term Memory (BiLSTM), to identify profanity. The DNN-based models start with the Keras <ns0:ref type='bibr' target='#b14'>(Chollet et al., 2015)</ns0:ref> embedding layer. The three important parameters of the embedding layer are the input dimension, which represents the size of the vocabulary; the output dimension, which is the length of the vector for each word; and the input length, the maximum length of a sequence. The input dimension is determined by the number of words present in a corpus, which varies between the two corpora. We set the output dimension to 64. The maximum length of a sequence is set to 300. A drop-out rate of 0.5 is applied to the dropout layer; ReLU activation is used in the intermediate layers. In the final layer, softmax activation is applied. As an optimization function, the Adam optimizer, and as a loss function, binary cross-entropy are utilized. We set the batch size to 64, use a learning rate of 0.001, and train the models for 10 epochs. We use the Keras library with the TensorFlow backend for implementing the DNN-based models.</ns0:p></ns0:div>
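<ns0:div><ns0:p>The coverage measure and the LSTM variant of the DNN can be sketched as follows, mirroring the stated settings (embedding dimension 64, sequence length 300, dropout 0.5, a ReLU intermediate layer, a softmax output, Adam with learning rate 0.001, binary cross-entropy, batch size 64, 10 epochs); the LSTM width and the intermediate dense size are not given in the text and are assumptions here.</ns0:p>
```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dropout, Dense
from tensorflow.keras.optimizers import Adam

def dcov(reviews, lexicon):
    """Document-level coverage: percentage of reviews containing >=1 lexicon word."""
    hits = sum(1 for r in reviews if any(w in lexicon for w in r.split()))
    return 100.0 * hits / len(reviews)

def build_lstm(vocab_size, seq_len=300):
    model = Sequential([
        Embedding(input_dim=vocab_size, output_dim=64, input_length=seq_len),
        LSTM(64),                        # recurrent width: an assumption
        Dropout(0.5),                    # drop-out rate of 0.5, as stated
        Dense(64, activation="relu"),    # ReLU intermediate layer, as stated
        Dense(2, activation="softmax"),  # softmax output, as stated
    ])
    model.compile(optimizer=Adam(learning_rate=0.001),
                  loss="binary_crossentropy",   # with one-hot encoded labels
                  metrics=["accuracy"])
    return model

# Training, per the stated settings:
# model.fit(x_train, y_train_onehot, batch_size=64, epochs=10)
```
</ns0:div>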
<ns0:div><ns0:head n='6.2.2'>Comparison results</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_4'>5</ns0:ref> shows that among the 664 profane reviews present in the evaluation corpus (SW), BengSwearLex registers 564 reviews as profane by identifying the presence of at least one swear term in the review, a document-level coverage of 84.93%.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_4'>5</ns0:ref> also shows the coverage of the ML classifiers in the evaluation corpus. We provide their performances in two different settings: a class-balanced setting and a class-imbalanced setting. In the class-imbalanced setting, we utilize all 664 profane comments and all 2643 non-profane negative comments. In the class-balanced setting, we use all 664 profane comments; however, for the non-profane class, we randomly select 664 non-profane comments from the pool of 2643 non-profane comments.</ns0:p><ns0:p>From Table <ns0:ref type='table' target='#tab_4'>5</ns0:ref>, we observe that when the original class-imbalanced data is used, the supervised classifiers achieve coverage of roughly 25%-70%. However, when a class-balanced dataset is utilized, the performances of the classifiers dramatically increase, achieving coverage of around 90%.</ns0:p></ns0:div>
<ns0:div><ns0:head n='7'>DISCUSSION</ns0:head></ns0:div>
<ns0:div><ns0:head n='7.1'>Sentiment Lexicon</ns0:head><ns0:p>The results suggest that translated lexicons are not good enough to capture the semantic orientation of the reviews, as they lack coverage of the opinion words present in Bengali text. We find that BengSentiLex performs considerably better than the translated lexicons in the drama review dataset, with over 40% improvement. Since BengSentiLex is developed from a corpus that belongs to the same domain, it is very effective at classifying sentiments in this drama review evaluation corpus.</ns0:p><ns0:p>Also, for the two other cross-domain evaluation corpora, News1 and News2, BengSentiLex yields better performance compared to the translated lexicons, especially for classifying negative reviews, which can be attributed to the presence of a higher number of negative opinion words (716) in BengSentiLex compared to 519 positive sentiment words.</ns0:p><ns0:p>The results indicate that utilizing corpora in the target language for automated sentiment lexicon generation is more effective than translating words directly from another language such as English. As BengSentiLex is built from a social media corpus, it is comprised of words that people use on the web, social media, and informal communication; thus, it is more effective in recognizing sentiments in text data compared to word-level translated lexicons.</ns0:p><ns0:p>Although supervised ML classifiers usually perform better in sentiment classification, they require annotated data, which are largely missing in low-resource languages such as Bengali. Thus, the developed lexicon can help sentiment classification in Bengali.</ns0:p></ns0:div>
<ns0:div><ns0:head n='7.2'>Swear Lexicon</ns0:head><ns0:p>The results in Table 5 reveal that BengSwearLex is capable of identifying profanity in Bengali social media content. It shows higher document-level coverage than supervised classifiers trained on in-domain labeled data when a class-imbalanced training set is used; however, classifiers trained on a class-balanced training set perform better than BengSwearLex.</ns0:p></ns0:div></ns0:div>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Sample reviews from training dataset Drama-Train</ns0:figDesc><ns0:graphic coords='8,141.73,63.78,413.58,206.79' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The various phases of sentiment lexicon generation in Bengali</ns0:figDesc><ns0:graphic coords='10,292.69,412.39,111.67,293.64' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figures 4 and 5</ns0:head><ns0:label>4, 5</ns0:label><ns0:figDesc>Figure 4. Class-label assignment of unlabeled reviews using a supervised ML classifier and labeled data. Figure 5. Examples of obscene comments from the lexicon corpus.</ns0:figDesc><ns0:graphic coords='13,203.77,63.78,289.51,166.19' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Evaluation Datasets for BengSentiLex</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Domain</ns0:cell><ns0:cell cols='2'>Positive Negative</ns0:cell><ns0:cell>Total</ns0:cell></ns0:row><ns0:row><ns0:cell>Drama-Eval</ns0:cell><ns0:cell>Drama Review</ns0:cell><ns0:cell>1000</ns0:cell><ns0:cell>1000</ns0:cell><ns0:cell>2000</ns0:cell></ns0:row><ns0:row><ns0:cell>News1</ns0:cell><ns0:cell>News Comments</ns0:cell><ns0:cell>2000</ns0:cell><ns0:cell>2000</ns0:cell><ns0:cell>4000</ns0:cell></ns0:row><ns0:row><ns0:cell>News2</ns0:cell><ns0:cell>News Comments</ns0:cell><ns0:cell>5205</ns0:cell><ns0:cell>5205</ns0:cell><ns0:cell>10410</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Performances of supervised ML classifiers in annotated corpus</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='5'>Classifier Precision Recall F1 Score Accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell>SGD</ns0:cell><ns0:cell>0.939</ns0:cell><ns0:cell>0.901</ns0:cell><ns0:cell>0.920</ns0:cell><ns0:cell>93.61%</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>0.908</ns0:cell><ns0:cell>0.924</ns0:cell><ns0:cell>0.916</ns0:cell><ns0:cell>93.00%</ns0:cell></ns0:row><ns0:row><ns0:cell>LR</ns0:cell><ns0:cell>0.889</ns0:cell><ns0:cell>0.922</ns0:cell><ns0:cell>0.905</ns0:cell><ns0:cell>91.80%</ns0:cell></ns0:row><ns0:row><ns0:cell>k-NN</ns0:cell><ns0:cell>0.901</ns0:cell><ns0:cell>0.849</ns0:cell><ns0:cell>0.875</ns0:cell><ns0:cell>90.18%</ns0:cell></ns0:row><ns0:row><ns0:cell>RF</ns0:cell><ns0:cell>0.878</ns0:cell><ns0:cell>0.870</ns0:cell><ns0:cell>0.874</ns0:cell><ns0:cell>89.91%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Description of Drama Review Evaluation Corpus</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='3'>Profane Non-Profane Total</ns0:cell></ns0:row><ns0:row><ns0:cell>664</ns0:cell><ns0:cell>2643</ns0:cell><ns0:cell>3307</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Figure 6. The Proposed Methodology</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Document-level coverage of various methods for profanity detection</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Type</ns0:cell><ns0:cell>Method</ns0:cell><ns0:cell># Identified</ns0:cell><ns0:cell>DCov</ns0:cell></ns0:row><ns0:row><ns0:cell>Unsupervised</ns0:cell><ns0:cell>BengSwearLex</ns0:cell><ns0:cell>564/664</ns0:cell><ns0:cell>84.93%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>LR</ns0:cell><ns0:cell>161/664</ns0:cell><ns0:cell>24.5%</ns0:cell></ns0:row><ns0:row><ns0:cell>Supervised (Unbalanced)</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>345/664</ns0:cell><ns0:cell>53.4%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>SGD</ns0:cell><ns0:cell>366/664</ns0:cell><ns0:cell>58.8%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>LSTM</ns0:cell><ns0:cell>433/664</ns0:cell><ns0:cell>65.21%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>BiLSTM</ns0:cell><ns0:cell>462/664</ns0:cell><ns0:cell>70.4%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CNN</ns0:cell><ns0:cell>444/664</ns0:cell><ns0:cell>66.86%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>LR</ns0:cell><ns0:cell>609/664</ns0:cell><ns0:cell>91.71%</ns0:cell></ns0:row><ns0:row><ns0:cell>Supervised (Balanced)</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>594/664</ns0:cell><ns0:cell>89.45%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>SGD</ns0:cell><ns0:cell>589/664</ns0:cell><ns0:cell>88.70%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>LSTM</ns0:cell><ns0:cell>610/664</ns0:cell><ns0:cell>91.67%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>BiLSTM</ns0:cell><ns0:cell>624/664</ns0:cell><ns0:cell>94.0%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CNN</ns0:cell><ns0:cell>609/664</ns0:cell><ns0:cell>91.71%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='1'>https://translate.google.com</ns0:note>
<ns0:note place='foot' n='2'>https://github.com/AbuKaisar24/Bengali-Pos-Tagger-Using-Indian-Corpus/ 3 https://www.isical.ac.in/ utpal/docs/POSreadme.txt</ns0:note>
</ns0:body>
" | "Reviewer 1
Basic reporting
no comment
Experimental design
The discussion of the experiment needs to be more in-depth, and the
content of the experiment needs to be more detailed and complete. Please
analyze the advantages of the proposed method and supplement the related
experiments.
Validity of the findings
no comment
Comments for the author
1. Your most important issue
Q1: Are the methods presented in this paper the most advanced and
original, what are the main challenges of this work, and these need
to be further discussed in the relevant work.
-Thank you for your comments. The paper presents a new and original
corpus-based approach for sentiment lexicon creation in Bengali. To date,
this is the most effective sentiment lexicon in Bengali, as shown by the
evaluation results.
Some of the challenges associated with this work are the lack of annotated
and unannotated corpora, a standard part-of-speech (POS) tagger, etc., which
have been discussed in the revised manuscript (Lines 100-106).
Based on your comments, a new section has been added that describes
challenges associated with creating this sentiment lexicon. (Line 100-106).
Besides, a new section has been added to discuss how the proposed work
differs from the existing sentiment lexicon creation methods in Bengali and
English (Section 2.3, Line 194-221).
Q2: The discussion of the experiment needs to be more in-depth, and
the content of the experiment needs to be more detailed and
complete. Please analyze the advantages of the proposed method
and supplement the related experiments.
-Thank you for your comments. The discussion section has been
extended with more information (Lines 592-609). The created sentiment
lexicon has an advantage over existing Bengali lexicons, as it can capture
words used in informal communication.
2. The next most important item
Q3: 3.3.2 What is the meaning of the PMI formula and its variables?
Please give specific explanation.
-Thank you for your comments. The PMI formula has been explained
with the definition of various terms used in the equation. (Line 289-293,
391-392)
Q4: 3.3.3 “After applying approach 2, our dataset contains around
38000 pseudo-labeled reviews. We then employ PMI and POS tagger
in a similar way to phase 2. However, since this phase utilizes
pseudo-labeled data instead of the true-label data, we set a higher
threshold of 0.7 for the class label assignment.” Why is the
threshold set to 0.7?
-
Thank you for your comments. The threshold is set empirically, based on
the assumption that since pseudo-labels are not perfect (unlike true
labels), a higher threshold needs to be used compared to true-label data.
Q5: 5.1.1 “If the total polarity score of as review is below 0, we consider
it as a positive prediction; if the final score is below 0, we consider it as
negative;” In which case does the score below 0?
-
Thank you for your comment. This was a typo: ‘If the total polarity
score of a review is below 0, we consider it as a positive prediction’
has been changed to ‘If the total polarity score of a review is above 0,
we consider it as a positive prediction’.
3. The least important points
Q6: Table reference error in section 3.1.2, Table3 should be changed to
Table1.
- The Table reference has been fixed.
Q7: 3.1.2 Incorrect data in Table1, the data in the last column of the last
row in Table 1 should be 10410.
-Thank you for pointing this. The value has been fixed.
Q8: 5.1.2 “Among the three translated lexicons, VADER classifies 46.60%
reviews correctly, while AFINN and Opinion Lexicon provide 48.65% and
41.95% accuracy, respectively.” The accuracy of AFINN in Table 4 is
31.65%, which does not agree with the description in the text.
- Thank you for finding this. The typo has been fixed.
Q9: CONCLUSION “We made both BengSentiLex and BengSwearLex
publicly available for the researchers in Sen (2020)” This sentence lacks
punctuation.
- Thank you. The punctuation has been added.
Reviewer 2
Basic reporting
The article has been provided in clear and professional English, as well as a
sufficient organized sections. The tables and Figures are also well created.
Experimental design
The approach seems to have a promising results. Although the approaches
have been used in previous papers, they showed a good insight on the
specific language (Bengali).
Validity of the findings
The authors have provided well-designed conclusion, as well as defining the
data with good statistics.
Comments for the author
Here is a few suggestion that can improve the article:
1- The scope of the 'related works' section requires a more thorough
details.
i) Some essential parts of the paper, such as 'supervisory characteristics'
(supervised methods, semi-supervised methods, and unsupervised methods)
can be discussed.
- A basic terminology section has been added that describes the supervised
approach, the semi-supervised approach, and other terms/approaches used in
the paper (Section 4.1, Lines 259-293).
ii) the following paper has also done a similar work. it is recommended to
review and compare the method with the provided method in the
paper. https://www.sciencedirect.com/science/article/abs/pii/S09507051203
05529
- A comparison with the suggested paper has been added (Lines 160-167 and
Lines 217-220).
iii) Section 3.2.1, the second paragraph is relatively discussed the
related works, which can be replaced in the 'related works' section.
Thank you for your suggestion; the machine translation part has been
moved to the earlier basic terminology section.
2- For better referencing the readers, it is recommended to
reference the Equations with numbers.
Equation numbers have been added for better referencing.
3- For better understanding of the readers, a translation for Figure 5
is required.
Thank you for your comments. The English translation has been added
based on your suggestion.
4- Also, the source of the dataset (Youtube reviews) is needed to be
cited.
- The source of the dataset has been added.
5- The compared methods are mostly traditional methods for the
experiments. authors must provide state-of-the-art methods for
comparisons. (having LSTM, Bi-LSTM, CNN, Attention model, BERT
model, etc.) - though not all of them, but to compare some of them.
- Thank you for your comments. For evaluation, I added the comparison of
LSTM, Bi-LSTM and CNN with the swear lexicon for identifying profanity.
Table 5, Line 583
The general idea behind the paper is a novel approach for Bengali language
and hope the authors find the comments useful and make the necessary
changes for the acceptance.
Reviewer 3
Basic reporting
No comment
Experimental design
No comment
Validity of the findings
No comment
Comments for the author
line 231, Table 3 -> Table 1
- Thank you for your comments. The table reference number has been fixed.
line 235, numbers do not add up to the numbers tabulated in Table 1 -> last
row, last column cell?
- The typo has been fixed.
line 249, will translation sufficiently map all types of sentiments from one to
another language?
- Thank you for your insightful comments. In another paper, we have
investigated sentiment preservation between original Bengali and
machine-translated English reviews utilizing Cohen’s kappa and Gwet’s AC1.
We found that two very accurate classifiers, SVM and LR, show kappa scores
above 0.80 and AC1 scores above 0.85, which indicates that sentiment
consistency exists between the original Bengali and machine-translated
English reviews. Thus, although not perfect, Google's machine translation
preserves the sentiment in the majority of cases.
line 254, reference for NMT
- References for NMT have been added.
line 364, which is the machine learning model used in SGD algorithm?
Similary in line 515.
- Thank you for your comments. The SGD algorithm uses hinge loss, which is
basically a linear SVM. The following information has been added: “For SGD,
hinge loss and l2 penalty with a maximum iteration of 1500 are employed.”
" | Here is a paper. Please give your review comments after reading it. |
212 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Bengali is a low-resource language that lacks tools and resources for various natural language processing (NLP) tasks, such as sentiment analysis and profanity identification. In Bengali, only translated versions of English sentiment lexicons are available. Besides, no lexicon exists for detecting profanity in Bengali social media text. In this work, we introduce a Bengali sentiment lexicon, BengSentiLex, and a Bengali swear lexicon, BengSwearLex. To build BengSentiLex, we propose a cross-lingual methodology that utilizes a machine translation system, a review corpus, English sentiment lexicons, pointwise mutual information (PMI), and supervised machine learning (ML) classifiers in different steps. For creating BengSwearLex, we introduce a semi-automatic methodology that leverages an obscene corpus, word embedding, and part-of-speech (POS) taggers. We compare the performance of BengSentiLex with the translated English lexicons on three evaluation datasets. BengSentiLex achieves 5%-50% improvement over the translated lexicons. For profanity detection, BengSwearLex achieves coverage of around 85% at the document level in the evaluation dataset. The experimental results imply that BengSentiLex and BengSwearLex are effective at classifying sentiment and identifying profanity in Bengali social media content, respectively.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>The popularity of e-commerce and social media has led to a surge in the availability of user-generated content. Therefore, text analysis tasks such as sentiment classification and profanity or abusive content identification have received significant attention in recent years. Sentiment analysis identifies emotions, attitudes, and opinions expressed in a text <ns0:ref type='bibr' target='#b33'>(Liu, 2012)</ns0:ref>. Extracting insights from user feedback data has practical implications such as market research, customer service, result prediction, etc. Profanity indicates the use of taboo or swear words to express emotional feelings and is prevalent in social media data (e.g., online posts, messages, comments, etc.) <ns0:ref type='bibr' target='#b60'>(Wang et al., 2014)</ns0:ref> across languages. The occurrences of swearing or vulgar words are often linked with abusive or hateful contexts, sexism, and racism. Hence, identifying swear words has practical connections to understanding and monitoring online content. In this paper, we use terms such as profanity, slang, vulgarity, and swearing interchangeably to indicate the usage of foul and filthy language/words, even though they have subtle differences in their meanings.</ns0:p><ns0:p>Lexicons play an important role in both sentiment classification and profanity identification. For example, sentiment lexicons help to analyze key subjective properties of texts such as opinions and attitudes <ns0:ref type='bibr' target='#b54'>(Taboada et al., 2011)</ns0:ref>. A sentiment lexicon contains opinion-conveying terms (e.g., words, phrases, etc.), labeled with the sentiment polarity (i.e., positive or negative) and the strength of the polarity. Some examples of positive sentiment words are beautiful, wonderful, and amazing, which express desired states or qualities, while negative sentiment words, such as bad, awful, poor, etc., are used to represent undesired states. A profane list, on the other hand, contains words having foul, filthy, and profane meanings (e.g., ass, fuck, bitch). When labeled data are unavailable, sentiment classification methods usually utilize opinion-conveying words and a set of linguistic rules. As this approach relies on the polarity of individual words, it is crucial to build a comprehensive sentiment lexicon. Similarly, creating a lexicon that consists of a list of swear and obscene words is instrumental for determining profanity in a text.</ns0:p><ns0:p>As sentiment analysis is a well-studied problem in English, many general-purpose and domain-specific sentiment lexicons are available. Some of the popular general-purpose sentiment lexicons are MPQA <ns0:ref type='bibr' target='#b62'>(Wilson et al., 2005)</ns0:ref>, the opinion lexicon <ns0:ref type='bibr' target='#b25'>(Hu and Liu, 2004)</ns0:ref>, SentiWordNet <ns0:ref type='bibr' target='#b19'>(Esuli and Sebastiani, 2006)</ns0:ref>, and VADER <ns0:ref type='bibr' target='#b27'>(Hutto and Gilbert, 2014)</ns0:ref>. Besides English, other widely used languages such as Chinese, Arabic, and Spanish have their own sentiment lexicons <ns0:ref type='bibr' target='#b64'>(Xu et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b36'>Mohammad et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b40'>Perez-Rosas et al., 2012)</ns0:ref>.</ns0:p><ns0:p>
The presence of swearing in English social media has been investigated by various researchers <ns0:ref type='bibr' target='#b60'>(Wang et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b37'>Pamungkas et al., 2020)</ns0:ref>. <ns0:ref type='bibr' target='#b60'>Wang et al. (2014)</ns0:ref> found that the rate of swear word use on English Twitter is 1.15%, almost double its rate in daily conversation (0.5%-0.7%) as observed in previous work <ns0:ref type='bibr' target='#b29'>(Jay, 1992)</ns0:ref>. The work of <ns0:ref type='bibr' target='#b60'>Wang et al. (2014)</ns0:ref> also reported that 7.73% of the tweets in their random sampling collection contain swear words.</ns0:p><ns0:p>Although Bengali is the seventh most spoken language in the world, sentiment analysis and profanity identification in Bengali are still in their infancy. Limited research has been conducted on sentiment analysis in Bengali in the last two decades, mostly utilizing supervised machine learning (ML) techniques <ns0:ref type='bibr' target='#b51'>(Sazzed and Jayarathna, 2019;</ns0:ref><ns0:ref type='bibr' target='#b15'>Das and Bandyopadhyay, 2010a;</ns0:ref><ns0:ref type='bibr' target='#b13'>Chowdhury and Chowdhury, 2014;</ns0:ref><ns0:ref type='bibr' target='#b46'>Sarkar and Bhowmick, 2017;</ns0:ref><ns0:ref type='bibr' target='#b42'>Rahman and Kumar Dey, 2018)</ns0:ref>, as these do not require language-specific resources such as a sentiment lexicon, part-of-speech (POS) tagger, or dependency parser. Regarding profanity identification, although a few works addressed abusive content analysis, none of them focused on determining profanity or generating resources for identifying profanity.</ns0:p><ns0:p>There have been a few attempts to develop sentiment lexicons for Bengali by translating various English sentiment dictionaries. <ns0:ref type='bibr' target='#b16'>Das and Bandyopadhyay (2010b)</ns0:ref> utilized a word-level lexical-transfer technique and an English-Bengali dictionary to develop a SentiWordNet for Bengali from the English SentiWordNet. <ns0:ref type='bibr' target='#b5'>Amin et al. (2019)</ns0:ref> translated the VADER <ns0:ref type='bibr' target='#b27'>(Hutto and Gilbert, 2014)</ns0:ref> sentiment lexicon to Bengali for sentiment analysis. However, dictionary-based translation cannot capture the informal language people use in everyday communication or on social media. Regarding vulgar or swear words, no resources exist in Bengali that can identify profanity in social media data. Therefore, in this work, we focus on generating resources for these two essential tasks.</ns0:p><ns0:p>To develop the Bengali sentiment lexicon BengSentiLex, we present a corpus-based cross-lingual methodology. We collected around 50000 Bengali drama reviews from YouTube; among them, we manually annotated around 12000 reviews <ns0:ref type='bibr' target='#b47'>(Sazzed, 2020a)</ns0:ref>. Our proposed methodology consists of three phases, where each phase identifies sentiment words from the corpus and includes them in the lexicon. In phase 1, we identify sentiment words from the Bengali review corpus (both labeled and unlabeled) with the help of two English sentiment lexicons, Bing Liu's opinion lexicon and VADER. In phase 2, utilizing around 12000 annotated reviews and PMI, we identify the top class-correlated (positive or negative) words. Using a POS tagger, we determine adjectives and verbs, which mainly convey opinions. In the final phase, we make use of unlabeled reviews to recognize polar words.
Utilizing the labeled reviews as training data, we determine the class of the unlabeled reviews. We then follow steps similar to those of phase 2 to identify sentiment words from these pseudo-labeled reviews. All three phases are followed by a manual validation and synonym generation step. Finally, we provide a comparative performance analysis of the developed BengSentiLex against translated English lexicons. As BengSentiLex is built from a social media corpus, it contains words that people use on the web, on social media, and in informal communication; therefore, it is more effective at recognizing sentiment in text data than word-level translations of English lexicons. This sentiment lexicon creation framework is an extended version of the previous work of <ns0:ref type='bibr' target='#b48'>Sazzed (2020b)</ns0:ref>.</ns0:p><ns0:p>To construct the Bengali swear lexicon, BengSwearLex, we propose a corpus-based semi-automatic approach. However, unlike the methodology used for sentiment lexicon creation, this approach does not use any cross-lingual resources, as machine translation is not capable of translating language-specific swear terms. From an existing Bengali obscene corpus, utilizing word embedding and POS tagging, we create BengSwearLex. To show the efficacy of BengSwearLex for identifying profanity, we annotate a negative drama review corpus into profane and non-profane categories based on the presence of swear terms. We find that BengSwearLex successfully identifies 85.5% of the profane reviews from the corpus.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.1'>Motivation and Challenges</ns0:head><ns0:p>Since the existing Bengali sentiment dictionaries lack the words people use in informal and social communication, it is necessary to build such a sentiment lexicon in Bengali. With the rapid growth of user-generated Bengali content on social media and the web, the presence of inappropriate content has become an issue. Content that is not in line with the social norms and expectations of a community needs to be censored. However, in Bengali, no resources exist for identifying the presence of profanity; thus, we also focus on building a swear lexicon for Bengali. Some of the challenges in developing these lexicons in Bengali are as follows:</ns0:p><ns0:p>1. One of the popular techniques for creating a lexicon is to utilize corpora to extract opinion-conveying words. However, the Bengali language lacks such corpora; thus, we have to collect and annotate a large corpus.</ns0:p><ns0:p>2. An important tool for identifying opinion words is a sophisticated part-of-speech (POS) tagger. However, no sophisticated POS tagger exists for Bengali; thus, we leverage an English POS tagger utilizing machine translation.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.2'>Contributions</ns0:head><ns0:p>Our main contributions in this paper can be summarized as follows:</ns0:p><ns0:p>• We introduce two lexical resources: BengSentiLex, a Bengali sentiment lexicon that consists of over 1200 opinion words created from a Bengali review corpus, and BengSwearLex, a Bengali swear lexicon comprising about 200 swear words. We have made both lexicons publicly available for researchers.</ns0:p><ns0:p>• We show how a machine translation-based cross-lingual approach, labeled and unlabeled reviews, and English sentiment lexicons can be utilized to build a sentiment lexicon in Bengali.</ns0:p><ns0:p>• We present a semi-automatic methodology for developing a swear lexicon utilizing an obscene corpus and various natural language processing tools.</ns0:p><ns0:p>• We demonstrate that BengSentiLex and BengSwearLex are effective at sentiment classification and profane term detection compared to existing tools.</ns0:p><ns0:p>The rest of the paper is structured as follows: Sections 2 and 3 review the related literature on sentiment lexicon creation and on profanity and abusive content analysis, respectively. Section 4 presents the sentiment lexicon construction methodology, including the corpora and cross-lingual resources used. Section 5 describes the swear lexicon creation framework. Section 6 provides experimental results, and Section 7 provides discussion. Finally, we conclude and provide future directions.</ns0:p><ns0:p><ns0:ref type='bibr' target='#b33'>Liu (2012)</ns0:ref> categorized sentiment lexicon generation methods into three approaches: the manual approach, the dictionary-based approach, and the corpus-based approach. Considerable time and resources are needed for the manual approach, as the annotation is performed by humans. The dictionary-based methods usually start with a set of seed words, which are created manually and then expanded using a dictionary. The corpus-based techniques utilize both manually labeled seed words and corpus data. Since the proposed sentiment lexicon creation framework is corpus-based, we discuss only the works that utilize corpora for lexicon creation. <ns0:ref type='bibr' target='#b26'>Huang et al. (2014)</ns0:ref> proposed a label propagation-based method for generating a domain-specific sentiment lexicon. In their work, candidate sentiment terms are extracted by leveraging chunk dependency information and a prior generic sentiment dictionary. They defined pairwise contextual and morphological constraints and incorporated them into the label propagation. Their experimental results suggested that constrained label propagation can improve the performance of the automatic construction of domain-specific sentiment lexicons.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>RELATED WORK OF SENTIMENT LEXICON CREATION</ns0:head></ns0:div>
<ns0:div><ns0:head n='2.1'>Corpus-based lexicon generation in English</ns0:head></ns0:div>
<ns0:div><ns0:p><ns0:ref type='bibr' target='#b23'>Han et al. (2018)</ns0:ref> proposed a domain-specific lexicon generation method from an unlabeled corpus utilizing mutual information and part-of-speech (POS) tags. Their lexicon shows satisfactory performance on publicly available datasets. <ns0:ref type='bibr' target='#b56'>Tai and Kao (2013)</ns0:ref> proposed a graph-based label propagation algorithm to generate a domain-specific sentiment lexicon. Their approach considers words as nodes and similarities as weighted edges of the word graph. Using a graph-based label propagation method, they assigned polarity to unlabeled words. They conducted experiments on a Twitter dataset and achieved better performance than general-purpose sentiment dictionaries. <ns0:ref type='bibr' target='#b59'>Wang and Xia (2017)</ns0:ref> developed a neural architecture to train a sentiment-aware word embedding. To enhance the quality of the word embedding as well as the sentiment lexicon, they integrated sentiment supervision at both the document and word levels. They performed experiments on the SemEval 2013-2016 datasets using their sentiment lexicon and obtained the best performance in both supervised and unsupervised sentiment classification tasks. <ns0:ref type='bibr' target='#b22'>Hamilton et al. (2016)</ns0:ref> constructed a domain-sensitive sentiment lexicon using label propagation algorithms and small seed sets. They showed that their corpus-based approach outperformed methods that rely on hand-curated resources such as WordNet. <ns0:ref type='bibr' target='#b63'>Wu et al. (2019)</ns0:ref> presented an automatic method for building a target-specific sentiment lexicon. Their lexicon consists of opinion pairs made from an opinion target and an opinion word. Their unsupervised algorithms first extract high-quality opinion pairs and then calculate sentiment scores for the opinion pairs utilizing a general-purpose sentiment lexicon and contextual knowledge. They applied their method to several product review datasets and found their lexicon outperformed several general-purpose sentiment lexicons. <ns0:ref type='bibr' target='#b10'>Beigi and Moattar (2021)</ns0:ref> presented an automatic domain-specific sentiment lexicon construction method for unsupervised domain adaptation and sentiment classification. The authors first constructed a sentiment lexicon from the source domain using the labeled data. In the next phase, the weights of the first hidden layer of a Multilayer Perceptron (MLP) are set to the corresponding polarity score of each word from the developed sentiment lexicon, and then the network is trained. Finally, a domain-independent lexicon (DIL) is introduced that contains words with static positive or negative scores. The experiments on Amazon multi-domain sentiment datasets showed the superiority of their approach over existing unsupervised domain adaptation methods.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>Lexicon generation in Bengali and other languages</ns0:head></ns0:div>
<ns0:div><ns0:p><ns0:ref type='bibr' target='#b4'>Al-Moslmi et al. (2018)</ns0:ref> developed an Arabic sentiment lexicon consisting of 3880 positive and negative synsets annotated with part-of-speech (POS), polarity scores, dialect synsets, and inflected forms. They performed a word-level translation of the English MPQA lexicon using Google translation, followed by manual inspection to remove inappropriate words. Besides, from two Arabic review corpora, they manually compiled a list of opinion words and sentiment phrases. In <ns0:ref type='bibr' target='#b40'>Perez-Rosas et al. (2012)</ns0:ref>, the authors presented a framework to derive a sentiment lexicon in Spanish using manually and automatically annotated data from English. To bridge the language gap, they used the multilingual sense-level aligned WordNet structure. In <ns0:ref type='bibr' target='#b36'>Mohammad et al. (2016)</ns0:ref>, the authors introduced several sentiment lexicons in Arabic that were automatically generated using two different methods: (1) by using distant supervision techniques on Arabic tweets, and (2) by translating English sentiment lexicons into Arabic using a freely available statistical machine translation system. They compared the performance of existing and their proposed sentiment lexicons in sentence-level sentiment analysis. <ns0:ref type='bibr' target='#b6'>Asghar et al. (2019)</ns0:ref> presented a word-level translation scheme for creating an Urdu polarity lexicon using a list of English opinion words, SentiWordNet, an English-Urdu bilingual dictionary, and a collection of Urdu modifiers. <ns0:ref type='bibr' target='#b16'>Das and Bandyopadhyay (2010b)</ns0:ref> proposed a computational method for generating an equivalent of the English SentiWordNet using an English-Bengali bilingual dictionary. Their approach used a word-level translation process followed by an error reduction technique. From the SentiWordNet, they selected a subset of opinion words whose orientation strength is above a heuristically identified threshold of 0.4. They used two Bengali corpora, News and Blog, to show the coverage of their developed lexicon. In <ns0:ref type='bibr' target='#b5'>Amin et al. (2019)</ns0:ref>, the authors compiled a Bengali polarity lexicon from the English VADER lexicon using a translation technique. They modified the functionalities of the VADER lexicon so that it can be directly applied to Bengali sentiment analysis.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3'>Comparison with Existing Sentiment Lexicons</ns0:head><ns0:p>We provide comparisons with the Bengali lexicon-based methods in both the methodological and evaluation phases. Due to the difference in language, it is not possible to compare the proposed framework with the English lexicon-based methods in the evaluation step. Thus, to show the novelty and originality of the proposed framework, we discuss how it differs from the existing sentiment lexicon generation methods in English.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3.1'>Bengali Sentiment lexicons</ns0:head><ns0:p>In contrast to the existing Bengali sentiment lexicons, which are simple word-level translations of English lexicons, BengSentiLex is created from a Bengali review corpus. BengSentiLex thus differs both in how it has been developed and in the nature of its content: we use a cross-lingual corpus-based approach utilizing labeled and unlabeled data, while the existing lexicons simply translate English lexicons to Bengali at the word level. Moreover, as we utilize social media review data, BengSentiLex is capable of capturing the opinion words people use in informal communication.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3.2'>English Sentiment Lexicons</ns0:head><ns0:p>In this section, we discuss how the proposed methodology differs from some of the existing lexicon creation methods in English. A number of corpus-based lexicon-generation methods employed label propagation algorithms utilizing seed words <ns0:ref type='bibr' target='#b58'>(Velikovich et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b22'>Hamilton et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b56'>Tai and Kao, 2013)</ns0:ref>, while BengSentiLex does not use any seed word list. Some of the existing works utilized PMI or modified PMI in some phase of their lexicon generation framework <ns0:ref type='bibr' target='#b67'>(Yang et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b57'>Turney and Littman, 2003;</ns0:ref><ns0:ref type='bibr' target='#b66'>Xu et al., 2012)</ns0:ref>; however, beyond the use of PMI, their frameworks are entirely different from the proposed methodology. Moreover, they calculated PMI among various features, while the proposed framework computes it between a feature and the target class. The work of <ns0:ref type='bibr' target='#b10'>Beigi and Moattar (2021)</ns0:ref> utilized a word's frequency in positive and negative comments and the vocabulary size of the corpus to determine the polarity score of the corresponding word, while BengSentiLex uses a PMI-based sentiment intensity (SI) score to determine the semantic orientation of a word. Some other works utilized matrix factorization <ns0:ref type='bibr' target='#b39'>(Peng and Park, 2011)</ns0:ref> or distant supervision <ns0:ref type='bibr' target='#b53'>(Severyn and Moschitti, 2015)</ns0:ref> for creating lexicons. A comprehensive literature review of corpus-based lexicon creation methods in English has been performed by <ns0:ref type='bibr' target='#b14'>Darwich et al. (2019)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>RELATED WORK OF PROFANITY AND ABUSIVE CONTENT ANALYSIS</ns0:head><ns0:p>Researchers have studied the existence and sociolinguistic characteristics of swearing or cursing in social media. <ns0:ref type='bibr' target='#b60'>Wang et al. (2014)</ns0:ref> investigated cursing activities on Twitter, a social media platform. They studied the ubiquity, utility, and contextual dependency of swearing on Twitter. <ns0:ref type='bibr' target='#b20'>Gauthier et al. (2015)</ns0:ref> analyzed several sociolinguistic aspects of swearing in Twitter text data. Several studies investigated the relationship between social factors such as gender and profanity and discovered that males employ profanity much more often than females <ns0:ref type='bibr' target='#b60'>(Wang et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b52'>Selnow, 1985)</ns0:ref>. Other social factors such as age, religiosity, or social status were also found to be related to the rate of using vulgar words <ns0:ref type='bibr' target='#b35'>(McEnery, 2004)</ns0:ref>. <ns0:ref type='bibr' target='#b30'>Jay and Janschewitz (2008)</ns0:ref> noticed that the offensiveness of taboo words depends on their context and found that the usage of taboo words in a conversational context is less offensive than in a hostile context. <ns0:ref type='bibr' target='#b41'>Pinker (2007)</ns0:ref> classified the use of swear words into five categories: dysphemistic, using a taboo term to emphasize the offensive nature of the matter being discussed; abusive, using taboo words to abuse or insult someone; idiomatic, using taboo words to arouse the interest of listeners without really referring to the matter; emphatic, to emphasize another word; and cathartic, the use of swear words as a response to stress or pain.</ns0:p><ns0:p>Research related to the identification of swearing or offensive words has been conducted mainly in English; therefore, lexicons comprised of offensive words are available in the English language. Pamungkas et al. <ns0:ref type='bibr' target='#b3'>(2020)</ns0:ref> created SWAD (Swear Words Abusiveness Dataset), an English Twitter corpus in which abusive swearing is manually annotated at the word level. Their collection consists of 1,511 unique swear words from 1,320 tweets. <ns0:ref type='bibr' target='#b44'>Razavi et al. (2010)</ns0:ref> manually collected approximately 2,700 dictionary entries, including phrases and multi-word expressions, in one of the earliest works on offensive lexicon creation. More recent work on a lexicon focusing on hate speech was reported by <ns0:ref type='bibr' target='#b21'>(Gitari et al., 2015)</ns0:ref>. Currently, the largest English lexicon of abusive words is the one provided by <ns0:ref type='bibr' target='#b61'>(Wiegand et al., 2018)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:p>In Bengali, several works investigated the presence of abusive language in social media data by leveraging supervised ML classifiers and labeled data <ns0:ref type='bibr' target='#b28'>(Ishmam and Sharmin, 2019;</ns0:ref><ns0:ref type='bibr' target='#b9'>Banik and Rahman, 2019)</ns0:ref>. <ns0:ref type='bibr' target='#b17'>Emon et al. (2019)</ns0:ref> utilized a linear support vector classifier (LinearSVC), logistic regression (LR), multinomial naïve Bayes (MNB), random forest (RF), an artificial neural network (ANN), and a recurrent neural network (RNN) with long short-term memory (LSTM) to detect multi-type abusive Bengali text. They found that the RNN outperformed the other classifiers, obtaining the highest accuracy of 82.20%. Chakraborty and Seddiqui (2019) employed machine learning and natural language processing techniques to build an automatic system for detecting abusive comments in Bengali. As input, they used Unicode emoticons and Unicode Bengali characters. They applied MNB, SVM, and a Convolutional Neural Network (CNN) with LSTM, and found that SVM performed best with 78% accuracy. Karim et al. (2020) proposed BengFastText, a word embedding model for Bengali, and incorporated it into a Multichannel Convolutional-LSTM (MConv-LSTM) network for predicting different types of hate speech. They compared BengFastText against Word2Vec and GloVe embeddings by integrating them into several ML classifiers and showed the effectiveness of BengFastText for hate speech detection. Sazzed (2021a) introduced an annotated corpus of 3000 transliterated Bengali comments categorized into two classes, abusive and non-abusive, with 1500 comments in each. For the baseline evaluations, the author employed several supervised machine learning (ML) and deep learning-based classifiers and observed that the support vector machine (SVM) shows the highest efficacy for identifying abusive content.</ns0:p><ns0:p>However, none of the existing works focused on creating resources to detect vulgarity or profanity in Bengali social media content. To the best of our knowledge, this is the first attempt to create such resources in the context of Bengali social media data.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>CREATION OF SENTIMENT LEXICON</ns0:head></ns0:div>
<ns0:div><ns0:head n='4.1'>Basic Terminology</ns0:head><ns0:p>This section describes some of the concepts used in this paper for sentiment lexicon creation.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1.1'>Supervisory Characteristics</ns0:head><ns0:p>Supervised learning Supervised learning is one of the most popular approaches in machine learning, defined by its use of annotated data. The labeled data are used to train, or 'supervise', algorithms to classify data accurately. Using annotated inputs and outputs, the model can assess its accuracy and learn over time.</ns0:p><ns0:p>Semi-supervised learning Semi-supervised learning uses both labeled and unlabeled data. It is very useful when a high volume of data is available but the annotation process is challenging and requires a large amount of time and resources.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1.2'>Cross-lingual approach</ns0:head><ns0:p>The cross-lingual approach leverages resources and tools from a resource-rich language (e.g., English) for a resource-scarce language. Most of the research in sentiment analysis has been performed in English; hence, resources from English can be employed in other languages using various language mapping techniques. The construction of a language-specific sentiment lexicon requires vast resources, tools, and an active research community, which are not available for resource-scarce languages. A feasible approach is to utilize resources from languages where sentiment resources are abundant <ns0:ref type='bibr' target='#b50'>(Sazzed, 2021b)</ns0:ref>. In this work, we employ machine translation to leverage several resources from English.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1.3'>Machine translation</ns0:head><ns0:p>Machine translation (MT) refers to the use of software to translate text or speech from one language to another. Over the decades, machine translation has evolved from simple word-level substitution into far more reliable systems based on sophisticated Neural Machine Translation (NMT) <ns0:ref type='bibr' target='#b31'>(Kalchbrenner and Blunsom, 2013;</ns0:ref><ns0:ref type='bibr' target='#b7'>Bahdanau et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b68'>Zhu et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Machine translation has been successfully applied to various sentiment analysis tasks. <ns0:ref type='bibr' target='#b8'>Balahur and Turchi (2014)</ns0:ref> studied the possibility of employing machine translation systems and supervised methods to build models that can detect and classify sentiment in low-resource languages. Their evaluation showed that machine translation systems were rapidly maturing. The authors claimed that with appropriate ML algorithms and carefully chosen features, machine translation could be used to build sentiment analysis systems in resource-poor languages.</ns0:p><ns0:p>Pointwise mutual information (PMI) measures the association between two events. The PMI between two variables X and Y is computed as</ns0:p><ns0:formula xml:id='formula_1'>PMI(X,Y) = log [ P(X,Y) / (P(X) P(Y)) ]<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>Here, P(X,Y) is the joint probability of X and Y, estimated from the number of observed co-occurrences of X and Y, while P(X) and P(Y) are the marginal probabilities, estimated from the number of times X and Y occur individually. When X and Y are independent, the PMI between them is 0; PMI is maximized when X and Y are perfectly correlated.</ns0:p></ns0:div>
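<ns0:p>As a concrete illustration, Eq. (1) can be estimated from raw counts as in the following minimal Python sketch; the counts shown are illustrative values, not figures from our corpus.</ns0:p>

```python
import math

def pmi(count_xy, count_x, count_y, total):
    """Pointwise mutual information (Eq. 1), with probabilities
    estimated from co-occurrence and individual occurrence counts."""
    p_xy = count_xy / total
    p_x = count_x / total
    p_y = count_y / total
    return math.log(p_xy / (p_x * p_y))

# A word appearing in 40 of the 50 positive reviews of a 100-review corpus
print(pmi(count_xy=40, count_x=50, count_y=50, total=100))  # log(1.6) > 0
```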
<ns0:div><ns0:head n='4.2'>Datasets for Lexicon Creation and Evaluation</ns0:head></ns0:div>
<ns0:div><ns0:head n='4.2.1'>Training dataset for sentiment lexicon</ns0:head><ns0:p>We use a drama review dataset (Drama-Train) collected from YouTube <ns0:ref type='bibr' target='#b47'>(Sazzed, 2020a)</ns0:ref> to build BengSentiLex. This corpus consists of around 50000 Bengali reviews, where each review represents a viewer's opinion of a Bengali drama. Among the 50000 Bengali reviews, around 12000 are annotated by <ns0:ref type='bibr' target='#b47'>Sazzed (2020a)</ns0:ref>, while the remaining reviews are unlabeled. Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref> shows examples of drama reviews belonging to Drama-Train.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2.2'>Evaluation dataset for sentiment lexicon</ns0:head><ns0:p>We show the effectiveness of BengSentiLex on three datasets from distinct domains. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> provides the details of the evaluation datasets.</ns0:p><ns0:p>The first evaluation dataset is a drama review dataset (Drama-Eval) consisting of around 1000 annotated reviews. This dataset belongs to the same domain as the training dataset, Drama-Train; however, it has not been used for lexicon creation. It is a class-balanced dataset consisting of 500 positive and 500 negative reviews.</ns0:p><ns0:p>The second dataset is a news dataset (News1) collected from <ns0:ref type='bibr'>(Soc, 2020)</ns0:ref>. It consists of 4000 news comments; among them, 2000 are positive and 2000 are negative.</ns0:p><ns0:p>The third dataset is also a news comment dataset (News2), collected from two popular Bengali newspapers, Prothom Alo and BBC Bangla <ns0:ref type='bibr' target='#b55'>(Taher et al., 2018)</ns0:ref>. It consists of 5205 positive comments and around 5600 negative comments. For the evaluation, we select a class-balanced subset where each class contains 5205 comments.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3.1'>Sentiment lexicon</ns0:head><ns0:p>To identify whether a Bengali word conveys an opinion, we employ a cross-lingual approach: leveraging machine translation and English sentiment lexicons, we decide whether an extracted Bengali word bears opinion. The study by <ns0:ref type='bibr' target='#b51'>Sazzed and Jayarathna (2019)</ns0:ref> showed that although the Bengali-to-English machine translation (i.e., Google Translate) system is not perfect, it preserves semantic orientation in most cases. We translate all the extracted Bengali words into English and then determine their polarities based on the English lexicons. If the translated word exists in an English lexicon, we include the corresponding Bengali word in our Bengali sentiment lexicon. Although we perform word-level translation between Bengali and English, our method differs from existing works that translate words from English to Bengali and therefore contain only translated dictionary words rather than the words people use in informal communication.</ns0:p><ns0:p>Our proposed approach supports the inclusion of informal Bengali words, which is not achievable using the dictionary-based translation of English lexicons. Furthermore, this approach can yield multiple synonymous opinion words instead of one. By translating an English sentiment word, we obtain only one corresponding Bengali term; however, when words are extracted from the corpus and translated to English, several synonymous Bengali words can be mapped to the same English polarity word due to the limited coverage of the machine translation system. Thus, the approach helps to identify and include more opinion words in the Bengali lexicon.</ns0:p><ns0:p>To determine the polarity of the translated words, we utilize the following English sentiment lexicons.</ns0:p><ns0:p>Opinion lexicon was developed by <ns0:ref type='bibr' target='#b25'>(Hu and Liu, 2004</ns0:ref>) and contains around 6800 English sentiment words (positive or negative). Besides dictionary words, it also includes acronyms, misspelled words, and abbreviations. Liu's opinion lexicon is a binary lexicon, where each word is associated with either a positive (+1) or negative (-1) polarity value.</ns0:p><ns0:p>VADER is a sentiment lexicon especially attuned to social media. VADER contains over 7,500 lexical features with a sentiment polarity of either positive or negative and a sentiment intensity between -4 and +4. VADER includes emoticons such as ':-)', which denotes a smiley face (a positive expression), and sentiment-related initialisms such as 'LOL' and 'WTF'.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3.2'>Part-of-speech (POS) tagging</ns0:head><ns0:p>A part-of-speech (POS) tagger is a tool that assigns a POS tag (e.g., noun, verb, adjective) to each word in a text. As adjectives, nouns, and verbs usually convey opinions, a POS tagger can help to identify opinion words. In English, several standard POS taggers are available, such as the NLTK POS tagger <ns0:ref type='bibr' target='#b34'>Loper and Bird (2002)</ns0:ref> and the spaCy POS tagger <ns0:ref type='bibr' target='#b24'>Honnibal and Montani (2017)</ns0:ref>. However, since no sophisticated POS tagger is available in Bengali, we use the machine translation system to convert the probable Bengali opinion words to English. We then use the spaCy POS tagger to determine the POS tags of those English words, which allows us to label the POS tags of the corresponding Bengali words.</ns0:p></ns0:div>
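<ns0:p>A minimal sketch of this filtering step is shown below, assuming the Bengali words have already been translated to English by the MT system; the example words are illustrative.</ns0:p>

```python
import spacy

# Assumes the small English model is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def keep_adjectives_and_verbs(translated_words):
    """Keep only candidates whose English translation is tagged ADJ or VERB."""
    kept = []
    for word in translated_words:
        doc = nlp(word)
        if doc[0].pos_ in ("ADJ", "VERB"):
            kept.append(word)
    return kept

print(keep_adjectives_and_verbs(["beautiful", "drama", "love"]))
```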
<ns0:div><ns0:head n='4.4'>Methodology</ns0:head><ns0:p>The creation of BengSentiLex involves several phases. We utilize various tools and resources to determine opinion words from the corpus and add them to BengSentiLex, as shown in Figure 2.</ns0:p><ns0:p>• Phase 1: Labeled and unlabeled corpus, machine translation system, English lexicons.</ns0:p><ns0:p>• Phase 2: Labeled corpus, PMI, machine translation system, English POS tagger, English lexicons, Bengali lexicon (constructed in phase 1).</ns0:p><ns0:p>• Phase 3: Unlabeled corpus, ML classifiers, PMI, machine translation system, English POS tagger, English lexicons, Bengali lexicon (constructed in phases 1 and 2).</ns0:p></ns0:div>
<ns0:div><ns0:p>For synonym generation, we utilize Google Translate 1 , as no standard Bengali synonym dictionary is available on the web in digital format. We translate Bengali words into multiple languages such as English, Chinese, French, Hindi, Russian, and Arabic and then perform back-translation. This approach helps to find synonyms, as sentiments are expressed in different ways across languages.</ns0:p></ns0:div>
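<ns0:p>A sketch of this back-translation step is given below. The translate helper is a placeholder for whatever MT client is available (e.g., a Google Translate wrapper); its signature is an assumption for illustration, not the interface we used.</ns0:p>

```python
# Pivot languages used for round-trip translation
PIVOTS = ["en", "zh-cn", "fr", "hi", "ru", "ar"]

def back_translation_synonyms(bengali_word, translate):
    """Collect candidate Bengali synonyms by translating a word into several
    pivot languages and translating the result back to Bengali."""
    synonyms = set()
    for lang in PIVOTS:
        pivot = translate(bengali_word, src="bn", dest=lang)  # Bengali -> pivot
        back = translate(pivot, src=lang, dest="bn")          # pivot -> Bengali
        if back != bengali_word:
            synonyms.add(back)
    return synonyms  # candidates still go through manual validation
```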
<ns0:div><ns0:head n='4.4.1'>Phase-1: Utilizing English Sentiment Lexicons</ns0:head><ns0:p>The development of a sentiment lexicon typically starts with a list of well-defined sentiment words. A well-known approach for identifying the initial list of words (often called seed words) is to use a dictionary. However, dictionary words mostly denote formal expressions and usually do not represent the words people use on social media or in informal communication. On the contrary, words extracted from a corpus represent the terms people use in regular communication and are hence more useful for sentiment analysis.</ns0:p><ns0:p>We tokenize the words of the review corpus (both labeled and unlabeled) using the NLTK tokenizer and calculate their frequency in the corpus. Only the words with a frequency above 5 are added to the candidate pool. However, not all high-frequency words convey sentiment. For example, 'drama' is a high-frequency word in our drama review dataset, but it is not a sentiment word.</ns0:p><ns0:p>As Bengali does not have any sentiment dictionary of its own, we utilize resources from English. Using a machine translation system, we convert all the words from the candidate pool to English. Two English sentiment lexicons, the Opinion Lexicon and VADER, are employed to determine the polarity of the translated words. The assumption is that if a translated English word exists in an English sentiment lexicon, then it is an opinion-conveying word; therefore, the corresponding Bengali word can be added to the Bengali sentiment dictionary.</ns0:p></ns0:div>
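<ns0:p>The following sketch summarizes phase 1 under simplifying assumptions: translate is a placeholder MT call, and english_lexicon is a dictionary mapping English words to polarity values (as in the Opinion Lexicon or VADER).</ns0:p>

```python
from collections import Counter
from nltk.tokenize import word_tokenize  # assumes nltk.download('punkt')

def candidate_pool(reviews, min_freq=5):
    """Tokenize the corpus and keep words occurring more than min_freq times."""
    counts = Counter(tok for review in reviews for tok in word_tokenize(review))
    return [word for word, count in counts.items() if count > min_freq]

def phase1_lexicon(candidates, translate, english_lexicon):
    """Add a Bengali word to the lexicon if its English translation
    appears in an English sentiment lexicon."""
    lexicon = {}
    for bn_word in candidates:
        en_word = translate(bn_word)  # placeholder Bengali -> English MT call
        if en_word in english_lexicon:
            lexicon[bn_word] = english_lexicon[en_word]
    return lexicon
```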
<ns0:div><ns0:head n='4.4.2'>Phase 2: Lexicon generation from labeled data</ns0:head><ns0:p>Phase 2 retrieves opinion words from the labeled corpus by leveraging pointwise mutual information (PMI) and a POS tagger, as shown in Figures <ns0:ref type='figure' target='#fig_3'>2 and 3</ns0:ref>.</ns0:p><ns0:p>From the labeled reviews, we derive the terms that are highly correlated with the class label. Words or terms that already exist in the lexicon (from the earlier phase) are not considered. The remaining words are translated into English using the machine translation system, and we utilize the spaCy POS tagger to identify their POS tags. Since adjectives and verbs usually convey opinions, we keep only these and exclude the other POS categories.</ns0:p><ns0:p>The sentiment score of a word w is calculated as</ns0:p><ns0:formula xml:id='formula_2'>SentimentScore(w) = PMI(w, pos) − PMI(w, neg) (2)</ns0:formula><ns0:p>where PMI(w, pos) represents the PMI score of word w corresponding to the positive class and PMI(w, neg) represents the PMI score of word w corresponding to the negative class.</ns0:p><ns0:p>We then calculate the sentiment intensity (SI) of w using the following equation,</ns0:p><ns0:formula xml:id='formula_3'>SI(w) = SentimentScore(w) / (PMI(w, pos) + PMI(w, neg)) (3)</ns0:formula><ns0:p>We use the sentiment intensity along with a threshold value to identify opinion-conveying words from the labeled reviews. If the sentiment intensity of a word w is above the threshold of 0.5, we consider it a positive word; if it is below -0.5, we consider it a negative word:</ns0:p><ns0:formula xml:id='formula_4'>Class(w) = Positive, if SI(w) > 0.50; Negative, if SI(w) < −0.50; Unassigned, otherwise (4)</ns0:formula></ns0:div>
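<ns0:p>Equations (2)-(4) translate directly into code, as in the sketch below; the PMI values are illustrative.</ns0:p>

```python
def sentiment_intensity(pmi_pos, pmi_neg):
    """Eq. (2)-(3): signed PMI difference, normalized by the PMI sum."""
    sentiment_score = pmi_pos - pmi_neg
    return sentiment_score / (pmi_pos + pmi_neg)

def assign_class(si, threshold=0.5):
    """Eq. (4): threshold the sentiment intensity."""
    if si > threshold:
        return "Positive"
    if si < -threshold:
        return "Negative"
    return "Unassigned"

si = sentiment_intensity(pmi_pos=0.9, pmi_neg=0.2)  # 0.7 / 1.1 ≈ 0.64
print(assign_class(si))                             # "Positive"
```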
<ns0:div><ns0:head n='4.4.3'>Phase 3: Lexicon generation from unlabeled (pseudo-labeled) data</ns0:head><ns0:p>In addition to annotated reviews, our review corpus contains a large number of unlabeled reviews. For the labeled reviews, we use PMI to identify the top class-correlated words. However, for the unannotated reviews, the true class labels are not available; thus, automatic labeling is required. To automatically annotate the unlabeled reviews, we employ several ML classifiers and select the classifier with the highest accuracy. The unigram and bigram-based tf-idf (term frequency-inverse document frequency) scores are used as input features for the ML classifiers. The following ML classifiers are employed. SVM (Support Vector Machine) is a popular supervised ML algorithm used for classification and regression problems. Originally, SVM is a binary classifier that decides the best hyperplane to separate the space into binary classes by maximizing the distance between data points belonging to different classes; however, SVM can be used as a multi-class classifier following the same principle and employing a one-versus-one or one-versus-the-rest strategy. SGD (Stochastic Gradient Descent) is a method that optimizes an objective function iteratively. It is a stochastic approximation of actual gradient descent optimization, since it calculates the gradient from a randomly selected subset of the data. For SGD, hinge loss and an l2 penalty with a maximum of 1500 iterations are employed. LR (Logistic Regression) is a statistical classification method that finds the best-fitting model to describe the relationship between the dependent variable and a set of independent variables. Random Forest (RF) is a decision tree-based ensemble learning classifier; it makes predictions by combining the results from multiple individual decision trees. The k-nearest neighbors (k-NN) algorithm is a non-parametric method used for classification and regression; in k-NN classification, the class membership of a sample is determined by the plurality vote of its neighbors. Here, we set k=3, so the class of a review depends on its three closest neighbors. We use the scikit-learn <ns0:ref type='bibr' target='#b38'>(Pedregosa et al., 2011)</ns0:ref> implementations of the aforementioned ML classifiers with default parameter settings and assess their performance using 10-fold cross-validation.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref> shows the classification accuracy of the various ML classifiers using 10-fold cross-validation. Among the five classifiers we employ, SGD and SVM show the highest accuracy. Both of them correctly identify around 93% of the reviews, which is close to the accuracy of manual annotation. LR shows a similar accuracy of around 92%. We use these three classifiers to determine the class of the unlabeled reviews.</ns0:p><ns0:p>The following procedures are considered for automatically generating the class labels of the unannotated reviews utilizing the ML classifiers:</ns0:p><ns0:p>1) Use all the labeled reviews as training data and all the unlabeled reviews as testing data.</ns0:p><ns0:p>2) Iteratively utilize a small unlabeled set as testing data. After assigning their labels, we add these pseudo-labeled reviews to the training set and select a new set of unlabeled reviews as testing data. This procedure continues until all the data are labeled. A sketch of this iterative procedure is given after this section.</ns0:p><ns0:p>To estimate the performance of approach (1), we conduct 4-fold cross-validation on the labeled reviews, using 1 fold as training data and the remaining 3 folds as testing data. The training-testing data ratio is selected based on the ratio of labeled (around 12000) to unlabeled (around 38000) reviews. For approach (2), in a similar way to approach (1), we split the 12000 labeled reviews into four subsets. Initially, we use one subset (around 3000 reviews) as the training set. In each iteration, a group of reviews is selected from the other three subsets (around 9000 reviews) and used as the testing set; the size of the chosen group is equal to 10% of the current training set. After assigning the class of the reviews belonging to the testing set, they are added to the training set. This process continues until all 9000 reviews are annotated.</ns0:p><ns0:p>We find that gradually expanding the training set by adding the predicted reviews from the testing set provides better performance. After applying approach (2) (shown in Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>), our dataset contains around 38000 pseudo-labeled reviews. We then employ PMI and the POS tagger in a similar way to phase 2. However, since this phase utilizes pseudo-labeled data instead of true-label data, we set a higher threshold of 0.7 for the class label assignment.</ns0:p></ns0:div>
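<ns0:p>A minimal sketch of approach (2) is shown below, using the scikit-learn components described above; the batch-selection details are simplified for illustration.</ns0:p>

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

def self_train(labeled_texts, labels, unlabeled_texts, growth=0.10):
    """Iteratively pseudo-label batches sized at 10% of the current
    training set and fold them back into the training data."""
    train_x, train_y = list(labeled_texts), list(labels)
    pool = list(unlabeled_texts)
    while pool:
        model = make_pipeline(
            TfidfVectorizer(ngram_range=(1, 2)),  # unigram + bigram tf-idf
            SGDClassifier(loss="hinge", penalty="l2", max_iter=1500),
        )
        model.fit(train_x, train_y)
        batch_size = max(1, int(growth * len(train_x)))
        batch, pool = pool[:batch_size], pool[batch_size:]
        train_x.extend(batch)
        train_y.extend(model.predict(batch))
    return train_x, train_y  # labeled + pseudo-labeled reviews
```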
<ns0:div><ns0:head n='5'>CREATION OF SWEAR LEXICON</ns0:head></ns0:div>
<ns0:div><ns0:head n='5.1'>Corpus</ns0:head><ns0:p>We utilize two Bengali corpora: one for creating the swear lexicon, BengSwearLex, which we refer to as the training corpus (SW), and the other for analyzing and evaluating the performance of BengSwearLex, which we refer to as the evaluation corpus (SW).</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1.1'>Training corpus (SW)</ns0:head><ns0:p>We use a Bengali corpus deposited by Abu <ns0:ref type='bibr' target='#b3'>(2020)</ns0:ref> for constructing BengSwearLex. Originally, this corpus consists of 10221 text reviews/comments belonging to different categories, such as toxic, racism, obscene, and insult. However, this corpus is noisy, containing many empty or punctuation-only comments and erroneous annotations. We manually validate the corpus and exclude comments with the above-mentioned issues.</ns0:p><ns0:p>From the modified corpus, we select only the reviews labeled as obscene. After excluding erroneous reviews and reviews that belong to other classes, the corpus consists of 3902 obscene comments. The length of each comment ranges from 1 to 100 words.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref> shows some examples of the obscene comments from the training corpus (SW).</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1.2'>Evaluation corpus (SW)</ns0:head><ns0:p>The evaluation corpus we utilize is a drama review corpus collected from YouTube (the same corpus that is used for sentiment lexicon creation). This corpus was created and deposited by (Sen, 2019) for sentiment analysis; it consists of 8500 positive and 3307 negative reviews. However, there is no distinction between different types of negative reviews. Therefore, we manually label these 3307 negative reviews into two categories, profane and non-profane. The annotation was conducted by three expert Bengali annotators (A1, A2, A3). The first two annotators (A1 and A2) initially annotated all the reviews; disagreements were resolved by the third annotator (A3).</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 3. Description of the Drama Review Evaluation Corpus</ns0:head><ns0:p>Profane Non-Profane Total 664 2643 3307</ns0:p><ns0:p>After annotation, this corpus consists of 2643 non-profane negative reviews and 664 profane reviews, as shown in Table <ns0:ref type='table'>3</ns0:ref>. The kappa statistic for the two raters (A1 and A2) is 0.81, which indicates high agreement in the annotation.</ns0:p></ns0:div>
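<ns0:p>For reference, the reported agreement can be computed with scikit-learn's Cohen's kappa implementation; the label vectors below are illustrative, not the actual annotations.</ns0:p>

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-review labels from annotators A1 and A2 (1 = profane)
a1 = [1, 0, 0, 1, 0, 1, 0, 0]
a2 = [1, 0, 0, 1, 0, 0, 0, 0]
print(cohen_kappa_score(a1, a2))
```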
<ns0:div><ns0:head n='5.2'>Text Processing Tools</ns0:head></ns0:div>
<ns0:div><ns0:head n='5.2.1'>POS tagger</ns0:head><ns0:p>Similar to the sentiment lexicon creation (described in the previous section), we utilize a POS tagger here to identify opinion words 2 3 . However, limited research has been conducted on developing a sophisticated POS tagger for Bengali; therefore, the existing Bengali POS taggers are not as accurate as their English counterparts. Hence, manual validation is needed to check the correctness of the POS tags assigned by the POS taggers.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.2.2'>Word Embedding</ns0:head><ns0:p>A word embedding is a learned representation for text in which related words have similar representations. Word embeddings provide an efficient way to use dense representations of words, with the values of a word's embedding learned by the model during the training phase.</ns0:p><ns0:p>There exist two main approaches for learning word embeddings: count-based and context-based. Count-based vector space models rely heavily on word frequency and a co-occurrence matrix, with the assumption that words in the same contexts share similar or related semantic meanings. Context-based methods instead build predictive models that predict a target word given its neighbors; the best vector representation of each word is learned during the model training process.</ns0:p><ns0:p>The Continuous Bag-of-Words (CBOW) model is a popular context-based method for learning word vectors. It predicts the center word from the surrounding context words.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.3'>Swear Lexicon Creation Framework</ns0:head><ns0:p>Lexical resources can help to identify the presence of profanity in Bengali social media. Here, we present a semi-automatic approach for creating a swear lexicon utilizing an annotated corpus, word embedding, and a POS tagger. The complete lexicon development process consists of three phases, as shown in Figure <ns0:ref type='figure' target='#fig_8'>6</ns0:ref>: 1. Seed word selection 2. Lexicon expansion</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.'>Manual Validation</ns0:head></ns0:div>
<ns0:div><ns0:head n='5.3.1'>Seed word selection</ns0:head><ns0:p>The lexicon creation process usually begins with a list of seed words. Our proposed approach utilizes an annotated obscene corpus to generate the seed word list. We extract and count the occurrences of individual words in the corpus and, based on the word-occurrence counts, select the top 100 words. However, we find that this list contains some non-vulgar words, which we exclude utilizing the POS tagger and manual validation.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.3.2'>Lexicon expansion</ns0:head><ns0:p>The lexicon expansion step utilizes word embedding to identify words similar to the already recognized swear words. We use the training corpus (SW) as the training dataset and utilize the Gensim <ns0:ref type='bibr' target='#b45'>(Řehůřek and Sojka, 2010)</ns0:ref> Continuous Bag-of-Words (CBOW) implementation to find similar words. The entire procedure consists of the following steps; a code sketch is provided after the list.</ns0:p><ns0:p>• In the first step, we find the words which are most similar to the seed words.</ns0:p></ns0:div>
<ns0:div><ns0:p>• The second step iteratively finds words similar to the vulgar words recognized in step 1. We remove duplicate words automatically. In addition, we remove words that are not a noun, adjective, or verb. Several iterations are performed until we notice no significant expansion of the swear word list.</ns0:p></ns0:div>
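<ns0:p>The sketch below illustrates this expansion loop with Gensim's CBOW implementation; hyperparameters such as the vector size and window are assumptions, as they are not specified above.</ns0:p>

```python
from gensim.models import Word2Vec

def expand_swear_lexicon(tokenized_comments, seeds, topn=10, max_iter=5):
    """Grow the seed list with embedding neighbours learned from the
    obscene corpus (sg=0 selects CBOW in Gensim)."""
    model = Word2Vec(sentences=tokenized_comments, vector_size=100,
                     window=5, min_count=2, sg=0)
    lexicon = set(seeds)
    for _ in range(max_iter):
        frontier = set()
        for word in lexicon:
            if word in model.wv:
                for neighbour, _sim in model.wv.most_similar(word, topn=topn):
                    frontier.add(neighbour)
        if frontier <= lexicon:  # no significant expansion: stop
            break
        lexicon |= frontier
    return lexicon  # still subject to POS filtering and manual validation
```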
<ns0:div><ns0:head n='5.3.3'>Manual validation</ns0:head><ns0:p>In the final step, we manually exclude the non-swear words that remain in the swear lexicon. As lexical resources such as POS taggers in Bengali are not sophisticated enough, this manual validation step is necessary to eliminate non-swear words. Moreover, we find that vulgar comments often do not follow the usual sentence structure; therefore, the POS tagger fails to label them correctly.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>EXPERIMENTAL RESULTS</ns0:head></ns0:div>
<ns0:div><ns0:head n='6.1'>Sentiment Classification</ns0:head></ns0:div>
<ns0:div><ns0:head n='6.1.1'>Baselines and evaluation metrics</ns0:head><ns0:p>We compare the performance of our corpus-built lexicon BengSentiLex (716 negative and 519 positive words) with translated versions of three English sentiment lexicons, VADER (7518 words), AFINN (2477 words), and Opinion Lexicon (6800 words), by integrating them into a lexicon-based classifier. We compute the accuracy of all four lexicons on the three cross-domain evaluation datasets to show the effectiveness of BengSentiLex across varied domains and distributions. If the total polarity score of a review is above 0, we consider it a positive prediction; if the final score is below 0, we consider it negative; when the polarity score is 0, we consider the prediction wrong.</ns0:p><ns0:p>A polarity score of 0 can result when the word-level polarity scores of a lexicon cannot distinguish a review as positive or negative, or when the lexicon lacks coverage of the opinion words present in the review. It is more appropriate to consider this scenario a misprediction rather than a positive or negative class prediction.</ns0:p></ns0:div>
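<ns0:p>The scoring rule of the lexicon-based classifier can be sketched as follows, assuming the lexicon is a dictionary mapping words to polarity values.</ns0:p>

```python
def lexicon_predict(tokens, lexicon):
    """Sum the word-level polarities of a review; a total of 0 is
    treated as a misprediction, per the evaluation protocol."""
    score = sum(lexicon.get(token, 0) for token in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "wrong"  # zero score counts as a wrong prediction
```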
<ns0:div><ns0:head n='6.1.2'>Comparative results</ns0:head></ns0:div>
<ns0:div><ns0:p>Opinion Lexicon, and AFINN provide an accuracy of 40.90%, 35.57%, and 31.65%, respectively. In the News2 dataset, BengSentiLex shows an accuracy of 46.17%, while VADER provides the second-best performance with an accuracy of 43.12%.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.2'>Profanity Identification</ns0:head></ns0:div>
<ns0:div><ns0:head n='6.2.1'>Evaluation metric</ns0:head><ns0:p>To show the effectiveness of BengSwearLex, we utilize document-level coverage. The document-level coverage (or recall) of a lexicon corresponding to a review corpus is calculated as follows: from the corpus, we first count the number of reviews that contain at least one word from the lexicon; this count is then divided by the total number of reviews present in the corpus and multiplied by 100. The following equation is used to calculate the document-level coverage (DCov) of a lexicon:</ns0:p><ns0:p>DCov = (number of reviews with at least one identified swear word / total number of reviews in the corpus) * 100</ns0:p><ns0:p>The purpose of creating BengSwearLex is to identify comments and reviews that contain swear or slang words, not to identify non-profane comments; thus, we show document-level coverage only for the profane reviews. Regarding false positives, as BengSwearLex is manually validated in the final step, it contains only swear words; hence, there is no possibility that it identifies a non-profane comment as profane (a false positive).</ns0:p><ns0:p>As no swear lexicon exists in Bengali, we compare the performance of BengSwearLex with several supervised classifiers (which use in-domain labeled data) for profanity detection in the evaluation corpus. Two popular supervised ML classifiers, Logistic Regression (LR) and Support Vector Machine (SVM), and an optimization method, Stochastic Gradient Descent (SGD), are employed on the evaluation corpus to identify profane reviews. As feature vectors, we use the unigram and bigram-based tf-idf scores. 10-fold cross-validation is performed to assess the performance of the ML classifiers. For all the classifiers, default parameter settings are used; for SGD, hinge loss and an l2 penalty with a maximum of 1500 iterations are employed.</ns0:p><ns0:p>Furthermore, we employ Deep Neural Network (DNN) based architectures, a Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Bidirectional Long Short-Term Memory (BiLSTM), to identify profanity. The DNN based models start with the Keras embedding layer. The three important parameters of the embedding layer are the input dimension, which represents the size of the vocabulary; the output dimension, which is the length of the vector for each word; and the input length, the maximum length of a sequence. The input dimension is determined by the number of words present in a corpus, which varies between the two corpora. We set the output dimension to 64 and the maximum sequence length to 300. A dropout rate of 0.5 is applied to the dropout layer; ReLU activation is used in the intermediate layers, and softmax activation is applied in the final layer. Adam is used as the optimization function and binary cross-entropy as the loss function. We set the batch size to 64, use a learning rate of 0.001, and train the models for 10 epochs. We use the Keras library <ns0:ref type='bibr' target='#b12'>(Chollet et al., 2015)</ns0:ref> with the TensorFlow backend for implementing the DNN based models.</ns0:p></ns0:div>
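<ns0:p>A minimal Keras sketch of the LSTM variant with the stated hyperparameters is shown below, assuming a TensorFlow 2.x Keras version where Embedding accepts input_length; the vocabulary size is an assumption (in practice it is taken from the corpus), and the CNN and BiLSTM variants are analogous.</ns0:p>

```python
from tensorflow import keras
from tensorflow.keras import layers

vocab_size = 20000  # assumption: derived from the corpus vocabulary
max_len = 300       # maximum sequence length

model = keras.Sequential([
    layers.Embedding(input_dim=vocab_size, output_dim=64, input_length=max_len),
    layers.LSTM(64),                        # LSTM variant of the DNN models
    layers.Dropout(0.5),                    # dropout rate of 0.5
    layers.Dense(64, activation="relu"),    # ReLU in the intermediate layer
    layers.Dense(2, activation="softmax"),  # profane vs. non-profane (one-hot labels)
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, batch_size=64, epochs=10)
```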
<ns0:div><ns0:head n='6.2.2'>Comparison results</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_4'>5</ns0:ref> shows that among the 664 profane reviews present in the evaluation corpus (SW), BengSwearLex registers 564 reviews as profane by identifying the presence of at least one swear term in the review, a document-level coverage of 84.93%.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_4'>5</ns0:ref> also shows the coverage of the ML classifiers on the evaluation corpus. We provide their performance in two different settings: a class-balanced setting and a class-imbalanced setting. In the class-imbalanced setting, we utilize all 664 profane comments and all 2643 non-profane negative comments. In the class-balanced setting, we use all 664 profane comments; however, for the non-profane class, we randomly select 664 comments from the pool of 2643 non-profane comments.</ns0:p><ns0:p>From Table <ns0:ref type='table' target='#tab_4'>5</ns0:ref>, we observe that when the original class-imbalanced data is used, all three ML classifiers achieve coverage of around 60%. However, when a class-balanced dataset is utilized, the performance of the classifiers increases dramatically, achieving coverage of around 90%.</ns0:p></ns0:div>
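<ns0:p>For clarity, the DCov values reported in Table 5 correspond to the following computation; a whitespace tokenizer is assumed for illustration.</ns0:p>

```python
def document_coverage(reviews, swear_lexicon):
    """DCov: percentage of reviews containing at least one lexicon term."""
    hits = sum(1 for review in reviews
               if any(term in review.split() for term in swear_lexicon))
    return 100.0 * hits / len(reviews)
```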
<ns0:div><ns0:head n='7'>DISCUSSION</ns0:head></ns0:div>
<ns0:div><ns0:head n='7.1'>Sentiment Lexicon</ns0:head><ns0:p>The results suggest that translated lexicons are not good enough to capture the semantic orientation of the reviews, as they lack coverage of the opinion words present in Bengali text. We find that BengSentiLex performs considerably better than the translated lexicons in the drama review dataset, with over 40% improvement. Since BengSentiLex is developed from a corpus that belongs to the same domain, it is very effective at classifying sentiments in this drama review evaluation corpus.</ns0:p><ns0:p>Also, for the two other cross-domain evaluation corpora, News1 and News2, BengSentiLex yields better performance than the translated lexicons, especially for classifying negative reviews, which can be attributed to the presence of a higher number of negative opinion words (716) in BengSentiLex compared to 519 positive sentiment words.</ns0:p><ns0:p>The results indicate that utilizing corpora in the target language for automated sentiment lexicon generation is more effective than translating words directly from another language such as English. As BengSentiLex is built from a social media corpus, it is comprised of words that people use on the web, on social media, and in informal communication; thus, it is more effective at recognizing sentiment in text data than word-level translated lexicons.</ns0:p><ns0:p>Although supervised ML classifiers usually perform better in sentiment classification, they require annotated data, which are largely missing in low-resource languages such as Bengali. Thus, the developed lexicon can help sentiment classification in Bengali.</ns0:p></ns0:div>
<ns0:div><ns0:head n='7.2'>Swear Lexicon</ns0:head><ns0:p>The results in Table <ns0:ref type='table' target='#tab_4'>5</ns0:ref> reveal that BengSwearLex is capable of identifying profanity in Bengali social media content. It shows higher document-level coverage than classifiers trained on in-domain labeled data when a class-imbalanced training set is used; however, a class-balanced training set performs better than BengSwearLex. Labeled data are scarcely available in low-resource languages such as Bengali; therefore, although small in size, BengSwearLex can be an effective tool for profanity identification in the absence of labeled data. Besides, since BengSwearLex consists only of swear or obscene terms, there is very little chance that it would mark non-obscene comments as obscene (false positives), and it is thus capable of achieving a very high precision score.</ns0:p></ns0:div>
<ns0:div><ns0:head n='7.2'>Swear Lexicon</ns0:head><ns0:p>The results of Table <ns0:ref type='table' target='#tab_4'>5</ns0:ref> reveal that BengSwearLex is capable of identifying profanity in Bengali social media content. It shows higher document-level coverage than in-domain labeled data when a class-imbalanced training set is used. However, a class-balanced training set performs better than BengSwearLex. Labeled data is scarcely available in low-resource languages such as Bengali; therefore, although small in size, BengSwearLex can be an effective tool for profanity identification in the inadequacy of labeled data.</ns0:p><ns0:p>Besides, since BengSwearLex consists of only swear or obscene terms, there is a very low chance that it would refer to non-obscene comments as obscene (False Positive), thus capable of achieving a very high precision score.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>utilized linear support vector classifier (LinearSVC), logistic regression (LR), multinomial naïve Bayes (MNB), random forest (RF), artificial neural network (ANN), recurrent neural network (RNN) with long short term memory (LSTM) to detect multi-type abusive Bengali text. They found RNN outperformed other classifiers by obtaining the highest accuracy of 82.20%. Chakraborty and Seddiqui (2019) employed machine learning and natural language processing techniques to build an automatic system for detecting abusive comments in Bengali. As input, they used Unicode emoticons and Unicode Bengali characters. They applied MNB, SVM, Convolutional Neural Network (CNN) with LSTM, and found SVM performed best with 78% accuracy. Karim et al. (2020) proposed BengFastText, a word embedding model for Bengali, and incorporated it into a Multichannel Convolutional-LSTM (MConv-LSTM) network for predicting different types of hate speech. They compared BengFastText against the Word2Vec and GloVe embedding by integrating them into several ML classifiers and showed the effectiveness of BengFastText for hate speech detection. Sazzed (2021a) introduced an annotated Bengali corpus of 3000 transliterated Bengali comments categorized into two classes, abusive and non-abusive, 1500 comments for each. For the baseline evaluations, the author employed several supervised machine learning (ML) and deep learning-based classifiers. They observed support vector machine (SVM) shows the highest efficacy for identifying abusive content.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Sample reviews from training dataset Drama-Train</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The various phases of sentiment lexicon generation in Bengali</ns0:figDesc><ns0:graphic coords='10,203.77,226.09,289.50,317.91' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. The lexicon building block</ns0:figDesc><ns0:graphic coords='11,292.69,63.78,111.67,293.64' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>the labeled reviews, we use PMI to identify the top class-correlated words. However, for the unannotated reviews, the true class labels are not available; thus, automatic labeling is required. To automatically annotate the unlabeled reviews, we employ various several ML classifiers and select the classifier with the highest accuracy. The unigram and bigram-based tf-idf (term frequency-inverse document frequency) scores are used as input features for the ML classifiers. The following ML classifiers are employed: SVM (Support Vector Machine) is a popular supervised ML algorithm used for classification and regression problems. Originally, SVM is a binary classifier that decides the best hyperplane to separate the space into binary classes by maximizing the distance between data points belong to different classes.However, SVM can be used as multi-class classifier following same principle and employing one-versusone or one-vs-the-rest strategy. SGD (Stochastic gradient descent) is a method that optimizes an objective function iteratively. It is a stochastic approximation of actual gradient descent optimization since it calculates gradient from a randomly selected subset of the data. For SGD, hinge loss and l2 penalty with a maximum iteration of 1500 are employed. LR (Logistic regression) is a statistical classification method that finds the best fitting model to describe the relationship between the dependent variable and a set of independent variables. Random Forest (RF) is a decision tree-based ensemble learning classifier. It makes predictions by combining the results from multiple individual decision trees. K-nearest neighbors (k-NN) algorithm is a non-parametric method used for classification and regression. In k-NN classification, the class membership of a sample is determined by the plurality vote of its neighbors. Here, we set k=3, the class of a review depends on three of its closest neighbors.We use scikit-learn<ns0:ref type='bibr' target='#b38'>(Pedregosa et al., 2011)</ns0:ref> implementation of the aforementioned ML classifiers. For all of the classifiers, we use the default parameter settings. Using 10-fold cross-validation, we assess their 11/21 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61491:2:0:NEW 24 Jul 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Class-label assignment of unlabeled reviews using supervised ML classifier and labeled data</ns0:figDesc><ns0:graphic coords='13,203.77,188.85,289.51,166.19' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Examples of Bengali obscene comments and corresponding English translation</ns0:figDesc><ns0:graphic coords='14,162.41,63.78,372.22,101.80' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>2 https://github.com/AbuKaisar24/Bengali-Pos-Tagger-Using-Indian-Corpus/ 3 https://www.isical.ac.in/ utpal/docs/POSreadme.txt13/21PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61491:2:0:NEW 24 Jul 2021)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. The Proposed Methodology</ns0:figDesc><ns0:graphic coords='15,245.13,63.78,206.78,173.63' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc>SVM), and an optimization method, Stochastic Gradient Descendent (SGD) is employed in the evaluation corpus to identify profane reviews. As a feature vector, we use the unigram and bigram-based tf-idf score. 10-fold cross-validation is performed to assess the performance of various ML classifiers. For all the classifiers, default parameter settings are used. For SGD, hinge loss and l2 penalty with a maximum iteration of 1500 are employed.Furthermore, we employ Deep Neural Network (DNN) based architecture, Convolutional Neural Network (CNN), Long short-term memory (LSTM), and Bidirectional Long short-term memory (BiLSTM) to identify profanity. The DNN based model starts with the Keras (Chollet et al., 2015) embedding layer.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='8,141.73,63.78,413.58,206.79' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Evaluation Datasets for BengSentiLex</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Domain</ns0:cell><ns0:cell cols='2'>Positive Negative</ns0:cell><ns0:cell>Total</ns0:cell></ns0:row><ns0:row><ns0:cell>Drama-Eval</ns0:cell><ns0:cell>Drama Review</ns0:cell><ns0:cell>1000</ns0:cell><ns0:cell>1000</ns0:cell><ns0:cell>2000</ns0:cell></ns0:row><ns0:row><ns0:cell>News1</ns0:cell><ns0:cell>News Comments</ns0:cell><ns0:cell>2000</ns0:cell><ns0:cell>2000</ns0:cell><ns0:cell>4000</ns0:cell></ns0:row><ns0:row><ns0:cell>News2</ns0:cell><ns0:cell>News Comments</ns0:cell><ns0:cell>5205</ns0:cell><ns0:cell>5205</ns0:cell><ns0:cell>10410</ns0:cell></ns0:row><ns0:row><ns0:cell>4.3 Cross-lingual resources</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Performances of supervised ML classifiers in annotated corpus</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='5'>Classifier Precision Recall F1 Score Accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell>SGD</ns0:cell><ns0:cell>0.939</ns0:cell><ns0:cell>0.901</ns0:cell><ns0:cell>0.920</ns0:cell><ns0:cell>93.61%</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>0.908</ns0:cell><ns0:cell>0.924</ns0:cell><ns0:cell>0.916</ns0:cell><ns0:cell>93.00%</ns0:cell></ns0:row><ns0:row><ns0:cell>LR</ns0:cell><ns0:cell>0.889</ns0:cell><ns0:cell>0.922</ns0:cell><ns0:cell>0.905</ns0:cell><ns0:cell>91.80%</ns0:cell></ns0:row><ns0:row><ns0:cell>k-NN</ns0:cell><ns0:cell>0.901</ns0:cell><ns0:cell>0.849</ns0:cell><ns0:cell>0.875</ns0:cell><ns0:cell>90.18%</ns0:cell></ns0:row><ns0:row><ns0:cell>RF</ns0:cell><ns0:cell>0.878</ns0:cell><ns0:cell>0.870</ns0:cell><ns0:cell>0.874</ns0:cell><ns0:cell>89.9l%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Performance of various lexicons for sentiment classification</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Lexicon</ns0:cell><ns0:cell>#Neg Class</ns0:cell><ns0:cell>#Pos Class</ns0:cell><ns0:cell>Total</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>As no Bengali lexicon-based sentiment analysis tool is publicly available, we develop a simple</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>lexicon-based sentiment analysis tool, BengSentiAn (Bengali Sentiment Analyzer). The polarity score of</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>a review is computed by adding up the polarity score of individual opinion words (based on the opinion</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>lexicon) present in a review. Besides, negation words are considered to shit the polarity of the opinion</ns0:cell></ns0:row><ns0:row><ns0:cell>words.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc /><ns0:table /><ns0:note>shows the comparative performances of various translated lexicons and BengSentiLex when integrated with BengSentiAn. In the drama review dataset, BengSentiLex classifies 1308 reviews out of 2000 reviews with an accuracy of around 65%. Among the three translated lexicons, VADER classifies 46.60% reviews correctly, while AFINN and Opinion Lexicon provide 31.65% and 41.95% accuracy, respectively. In the News1 dataset, BengSentiLex exhibits an accuracy of 47.30%, while the VADER,15/21 PeerJ Comput. Sci. reviewing PDF | (CS-2021:05:61491:2:0:NEW 24 Jul 2021)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Document-level coverage of various methods for profanity detection</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Type</ns0:cell><ns0:cell>Method</ns0:cell><ns0:cell># Identified</ns0:cell><ns0:cell>DCov</ns0:cell></ns0:row><ns0:row><ns0:cell>Unsupervised</ns0:cell><ns0:cell>BengSwearLex</ns0:cell><ns0:cell>564/664</ns0:cell><ns0:cell>84.93%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>LR</ns0:cell><ns0:cell>161/664</ns0:cell><ns0:cell>24.5%</ns0:cell></ns0:row><ns0:row><ns0:cell>Supervised (Unbalanced)</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>345/664</ns0:cell><ns0:cell>53.4%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>SGD</ns0:cell><ns0:cell>366/664</ns0:cell><ns0:cell>58.8%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>LSTM</ns0:cell><ns0:cell>433/664</ns0:cell><ns0:cell>65.21%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>BiLSTM</ns0:cell><ns0:cell>462/664</ns0:cell><ns0:cell>70.4%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CNN</ns0:cell><ns0:cell>444/664</ns0:cell><ns0:cell>66.86%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>LR</ns0:cell><ns0:cell>609/664</ns0:cell><ns0:cell>91.71%</ns0:cell></ns0:row><ns0:row><ns0:cell>Supervised (Balanced)</ns0:cell><ns0:cell>SVM</ns0:cell><ns0:cell>594/664</ns0:cell><ns0:cell>89.45%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>SGD</ns0:cell><ns0:cell>589/664</ns0:cell><ns0:cell>88.70%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>LSTM</ns0:cell><ns0:cell>610/664</ns0:cell><ns0:cell>91.67%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>BiLSTM</ns0:cell><ns0:cell>624/664</ns0:cell><ns0:cell>94.0%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CNN</ns0:cell><ns0:cell>609/664</ns0:cell><ns0:cell>91.71%</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='1'>https://translate.google.com</ns0:note>
</ns0:body>
" | "Dear Dr. Srinivasan,
Thank you for giving me the opportunity to submit a revised draft of the manuscript
“BengSentiLex and BengSwearLex: Creating lexicons for sentiment analysis and profanity
detection in low-resource Bengali language” for publication in the ‘PeerJ Computer Science’. I
appreciate the time and effort that you and the reviewers dedicated to providing feedback on my
manuscript and are grateful for the insightful comments on and valuable improvements to my
paper. I have incorporated all the suggestions made by you and the reviewers. Please see
below for a point-by-point response to the reviewers’ comments and concerns. All page
numbers refer to the revised manuscript file with tracked changes.
Reviewer: Dr Balakrishnan Subramanian
Basic reporting
1. The concept is clear.
2. In line no. 280, 'sophisti. cated' - spelling mistake.
-Thank you for your comments. The spelling mistake has been fixed.
3. In figure 2, both Phase 2 and 3 consist of lexicon building block (you may indicate flow of phase 2 and
3 work in this diagram)
- Thank you for your comments. Both phase 2 and 3 use the same approaches to expand
sentiment lexicon (The lexicon building block). They only differ in input data (Phase 2 use
true labeled data, while phase 3 use pseudo-labeled data) and threshold (the threshold is
set little bit higher for phase 3, as it uses pseudo-labeled data). Thus, the same lexicon
building block is used in Fig 2. The various steps of lexicon building block have been shown
in Fig 3.
4. In line 383, 4.4.2 Phase 2: Lexicon generation from labeled data - you mentioned like that. But in
figure 2, phase 2 input is taken from both labeled and unlabeled like that. Rectify this.
- You are correct. Thank you for pointing this issue. The link from unlabeled data to Phase 2 has been
removed.
Experimental design
1. To improve the readability, the author requires to provide a good data flow/ sequence diagram in
understanding the “Service Level Agreement”
Validity of the findings
1. In the introduction, the findings of the present research work should be compared with the recent
work of the same field towards claiming the contribution made. , kindly provide several references to
substantiate the claim made in the abstract (that is, provide references to other groups who do or have
done research in this area).
- Thank you for your comments. Other works related to sentiment lexicon and swear lexicon creation in
Bengali have been mentioned in introduction, Line 67-72. How the proposed lexicon creation method
differs from other Bengali sentiment lexicon based methods described in section 2.3.1 , Line 206-212;
2. Try to concise the conclusion
Thank you for your comments. The conclusion section has been renamed to summary and conclusion
to make the heading more appropriate.
3. Discuss the future plans with respect to the research state of progress and its limitations.
Thank you for your comments. The future plan of this work already mentioned in the manuscript, Line
653-654.
“The future work will involve expanding the size of both lexicons utilizing larger and multi-domain
training corpora.”
Additional comments
1. In the Introduction section, the drawbacks of each conventional technique should be described
clearly.
- The limitations of Existing Bengali sentiment/swear lexicons have been mentioned in Introduction Line
71-74 and also in the motivation section 1.1, line 99-105
2. You should emphasize the difference between other methods to clarify the position of this work
further.
- The differences with the English and Bengali lexicon creation methods have been already provided in
section 2.3, Line 200-227
3. The Wide ranges of applications need to be addressed in the Introduction
- The applications/necessity of sentiment lexicon or swear lexicon has been already describe in
introduction section, Line 37-48
4. Add the advantages of the proposed system in one quoted line for justifying the proposed approach
in the Introduction section.
The following text has been included in the introduction section, Line 87-89.
‘As BengSentiLex is built from a social media corpus, it contains words that people use in the web,
social media, and informal communication; Therefore, it is more effective in recognizing sentiments in
text data compared to word-level translation of English lexicons.’
Reviewer 3
Basic reporting
no comment
Experimental design
no comment
Validity of the findings
no comment
Additional comments
The following questions should be addressed by the authors:
1) line 280 typo error
- The type has been fixed
2) line 301 typo error
- The type has been fixed
3) line 320, was there any statistical analysis carried out?
- The cited paper investigated the sentiment preservation in Bengali and Machine translated
review utilizing Cohen’s kappa and Gwet’s AC1. It has been observed that two very accurate
classifiers, SVM and LR show kappa scores above 0.80 and AC1 scores above 0.85, which
indicates sentiment consistency exists between original Bengali and machine-translated English
reviews.
Thus, although not 100% accurate, Google Machine Translation preserves the
sentiment in majority of the cases.
4) line 363, how many languages and list them?
- English, Chinese, French, Hindi, Russian and Arabic , which have been added to Line 376-377 in the
revised version
5) line 405, what are the input features for the various ML classifiers? covered in section 5.2.2 but not in
section 4!
- The unigram and bigram-based tf–idf (term frequency–inverse document frequency) scores
are used as input features for the ML classifiers. The text have been added in Line 416-418
6) line 408 and 409, svm is a binary classifier and not a multiple class classifier.
- I agree with you SVM is basically a binary classier, although, it can also work as a
multi-class classier, based on various techniques, such as one versus all. The text has been
updated to remove the ambiguity. Line 419-423
“Originally, SVM is a binary classifier that decides the best hyperplane to separate the space into binary
classes by maximizing the distance between data points belong to different classes. However, SVM can be used as
multi-class classifier following same principle and employing \ one-versus-one or one-vs-the-rest strategy.
7) line 489, which type of word embedding is used, is it CBOW or count based?
- It is CBOW based word embedding.
8) line 529, how is the polarity score computed?
- Thank you for the comments. The polarity score of a review is computed by adding up the polarity
score of individual opinion words (based on the opinion lexicon) present in the reviews. Besides,
negation words are considered to change the polarity of the opinion words. The text has been updated
in 543-547.
" | Here is a paper. Please give your review comments after reading it. |
213 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>In this study, a deep learning bidirectional long short-term memory (BiLSTM) recurrent neural network-based channel state information estimator is proposed for 5G orthogonal frequency-division multiplexing systems. The proposed estimator is a pilot-dependent estimator and follows the online learning approach in the training phase and the offline approach in the practical implementation phase. The estimator does not deal with complete a priory certainty for channels' statistics and attains superior performance in the presence of a limited number of pilots. A comparative study is conducted using three loss functions, namely, mean absolute error, cross entropy function for kth mutually exclusive classes and sum of squared of the errors. The Adam optimisation algorithm is used to evaluate the performance of the proposed estimator under each loss function. In terms of symbol error rate and accuracy metrics, the proposed estimator outperforms long shortterm memory (LSTM) neural network-based channel state information, least squares and minimum mean square error estimators under different simulation conditions. The computational and training time complexities for deep learning BiLSTM-and LSTM-based estimators are provided. Given that the proposed estimator relies on the deep learning neural network approach, where it can analyse massive data, recognise statistical dependencies and characteristics, develop relationships between features and generalise the accrued knowledge for new datasets that it has not seen before, the approach is promising for any 5G and beyond communication system.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>5G wireless communication is the most active area of technology development and a rapidly growing branch of the wider field of communication systems. Wireless communication has made various possible services ranging from voice to multimedia.</ns0:p><ns0:p>The physical characteristics of the wireless communication channel and many unknown surrounding effects result in imperfections in the transmitted signals. For example, the transmitted signals experience reflections, diffractions, and scattering, which produce multipath signals with different delays, phase shift, attenuation, and distortion arriving at the receiving end; hence, they adversely affect the recovered signals <ns0:ref type='bibr' target='#b19'>(Oyerinde & Mneney 2012b)</ns0:ref>.</ns0:p><ns0:p>A priori information on the physical characteristics of the channel provided by pilots is one of the significant factors that determine the efficiency of channel state information estimators (CSIEs). For instance, if not a priori information is available (no or insufficient pilots), channel estimation is useless; finding what you do not know is impossible. When complete information on the transmission channel is available, CSIEs are no longer needed. Thus, a priori uncertainty exists for communication channel statistics. However, the classical theory of detection, recognition, and estimation of signals deals with complete priory certainty for channel statistics, and it is an unreliable and unpractical assumption <ns0:ref type='bibr' target='#b2'>(Bogdanovich et al. 2009</ns0:ref>).</ns0:p><ns0:p>In the classic case, uncertainty is related to useful signals. In detection problems, the unknown is the fact of a signal existence. In recognition problems, the unknown is the type of signal being received at the current moment. In estimation problems, the unknown is the amplitude of the measured signal or one of its parameters. The rest of the components of the signal-noise environment in classical theory are regarded as a priori certain (known) as follows: the known is the statistical description of the noise, the known is the values of the unmeasured parameters of the signal and the known is the physical characteristics of the wireless communication channel. In such conditions, the classical theory allows the synthesis of optimal estimation algorithms, but the structure and quality coefficients of the algorithms depend on the values of the parameters of the signal-noise environment. If the values of the parameters describing the signal-noise environment are slightly different from the parameters for which the optimal algorithm is built, then the quality coefficients will become substantially poor, making the algorithm useless in several cases <ns0:ref type='bibr' target='#b2'>(Bogdanovich et al. 2009)</ns0:ref>, <ns0:ref type='bibr' target='#b16'>(O'Shea et al. 2017)</ns0:ref>. The most frequently used CSIEs are derived from signal and channel statistical models by employing techniques, such as maximum likelihood (ML), least squares (LS), and minimum mean squared error (MMSE) optimisation metrics <ns0:ref type='bibr' target='#b10'>(Kim 2015)</ns0:ref>.</ns0:p><ns0:p>One of the major concerns in the optimum performance of wireless communication systems is providing accurate channel state information (CSI) at the receiver end of the systems to detect the transmitted signal coherently. 
If CSI is unavailable at the receiver end, then the transmitted signal can only be demodulated and detected by a noncoherent technique, such as differential demodulation. However, using a noncoherent detection method occurs at the expense of a loss of signal-to-noise ratio of about 3-4 dB compared with using a coherent detection technique. To eliminate such losses, researchers have focused on the development of channel estimation techniques to provide perfect detection of transmitted information in wireless communication systems using the Orthogonal Frequency-Division Multiplexing (OFDM) modulation scheme <ns0:ref type='bibr' target='#b18'>(Oyerinde & Mneney 2012a)</ns0:ref>.</ns0:p><ns0:p>The use of deep learning neural networks (DLNNs) is the state-of-the-art approach in the field of wireless communication. The amazing learning capabilities of DLNNs from training data sets and the tremendous progress of graphical processing units (GPUs), which are considered the most powerful tools for training DLNNs, have motivated its usage for different wireless communication issues, such as modulation recognition <ns0:ref type='bibr' target='#b31'>(Zhou et al. 2020)</ns0:ref>, <ns0:ref type='bibr' target='#b9'>(Karra et al. 2017</ns0:ref>) and channel state estimation and detection <ns0:ref type='bibr' target='#b3'>(Essai Ali ;</ns0:ref><ns0:ref type='bibr' target='#b7'>Joo et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b8'>Kang et al. 2020;</ns0:ref><ns0:ref type='bibr' target='#b15'>Ma et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b20'>Ponnaluru & Penke 2020;</ns0:ref><ns0:ref type='bibr' target='#b27'>Yang et al. 2019a;</ns0:ref><ns0:ref type='bibr' target='#b29'>Ye et al. 2018)</ns0:ref>. According to <ns0:ref type='bibr' target='#b9'>(Karra et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b10'>Kim 2015;</ns0:ref><ns0:ref type='bibr' target='#b18'>Oyerinde & Mneney 2012a;</ns0:ref><ns0:ref type='bibr' target='#b31'>Zhou et al. 2020</ns0:ref>) and <ns0:ref type='bibr' target='#b15'>(Ma et al. 2018</ns0:ref>), all proposed deep learning-based CSIEs have better performance compared with the examined traditional channel ones, such as LS and MMSE estimators. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Recently, numerous long short-term memory (LSTM)-and BiLSTM-based applications have been introduced for prognostic and health management <ns0:ref type='bibr' target='#b30'>(Zhao et al. 2020)</ns0:ref>, artificial intelligencebased translation systems <ns0:ref type='bibr' target='#b26'>(Wu et al. 2016)</ns0:ref>, <ns0:ref type='bibr' target='#b17'>(Ong 2017</ns0:ref>) and other areas. For channel state information estimation in 5G-OFDM wireless communication systems, many deep learning approaches, such as convolutional neural network (CNN), recurrent neural network (RNN) (e.g. LSTM and BiLSTM NNs) and hybrid (CNN and RNN) neural networks have been used <ns0:ref type='bibr' target='#b3'>(Essai Ali ;</ns0:ref><ns0:ref type='bibr' target='#b12'>Liao et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b13'>Luo et al. 2018a;</ns0:ref><ns0:ref type='bibr' target='#b20'>Ponnaluru & Penke 2020;</ns0:ref><ns0:ref type='bibr' target='#b27'>Yang et al. 2019a;</ns0:ref><ns0:ref type='bibr' target='#b28'>Yang et al. 2019b;</ns0:ref><ns0:ref type='bibr' target='#b29'>Ye et al. 2018)</ns0:ref>.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b12'>(Liao et al. 2019)</ns0:ref>, a deep learning-based CSIE was proposed by using CNN and BiLSTM-NN for the extraction of the feature vectors of the channel response and channel estimation, respectively. The aim was to improve the channel state information estimation performance at the downlink, which is caused by the fast time-varying and varying channel statistical characteristics in high-speed mobility scenarios. In <ns0:ref type='bibr' target='#b14'>(Luo et al. 2018b</ns0:ref>), an online-trained CSIE that is an integration of CNN and LSTM-NN was proposed. The authors also developed an offline-online training technique that applies to 5G wireless communication systems. In <ns0:ref type='bibr'>(Ye et al. 2018</ns0:ref>), a joint channel estimator and detector that is based on feedforward DLNNs for frequency selective channel (OFDM) systems was introduced. The proposed algorithm was found to be superior to the traditional MMSE estimation method when unknown surrounding effects of communication systems are considered. In <ns0:ref type='bibr' target='#b28'>(Yang et al. 2019b</ns0:ref>), an online estimator was developed by adopting feedforward DLNNs for doubly selective channels. The proposed estimator was considered superior to the traditional LMMSE estimator in all investigated scenarios. In <ns0:ref type='bibr' target='#b20'>(Ponnaluru & Penke 2020)</ns0:ref>, a one-dimensional CNN (1D-CNN) deep learning estimator was proposed. Under various modulation scenarios and in terms of MSE and BER metrics, the authors compared the performance of the proposed estimator with that of feedforward neural networks (FFNN), MMSE and LS estimators. 1D-CNN outperformed LS, MMSE and FFNN estimators. In (Essai Ali), an online pilot-assisted estimator model for OFDM wireless communication systems was developed by using LSTM NN. The conducted comparative study showed the superior performance of the proposed estimator in comparison with LS and MMSE estimators under limited pilots and a prior uncertainty of channel statistics. The authors in <ns0:ref type='bibr' target='#b22'>(Sarwar et al. 2020</ns0:ref>) used the genetic algorithm-optimised artificial neural network to build a CSIE. The proposed estimator was dedicated for space-time block-coding MIMO-OFDM communication systems. 
The proposed estimator outperformed LS and MMSE estimators in terms of BER at high SNRs, but it achieved approximately the same performance as LS and MMSE estimators at low SNRs. The authors in <ns0:ref type='bibr' target='#b23'>(Senol et al. 2021)</ns0:ref> proposed a CSIE for OFDM systems by using ANN under the condition of sparse multipath channels. The proposed estimator achieved a comparable SER performance as matching pursuit-and orthogonal matching pursuit-based estimators at a lower computational complexity than that of the examined estimators. The authors in (Le <ns0:ref type='bibr' target='#b11'>Ha et al. 2021)</ns0:ref> proposed a CSIE that uses deep learning and LS estimator and utilizes the multiple-input multiple-output system for 5G-OFDM. The proposed estimator minimizes the MSE loss function between the LS-based channel estimation and the actual channel. The proposed estimator outperformed LS and LMMSE estimators in terms of BER and MSE metrics.</ns0:p><ns0:p>In this study, a BiLSTM DLNN-based CSIE for OFDM wireless communication systems is proposed and implemented. To the best of the authors' knowledge, this work is the first to use the BiLSTM network as a CSIE without integration with CNN. The proposed estimator does not need any prior knowledge of the communication channel statistics and powerfully works at limited pilots (under the condition of less CSI). The proposed BiLSTM-based CSIE is a datadriven estimator, so it can analyse, recognise and understand the statistical characteristics of wireless channels suffering from many known interferences such as adjacent channel, inter symbol, inter user, inter cell, co-channel and electromagnetic interferences and unknown ones <ns0:ref type='bibr' target='#b6'>(Jeya et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b25'>Sheikh 2004)</ns0:ref>. Although an impressively wide range of configurations can be found for almost every aspect of deep neural networks, the choice of loss function is underrepresented when addressing communication problems, and most studies and applications simply use the 'log' loss function <ns0:ref type='bibr' target='#b5'>(Janocha & Czarnecki 2017)</ns0:ref>. In this study two customed loss PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table'>2021:04:60719:1:1:CHECK 12 Jun 2021)</ns0:ref> Manuscript to be reviewed Computer Science functions known as mean absolute error (MAE), and sum of squared errors (SSE) are proposed to obtain the most reliable and robust estimator under unknown channel statistical characteristics and limited pilot numbers.</ns0:p><ns0:p>The performance of the proposed BiLSTM-based estimator is compared with the performance of the most frequently used LS and MMSE channel state estimators. The obtained results show that the BiLSTM-based estimator attains a comparable performance as the MMSE estimator and outperforms LS and MMSE estimators at large and small numbers of pilots, respectively. In addition, the proposed estimator improves the transmission data rate of OFDM wireless communication systems because it exhibits optimal performance compared with the examined estimators at a small number of pilots.</ns0:p><ns0:p>The rest of this paper is organised as follows. The DLNN-based CSIE is presented in Section II. The standard OFDM system and the proposed deep learning BiLSTM NN-based CSIE are presented in Section III. The simulation results are given in Section IV. The conclusions and future work directions are provided in Section V.</ns0:p></ns0:div>
<ns0:div><ns0:head>DLNN-BASED CSIE</ns0:head><ns0:p>In this section, a deep learning BiLSTM NN for channel state information estimation is presented. The BiLSTM network is another version of LSTM neural networks, which are recurrent neural networks (RNN) that can learn the long-term dependencies between the time steps of input data <ns0:ref type='bibr' target='#b4'>(Hochreiter & Schmidhuber 1997)</ns0:ref> <ns0:ref type='bibr' target='#b13'>(Luo et al. 2018a;</ns0:ref><ns0:ref type='bibr' target='#b30'>Zhao et al. 2020)</ns0:ref>.</ns0:p><ns0:p>The BiLSTM architecture mainly consists of two separate LSTM-NNs and has two propagation directions (forward and backward). The LSTM NN structure consists of input, output and forget gates and a memory cell. The forget and input gates enable the LSTM NN to effectively store long-term memory. Figure <ns0:ref type='figure' target='#fig_4'>1</ns0:ref> shows the main construction of the LSTM cell <ns0:ref type='bibr' target='#b4'>(Hochreiter & Schmidhuber 1997)</ns0:ref>. The forget gate enables LSTM NN to remove the undesired information by currently used input t</ns0:p><ns0:p>x and cell output t h of the last process. The input gate finds the information that will be used with the previous LSTM cell state 1 t c  to obtain a new cell state t c based on the current cell input t</ns0:p><ns0:p>x and the previous cell output 1 t h  . Using the forget and input gates, LSTM can decide which information is abandoned and which is retained.</ns0:p><ns0:p>The output gate finds current cell output t h by using the previous cell output 1 t h  at current cell state t c and input t x . The mathematical model of the LSTMNN structure can be described through Equations ( <ns0:ref type='formula' target='#formula_0'>1</ns0:ref>) -( <ns0:ref type='formula' target='#formula_4'>6</ns0:ref>).</ns0:p><ns0:formula xml:id='formula_0'>  1 t g i t i t i i w x R h b      ,<ns0:label>(1)</ns0:label></ns0:formula><ns0:formula xml:id='formula_1'>  1 t g f t f t f f w x R h b      ,<ns0:label>(2)</ns0:label></ns0:formula><ns0:p> </ns0:p><ns0:formula xml:id='formula_2'>1 t c g t g t g g w x R h b      ,<ns0:label>(3)</ns0:label></ns0:formula><ns0:formula xml:id='formula_3'>  1 t g o t o t o o w x R h b      ,<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>, ( <ns0:ref type='formula'>5</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_4'>𝑐 𝑡 = 𝑓 𝑡 ʘ𝑐 𝑡 -1 + 𝑖 𝑡 ʘ𝑔 𝑡 , (<ns0:label>6</ns0:label></ns0:formula><ns0:formula xml:id='formula_5'>)</ns0:formula><ns0:formula xml:id='formula_6'>ℎ 𝑡 = 𝑜 𝑡 ʘ𝜎 𝑐 (𝑐 𝑡 )</ns0:formula><ns0:p>where , , , , </ns0:p><ns0:formula xml:id='formula_7'>i f g o c  ,</ns0:formula><ns0:formula xml:id='formula_8'>] T i f g o w w w w  W , [ ] T i f g o R R R R  R and [ ] T i f g o b b b b  b</ns0:formula><ns0:p>are input weights, recurrent weights and bias, respectively.</ns0:p><ns0:p>The forward and backward propagation directions of BiLSTM are transmitted at the same time to the output unit. Therefore, old and future information can be captured, as shown in Figure <ns0:ref type='figure'>2</ns0:ref>. At any time t , the input is fed to forward LSTM and backward LSTM networks. 
The final output of BiLSTM-NN can be expressed as follows: , ( <ns0:ref type='formula'>7</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_9'>ℎ 𝑡 = ℎ 𝑡 ʘℎ 𝑡</ns0:formula><ns0:p>where and are forward and backward outputs of BiLSTM-NN, respectively.</ns0:p><ns0:formula xml:id='formula_10'>ℎ 𝑡 ℎ 𝑡</ns0:formula><ns0:p>As the proposed BiLSTM-based CSIE is built, the weights and biases of the proposed estimator are optimised (tuned) using the Adam optimization algorithm. Adam trains the proposed estimator by using one of three loss functions, namely, cross entropy function for k th mutually exclusive classes (crossentropyex), mean absolute error (MAE), and sum of squared errors (SSE). The loss function estimates the loss between the expected and actual outcome. During the learning process, optimisation algorithms try to minimise the available loss function to the desired error goal by optimising the DLNN weights and biases iteratively at each training epoch. Selecting a loss function is one of the essential and challenging tasks in deep learning. The proposed estimator is trained using above-mentioned three different loss functions to obtain the most optimal BiLSTM-based estimator for wireless communication systems with low prior information (limited pilots) for signal-noise environments.</ns0:p><ns0:p>To build the DL BiLSTM NN-based CSIE, an array is created with the following five layers: sequence input, BiLSTM, fully connected, softmax and output classification. The input size was set to 256. The BiLSTM layer consists of 16 hidden units and shows the sequence's last element. Four classes are specified by considering the size 4 fully connected (FC) layer, followed by a softmax layer and ended by a classification layer. Figure <ns0:ref type='figure'>3</ns0:ref> illustrates the structure of the proposed estimator (Essai Ali).</ns0:p></ns0:div>
<ns0:div><ns0:head>DL BiLSTM NN-BASED CSIE for 5G-OFDM WIRELESS COMMUNICATION SYSTEMS</ns0:head><ns0:p>The standard OFDM wireless communication system and an offline DL of the proposed CSIE are presented in the following subsections.</ns0:p></ns0:div>
<ns0:div><ns0:head>OFDM SYSTEM MODEL</ns0:head><ns0:p>In accordance with (Essai Ali ; <ns0:ref type='bibr' target='#b29'>Ye et al. 2018)</ns0:ref>, Figure <ns0:ref type='figure'>4</ns0:ref> clearly illustrates the structure of the traditional OFDM communication system. On the transmitter side, a serial-to-parallel (S/P) converter is used to convert the transmitted symbols with pilot signals into parallel data streams. Then, inverse discrete Fourier transform (IDFT) is applied to convert the signal into the time domain. A cyclic prefix (CP) must be added to alleviate the effects of inter-symbol interference. The length of the CP must be longer than the maximum spreading delay of the channel.</ns0:p><ns0:p>The multipath channel of a sample space defined by complex random variables</ns0:p><ns0:formula xml:id='formula_11'>1 0 { ( )} N n h n  </ns0:formula><ns0:p>is considered. Then, the received signal can be evaluated as follows:</ns0:p><ns0:formula xml:id='formula_12'>{ h ( n )} N -1 n = 0 ( ) ( ) ( ) ( ) y n x n h n w n    ,<ns0:label>(8)</ns0:label></ns0:formula><ns0:formula xml:id='formula_13'>y ( n ) = x ( n ) ⊕ h ( n ) + w ( n ) where ( ) x n is the input signal,  is circular convolution, ( ) w n is additive white ⊕ x(n) w(n) Gaussian noise (AWGN) and ( ) y n</ns0:formula><ns0:p>is the output signal.</ns0:p></ns0:div>
<ns0:div><ns0:head>y ( n )</ns0:head><ns0:p>The received signal in the frequency domain can be defined as</ns0:p><ns0:formula xml:id='formula_14'>( ) ( ) ( ) ( ) Y k X k H k W k   ,<ns0:label>(9)</ns0:label></ns0:formula><ns0:formula xml:id='formula_15'>Y ( k ) = X ( k ) H ( k ) + W ( k )</ns0:formula><ns0:p>where the discrete Fourier transformations (DFT) of ( )</ns0:p><ns0:p>x n , ( ) h n , ( ) y n and ( )</ns0:p><ns0:formula xml:id='formula_16'>w n are x ( n ) h ( n ) w ( n ) ( ) X k , ( ) H k , ( ) Y k and ( ) W k , respectively. These discrete Fourier X ( k ) H ( k )</ns0:formula><ns0:p>Y(k) W(k) transformations are estimated after removing CP.</ns0:p><ns0:p>The OFDM frame includes the pilot symbols of the 1 st OFDM block and the transmitted data of the next OFDM blocks. The channel can be considered stationary during a certain frame, but it can change between different frames. The proposed DL BiLSTM NN-based CSIE receives the arrived data at its input terminal and extracts the transmitted data at its output terminal (Essai Ali), <ns0:ref type='bibr'>(Ye et al. 2018)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>OFFLINE DL OF THE DL BILSTM NN-BASED CSIE</ns0:head><ns0:p>DLNN utilisation is the state-of-the-art approach in the field of wireless communication, but DLNNs have high computational complexity and long training time. GPUs are the most powerful tools used for training DLNNs <ns0:ref type='bibr' target='#b24'>(Sharma et al. 2016)</ns0:ref>. Training should be done offline due to the long training time of the proposed CSIE and the large number of BILSTM-NN's parameters, such as biases and weights, that should be tuned during training. The trained CSIE is then used in online implementation to extract the transmitted data <ns0:ref type='bibr' target='#b29'>(Ye et al. 2018</ns0:ref>), (Essai Ali).</ns0:p><ns0:p>In offline training, the learning dataset is randomly generated for one subcarrier. The transmitting end sends OFDM frames to the receiving end through the adopted (simulated) channel, where each frame consists of single OFDM pilot symbol and a single OFDM data symbol. The received OFDM signal is extracted based on OFDM frames that are subjected to different channel imperfections.</ns0:p><ns0:p>All classical estimators rely highly on tractable mathematical channel models, which are assumed to be linear, stationary and follow Gaussian statistics. However, practical wireless communication systems have other imperfections and unknown surrounding effects that cannot be tackled well by accurate channel models; therefore, researchers have developed various channel models that effectively characterise practical channel statistics. By using these channel models, reliable and practical training datasets can be obtained by modelling <ns0:ref type='bibr' target='#b2'>(Bogdanovich et al. 2009)</ns0:ref>, (Essai Ali), <ns0:ref type='bibr' target='#b1'>(2019)</ns0:ref>.</ns0:p><ns0:p>In this study, the 3GPP TR38.901-5G channel model developed by ( <ns0:ref type='formula'>2019</ns0:ref>) is used to simulate the behaviour of a practical wireless channel that can degrade the performance of CSIEs and hence, the overall communication system's performance.</ns0:p><ns0:p>The proposed estimator is trained via Adam optimisation, which updates the weights and biases by minimising a specific loss function. Simply, a loss function is defined as the difference between the estimator's responses and the original transmitted data. The loss function can be represented by several functions. MATLAB/neural network toolbox allows the user to choose a loss function amongst its available list that contains crossentropyex, MSE, sigmoid and softmax. In this study, another two custom loss functions (MAE and SSE) are created. The performance of the proposed estimator when using three loss functions (i.e. MAE, crossentropyex and SSE) is investigated. The loss functions can be expressed as follows:</ns0:p><ns0:formula xml:id='formula_17'>  1 1 log( ( )) N c ij ij i j crossentropyex X k X k       , crossentropyex = -∑ N i = 1 ∑ c j = 1 X ij (k)log (X ij (k)) (10) MAE = ∑ N i = 1 ∑ c j = 1 |X ij (k) -X ij (k)| N   1 1 ˆ( ) N c ij ij i j X k X k MAE N       , (<ns0:label>11</ns0:label></ns0:formula><ns0:formula xml:id='formula_18'>)     2 1 1 ˆ( ) N c ij ij i j SSE X k X k       , (<ns0:label>12</ns0:label></ns0:formula><ns0:formula xml:id='formula_19'>) SSE = ∑ N i = 1 ∑ c j = 1 (X ij (k) -X ij (k)) 2</ns0:formula><ns0:p>where N is the sample number, c is the class number, ij X is the th transmitted data sample for </ns0:p></ns0:div>
<ns0:div><ns0:head>Simulation Results</ns0:head></ns0:div>
<ns0:div><ns0:head>STUDYING THE PERFORMANCE OF THE PROPOSED, LS AND MMSE ESTIMATORS BY USING DIFFERENT PILOTS AND LOSS FUNCTIONS</ns0:head><ns0:p>Several simulation experiments are performed to evaluate the performance of the proposed estimator. In terms of symbol error rate (SER) performance analysis, the SER performance of the proposed estimator under various SNRs is compared with that of the LSTM NN-based CSIE (Essai PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60719:1:1:CHECK 12 Jun 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Ali), the well-known LS estimator and the MMSE estimator, which is an optimal estimator but requires channel statistical information. A priori uncertainty of the used channel model statistics is assumed and considered for all conducted experiments. Moreover, the Adam optimisation algorithm is used to train the proposed estimator whilst using different loss functions to obtain the most robust version of the proposed CSIE. The proposed model is implemented in 2019b MATLAB/software.</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref> lists the parameters of BiLSTM-NN and LSTM-NN architectures and their related training options. These parameters are identified by a trial-and-error approach. Table <ns0:ref type='table'>2</ns0:ref> lists the parameters of the OFDM system model and the channel model.</ns0:p><ns0:p>The examined estimators' performance is evaluated at different pilot numbers of 4, 8 and 64 as well as crossentropyex, MAE and SSE loss functions. The Adam optimisation algorithm is used for all simulation experiments. With a sufficiently large number of pilots ( <ns0:ref type='formula'>64</ns0:ref>) and the use of the crossentropyex loss function, the proposed BiLSTM crossentropyex estimator outperforms LSTM crossentropyex , LS and MMSE estimators over the entire SNR range, as shown in Figure <ns0:ref type='figure' target='#fig_2'>6</ns0:ref>. At the use of the MAE loss function, the BiLSTM MAE estimator outperforms the LS estimator over the SNR range [0-18 dB], but LSTM MAE outperforms it over the SNR range [0-14 dB]. In addition, the BiLSTM MAE and LSTM MAE estimators are at par with the MMSE estimator over the SNR ranges [0-10 dB] and [0-4 dB], respectively. Beyond these SNR ranges, the MMSE estimator outperforms BiLSTM MAE and LSTM MAE estimators. BiLSTM MAE outperforms LSTM MAE starting from 0 dB to 20 dB.</ns0:p><ns0:p>At the use of the SSE loss function, Figure <ns0:ref type='figure' target='#fig_2'>6</ns0:ref> shows that the BiLSTM SSE and LSTM SSE estimators achieve approximately the same performance as the MMSE estimator over a low SNR range [0-6 dB]. MMSE outperforms the BiLSTM SSE and LSTM SSE estimators starting from 8 dB, and the LS estimator outperforms BiLSTM SSE starting from 16 dB and LSTM SSE starting from 14 dB. BiLSTM SSE outperforms LSTM SSE starting from 10 dB to 20 dB. LS provides poor performance compared with MMSE because it does not use prior information about channel statistics in the estimation process. MMSE exhibits superior performance, especially with sufficient pilot numbers, because it uses second-order channel statistics. Concisely, MMSE and the proposed BiLSTM crossentropyex attain close SER performance with respect to all SNRs. Furthermore, at low SNR (0-6 dB), BiLSTM (crossentropyex, MAE, and SSE) , LSTM (crossentropyex, MAE, and SSE) and MMSE attain approximately the same performance.</ns0:p><ns0:p>Figures 7 present the performance comparison of LS, MMSE, BiLSTM and LSTM-based estimators using the Adam optimisation algorithm and the different (crossentropyex, MAE and SSE) loss functions at 8 pilots. Figure <ns0:ref type='figure'>7</ns0:ref> shows that the proposed BiLSTM (crossentropyex, or MAE or SSE) estimators outperform the LSTM (crossentropyex, or MAE or SSE) estimators and the traditional estimators over the examined SNR range. At a low SNR (0-7 dB), the proposed BiLSTM (crossentropyex, or MAE or SSE) estimators exhibit semi-identical performance. 
Furthermore, the proposed BiLSTM SSE estimator trained by minimising the SSE loss function outperforms the BiLSTM crossentropyex estimator trained by minimising the crossentropyex loss function starting from 0 dB; also it outperforms BiLSTM MAE , which is trained by minimising the MAE loss function starting from 14 dB. Concisely at 8 pilots BiLSTM SSE estimator achieved the most minimum SER.</ns0:p><ns0:p>Figures 8 show the performance comparison of the LS, MMSE, BiLSTM (crossentropyex, or MAE or SSE) and LSTM (crossentropyex, or MAE or SSE) estimators at 4 pilots. Figure <ns0:ref type='figure'>8</ns0:ref> shows the superiority of the proposed BiLSTM (crossentropyex, or MAE or SSE) estimators in comparison with the traditional estimators, which have lost their workability starting from 0 dB. It also shows the superiority of the proposed estimator BiLSTM (MAE or SSE) over LSTM <ns0:ref type='bibr'>(MAE or SSE)</ns0:ref> . LSTM (crossentropyex) exhibits a competitive performance as BiLSTM (crossentropyex) starting from 0 dB to 12 dB, and LSTM (crossentropyex) outperforms BiLSTM (crossentropyex) starting from 14 dB. At very low SNRs (0-3 dB), the proposed BiLSTM (crossentropyex, or MAE or SSE) estimators have the same performance. The proposed BiLSTM SSE estimator outperforms the BiLSTM crossentropyex estimator starting from 4 dB, and it exhibits an identical performance as the BiLSTM MAE estimator until 14 dB and outperforms it in the rest of the SNR examination range. Manuscript to be reviewed Computer Science statistics. They demonstrate the importance of testing various loss functions in the deep learning process to obtain the most optimal architecture of any proposed estimator.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>9</ns0:ref> indicates that the proposed BiLSTM crossentropyex , BiLSTM SSE and BiLSTM SSE estimators have close SER performance at 64, 8 and 4 pilots, respectively. The performance of BiLSTM SSE at 8 pilots coincides with the performance of BiLSTM crossentropyex at 64 pilots. Therefore, using the proposed estimators with few pilots is recommended for 5G OFDM wireless communication systems to attain a significant improvement in their transmission data rate. Given that the proposed estimator adopts a training data set-driven approach, it is robust to a priori uncertainty for channel statistics.</ns0:p></ns0:div>
<ns0:div><ns0:head>LOSS CURVES</ns0:head><ns0:p>The quality of the DLNNs' training process can be monitored efficiently by exploring the training loss curves. These loss curves provide information on how the training process goes, and the user can decide whether to let the training process continue or stop.</ns0:p><ns0:p>Figures <ns0:ref type='figure' target='#fig_4'>10-12</ns0:ref> show the loss curves of the DLNN-based estimators (BiLSTM and LSTM) at pilot numbers = 64, 8 and 4 and with the three examined loss functions (crossentropyex, MAE and SSE). The curves emphasise and verify the obtained results in Figure <ns0:ref type='figure' target='#fig_2'>6</ns0:ref>, 7, and 8. For example, the sub-curves in Figure <ns0:ref type='figure' target='#fig_4'>10</ns0:ref> for BiLSTM crossentropyex and LSTM crossentropyex estimators emphasise their superiority over the other estimators. This superiority can be seen clearly from Figures 6. Moreover, the training loss curves in Figures <ns0:ref type='figure' target='#fig_4'>11 and 12</ns0:ref> emphasise the obtained SER performance in Figures <ns0:ref type='figure'>7 and 8</ns0:ref>, respectively, of each examined DLNN-based CSIE. For more details, good zooming, and analysis of the presented loss curves, they can be downloaded from this link (shorturl.at/lqxGQ).</ns0:p></ns0:div>
<ns0:div><ns0:head>ACCURACY CALCULATION</ns0:head><ns0:p>The accuracy of the proposed and other examined estimators is a measure of how the estimators recover transmitted data correctly. Accuracy can be defined as the number of correctly received symbols divided by the total number of transmitted symbols. The proposed estimator is trained in different conditions as indicated in the previous subsection, and we wish to investigate how well it performs in a new data set. Tables <ns0:ref type='table'>3, 4</ns0:ref> and 5 present the obtained accuracies for all examined estimators under all simulation conditions. As illustrated in Tables <ns0:ref type='table'>3 to 5</ns0:ref>, the proposed BiLSTM-based estimator attains accuracies from 98.61 to 100 under different pilots and loss functions. The other examined DL LSTM-based estimator has accuracies from 97.88 to 99.99 under the same examination conditions. The achieved accuracies indicate that the proposed estimator has robustly learned and emphasises the obtained SER performance in Figure <ns0:ref type='figure'>9</ns0:ref>. The obtained results of MMSE and LS in Tables 1, 2 and 3 emphasise the presented SER performance in Figures <ns0:ref type='figure' target='#fig_2'>6, 7 and 8</ns0:ref>, respectively, and show that as the pilot number decreases, the accuracy of the conventional estimators dramatically decreases.</ns0:p><ns0:p>The proposed BiLSTM-and LSTM-based estimators rely on DLNN approaches, where they can analyse huge data sets that may be collected from any plant, recognise the statistical dependencies and characteristics, devise the relationships between features and generalise the accrued knowledge for new data sets that they have not seen before. Thus, they are applicable to any 5G and beyond communication system.</ns0:p></ns0:div>
<ns0:div><ns0:head>COMPLEXITY</ns0:head><ns0:p>The feed-forward pass and feed-back pass operations dominate the computational complexity ( ) O W of all neural networks, such as FFNNs, LSTM and BiLSTM. In a feed-forward pass, the weighted sum of inputs from previous layers to the next layers is calculated. In feed-back pass, the errors are evaluated; hence, the weights are modified. (13) where W is the weight number, K is the output unit number, H is the hidden unit number, I is the input number, C is the memory cell block number and S is the memory cell block size <ns0:ref type='bibr' target='#b4'>(Hochreiter & Schmidhuber 1997)</ns0:ref>.</ns0:p><ns0:p>The BiLSTM architecture has two separate LSTM-NNs and two propagation directions (forward and backward). Hence, for BiLSTM, (</ns0:p><ns0:formula xml:id='formula_20'>)<ns0:label>14</ns0:label></ns0:formula><ns0:p>The required training time can be used as another a complexity metric. Table <ns0:ref type='table'>6</ns0:ref> lists the consumed processing time for the examined BiLSTM-and LSTM-based CSIEs. The used computer is equipped with an Intel(R) Core (TM) i5-2400 CPU running with a 3.10-3.30 GHz microprocessor and 4 GB of RAM. The LSTM-based estimators consume less processing time than the BiLSTM-based estimators do. Hence, they have the lowest complexity.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSIONS and FUTURE WORK</ns0:head><ns0:p>The proposed DL-BiLSTM-based CSIE is an online pilot-assisted estimator. It is robust against a limited number of pilots and exhibits superior performance compared with conventional estimators; it is also robust under the conditions of a priori uncertainty of communication channel statistics (non-Gaussian/stationary statistical channels) and demonstrates superior performance compared with conventional estimators and DL LSTM NN-based CSIEs.</ns0:p><ns0:p>The proposed CSIE exhibits a consistent performance at large and small pilot numbers and superior performance at low SNRs, especially at limited pilots, compared with conventional estimators. It also achieves the highest accuracy amongst all examined estimators at 64, 8, and 4 pilots for all the used loss functions.</ns0:p><ns0:p>The proposed BiLSTM-and LSTM-based estimators have high prediction accuracies of 98.61% to 100% and 97.88% to 99.99%, respectively, when using crossentropyex, MAE, and SSE loss functions for 64, 8, and 4 pilots. They are promising for 5G and beyond wireless communication systems.</ns0:p><ns0:p>Two customized loss functions (MAE and SSE) are introduced. The computational and training time complexities are presented to illustrate the complexity of the proposed estimator compared with that of the LSTM-based estimator.</ns0:p><ns0:p>For future work, authors suggest the following research plans: 1. Investigating the proposed estimator's performance and accuracy by using other learning algorithms, such as Adadelta, Adagrad, AMSgrad, AdaMax and Nadam. 2. Investigating the proposed estimator's performance and accuracy by using different cyclic prefix lengths and types. 3. Developing robust loss functions by using robust statistics estimators, such as Tukey, Cauchy, Huber and Welsh. 4. Investigating the performance of CNN-, gated recurrent unit (GRU)-and simple recurrent unit (SRU)-based CSIEs whilst using crossentropyex, MAE and SSE loss functions and for 64, 8 and 4 pilots. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>[</ns0:head><ns0:label /><ns0:figDesc /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 illustrates the processes of generating the training data sets and offline DL to obtain a learned CSIE based on BiLSTM-NN.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figures 6 ,</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figures 6, 7 and 8 emphasise the robustness of the BiLSTM-based estimators against the limited number of pilots, low SNR, and under the condition of a priori uncertainty of channel</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
</ns0:body>
" | "
Al-Azhar University –Engineering Faculty, Egypt
Electrical Engineering Department.
June 13, 2021
Dear Editors
We thank the reviewers for their generous comments on the manuscript and have edited the manuscript to address their concerns. We hope that the manuscript is now suitable for publication in PeerJ.
Dr. Mohamed H. Essai Ali
Associate Professor of Electronics &Communications
On behalf of all authors.
Reviewer 1
Thank you for your valuable time reviewing our paper and for your comments, which helped make it more impactful. We have taken your comments carefully into consideration while revising the paper.
Basic reporting
• The paper is well written. The results are clear. Only proofreading is required.
Proofreading is done!
Experimental design
• well presented.
Validity of the findings
• well presented.
Comments for the author
• Please proofread the paper.
Proofreading is done!
Reviewer 2
Thank you for your valuable time reviewing our paper and for your comments, which helped make it more impactful. We have taken your comments carefully into consideration while revising the paper.
Basic reporting
• Reference is required for lines 44-48.
[1] O. O. Oyerinde and S. H. J. I. T. r. Mneney, 'Review of channel estimation for wireless communication systems,' IETE Technical Review, vol. 29, no. 4, pp. 282-298, 2012.
The required reference is inserted at the end of line 48. Kindly refer to 'Revised manuscript' file.
• Reference is required for lines for lines 54-56.
[2] V. Bogdanovich, A. J. J. o. C. T. Vostretsov, and Electronics, 'Application of the invariance and robustness principles in the development of demodulation algorithms for wideband communications systems,' vol. 54, no. 11, pp. 1283-1291, 2009.
The required reference is inserted at the end of line 56. Kindly refer to 'Revised manuscript' file.
• Reference is required for lines 91-94.
References [8, 11, 12, 18 – 20] are inserted at the end of line 97, and you can find them at the updated references list at the end of the revised version of the paper. Kindly refer to 'Revised manuscript' file.
• OFDM is just given by an abbreviation; an open form is required in line 80.
The open form of OFDM is added at line 80. Kindly refer to 'Revised manuscript' file
• In line 144, what are the known interferences that occur on this kind of system? Any reference on this? For this claim, it seems an inexplicit expression.
The known interferences in wireless communication systems are adjacent channel interference, inter symbol interference, inter user interference, inter cell interference, co-channel interference and electromagnetic interference.
So, we rephrased 'The proposed BiLSTM-based CSIE is a data-driven estimator, so it can analyse, recognise and understand the statistical characteristics of wireless channels suffering from many known and unknown interferences.' to:
The proposed BiLSTM-based CSIE is a data-driven estimator, so it can analyse, recognise and understand the statistical characteristics of wireless channels suffering from many known interferences such as adjacent channel, inter symbol, inter user, inter cell, co-channel and electromagnetic interferences and unknown ones [24, 25].
The new paragraph can be found in lines 137-140. Kindly refer to the 'Revised manuscript' file. We also added the following two references:
[24] A. U. H. Sheikh, 'Interference, Distortion and Noise,' in Wireless Communications: Theory and TechniquesBoston, MA: Springer US, 2004, pp. 225-285.
[25] R. Jeya, B. Amutha, N. Nikhilesh, and R. R. J. s. Immaculate, 'Signal Interferences in Wireless Communication-An Overview,' vol. 2, p. 3, 2019.
• You should remove the text in line 153.
I did.
• Figure 1 needs a cite.
I added ref. [26], refer to line 167. Kindly refer to 'Revised manuscript' file.
[26] S. Hochreiter and J. Schmidhuber, 'Long Short-Term Memory,' vol. 9, no. 8 %J Neural Comput., pp. 1735–1780, 1997.
• Figure 2 has low quality.
Quality was enhanced.
• Line 207 ….. .On -> a space after full stop
I did.
• For Figure 4, details should be increased. Legend of the blocks, etc., are required. In the text, serial-to-parallel (probably) refers to S/P, but a more technical perspective of all figures up to that point is a must.
I increased details, kindly refer to lines 219 – 232. Kindly refer to 'Revised manuscript' file.
• CP is probably a cyclic prefix, but it is not indicated in the text properly. (Line 220 & 209)
This section briefly describes the 'OFDM SYSTEM MODEL'; for more details, the reader may refer to refs. [8, 12]. Kindly refer to lines 219-232.
• The attached code requires a readme to show the procedures of the .m file hierarchy. For instance, I keep taking an 'abstract class instantiation' error for sseClassificationLayer.m (even if I have corrected backwardLoss(), there occur some other problems throwing an error. My MATLAB is R2018b).
A readme file was prepared and uploaded to the supplemental files section; you can find it under the title 'READmePlease.m'. As for the 'abstract class instantiation' error for 'sseClassificationLayer.m', please try MATLAB R2019b, which is the version we used.
• Please re-check some typos in your attached code, like:
• '%plotting the loss and accuracy curves 'sparately' ' (TrainDNN.m) or :
• '% in the MATLAB example of 'seqeunce' classification using LSTM network.' '% is represented by a feature vector that follows the similar data 'struture' ' etc.
All typos were checked and corrected:
The word 'separately' in the script m-file 'TrainDNN.m', line 55.
The word 'sequence' in the function m-file 'TrainingDataGeneration.m', line 14.
The word 'structure' in the function m-file 'getFeatureAndLabel.m', line 1, and in the script m-file 'TrainingDataGeneration.m', line 13.
• The most crucial lines in the paper are: 188-191 & 223-224. They are telling the contribution proposed by the authors; however, they all can be expanded by adding some flow charts, further explanation, etc.
The authors believe the main contributions are sufficiently declared before, throughout and after these lines.
Experimental design
• The repeatedly highlighted Adam optimization has no big impact on the overall contribution. What about RMSProp, AdaGrad, etc.?
The use of RMSProp, AdaGrad and other optimisation algorithms, such as Adadelta, AMSgrad, AdaMax and Nadam, is suggested in the future work section. Kindly refer to 'Future work'.
• I did not get the point of using BiLSTM instead of LSTM. Why/How your system requires a bidirectional movement on the timeline?
All obtained results demonstrate the superior performance of BiLSTM-based estimators compared with conventional estimators and LSTM-based estimators, so the study recommends using BiLSTM-based estimators, especially at low signal-to-noise ratios.
• What are the data details? Data size & training, validation, testing portions?
(Even if I can get the information through supplementary material, the paper itself should present all this information properly.)
Data set details are included in Table 1:

Table 1. BiLSTM- and LSTM-NN structure parameters and training process options

Parameter               Value
Input Size              256
BiLSTM Layer Size       30 hidden neurons
LSTM Layer Size         30 hidden neurons
FC Layer Size           4
Loss Functions          Crossentropyex, MAE, SSE
Mini Batch Size         1000
Epochs Number           1000
Learning Algorithm      Adam
Training Data Size      8000 OFDM frames
Validation Data Size    2000 OFDM frames
Test Data Size          10000 OFDM frames
• What about further network details like dropout, batch normalization?
Neither a dropout layer nor a batch normalisation layer is used in our proposed structures. Kindly refer to 'TrainDNN.m', lines 25-31.
Validity of the findings
• For Figure 7, why is there a transition for the best performing techniques: BiLSTM versus LSTM? Is there any explanation for this? Or is this just experimentally obtained observation?
It is an experimentally obtained observation.
• Results based on Figures 6-18 should be summarized in a table like indicating the effect of 'the applied method', 'pilot numbers', and 'dB range'. A performance comparison heat-map table can be an idea.
Many thanks for your very helpful 'heat-map' suggestion. Figs. 6 to 9 are replaced by the heat map in Fig. 6, Figs. 10 to 13 by the heat map in Fig. 7 and Figs. 14 to 17 by the heat map in Fig. 8.
• The complexity section does not give critical feedback.
This section is a complementary study for this type of work. It presents valuable information, such as the training time and the feed-forward/feed-back pass complexity of the BiLSTM- and LSTM-based estimators.
• Loss curves have the zoom problem; the only critical regions of each network can be plotted side-by-side. Since for some certain epochs, some of the network architectures do not indicate significant learning.
The zoom problem comes from the fact that the loss figures are composite (before and after zooming). We provide the loss curves in MATLAB '*.fig' format so that they can be zoomed in and out efficiently and examined easily. We added the following paragraph; kindly refer to lines 361-363 of the 'Revised manuscript' file:
For more details, good zooming, and analysis of the presented loss curves, they can be downloaded from this link (shorturl.at/lqxGQ).
• Conclusions and future work sections can be merged. Plus, item-based representation is a strange way for the overall section. There may be some significant contributions as items, but not the overall section. (To the best of my knowledge, PeerJ does not demand a listing style like this.)
Conclusions and future work sections are merged and reformatted. Kindly refer to the ' CONCLUSIONS and FUTURE WORK' section. It is highlighted in yellow.
Comments for the author
• For the introduction part;
While reading the paper, the transition between the communication systems and the deep neural network architectures seems quite abstract. This may be because the current structure of the introduction section is like a literature review. I suggest that readers either technically explain the relation between BiLSTM and the CSIE or change the current structure of the introduction to give just the definition of the problem, a slight literature summary, and the main points of their proposal. I highly recommend the latter since the introduction is like mixing several different information. For instance, why should a reader get information about the “loss function” before thoroughly knowing the problem definition?
Many thanks for this tip; we agree with your recommendation and made modifications accordingly. Kindly refer to lines 133-153 and 193-203 of the 'Revised manuscript' file.
• Line 192-197: Did you decide on this structure based on any preliminary experiments? Or some of the previous efforts (even though you claim to be the first on this structure; maybe you inspired by any LSTM-based communication system.) Why a structure like that? This should be technically explained in this paragraph.
Yes, I decided on this structure based on preliminary experiments and some of the previous efforts. Kindly refer to Ref. [12]. Kindly refer to 'Revised manuscript' file.
• Figure 3 should have been more detailed such as drawing with more information on layer size, activation function, connections, etc. Figure 3 (b) does not tell scientific information.
All details for Figure 3 are already mentioned in the previous paragraph (lines 205 - 210). So, I think it may be better to leave it as it is. Kindly refer to 'Revised manuscript' file
As for Figure 3(b), we agreed, so we removed it and its related references in the text.
• Information in line 250 about MATLAB seems strange since the reader cannot get the point whether authors mention a neural net toolbox, or Simulink, or plain code – built-in methods, etc.
We agree; the text was modified to 'MATLAB/neural network toolbox allows the user'. Kindly refer to line 266 of the 'Revised manuscript' file.
• The utilization of different loss functions, Adam optimization, etc., cannot be considered as the main contributions of the study. If authors would have widely present training & validation procedures, it may be possible to mark the paper as a well-contributing research paper. For this current version, it seems like “investigating the NN performance with try and error methods”.
Many thanks for your valuable review and opinion. Our main contribution is proposing an online channel state information estimator for wireless communication systems that adopts deep learning recurrent neural networks. The proposed algorithm achieves high performance compared with traditional estimators at a low number of pilots, at low signal-to-noise ratios and under a priori uncertainty of channel statistics. Since the behaviour of a neural network relies on different factors, such as the network structure, the learning algorithm, the loss functions and the activation functions used in each node, the authors aimed to develop additional loss functions besides the commonly used one and to test their effect on the efficiency of the proposed estimator. Kindly refer to our published paper.
• For the further efforts of this paper, I strongly suggest including the training parameters and a wide-angle figure of the proposed network architecture, including every single detail from input to the output (such as which signals are exactly sourced, network hyperparameters, connections, dropout, batch normalization, etc.). Besides, the way the simulations are handled is far away from being a vivid picture. A summary of the plots into the table (heat-map or ticking for performances like ✓ ✓✓ ✓✓✓ ✓✓✓✓ , etc. ) can be an idea to present 'the applied method', 'pilot numbers', and 'dB range' performance.
Thank you for your valuable time reviewing our paper and for your comments, which helped make our article more impactful. We have taken most of your comments carefully into consideration while preparing the revised paper; the rest of your suggestions will be considered in our future articles. The authors truly appreciate the extraordinary effort you exerted in reviewing.
Reviewer 3
Thank you for your valuable time reviewing our paper and for your comments, which helped make it more impactful. We have taken your comments carefully into consideration while revising the paper.
Basic reporting
• no comment
Experimental design
• no comment
Validity of the findings
• no comment
Comments for the author
The manuscript is providing the practical use of BiLSTM networks for the 5G systems which is interesting. However, some additional improvements are required.
• The written text must be improved. For example in the introduction parts the paragraph divisions are not suitable and must be organized well.
I did.
• The conclusion and future parts must started with some sentences and then start using i ii, etc.
Conclusions and future work sections are merged and reformatted. Kindly refer to the ' CONCLUSIONS and FUTURE WORK' section. Kindly refer to 'Revised manuscript' file.
• The quality of the figures must be improved well.
I did.
• Please put the comparison table and compare the results with other reported high impact factor papers.
Many thanks for your very helpful notice. Figs. 6 to 9 are replaced by the heat map in Fig. 6, Figs. 10 to 13 by the heat map in Fig. 7 and Figs. 14 to 17 by the heat map in Fig. 8.
• Please add some up-to-date journals in the reference list and add them in the introduction part. Then please discuss about the drawbacks with that references and clarify your contributions
I did, kindly refer to the introduction section and references list.
" | Here is a paper. Please give your review comments after reading it. |
214 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>In this study, a deep learning bidirectional long short-term memory (BiLSTM) recurrent neural network-based channel state information estimator is proposed for 5G orthogonal frequency-division multiplexing systems. The proposed estimator is a pilot-dependent estimator that is trained offline and then deployed online as a real-time estimator. The estimator does not require complete a priori certainty of the channels' statistics and attains superior performance in the presence of a limited number of pilots. A comparative study is conducted using three loss functions, namely, the mean absolute error, the cross-entropy function for kth mutually exclusive classes and the sum of squared errors. The Adam optimisation algorithm is used to evaluate the performance of the proposed estimator under each loss function. In terms of symbol error rate and accuracy metrics, the proposed estimator outperforms long short-term memory (LSTM) neural network-based channel state information, least squares and minimum mean square error estimators under different simulation conditions. The computational and training time complexities of the deep learning BiLSTM- and LSTM-based estimators are provided. Given that the proposed estimator relies on the deep learning neural network approach, where it can analyse massive data, recognise statistical dependencies and characteristics, develop relationships between features and generalise the accrued knowledge for new datasets that it has not seen before, the approach is promising for any 5G and beyond communication system.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>5G wireless communication is the most active area of technology development and a rapidly growing branch of the wider field of communication systems. Wireless communication has made possible various services ranging from voice to multimedia.</ns0:p><ns0:p>The physical characteristics of the wireless communication channel and many unknown surrounding effects result in imperfections in the transmitted signals. For example, the transmitted signals experience reflections, diffractions and scattering, which produce multipath signals with different delays, phase shifts, attenuation and distortion arriving at the receiving end; hence, they adversely affect the recovered signals (Oyerinde & Mneney 2012b).</ns0:p><ns0:p>A priori information on the physical characteristics of the channel provided by pilots is one of the significant factors that determine the efficiency of channel state information estimators (CSIEs). For instance, if no a priori information is available (no or insufficient pilots), channel estimation is useless; finding what you do not know is impossible. When complete information on the transmission channel is available, CSIEs are no longer needed. Thus, a priori uncertainty exists for communication channel statistics. However, the classical theory of detection, recognition and estimation of signals deals with complete a priori certainty for channel statistics, which is an unreliable and impractical assumption (Bogdanovich et al. 2009).</ns0:p><ns0:p>In the classic case, uncertainty is related to useful signals. In detection problems, the unknown is the fact of a signal's existence. In recognition problems, the unknown is the type of signal being received at the current moment. In estimation problems, the unknown is the amplitude of the measured signal or one of its parameters. The rest of the components of the signal-noise environment in classical theory are regarded as a priori certain (known): the statistical description of the noise is known, the values of the unmeasured parameters of the signal are known and the physical characteristics of the wireless communication channel are known. In such conditions, the classical theory allows the synthesis of optimal estimation algorithms, but the structure and quality coefficients of the algorithms depend on the values of the parameters of the signal-noise environment. If the values of the parameters describing the signal-noise environment differ even slightly from the parameters for which the optimal algorithm is built, then the quality coefficients will degrade substantially, making the algorithm useless in several cases (Bogdanovich et al. 2009), (O'Shea et al. 2017). The most frequently used CSIEs are derived from signal and channel statistical models by employing techniques such as maximum likelihood (ML), least squares (LS) and minimum mean squared error (MMSE) optimisation metrics (Kim 2015).</ns0:p><ns0:p>One of the major concerns in attaining the optimum performance of wireless communication systems is providing accurate channel state information (CSI) at the receiver end to detect the transmitted signal coherently.
If CSI is unavailable at the receiver end, then the transmitted signal can only be demodulated and detected by a noncoherent technique, such as differential demodulation. However, using a noncoherent detection method comes at the expense of a signal-to-noise ratio loss of about 3-4 dB compared with using a coherent detection technique. To eliminate such losses, researchers have focused on the development of channel estimation techniques to provide perfect detection of transmitted information in wireless communication systems using the orthogonal frequency-division multiplexing (OFDM) modulation scheme (Oyerinde & Mneney 2012a).</ns0:p><ns0:p>The use of deep learning neural networks (DLNNs) is the state-of-the-art approach in the field of wireless communication. The amazing learning capabilities of DLNNs from training data sets and the tremendous progress of graphical processing units (GPUs), which are considered the most powerful tools for training DLNNs, have motivated their usage for different wireless communication issues, such as modulation recognition (Zhou et al. 2020), (Karra et al. 2017) and channel state estimation and detection (Essai Ali; Joo et al. 2019; Kang et al. 2020; Ma et al. 2018; Ponnaluru & Penke 2020; Yang et al. 2019a; Ye et al. 2018). According to (Karra et al. 2017; Kim 2015; Oyerinde & Mneney 2012a; Zhou et al. 2020) and (Ma et al. 2018), all proposed deep learning-based CSIEs have better performance compared with the examined traditional channel estimators, such as LS and MMSE estimators.</ns0:p></ns0:div>
<ns0:div><ns0:p>Recently, numerous long short-term memory (LSTM)- and BiLSTM-based applications have been introduced for prognostics and health management (Zhao et al. 2020), artificial intelligence-based translation systems (Wu et al. 2016), (Ong 2017) and other areas. For channel state information estimation in 5G-OFDM wireless communication systems, many deep learning approaches, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs) (e.g. LSTM and BiLSTM NNs) and hybrid (CNN and RNN) neural networks, have been used (Essai Ali; Liao et al. 2019; Luo et al. 2018b; Ponnaluru & Penke 2020; Yang et al. 2019a; Yang et al. 2019b; Ye et al. 2018).</ns0:p><ns0:p>In (Liao et al. 2019), a deep learning-based CSIE was proposed by using a CNN and a BiLSTM-NN for the extraction of the feature vectors of the channel response and for channel estimation, respectively. The aim was to improve the downlink channel state information estimation performance, which degrades owing to the fast time-varying channel statistical characteristics in high-speed mobility scenarios. In (Luo et al. 2018a), an online-trained CSIE that is an integration of a CNN and an LSTM-NN was proposed. The authors also developed an offline-online training technique that applies to 5G wireless communication systems. In (Ye et al. 2018), a joint channel estimator and detector based on feedforward DLNNs for frequency-selective channel (OFDM) systems was introduced. The proposed algorithm was found to be superior to the traditional MMSE estimation method when unknown surrounding effects of communication systems are considered. In (Yang et al. 2019a), an online estimator was developed by adopting feedforward DLNNs for doubly selective channels. The proposed estimator was considered superior to the traditional LMMSE estimator in all investigated scenarios. In (Ponnaluru & Penke 2020), a one-dimensional CNN (1D-CNN) deep learning estimator was proposed. Under various modulation scenarios and in terms of MSE and BER metrics, the authors compared the performance of the proposed estimator with that of feedforward neural network (FFNN), MMSE and LS estimators. The 1D-CNN outperformed the LS, MMSE and FFNN estimators. In (Essai Ali), an online pilot-assisted estimator model for OFDM wireless communication systems was developed by using an LSTM NN. The conducted comparative study showed the superior performance of the proposed estimator in comparison with LS and MMSE estimators under limited pilots and a priori uncertainty of channel statistics. The authors in (Sarwar et al. 2020) used a genetic algorithm-optimised artificial neural network to build a CSIE. The proposed estimator was dedicated to space-time block-coding MIMO-OFDM communication systems.
The proposed estimator outperformed the LS and MMSE estimators in terms of BER at high SNRs, but it achieved approximately the same performance as the LS and MMSE estimators at low SNRs. The authors in (Senol et al. 2021) proposed a CSIE for OFDM systems by using an ANN under the condition of sparse multipath channels. The proposed estimator achieved an SER performance comparable to that of matching pursuit- and orthogonal matching pursuit-based estimators at a lower computational complexity. The authors in (Le Ha et al. 2021) proposed a CSIE that uses deep learning together with the LS estimator and utilises the multiple-input multiple-output system for 5G-OFDM. The proposed estimator minimises the MSE loss function between the LS-based channel estimate and the actual channel, and it outperformed the LS and LMMSE estimators in terms of BER and MSE metrics.</ns0:p><ns0:p>In this study, a BiLSTM DLNN-based CSIE for OFDM wireless communication systems is proposed and implemented. To the best of the authors' knowledge, this work is the first to use the BiLSTM network as a CSIE without integration with a CNN. The proposed estimator does not need any prior knowledge of the communication channel statistics and works robustly with limited pilots (i.e. under the condition of less CSI). The proposed BiLSTM-based CSIE is a data-driven estimator, so it can analyse, recognise and understand the statistical characteristics of wireless channels suffering from many known interferences, such as adjacent channel, inter-symbol, inter-user, inter-cell, co-channel and electromagnetic interferences, and unknown ones (Jeya et al. 2019; Sheikh 2004). Although an impressively wide range of configurations can be found for almost every aspect of deep neural networks, the choice of loss function is underrepresented when addressing communication problems, and most studies and applications simply use the 'log' loss function (Janocha & Czarnecki 2017). In this study, two custom loss functions, the mean absolute error (MAE) and the sum of squared errors (SSE), are proposed to obtain the most reliable and robust estimator under unknown channel statistical characteristics and limited pilot numbers.</ns0:p><ns0:p>The performance of the proposed BiLSTM-based estimator is compared with that of the most frequently used LS and MMSE channel state estimators. The obtained results show that the BiLSTM-based estimator attains performance comparable to the MMSE estimator at large numbers of pilots and outperforms the LS and MMSE estimators at small numbers of pilots. In addition, the proposed estimator improves the transmission data rate of OFDM wireless communication systems because it exhibits optimal performance compared with the examined estimators at a small number of pilots.</ns0:p><ns0:p>The rest of this paper is organised as follows. The DLNN-based CSIE is presented in Section II. The standard OFDM system and the proposed deep learning BiLSTM NN-based CSIE are presented in Section III. The simulation results are given in Section IV. The conclusions and future work directions are provided in Section V.</ns0:p></ns0:div>
<ns0:div><ns0:head>DLNN-BASED CSIE</ns0:head><ns0:p>In this section, a deep learning BiLSTM NN for channel state information estimation is presented. The BiLSTM network is a variant of the LSTM neural network, a recurrent neural network (RNN) that can learn the long-term dependencies between the time steps of input data (Hochreiter & Schmidhuber 1997), (Luo et al. 2018b; Zhao et al. 2020).</ns0:p><ns0:p>The BiLSTM architecture mainly consists of two separate LSTM-NNs and has two propagation directions (forward and backward). The LSTM NN structure consists of input, output and forget gates and a memory cell. The forget and input gates enable the LSTM NN to effectively store long-term memory. Figure 1 shows the main construction of the LSTM cell (Hochreiter & Schmidhuber 1997). The forget gate enables the LSTM NN to remove undesired information by using the current input x_t and the cell output of the last step h_{t-1}. The input gate finds the information that will be added to the cell state; using the forget and input gates, the LSTM can decide which information is removed and which is retained. The output gate finds the current cell output h_t by using the previous cell output h_{t-1}, the current cell state c_t and the input x_t. The mathematical model of the LSTM NN structure can be described through the following equations:</ns0:p><ns0:formula>i_t = σ_g(w_i x_t + R_i h_{t-1} + b_i), (1)
f_t = σ_g(w_f x_t + R_f h_{t-1} + b_f), (2)
g_t = σ_c(w_g x_t + R_g h_{t-1} + b_g), (3)
o_t = σ_g(w_o x_t + R_o h_{t-1} + b_o), (4)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ g_t, (5)
h_t = o_t ⊙ σ_c(c_t), (6)</ns0:formula><ns0:p>where i_t, f_t, g_t and o_t denote the input gate, forget gate, cell candidate and output gate, respectively; σ_g and σ_c are the gate (sigmoid) and state (tanh) activation functions; and w = [w_i w_f w_g w_o]^T, R = [R_i R_f R_g R_o]^T and b = [b_i b_f b_g b_o]^T are the input weights, recurrent weights and biases, respectively.</ns0:p><ns0:p>The LSTM DNN analyses only the impact of the previous sequence on the present one, disregarding information that comes later and thus failing to reach optimal performance. The BiLSTM, on the other hand, connects the LSTM units' outputs bidirectionally (forward and backward propagation directions) and captures bidirectional signal dependencies, increasing the overall model's performance. The forward and backward propagation directions of the BiLSTM are transmitted at the same time to the output unit; therefore, old and future information can be captured, as shown in Figure 2. At any time t, the input is fed to the forward LSTM and backward LSTM networks. The final output of the BiLSTM-NN can be expressed as follows:</ns0:p><ns0:formula>h_t = h_t^f ⊙ h_t^b, (7)</ns0:formula><ns0:p>where h_t^f and h_t^b are the forward and backward outputs of the BiLSTM-NN, respectively. The operation of the BiLSTM in the proposed estimator can be described briefly by the following algorithm:</ns0:p><ns0:p>Input: a sequence X representing the transmitted signal (original signal + channel model)
Output: a prediction matrix of the extracted features of the input sequence
Step 1: The forward LSTM layer receives the transmitted signal vectors from X: for i ∈ length(X) do send X_i to the BiLSTM layer end for.
Step 2: Equations 1-6 are used to update the state of the LSTM cell.
Step 3: The backward LSTM layer receives the signal vectors from X, and the two previous steps are repeated.
Step 4: A hidden state sequence vector is created by splicing the forward and backward hidden-layer sequences.
Step 5: The hidden state sequence vector is sent into a fully connected layer, and the prediction matrix is obtained.
Step 6: Return the prediction matrix.</ns0:p><ns0:p>To build the DL BiLSTM NN-based CSIE, an array is created with the following five layers: sequence input, BiLSTM, fully connected, softmax and output classification. The input size is set to 256. The BiLSTM layer consists of 30 hidden units and outputs the last element of the sequence.
Four classes are specified by the size-4 fully connected (FC) layer, which is followed by a softmax layer and ended by a classification layer. Figure 3 illustrates the structure of the proposed estimator (Essai Ali; Ye et al. 2018).</ns0:p><ns0:p>Once the proposed BiLSTM-based CSIE is built, its weights and biases are optimised (tuned) using the desired optimisation algorithm. The optimisation algorithm trains the proposed estimator by using one of three loss functions, namely, the cross-entropy function for kth mutually exclusive classes (crossentropyex), the mean absolute error (MAE) and the sum of squared errors (SSE). The loss function estimates the loss between the expected and actual outcomes. During the learning process, the optimisation algorithm tries to minimise the loss function towards the desired error goal by optimising the DLNN weights and biases iteratively at each training epoch. Figure 4 illustrates the training processes of the proposed estimator. Selecting a loss function is one of the essential and challenging tasks in deep learning, as is investigating the efficiency of the training process under different optimisation algorithms, such as adaptive moment estimation (Adam), root mean square propagation (RMSProp), stochastic gradient descent with momentum (SGdm) (Dogo et al. 2018) and the adaptive learning rate method Adadelta (Zeiler 2012). The proposed estimator is trained using the above-mentioned three loss functions and optimisation algorithms to obtain the most optimal BiLSTM-based estimator for wireless communication systems with little prior information (limited pilots) about the signal-noise environment.</ns0:p></ns0:div>
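To make the five-layer construction and the Table 1 training options concrete, the following MATLAB sketch (Deep Learning Toolbox, R2019b) assembles the described architecture. It is a minimal sketch, assuming placeholder training data XTrain (a cell array of 256-dimensional sequences) and categorical labels YTrain; these names are illustrative, not the original m-file code.

% Minimal sketch of the proposed five-layer BiLSTM estimator.
layers = [ ...
    sequenceInputLayer(256)                    % one received OFDM frame vector
    bilstmLayer(30, 'OutputMode', 'last')      % 30 hidden units; output last element
    fullyConnectedLayer(4)                     % size-4 FC layer (four classes)
    softmaxLayer
    classificationLayer];                      % crossentropyex loss by default

options = trainingOptions('adam', ...          % Table 1: Adam learning algorithm
    'MaxEpochs', 1000, ...
    'MiniBatchSize', 1000, ...
    'Shuffle', 'every-epoch', ...
    'Plots', 'training-progress');

net = trainNetwork(XTrain, YTrain, layers, options);   % offline training

Replacing classificationLayer with a custom output layer yields the MAE- and SSE-trained variants discussed below.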
<ns0:div><ns0:head>DL BiLSTM NN-BASED CSIE for 5G-OFDM WIRELESS COMMUNICATION SYSTEMS</ns0:head><ns0:p>The standard OFDM wireless communication system and an offline DL of the proposed CSIE are presented in the following subsections.</ns0:p></ns0:div>
<ns0:div><ns0:head>OFDM SYSTEM MODEL</ns0:head><ns0:p>In accordance with (Essai Ali; Ye et al. 2018), Figure 5 illustrates the structure of the traditional OFDM communication system. On the transmitter side, a serial-to-parallel (S/P) converter is used to convert the transmitted symbols, together with the pilot signals, into parallel data streams. Then, the inverse discrete Fourier transform (IDFT) is applied to convert the signal into the time domain. A cyclic prefix (CP) must be added to alleviate the effects of inter-symbol interference, and its length must be longer than the maximum spreading delay of the channel.</ns0:p><ns0:p>A multipath channel defined by the complex random variables {h(n)}, n = 0, ..., N-1, is considered. Then, the received signal can be evaluated as follows:</ns0:p><ns0:formula xml:id='formula_8'>y(n) = x(n) ⊛ h(n) + w(n), (8)</ns0:formula><ns0:p>where x(n) is the input signal, ⊛ denotes circular convolution, w(n) is additive white Gaussian noise (AWGN) and y(n) is the output signal.</ns0:p></ns0:div>
<ns0:div><ns0:p>The received signal in the frequency domain can be defined as</ns0:p><ns0:formula xml:id='formula_10'>Y(k) = X(k)H(k) + W(k), (9)</ns0:formula><ns0:p>where Y(k), X(k), H(k) and W(k) are the discrete Fourier transforms (DFTs) of y(n), x(n), h(n) and w(n), respectively, after removing the CP.</ns0:p><ns0:p>The OFDM frame includes the pilot symbols of the 1st OFDM block and the transmitted data of the next OFDM blocks. The channel can be considered stationary during a certain frame, but it can change between different frames. The proposed DL BiLSTM NN-based CSIE receives the arrived data at its input terminal and extracts the transmitted data at its output terminal (Essai Ali), (Ye et al. 2018).</ns0:p></ns0:div>
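As a numerical illustration of Eqs. (8) and (9), the following MATLAB sketch passes one OFDM symbol through a random multipath channel. It is a minimal sketch under assumed toy parameters (64 subcarriers, a 4-tap channel, a fixed noise level); cconv is the Signal Processing Toolbox circular convolution.

% Minimal sketch of Eqs. (8)-(9): time-domain circular convolution and its
% frequency-domain equivalent (assumed toy parameters).
K  = 64;                                        % number of subcarriers
Xf = sign(randn(K,1)) + 1j*sign(randn(K,1));    % placeholder frequency-domain symbols
x  = ifft(Xf, K);                               % IDFT to the time domain
h  = (randn(4,1) + 1j*randn(4,1)) / sqrt(8);    % random 4-tap multipath channel
w  = 0.01 * (randn(K,1) + 1j*randn(K,1));       % AWGN
y  = cconv(x, h, K) + w;                        % y(n) = x(n) (*) h(n) + w(n), Eq. (8)
Yf = fft(y, K);                                 % Eq. (9): Yf = Xf .* fft(h,K) + W
% With a CP longer than the channel delay spread, the linear channel acts as
% this circular convolution on each OFDM symbol.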
<ns0:div><ns0:head>OFFLINE DL OF THE DL BILSTM NN-BASED CSIE</ns0:head><ns0:p>DLNN utilisation is the state-of-the-art approach in the field of wireless communication, but DLNNs have high computational complexity and long training times. GPUs are the most powerful tools used for training DLNNs (Sharma et al. 2016). Training should be done offline because of the long training time of the proposed CSIE and the large number of BiLSTM-NN parameters, such as biases and weights, that should be tuned during training. The trained CSIE is then used in the online implementation to extract the transmitted data (Ye et al. 2018), (Essai Ali).</ns0:p><ns0:p>In offline training, the learning dataset is randomly generated for one subcarrier. The transmitting end sends OFDM frames to the receiving end through the adopted (simulated) channel, where each frame consists of a single OFDM pilot symbol and a single OFDM data symbol. The received OFDM signal is extracted from OFDM frames that are subjected to different channel imperfections.</ns0:p><ns0:p>All classical estimators rely highly on tractable mathematical channel models, which are assumed to be linear and stationary and to follow Gaussian statistics. However, practical wireless communication systems have other imperfections and unknown surrounding effects that cannot be tackled well by such channel models; therefore, researchers have developed various channel models that effectively characterise practical channel statistics. By using these channel models, reliable and practical training datasets can be obtained by modelling (Bogdanovich et al. 2009), (Essai Ali), (2019).</ns0:p><ns0:p>In this study, the 3GPP TR38.901-5G channel model developed by (2019) is used to simulate the behaviour of a practical wireless channel that can degrade the performance of CSIEs and, hence, the overall communication system's performance.</ns0:p><ns0:p>The proposed estimator is trained via the chosen optimisation algorithm, which updates the weights and biases by minimising a specific loss function. Simply put, a loss function measures the difference between the estimator's responses and the original transmitted data. The loss function can be represented by several functions. The MATLAB neural network toolbox allows the user to choose a loss function from its available list, which contains crossentropyex, MSE, sigmoid and softmax. In this study, another two custom loss functions (MAE and SSE) are created. The performance of the proposed estimator when using the three loss functions (i.e. MAE, crossentropyex and SSE) is investigated. The loss functions can be expressed as follows:</ns0:p><ns0:formula>crossentropyex = -Σ_{i=1}^{N} Σ_{j=1}^{c} X_{ij}(k) log(X̂_{ij}(k)), (10)
MAE = (1/N) Σ_{i=1}^{N} Σ_{j=1}^{c} |X_{ij}(k) - X̂_{ij}(k)|, (11)
SSE = Σ_{i=1}^{N} Σ_{j=1}^{c} (X_{ij}(k) - X̂_{ij}(k))², (12)</ns0:formula><ns0:p>where N is the sample number, c is the class number, X_{ij}(k) is the ith transmitted data sample for the jth class and X̂_{ij}(k) is the corresponding estimated sample.</ns0:p></ns0:div>
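The two custom loss functions are implemented as custom classification output layers (the supplemental code refers to a file sseClassificationLayer.m). The sketch below is a hedged reconstruction following MATLAB's custom classification output layer template for R2019b, in which backwardLoss may be omitted; the mini-batch normalisation and the layer name are assumptions of this sketch, not the authors' exact code.

% Minimal sketch of an SSE output layer implementing Eq. (12); save as
% sseClassificationLayer.m. Assumed reconstruction, not the authors' file.
classdef sseClassificationLayer < nnet.layer.ClassificationLayer
    methods
        function layer = sseClassificationLayer(name)
            layer.Name = name;
            layer.Description = 'Sum of squared errors loss';
        end
        function loss = forwardLoss(layer, Y, T)
            % Y: predicted class scores, T: one-hot targets (classes x batch).
            squares = (Y - T).^2;
            loss = sum(squares(:)) / size(Y, 2);   % averaged over the mini-batch
        end
    end
end

The SSE-trained variant is then obtained by replacing the final layer, e.g. layers(end) = sseClassificationLayer('sse'); an MAE layer follows the same template with abs(Y - T) in forwardLoss.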
<ns0:div><ns0:head>Simulation Results</ns0:head></ns0:div>
<ns0:div><ns0:head>STUDYING THE PERFORMANCE OF THE PROPOSED, LS AND MMSE ESTIMATORS BY USING DIFFERENT PILOTS AND LOSS FUNCTIONS</ns0:head><ns0:p>Several simulation experiments are performed to evaluate the performance of the proposed estimator. In terms of symbol error rate (SER), the performance of the proposed estimator under various SNRs is compared with that of the LSTM NN-based CSIE (Essai Ali), the well-known LS estimator and the MMSE estimator, which is an optimal estimator but requires channel statistical information. A priori uncertainty of the used channel model statistics is assumed for all conducted experiments. Moreover, the Adam optimisation algorithm is used to train the proposed estimator with different loss functions to obtain the most robust version of the proposed CSIE. The proposed model is implemented in MATLAB R2019b. Table 1 lists the parameters of the BiLSTM-NN and LSTM-NN architectures and their related training options; these parameters are identified by a trial-and-error approach. Table 2 lists the parameters of the OFDM system model and the channel model.</ns0:p><ns0:p>The examined estimators' performance is evaluated at pilot numbers of 4, 8 and 64 and with the crossentropyex, MAE and SSE loss functions. The Adam optimisation algorithm is used for all simulation experiments. With a sufficiently large number of pilots (64) and the use of the crossentropyex loss function, the proposed BiLSTM crossentropyex estimator outperforms the LSTM crossentropyex, LS and MMSE estimators over the entire SNR range, as shown in Figure 6. With the MAE loss function, the BiLSTM MAE estimator outperforms the LS estimator over the SNR range [0-18 dB], whereas LSTM MAE outperforms it over the SNR range [0-14 dB]. In addition, the BiLSTM MAE and LSTM MAE estimators are at par with the MMSE estimator over the SNR ranges [0-10 dB] and [0-4 dB], respectively. Beyond these SNR ranges, the MMSE estimator outperforms the BiLSTM MAE and LSTM MAE estimators. BiLSTM MAE outperforms LSTM MAE from 0 dB to 20 dB.</ns0:p><ns0:p>With the SSE loss function, Figure 6 shows that the BiLSTM SSE and LSTM SSE estimators achieve approximately the same performance as the MMSE estimator over a low SNR range [0-6 dB]. MMSE outperforms the BiLSTM SSE and LSTM SSE estimators starting from 8 dB, and the LS estimator outperforms BiLSTM SSE starting from 16 dB and LSTM SSE starting from 14 dB. BiLSTM SSE outperforms LSTM SSE from 10 dB to 20 dB. LS provides poor performance compared with MMSE because it does not use prior information about channel statistics in the estimation process. MMSE exhibits superior performance, especially with sufficient pilot numbers, because it uses second-order channel statistics. Concisely, MMSE and the proposed BiLSTM crossentropyex attain close SER performance with respect to all SNRs. Furthermore, at low SNRs (0-6 dB), the BiLSTM (crossentropyex, MAE and SSE), LSTM (crossentropyex, MAE and SSE) and MMSE estimators attain approximately the same performance.</ns0:p><ns0:p>Figure 7 presents the performance comparison of the LS, MMSE, BiLSTM- and LSTM-based estimators using the Adam optimisation algorithm and the different (crossentropyex, MAE and SSE) loss functions at 8 pilots. It shows that the proposed BiLSTM (crossentropyex, or MAE or SSE) estimators outperform the LSTM (crossentropyex, or MAE or SSE) estimators and the traditional estimators over the examined SNR range. At low SNRs (0-7 dB), the proposed BiLSTM (crossentropyex, or MAE or SSE) estimators exhibit semi-identical performance. Furthermore, the proposed BiLSTM SSE estimator, trained by minimising the SSE loss function, outperforms the BiLSTM crossentropyex estimator starting from 0 dB; it also outperforms BiLSTM MAE, trained by minimising the MAE loss function, starting from 14 dB. Concisely, at 8 pilots, the BiLSTM SSE estimator achieves the minimum SER.</ns0:p><ns0:p>Figure 8 shows the performance comparison of the LS, MMSE, BiLSTM (crossentropyex, or MAE or SSE) and LSTM (crossentropyex, or MAE or SSE) estimators at 4 pilots. It shows the superiority of the proposed BiLSTM (crossentropyex, or MAE or SSE) estimators in comparison with the traditional estimators, which have lost their workability starting from 0 dB.
It also shows the superiority of the proposed BiLSTM (MAE or SSE) estimators over the LSTM (MAE or SSE) estimators. LSTM (crossentropyex) exhibits a performance competitive with BiLSTM (crossentropyex) from 0 dB to 12 dB, and LSTM (crossentropyex) outperforms BiLSTM (crossentropyex) starting from 14 dB. At very low SNRs (0-3 dB), the proposed BiLSTM (crossentropyex, or MAE or SSE) estimators have the same performance. The proposed BiLSTM SSE estimator outperforms the BiLSTM crossentropyex estimator starting from 4 dB; it exhibits performance identical to the BiLSTM MAE estimator until 14 dB and outperforms it in the rest of the SNR examination range.</ns0:p><ns0:p>Figures 6, 7 and 8 emphasise the robustness of the BiLSTM-based estimators against the limited number of pilots and low SNR, and under the condition of a priori uncertainty of channel statistics. They demonstrate the importance of testing various loss functions in the deep learning process to obtain the most optimal architecture of any proposed estimator.</ns0:p><ns0:p>Figure 9 indicates that the proposed BiLSTM crossentropyex, BiLSTM SSE and BiLSTM SSE estimators have close SER performance at 64, 8 and 4 pilots, respectively. The performance of BiLSTM SSE at 8 pilots coincides with the performance of BiLSTM crossentropyex at 64 pilots. Therefore, using the proposed estimators with few pilots is recommended for 5G OFDM wireless communication systems to attain a significant improvement in their transmission data rate. Given that the proposed estimator adopts a training data set-driven approach, it is robust to a priori uncertainty of channel statistics.</ns0:p></ns0:div>
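The SER metric reported in Figures 6-9 is the fraction of incorrectly detected symbols; a minimal MATLAB sketch of its computation is shown below, with assumed names for the transmitted and detected symbol streams.

% Minimal SER computation sketch (txSym and rxSym are assumed names for the
% transmitted and detected symbol-index vectors at one SNR point).
numErrors = sum(rxSym(:) ~= txSym(:));
SER = numErrors / numel(txSym);
fprintf('SER = %.3e\n', SER);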
<ns0:div><ns0:head>LOSS CURVES</ns0:head><ns0:p>The quality of the DLNNs' training process can be monitored efficiently by exploring the training loss curves. These loss curves provide information on how the training process is going, and the user can decide whether to let the training process continue or stop it.</ns0:p><ns0:p>Figures 10-12 show the loss curves of the DLNN-based estimators (BiLSTM and LSTM) at pilot numbers of 64, 8 and 4 and with the three examined loss functions (crossentropyex, MAE and SSE). The curves emphasise and verify the results obtained in Figures 6, 7 and 8. For example, the sub-curves in Figure 10 for the BiLSTM crossentropyex and LSTM crossentropyex estimators emphasise their superiority over the other estimators, which can be seen clearly in Figure 6. Moreover, the training loss curves in Figures 11 and 12 emphasise the SER performance obtained in Figures 7 and 8, respectively, for each examined DLNN-based CSIE. For more details, good zooming and analysis of the presented loss curves, they can be downloaded from this link (shorturl.at/lqxGQ).</ns0:p></ns0:div>
<ns0:div><ns0:head>ACCURACY CALCULATION</ns0:head><ns0:p>The accuracy of the proposed and other examined estimators is a measure of how correctly the estimators recover the transmitted data. Accuracy can be defined as the number of correctly received symbols divided by the total number of transmitted symbols. The proposed estimator is trained under the different conditions indicated in the previous subsection, and we wish to investigate how well it performs on a new data set. Tables 3, 4 and 5 present the obtained accuracies for all examined estimators under all simulation conditions. As illustrated in Tables 3 to 5, the proposed BiLSTM-based estimator attains accuracies from 98.61% to 100% under the different pilots and loss functions. The other examined DL LSTM-based estimator attains accuracies from 97.88% to 99.99% under the same examination conditions. The achieved accuracies indicate that the proposed estimator has learned robustly and emphasise the SER performance obtained in Figure 9. The results of MMSE and LS in Tables 3, 4 and 5 emphasise the SER performance presented in Figures 6, 7 and 8, respectively, and show that as the pilot number decreases, the accuracy of the conventional estimators decreases dramatically.</ns0:p><ns0:p>The proposed BiLSTM- and LSTM-based estimators rely on DLNN approaches, whereby they can analyse huge data sets that may be collected from any plant, recognise statistical dependencies and characteristics, devise relationships between features and generalise the accrued knowledge to new data sets that they have not seen before. Thus, they are applicable to any 5G and beyond communication system.</ns0:p></ns0:div>
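Accuracy as defined above is obtained from the trained network's predictions on the held-out test set; the sketch below uses the Deep Learning Toolbox classify call, with assumed test-set names.

% Minimal accuracy sketch on a held-out test set (assumed variable names).
YPred = classify(net, XTest);                        % net from trainNetwork
accuracy = sum(YPred == YTest) / numel(YTest);       % correctly received / total
fprintf('Accuracy = %.2f%%\n', 100 * accuracy);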
<ns0:div><ns0:head>IMPACT OF USING DIFFERENT OPTIMIZATION ALGORITHMS ON THE PROPOSED ESTIMATOR PERFORMANCE</ns0:head><ns0:p>DL procedures benefit greatly from optimisation methods. DNN training can be thought of as an optimisation problem that aims to discover a global optimum by applying gradient descent methods, thereby obtaining robust training and, hence, reliable prediction or classification models. Choosing the best optimisation method for a particular scientific topic is a difficult task; using the wrong optimisation strategy during training can cause the DNN to become stuck at a local minimum, which results in no training progress (Dogo et al. 2018). As a result, an examination is required to evaluate the performance of various optimisers and obtain the optimal CSIE. This section provides performance comparison experiments using the RMSProp, SGdm and Adadelta optimisation algorithms (Soydaner 2020) for training the proposed BiLSTM-based CSIE with 8 pilots, as illustrated in Fig. 13. Table 6 ranks the proposed BiLSTM-based CSIEs under the different optimisation algorithms and loss functions from the highest performance to the lowest, together with their related accuracies.</ns0:p><ns0:p>It is clear from Fig. 13 and Table 6 that the BiLSTM-based CSIE trained with the Adadelta optimisation algorithm and the SSE loss function achieves the best SER performance and the highest accuracy, 100%. On the other hand, the same estimator achieves the worst SER performance and an accuracy of 97.46% when using the SGdm optimisation algorithm and the SSE loss function. This, in turn, shows the importance of studying the efficiency of the training process under different optimisation algorithms when a specific loss function is used.</ns0:p></ns0:div>
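A hedged sketch of such an optimiser comparison is shown below. MATLAB's trainingOptions natively supports the 'sgdm', 'rmsprop' and 'adam' solver names; Adadelta is not a built-in solver in R2019b and would require a custom training loop, so it is only noted in a comment.

% Minimal optimiser-sweep sketch over the built-in solvers (Adadelta omitted;
% it would need a custom training loop). layers/XTrain/YTrain/XTest/YTest as above.
solvers = {'adam', 'rmsprop', 'sgdm'};
for s = 1:numel(solvers)
    opts = trainingOptions(solvers{s}, ...
        'MaxEpochs', 1000, 'MiniBatchSize', 1000);
    net = trainNetwork(XTrain, YTrain, layers, opts);
    YPred = classify(net, XTest);
    fprintf('%s: accuracy = %.2f%%\n', solvers{s}, ...
        100 * sum(YPred == YTest) / numel(YTest));
end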
<ns0:div><ns0:head>CONCLUSIONS and FUTURE WORK</ns0:head><ns0:p>The proposed DL-BiLSTM-based CSIE is an online pilot-assisted estimator. It is robust against a limited number of pilots and under the conditions of a priori uncertainty of communication channel statistics (non-Gaussian/non-stationary statistical channels), and it exhibits superior performance compared with conventional estimators and DL LSTM NN-based CSIEs.</ns0:p><ns0:p>Two customised classification layers using the MAE and SSE loss functions are introduced. The proposed CSIE exhibits a consistent performance at large and small pilot numbers and superior performance at low SNRs, especially with limited pilots, compared with conventional estimators. It also achieves the highest accuracy amongst all examined estimators at 64, 8 and 4 pilots for all the used loss functions.</ns0:p><ns0:p>The proposed BiLSTM- and LSTM-based estimators have high prediction accuracies of 98.61% to 100% and 97.88% to 99.99%, respectively, when using the crossentropyex, MAE and SSE loss functions for 64, 8 and 4 pilots. The proposed BiLSTM trained with Adam and crossentropyex (at 64 pilots), with Adam and MAE or SSE or with Adadelta and SSE (at 8 pilots) and with Adam and SSE (at 4 pilots) achieves the best SER performance and provides 100% accuracy. The proposed estimator is promising for 5G and beyond wireless communication systems.</ns0:p><ns0:p>For future work, the authors suggest the following research plans:
1. Investigating the proposed estimator's performance and accuracy by using different cyclic prefix lengths and types.
2. Developing robust loss functions by using robust statistics estimators, such as Tukey, Cauchy, Huber and Welsh.
3. Investigating the performance of CNN-, gated recurrent unit (GRU)- and simple recurrent unit (SRU)-based CSIEs whilst using the crossentropyex, MAE and SSE loss functions for 64, 8 and 4 pilots.</ns0:p></ns0:div><ns0:div><ns0:head>Figure 1</ns0:head><ns0:p>Long short-term memory (LSTM) cell.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 1</ns0:head><ns0:p>Long short-term memory (LSTM) cell.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>cell output . Using the forget and input gates, LSTM can decide which and which is retained.The output gate finds current cell output by using the previous cell output at current cell and input . The mathematical model of the LSTMNN structure can be described through</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>,</ns0:head><ns0:label /><ns0:figDesc /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>[</ns0:head><ns0:label /><ns0:figDesc /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>. At PeerJ Comput. Sci. reviewing PDF | (CS-2021:04:60719:2:1:NEW 17 Jul 2021) Manuscript to be reviewed Computer Science any time , the input is fed to forward LSTM and backward LSTM networks. The final output of t BiLSTM-NN can be expressed as follows: , (7) ℎ 𝑡 = ℎ 𝑡 ʘℎ 𝑡 where and are forward and backward outputs of BiLSTM-NN, respectively. The operation of ℎ 𝑡 ℎ 𝑡 BiLSTM in the proposed estimator can be described briefly by the following algorithm: Input: sequence represents transmitted signal (original signal + channel model) Output: Prediction matrix of the extracted features of the input sequence</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 illustrates the offline training processes to obtain a learned CSIE based on BiLSTM-NN.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figures 7 and 8</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>SER performance comparison of the LS, MMSE, BiLSTM- and LSTM-based estimators with the crossentropyex, MAE and SSE loss functions at 8 pilots (Figure 7) and 4 pilots (Figure 8).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Figures 10-12 show the loss curves of the DLNN-based estimators (BiLSTM and LSTM) at pilot numbers = 64, 8 and 4 and with the three examined loss functions (crossentropyex, MAE and SSE). The curves emphasise and verify the obtained results in Figures 6, 7, and 8. For example, the sub-curves in Figure 10 for the BiLSTM crossentropyex and LSTM crossentropyex estimators emphasise their superiority over the other estimators. This superiority can be seen clearly in Figure 6. Moreover, the training loss curves in Figures 11 and 12 emphasise the obtained SER performance in Figures 7 and 8, respectively, of each examined DLNN-based CSIE. For more detail, closer zooming, and analysis, the presented loss curves can be downloaded from this link (shorturl.at/lqxGQ).</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>All proposed deep learning-based CSIEs have better performance compared with the examined traditional channel estimators, such as the LS and MMSE estimators.</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "
Al-Azhar University - Engineering Faculty, Egypt
Electrical Engineering Department.
July 10, 2021
Dear Editors
We thank the reviewers for their generous comments on the manuscript and have edited the manuscript to address their concerns. We hope that the manuscript is now suitable for publication in PeerJ.
Dr. Mohamed H. Essai Ali
Associate Professor of Electronics & Communications
On behalf of all authors.
Reviewer 2
Thank you for your valuable time reviewing our paper, and thank you for your comments that helped make our paper more impactful. We have taken your comments carefully into consideration while revising our paper.
Basic reporting
• The most crucial lines in the paper are: 188-191 & 223-224. They are telling the contribution proposed by the authors; however, they all can be expanded by adding some flow charts, further explanation, etc.
Authors think that main contributions are declared enough before, through and after these lines.
• Reviewer’s previous demand:
'The most crucial lines in the paper are: 188-191 & 223-224. They are telling the contribution proposed by the authors; however, they all can be expanded by adding some flow charts, further explanation, etc.'
Old author’s response:
'Authors think that main contributions are declared enough before, through and after these lines.'
• New comment
Based on my previous review, I had indicated the main contribution explanatory paragraphs and demanded: 'however, they all can be expanded by adding some flow charts, further explanation, etc.' Visually, it is better to add a 'motivation of the study of things.' This does not mean 'Authors think that main contributions are declared enough before, though, and after these lines.' as you have claimed.
• New authors' response:
New paragraphs, a modified figure, and a flow chart have been added. Figure 3 was replaced with a modified version, Fig. 3: Structure of the DL BiLSTM NN for the BiLSTM estimator. Fig. 4, a flowchart of the proposed BiLSTM-based CSIE training, was added as well. The authors have also made some modifications to the text; kindly refer to the section 'DLNN-BASED CSIE' in the 'Revised manuscript' file.
• Reviewer’s previous question:
The complexity section does not give critical feedback.
Old author’s response:
This section represents a complementary study for this type of work. The section presents much valuable information, such as training time and the feed-forward pass and feedback pass operations complexity for BiLSTM, and LSTM-based estimators.
• New comment
You indicate O(w) -> O(2w) when it becomes “Bidirectional”. What does this bring to the reader? One can guess that if the direction of data processing in a network increases, the timing will also; plus, this is not a new finding. The formulation you utilize; I am not familiar with, please see:
http://cse.iitkgp.ac.in/~psraja/FNNs%20,RNNs%20,LSTM%20and%20BLSTM.pdf
https://arxiv.org/pdf/2103.08212.pdf
Besides, Table 6 is filled with some ratios? What is ':' operator for?
If you could find a proper citation for your highlight on the complexity, giving a cite is welcomed.
• New authors' response:
Done. The authors agree with the reviewer that the reader can guess the submitted information on processing and timing (it does not give critical feedback), so the authors have deleted this section from the paper along with its related Table 6. Kindly refer to the 'Revised manuscript' file.
• Reviewer’s previous question:
Line 192-197: Did you decide on this structure based on any preliminary experiments? Or some of the previous efforts (even though you claim to be the first on this structure; maybe you inspired by any LSTM-based communication system.) Why a structure like that? This should be technically explained in this paragraph.
Old author’s response:
Yes, I decided on this structure based on preliminary experiments and some of the previous efforts. Kindly refer to Ref. [12]. Kindly refer to 'Revised manuscript' file.
• New comment
Then, why not clearly include this inspiration in the place where the first time the network architecture is mentioned (If I am not wrong, Ref. 12 is Liao et al. is only occurring in the introduction)
• New authors' response:
The authors apologize; it was not an intentional mistake. The correct references have been added at line 261; kindly refer to the 'Revised manuscript' file.
Experimental design
• Reviewer’s previous question:
The repeatedly highlighted Adam optimization has no big impact on the overall contribution. What about RMSProp, AdaGrad, etc.?
Old author’s response:
The use of both RMSProp, AdaGrad, and other optimization algorithms such Adadelta, Adagrad, AMSgrad, AdaMax and Nadam is suggested in the future work section. Kindly refer to 'Future work'.
• New Comment
I do not agree with the authors to save some information for future work; at the end of the day, these issues are not mitigated, and the reader still does not know anything about other optimizers. It is better to put a comparison results (if saying repeatedly about Adam’s optimizer), or do not highlight too much of Adam's optimization. Nevertheless, for my overall expectation, as you will see at the end, the comparison result is more expected.
• New authors' response:
Based on the reviewer's recommendation in the 'Comments for the author' (choosing one single architecture like BiLSTM 8 and making comparisons), the authors added a new subsection titled 'IMPACT OF USING DIFFERENT OPTIMIZATION ALGORITHMS ON THE PROPOSED ESTIMATOR PERFORMANCE'. This subsection is devoted to studying the effect of using optimization algorithms other than 'Adam' on the performance of the proposed estimators at different loss functions. The authors have added a new Fig. 13 in heat map format and a related new Table 6. Kindly refer to the 'Revised manuscript' file.
Validity of the findings
• Reviewer’s previous question:
I did not get the point of using BiLSTM instead of LSTM. Why/How your system requires a bidirectional movement on the timeline?
Old author’s response:
All obtained results demonstrate the superior performance of BiLSTM-based estimators compared with conventional estimators and LSTM-based estimators, so the study recommends using BiLSTM-based estimators, especially at low signal-to-noise ratios.
• New Comment
Still, it seems to me “it has been tried and the results are like”. I am asking more mathematical perspective, what are the signal properties of LSTM & BiLSTM; why a bidirectional movement is required on a communication system as indicated. Are there any other advantages at least to use BiLSTM? Each and every day a new NN algorithm is being proposed; next time one other researcher can try enhanced BiLSTM, etc., but this does not seem to be a contribution; at least in your paper, more than the experimental results, one can infer why and how to construct a communication system based on BiLSTM. (For instance, at least you may include an Appendix (diagram, UML, or flowchart) by showing your code structure; thereby giving the related math becomes visually possible).
• New authors' response:
The authors have updated the section titled 'DLNN-BASED CSIE', where you can find:
1- Algorithm for illustrating the operation of BiLSTM in the proposed estimator.
2- Fig. 3: describes the structure of the proposed BiLSTM-based CSIE with more details.
3- Fig.4: describes the learning process.
Kindly refer to the 'Revised manuscript' file.
Comments for the author
• For the author response:
“Since the behavior of neural networks relies on different factors such as the network structure, the learning algorithm, loss functions, the activation functions used in each node, etc.,”
I agree for a research paper to include preliminary insights for the NN & Comm. Syst. collaborations; however, as you have given with your previous paper as you have shared, a thorough inspection on all these parameters is required. There are “different factors” as you have said, however, I cannot catch the main summary of all these factors as I had demanded with my sentences starting “For the further efforts of this paper, I strongly suggest”. You have taken it as a future work; however, my main aim is for this current paper.
Therefore, I still believe that a complete summary of the performance related to the indicated factors should be given. I believe this is the only way to increase the contribution of your paper. You may present a summary of all your experiments (by even increasing) showing the effects of all different loss functions, optimizers, learning rate, network size, etc. At least, you may choose one single architecture like BiLSTM 8, and make a table of all NN dynamics.
By the way, thank you very much for the heat map tables, now, I believe that the related results are clearer.
• New authors' response:
Based on the reviewer's recommendations, the authors have added a new subsection with the title 'Impact of using different optimization algorithms on the proposed estimator performance'. It is noted that there are estimators that have achieved 100% accuracy under the conditions of the currently used network size and learning parameters. Kindly refer to the 'Revised manuscript' file.
" | Here is a paper. Please give your review comments after reading it. |
215 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background: Conventional in vivo methods for post-translational modification site prediction, such as spectrophotometry, Western Blotting, and chromatin immunoprecipitation, can be very expensive and time-consuming. Neural networks (NN) are one of the computational approaches that can effectively predict post-translational modification sites. We developed a neural network model, namely the Sequential and Spatial Methylation Fusion Network (SSMFN), to predict possible methylation sites, one location of the post-translational modification process, on protein sequences.</ns0:p><ns0:p>Method: We designed our model to be able to extract spatial and sequential information from amino acid sequences. A Convolutional Neural Network (CNN) is applied to harness spatial information, while Long Short-Term Memory (LSTM) is applied for sequential data. The latent representations of the CNN and LSTM branches are then fused. Afterwards, we compared the performance of our proposed model to state-of-the-art methylation site prediction models on balanced and imbalanced datasets.</ns0:p><ns0:p>Results: Our model appeared to be better in almost all measurements when trained on the balanced training dataset. On the imbalanced training dataset, all of the models gave better performance since they were trained on more data. In several metrics, our model also surpasses the PRMePred model, which requires a laborious effort for feature extraction and selection. Conclusion: Our model achieved the best performance across different environments in almost all measurements. Also, our results suggest that a NN model trained on a balanced training dataset and tested on an imbalanced dataset will offer high specificity and low sensitivity. Thus, the NN model for methylation site prediction should be trained on an imbalanced dataset, since in the actual application there are far more negative samples than positive samples.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Methylation is a post-translational modification (PTM) process that modifies the functional and conformational changes of a protein. The addition of a methyl group to the protein structure plays a role in the epigenetic process, especially in histones (Lee et al., 2005). Histone methylation in Arginine (R) and Lysine (K) residues substantially affects the level of gene expression along with other PTM processes such as acetylation and phosphorylation <ns0:ref type='bibr' target='#b19'>(Schubert et al., 2006)</ns0:ref>. Moreover, methylation directly alters the regulation, transcription, and structure of chromatin <ns0:ref type='bibr' target='#b1'>(Bedford and Richard, 2005)</ns0:ref>. Genetic alterations through the methylation process induce oncogenes and tumour suppressor genes that play a crucial role in carcinogenesis and cancer metastasis <ns0:ref type='bibr' target='#b26'>(Zhang et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Currently, most of the methods for PTM site prediction are in vivo methods, such as Mass Spectrophotometry, Western Blotting, and Chromatin Immune Precipitation (ChIP). However, computational (in silico) approaches are starting to become more popular for PTM site prediction, especially methylation. Computational approaches for predicting protein methylation sites can be an inexpensive, highly accurate, and fast alternative method applicable to massive data sets. The commonly used computational approaches are Support Vector Machine (SVM) <ns0:ref type='bibr' target='#b4'>(Chen et al., 2006;</ns0:ref><ns0:ref type='bibr' target='#b20'>Shao et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b22'>Shien et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b21'>Shi et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b13'>Lee et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b18'>Qiu et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b24'>Wen et al., 2016)</ns0:ref>, Group-Based Prediction System (GPS) <ns0:ref type='bibr' target='#b7'>(Deng et al., 2017)</ns0:ref>, random forest <ns0:ref type='bibr' target='#b23'>(Wei et al., 2017)</ns0:ref>, and Neural Networks (NN) <ns0:ref type='bibr' target='#b5'>(Chen et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b8'>Hasan and Khatun, 2018;</ns0:ref><ns0:ref type='bibr' target='#b3'>Chaudhari et al., 2020)</ns0:ref>.</ns0:p><ns0:p>The application of the machine learning approach to predicting possible methylation sites on protein sequences has been studied many times before. The latest studies, and the most relevant to our study, were conducted by <ns0:ref type='bibr' target='#b5'>Chen et al. (2018)</ns0:ref> and <ns0:ref type='bibr' target='#b3'>Chaudhari et al. (2020)</ns0:ref>. <ns0:ref type='bibr' target='#b5'>Chen et al. (2018)</ns0:ref> developed MUscADEL (Multiple Scalable Accurate Deep Learner for lysine PTMs), a methylation site prediction model that was trained and tested on human and mice protein data sets. MUscADEL utilized bidirectional Long Short Term Memory (LSTM). <ns0:ref type='bibr' target='#b5'>Chen et al. (2018)</ns0:ref> hypothesized that the order of amino acids in the protein sequence has a significant influence on where the methylation process can occur. The other model, DeepRMethylSite, which was developed by <ns0:ref type='bibr' target='#b3'>Chaudhari et al. (2020)</ns0:ref>, was implemented with a Convolutional Neural Network (CNN) and LSTM. The combination of CNN and LSTM was expected to be able to extract the spatial and sequential information of the amino acid sequences.</ns0:p><ns0:p>Before the practical application by Chaudhari et al. to predict methylation sites, a combination of the LSTM and CNN approach had been used since 2015 by <ns0:ref type='bibr' target='#b25'>Xu et al. (2015)</ns0:ref> to strengthen a face recognition model.</ns0:p><ns0:p>In the natural language processing (NLP) area, Jin <ns0:ref type='bibr'>Wang (2016)</ns0:ref> developed a Dimensional Sentiment Analysis model and suggested that a combination of LSTM and CNN is capable of capturing long-distance dependency and local information patterns. Still related to NLP, Chuhan Wu (2018) developed an LSTM-CNN model with similar architecture to other previous studies, where the CNN layer and LSTM layer were implemented in a serial structure.</ns0:p><ns0:p>In this study, we developed the Sequential and Spatial Methylation Fusion Network (SSMFN) to predict possible methylation sites on the protein sequence. Similar to DeepRMethylSite, SSMFN also utilized CNN and LSTM. However, instead of treating them as an ensemble model, we fused the latent representations of the CNN and LSTM modules. By allowing more relaxed interaction between the CNN and LSTM modules, we hypothesize that the fusion approach can extract better features than the model with the ensemble approach.</ns0:p></ns0:div>
<ns0:div><ns0:head>METHODS</ns0:head></ns0:div>
<ns0:div><ns0:head>Dataset</ns0:head><ns0:p>The dataset used in this study was obtained from a previous methylation site prediction study, <ns0:ref type='bibr' target='#b10'>Kumar et al. (2017)</ns0:ref>. The data was collected through literature review along with those reported in the protein database, Uniprot: the Universal Protein knowledgebase <ns0:ref type='bibr' target='#b0'>(Apweiler et al., 2004)</ns0:ref>, and was experimentally verified in vivo.</ns0:p><ns0:p>As the possible location for methylation is on Arginine (R), the dataset consists of pieces of amino acid sequences with Arginine in the middle of each sequence. Each sequence is 19 characters long, where each character represents an amino acid. Examples of the amino acid sequence collection are shown in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. There are six sets of amino acid sequences: the training dataset, the independent dataset, and the test dataset, where each set has positive and negative amino acid sequence collections, Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>. In this study, the independent dataset is referred to as a test dataset to ease explanation. We also created balanced versions of the training dataset and the validation dataset. The balanced training dataset was created to mimic previous studies' experiments, in which the data was always balanced before being used as a training dataset. The balanced validation dataset was created as a fair comparison variable.</ns0:p></ns0:div>
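As an illustration of how such fixed-length windows could be produced from a full protein sequence, here is a small hypothetical Python sketch; the window length and the centering on Arginine follow the description above, while the padding character is an assumption and not part of the original pipeline:

```python
def arginine_windows(sequence, window=19, pad="X"):
    """Extract `window`-length segments centered on every Arginine (R).

    Residues near the ends are padded with a placeholder character so
    that every window keeps R exactly in the middle (position 9 of 19).
    """
    half = window // 2
    padded = pad * half + sequence + pad * half
    return [padded[i:i + window]
            for i, aa in enumerate(sequence) if aa == "R"]

# Example: every returned string is 19 characters with 'R' at index 9.
print(arginine_windows("MKRTAGSRLLQ"))
```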
<ns0:div><ns0:head>Experiment</ns0:head><ns0:p>In our experiment, we compared the performance of our proposed model to DeepRMethylSite <ns0:ref type='bibr' target='#b3'>(Chaudhari et al., 2020)</ns0:ref>. In addition, we also provide a comparison to a standard multi-layer perceptron model. To explore the performance of the model in various environments, we conducted two experiments. Most of the previous studies carried out their experiments on a balanced dataset. Hence, in the first experiment, we trained the models on the balanced training dataset. In the second experiment, we trained our models on the original dataset, the imbalanced training dataset, to maximize the full potential of the models.</ns0:p><ns0:p>The trained models from both experiments were then validated and tested on the balanced validation dataset, the imbalanced validation dataset, and the test dataset. Lastly, to understand the contribution of each element in the models, we carried out an ablation study on our proposed model. The elements tested and explored in this ablation study were the CNN and LSTM branches of the model. The workflow of this study is shown in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. All models in the experiment were developed using the Python machine learning library PyTorch <ns0:ref type='bibr'>(Paszke et al., 2019)</ns0:ref>. To train the models, we used a Tesla P100 GPU as well as a publicly available GPU instance provided by Google Colab.</ns0:p></ns0:div>
<ns0:div><ns0:head>Spatial and Sequential Methylation Fusion Network (SSMFN)</ns0:head><ns0:p>Our proposed model, Spatial and Sequential Methylation Fusion Network (SSMFN), was designed with the motivation that a protein sequence can be perceived as both spatial and sequential data. The view of a protein sequence as spatial data assumes that the amino acids are arranged in a one-dimensional space. On the other hand, a protein sequence can also be thought of as sequential data by assuming that the next amino acid is the next time step of a particular amino acid. On modelling protein sequences with deep learning, CNN is used when adopting the spatial data view, while LSTM is used for the sequential data. Using the information from both views has been proved to be beneficial by <ns0:ref type='bibr' target='#b3'>Chaudhari et al. (2020)</ns0:ref>, by having an ensemble model of CNN and LSTM that reads the same sequence. However, Chaudhari et al. processed the spatial and sequential views with separate sub-models. As a consequence, their model cannot extract joint spatial-sequential features, which might be beneficial in modelling protein sequences. Having observed that, we constructed SSMFN as a deep learning model with an architecture that can fuse the latent representations of the CNN modules and LSTM modules.</ns0:p><ns0:p>To read the amino acid sequence, SSMFN uses an embedding layer with 21 neurons. This embedding layer is used to enhance the expression of each amino acid. Thus, the number of neurons in this layer matches the number of amino acid variants. Therefore, each type of amino acid can have a different vector representation. The output of this layer is then branched into the LSTM and CNN branches. In the LSTM branch, we created two LSTM layers with 64 neurons each. Every LSTM layer is followed by a dropout layer with a 0.5 drop rate. It is then followed by a fully connected layer at the end of the branch with 32 neurons. This fully connected layer serves as a latent representation generator that will be fused with the latent representation from the CNN branch.</ns0:p><ns0:p>On the other hand, the CNN branch comprises four CNN layers with 64 neurons in each layer. Unlike the LSTM layers, residual connections were utilized in the CNN branch. Each CNN layer is a 2D convolutional layer with rectified linear units (ReLU) as the activation function. Every CNN layer also has a 2D batch normalization layer and a dropout layer which is set at 0.5. At the end of the branch, a fully connected layer with 32 neurons is installed to match the output with the LSTM branch.</ns0:p><ns0:p>Afterwards, the latent representations of both branches are fused with a summation operation. The fused representation then proceeds through a fully connected layer with 2 neurons as the last layer. This layer predicts whether the methylation occurred at the centre of the amino acid sequence or not. The architecture of the proposed model is illustrated in Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>, and the hyperparameter settings are listed in Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>.</ns0:p></ns0:div>
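Since the models were built with PyTorch, the following is a minimal sketch of the fusion idea described above. Layer widths follow Table 3; the convolution kernel size, the residual wiring, and the use of the last LSTM time step are assumptions, and 1D convolutions over the 19 positions are used here for brevity even though the paper specifies 2D convolutional layers:

```python
import torch
import torch.nn as nn

class SSMFNSketch(nn.Module):
    def __init__(self, n_aa=21, seq_len=19, emb=21, hid=64, latent=32):
        super().__init__()
        self.embed = nn.Embedding(n_aa, emb)
        # Sequential branch: two LSTM layers with dropout, then FC to latent.
        self.lstm = nn.LSTM(emb, hid, num_layers=2, dropout=0.5, batch_first=True)
        self.fc_lstm = nn.Linear(hid, latent)
        # Spatial branch: four stacked conv layers (kernel size is an assumption).
        self.convs = nn.ModuleList([
            nn.Conv1d(emb if i == 0 else hid, hid, kernel_size=3, padding=1)
            for i in range(4)])
        self.bns = nn.ModuleList([nn.BatchNorm1d(hid) for _ in range(4)])
        self.drop = nn.Dropout(0.5)
        self.fc_cnn = nn.Linear(hid * seq_len, latent)
        self.out = nn.Linear(latent, 2)

    def forward(self, x):                  # x: (batch, 19) integer-coded residues
        e = self.embed(x)                  # (batch, 19, emb)
        h, _ = self.lstm(e)
        z_seq = self.fc_lstm(h[:, -1])     # last time step -> latent (assumption)
        c = e.transpose(1, 2)              # (batch, emb, 19) for Conv1d
        for i, (conv, bn) in enumerate(zip(self.convs, self.bns)):
            y = self.drop(torch.relu(bn(conv(c))))
            c = y if i == 0 else c + y     # residual connections after 1st layer
        z_spa = self.fc_cnn(c.flatten(1))
        return self.out(z_seq + z_spa)     # fusion by summation, 2-way output

logits = SSMFNSketch()(torch.randint(0, 21, (4, 19)))
```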
<ns0:div><ns0:head>Comparison to Standard Multi-Layer Perceptron</ns0:head><ns0:p>A standard multi-layer perceptron (SMLP) neural network was developed for a comparable measurement to our proposed model. This multi-layer perceptron model is included in this study to provide an insight into the performance of a simple model in solving the methylation site prediction problem. This model consists of an embedding layer followed by two fully connected layers. The embedding layer has 21 neurons as there are 21 types of amino acids. The first fully connected layer has 399 neurons, which comes from 21 (types of amino acids) multiplied by 19 (protein sequence length). The second fully connected layer has two neurons, which also act as the output. The structure of this model is shown in Figure 3.</ns0:p></ns0:div>
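A minimal PyTorch sketch of this baseline under the stated sizes (the ReLU between the two fully connected layers is an assumption, as the activation is not specified):

```python
import torch
import torch.nn as nn

class SMLPSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(21, 21)   # 21 amino-acid types, 21-dim vectors
        self.fc1 = nn.Linear(21 * 19, 399)  # 399 = 21 types x 19 positions
        self.fc2 = nn.Linear(399, 2)        # 2-way output: methylated or not

    def forward(self, x):                   # x: (batch, 19) integer-coded residues
        e = self.embed(x).flatten(1)        # (batch, 399) flattened embedding
        return self.fc2(torch.relu(self.fc1(e)))

logits = SMLPSketch()(torch.randint(0, 21, (4, 19)))
```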
<ns0:div><ns0:head>Evaluation</ns0:head><ns0:p>To evaluate the performance of the proposed model and to compare it to the models from previous studies, we utilized Accuracy (Equation <ns0:ref type='formula'>1</ns0:ref>), Sensitivity (Equation <ns0:ref type='formula'>2</ns0:ref>), Specificity (Equation <ns0:ref type='formula'>3</ns0:ref>), F1 score (Equation <ns0:ref type='formula'>4</ns0:ref>), Matthews Correlation Coefficient (MCC) (Equation <ns0:ref type='formula' target='#formula_0'>5</ns0:ref>), and Area Under Curve (AUC) <ns0:ref type='bibr' target='#b2'>(Bradley, 1997)</ns0:ref>. These parameters are based on our previous research with a focus on predicting protein phosphorylation sites <ns0:ref type='bibr' target='#b15'>(Lumbanraja et al., 2018</ns0:ref><ns0:ref type='bibr' target='#b14'>, 2019)</ns0:ref>. Besides the parameter formulas already listed, the AUC parameter was computed using the scikit-learn library, which is based on the Receiver Operating Characteristic (ROC).</ns0:p></ns0:div>
<ns0:div><ns0:formula xml:id='formula_0'>Accuracy = \frac{TP + TN}{TP + TN + FP + FN} \quad (1)
Sensitivity = \frac{TP}{TP + FN} \quad (2)
Specificity = \frac{TN}{TN + FP} \quad (3)
F1\ score = \frac{TP}{TP + FP + FN} \quad (4)
MCC = \frac{(TP \times TN) - (FP \times FN)}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}} \quad (5)</ns0:formula></ns0:div>
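For concreteness, the metrics in Equations (1)-(5) can be computed from the confusion-matrix counts as in the following Python helper; the F1 form is reproduced exactly as printed above (TP/(TP + FP + FN)) rather than the more common 2TP/(2TP + FP + FN):

```python
from math import sqrt

def metrics(tp, tn, fp, fn):
    """Equations (1)-(5) computed from confusion-matrix counts."""
    acc  = (tp + tn) / (tp + tn + fp + fn)                      # Eq. (1)
    sens = tp / (tp + fn)                                       # Eq. (2)
    spec = tn / (tn + fp)                                       # Eq. (3)
    f1   = tp / (tp + fp + fn)                                  # Eq. (4), as printed
    mcc  = ((tp * tn) - (fp * fn)) / sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))          # Eq. (5)
    return acc, sens, spec, f1, mcc

print(metrics(tp=80, tn=90, fp=10, fn=20))
```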
<ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>The following two tables, Table <ns0:ref type='table' target='#tab_3'>4</ns0:ref> and Table <ns0:ref type='table' target='#tab_4'>5</ns0:ref>, show the results obtained from our ablation study. Afterwards, Table <ns0:ref type='table' target='#tab_5'>6</ns0:ref> and Table <ns0:ref type='table' target='#tab_6'>7</ns0:ref> present the comparative results of our model against the previous models. Table <ns0:ref type='table' target='#tab_5'>6</ns0:ref> shows the performance of the models which were trained on the balanced training dataset, while Table <ns0:ref type='table' target='#tab_6'>7</ns0:ref> shows the performance of the models which were trained on the original, imbalanced training dataset. As seen in Table <ns0:ref type='table' target='#tab_5'>6</ns0:ref>, we added the performance of several methylation site prediction models from previous studies including MeMo <ns0:ref type='bibr' target='#b4'>(Chen et al., 2006)</ns0:ref>, MASA <ns0:ref type='bibr' target='#b22'>(Shien et al., 2009)</ns0:ref>, BPB-PPMS <ns0:ref type='bibr' target='#b20'>(Shao et al., 2009)</ns0:ref>, PMeS <ns0:ref type='bibr' target='#b21'>(Shi et al., 2012)</ns0:ref>, iMethylPseAAC <ns0:ref type='bibr' target='#b18'>(Qiu et al., 2014)</ns0:ref>, PSSMe <ns0:ref type='bibr' target='#b24'>(Wen et al., 2016)</ns0:ref>, MePred-RF <ns0:ref type='bibr' target='#b23'>(Wei et al., 2017)</ns0:ref> and PRmePRed <ns0:ref type='bibr' target='#b10'>(Kumar et al., 2017)</ns0:ref>. The performances of MeMo, MASA, BPB-PPMS, PMeS, iMethylPseAAC, PSSMe and MePred-RF were taken from the <ns0:ref type='bibr' target='#b3'>Chaudhari et al. (2020)</ns0:ref> paper. Meanwhile, the performance of PRmePRed was taken from the <ns0:ref type='bibr' target='#b10'>Kumar et al. (2017)</ns0:ref> paper.</ns0:p></ns0:div>
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>The results of the ablation study in Table <ns0:ref type='table' target='#tab_3'>4</ns0:ref> and Table <ns0:ref type='table' target='#tab_4'>5</ns0:ref> show that the LSTM branch and the CNN branch achieved better performance than the merged model on at least one dataset. However, the merged models were better on most of the datasets, specifically on the test dataset. This fact shows that the merged model has a better generalization capability than the model with only a CNN or an LSTM branch. Continuing to the first experiment, where we trained the models on the balanced training dataset (Table <ns0:ref type='table' target='#tab_5'>6</ns0:ref>), our proposed model appears to have better performance in almost all variables. Additionally, the recreated DeepRMethylSite final result (merged) was never the best performer compared to its CNN branch and its LSTM branch. On the imbalanced validation dataset, our proposed model, SSMFN, gave more than 4% higher accuracy and 6% higher MCC, which is the best parameter for assessing model performance on imbalanced data, compared to the recreated DeepRMethylSite model. On the balanced validation dataset and the test dataset, SSMFN gave 2-4% higher accuracy compared to the recreated DeepRMethylSite.</ns0:p><ns0:p>In Table <ns0:ref type='table' target='#tab_5'>6</ns0:ref>, in the Tested on The Test Dataset section, we added the performance of several other methylation site prediction models from previous studies, which we took from the Chen et al. (2018) and <ns0:ref type='bibr' target='#b3'>Chaudhari et al. (2020)</ns0:ref> research papers. The models from previous studies provide an overview of the models' performance development through generations. The table shows that early models gave very poor accuracy; thus, the discussion only focuses on the newest model, PRmePRed. PRmePRed gave more than 5% higher accuracy than SSMFN. However, this model utilized a feature extraction technique. Thus, this model processes more information than our model, which was only trained on protein sequence data. On the other hand, the Standard Multi-Layer Perceptron neural network model surprisingly gave a good result, slightly better than the recreated DeepRMethylSite on the test dataset. This does not imply that the SMLP neural network has better performance than the recreated DeepRMethylSite, since the SMLP neural network model gave poor performances on the validation datasets, both balanced and imbalanced. When trained on the balanced training dataset and tested on the imbalanced validation dataset, most of the models gave high specificity and low sensitivity. This may happen because the models do not have a sense of the actual proportion between positive and negative samples, resulting in many false negatives.</ns0:p><ns0:p>In the second experiment (Table <ns0:ref type='table' target='#tab_6'>7</ns0:ref>), we trained the models using the imbalanced dataset with a 5 to 1 ratio for negative samples. Overall, our model has better performance when trained on the imbalanced dataset compared to when it is trained on the balanced dataset. When trained on the imbalanced dataset, some of the SSMFN performance measurements are also higher than the PRmePRed performance in Table <ns0:ref type='table' target='#tab_5'>6</ns0:ref>. SSMFN accuracy is 0.36% behind the recreated DeepRMethylSite accuracy on the imbalanced validation dataset. However, it appeared to have better performance on the balanced validation dataset and the test dataset compared to the recreated DeepRMethylSite.</ns0:p></ns0:div><ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>All of the Neural Network models, including ours, gave a high specificity and a low sensitivity when trained on the balanced dataset and tested on the imbalanced dataset. When developing a Neural Network model, the actual application should be one of the main considerations, as there are far more negative samples than positive samples. Regarding the overall performance of the models, our proposed model, SSMFN, gave better performance compared to the recreated DeepRMethylSite on various datasets. Our model also performed better when trained on the larger training dataset, even though the training dataset is not balanced between positive and negative samples. Moreover, when trained using a large and imbalanced dataset, the model has a chance to perform better than the model that uses feature extraction.</ns0:p></ns0:div><ns0:div><ns0:head>ACKNOWLEDGMENTS</ns0:head><ns0:p>This research is a collaboration between the Bioinformatics and Data Science Research Center (BDSRC), Bina Nusantara University, and the Department of Computer Science, Faculty of Mathematics and Natural Science, University of Lampung. The GPU Tesla P100 used to conduct the experiment was provided by NVIDIA - BINUS AIRDC.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Research workflow. The chart shows that the data we used in this research was retrieved from the Kumar et al. (2017) study. The data was subsequently balanced accordingly. In the first experiment, we trained our model using the balanced training dataset. Afterward, we validated and tested the model on the balanced and the imbalanced datasets. We followed a similar workflow for the second experiment. However, instead of training the model on the balanced dataset, we trained it on the imbalanced training dataset.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Proposed Neural Network Architecture.</ns0:figDesc><ns0:graphic coords='6,174.17,63.78,348.69,203.67' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. The Standard Multi-Layer Perceptron Architecture.</ns0:figDesc><ns0:graphic coords='6,248.89,305.19,199.25,141.31' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Protein Sequence Dataset Example</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Amino Acids Sequences Dataset List.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Hyperparameter Settings.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Parameter</ns0:cell><ns0:cell>Settings</ns0:cell></ns0:row><ns0:row><ns0:cell>Learning rate</ns0:cell><ns0:cell>0.001</ns0:cell></ns0:row><ns0:row><ns0:cell>Epochs</ns0:cell><ns0:cell>500</ns0:cell></ns0:row><ns0:row><ns0:cell>Optimizer</ns0:cell><ns0:cell>Adam</ns0:cell></ns0:row><ns0:row><ns0:cell>Embedding layer neurons</ns0:cell><ns0:cell>21</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Embedding layer output dimension 21 x 19 = 399</ns0:cell></ns0:row><ns0:row><ns0:cell>Output layer neurons</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>LSTM Branch</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>LSTM layers neurons</ns0:cell><ns0:cell>64</ns0:cell></ns0:row><ns0:row><ns0:cell>Dropout layers drop rate</ns0:cell><ns0:cell>0.5</ns0:cell></ns0:row><ns0:row><ns0:cell>Fully connected layer neurons</ns0:cell><ns0:cell>32</ns0:cell></ns0:row><ns0:row><ns0:cell>CNN Branch</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>CNN layers neurons</ns0:cell><ns0:cell>64</ns0:cell></ns0:row><ns0:row><ns0:cell>CNN layers activation function</ns0:cell><ns0:cell>Rectified Linear Units</ns0:cell></ns0:row><ns0:row><ns0:cell>Dropout layers drop rate</ns0:cell><ns0:cell>0.5</ns0:cell></ns0:row><ns0:row><ns0:cell>Fully connected layer neurons</ns0:cell><ns0:cell>32</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>First Ablation Study, Trained on The Balanced Training Dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell>Acc</ns0:cell><ns0:cell>F1</ns0:cell><ns0:cell>Sens</ns0:cell><ns0:cell>Spec</ns0:cell><ns0:cell>MCC</ns0:cell><ns0:cell>AUC</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>Validated on The Imbalanced Validation Dataset</ns0:cell></ns0:row><ns0:row><ns0:cell>SSMFN CNN</ns0:cell><ns0:cell cols='6'>0.7891 0.7649 0.5745 0.9368 0.5649 0.8120</ns0:cell></ns0:row><ns0:row><ns0:cell>SSMFN LSTM</ns0:cell><ns0:cell cols='6'>0.8252 0.7985 0.6328 0.9354 0.6148 0.8326</ns0:cell></ns0:row><ns0:row><ns0:cell>SSMFN Merged</ns0:cell><ns0:cell cols='6'>0.8187 0.7943 0.6175 0.9442 0.6143 0.8359</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>Validated on The Balanced Validation Dataset</ns0:cell></ns0:row><ns0:row><ns0:cell>SSMFN CNN</ns0:cell><ns0:cell cols='6'>0.8431 0.8427 0.8767 0.8149 0.6889 0.8120</ns0:cell></ns0:row><ns0:row><ns0:cell>SSMFN LSTM</ns0:cell><ns0:cell cols='6'>0.8302 0.3020 0.8195 0.8417 0.6609 0.8326</ns0:cell></ns0:row><ns0:row><ns0:cell>SSMFN Merged</ns0:cell><ns0:cell cols='6'>0.8360 0.8358 0.8130 0.8626 0.6738 0.8359</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>Tested on The Test Dataset</ns0:cell></ns0:row><ns0:row><ns0:cell>SSMFN CNN</ns0:cell><ns0:cell cols='6'>0.7962 0.7960 0.8105 0.7831 0.5929 0.7962</ns0:cell></ns0:row><ns0:row><ns0:cell>SSMFN LSTM</ns0:cell><ns0:cell cols='6'>0.7981 0.7980 0.8063 0.7903 0.5964 0.7981</ns0:cell></ns0:row><ns0:row><ns0:cell>SSMFN Merged</ns0:cell><ns0:cell cols='6'>0.8115 0.8115 0.8000 0.8240 0.6235 0.8115</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Second Ablation Study, Trained on The Imbalanced Training Dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell>Acc</ns0:cell><ns0:cell>F1</ns0:cell><ns0:cell>Sens</ns0:cell><ns0:cell>Spec</ns0:cell><ns0:cell>MCC</ns0:cell><ns0:cell>AUC</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>Validated on The Imbalanced Validation Dataset</ns0:cell></ns0:row><ns0:row><ns0:cell>SSMFN CNN</ns0:cell><ns0:cell cols='6'>0.8939 0.8502 0.9389 0.8834 0.7230 0.8179</ns0:cell></ns0:row><ns0:row><ns0:cell>SSMFN LSTM</ns0:cell><ns0:cell cols='6'>0.9167 0.8891 0.9100 0.9186 0.7836 0.8704</ns0:cell></ns0:row><ns0:row><ns0:cell>SSMFN Merged</ns0:cell><ns0:cell cols='6'>0.9078 0.8774 0.8895 0.9133 0.7598 0.8596</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>Validated on The Balanced Validation Dataset</ns0:cell></ns0:row><ns0:row><ns0:cell>SSMFN CNN</ns0:cell><ns0:cell cols='6'>0.7529 0.7372 0.9948 0.6698 0.5798 0.8179</ns0:cell></ns0:row><ns0:row><ns0:cell>SSMFN LSTM</ns0:cell><ns0:cell cols='6'>0.8638 0.8624 0.9567 0.8024 0.7560 0.8704</ns0:cell></ns0:row><ns0:row><ns0:cell>SSMFN Merged</ns0:cell><ns0:cell cols='6'>0.8656 0.8640 0.9672 0.8003 0.7491 0.8596</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>Tested on The Test Dataset</ns0:cell></ns0:row><ns0:row><ns0:cell>SSMFN CNN</ns0:cell><ns0:cell cols='6'>0.7404 0.7228 0.9845 0.6598 0.5566 0.7404</ns0:cell></ns0:row><ns0:row><ns0:cell>SSMFN LSTM</ns0:cell><ns0:cell cols='6'>0.8442 0.8418 0.9590 0.7754 0.7110 0.8442</ns0:cell></ns0:row><ns0:row><ns0:cell>SSMFN Merged</ns0:cell><ns0:cell cols='6'>0.8462 0.8435 0.9688 0.7744 0.7173 0.8462</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>First Experiment, Trained on The Balanced Training Dataset.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Second Experiment, Trained on The Imbalanced Training Dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell>Acc</ns0:cell><ns0:cell>F1</ns0:cell><ns0:cell>Sens</ns0:cell><ns0:cell>Spec</ns0:cell><ns0:cell>MCC</ns0:cell><ns0:cell>AUC</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>Validated on The Imbalanced Validation Dataset</ns0:cell></ns0:row><ns0:row><ns0:cell>DeepRMethylSite CNN</ns0:cell><ns0:cell cols='6'>0.8948 0.8550 0.9072 0.8916 0.7242 0.8283</ns0:cell></ns0:row><ns0:row><ns0:cell>DeepRMethylSite LSTM</ns0:cell><ns0:cell cols='6'>0.9092 0.8782 0.9044 0.9106 0.7634 0.8576</ns0:cell></ns0:row><ns0:row><ns0:cell>DeepRMethylSite Merged</ns0:cell><ns0:cell cols='6'>0.9114 0.8808 0.9047 0.9115 0.7693 0.8589</ns0:cell></ns0:row><ns0:row><ns0:cell>SMLP</ns0:cell><ns0:cell cols='6'>0.9071 0.8670 0.9973 0.8873 0.7635 0.8295</ns0:cell></ns0:row><ns0:row><ns0:cell>SSMFN Merged</ns0:cell><ns0:cell cols='6'>0.9078 0.8774 0.8895 0.9133 0.7598 0.8596</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>Validated on The Balanced Validation Dataset</ns0:cell></ns0:row><ns0:row><ns0:cell>DeepRMethylSite CNN</ns0:cell><ns0:cell cols='6'>0.8289 0.8249 0.9709 0.7527 0.6899 0.8283</ns0:cell></ns0:row><ns0:row><ns0:cell>DeepRMethylSite LSTM</ns0:cell><ns0:cell cols='6'>0.8576 0.8557 0.9644 0.7908 0.7350 0.8576</ns0:cell></ns0:row><ns0:row><ns0:cell>DeepRMethylSite Merged</ns0:cell><ns0:cell cols='6'>0.8585 0.8567 0.9645 0.7919 0.7365 0.8589</ns0:cell></ns0:row><ns0:row><ns0:cell>SMLP</ns0:cell><ns0:cell cols='6'>0.7582 0.7432 1.0000 0.6740 0.5899 0.8295</ns0:cell></ns0:row><ns0:row><ns0:cell>SSMFN Merged</ns0:cell><ns0:cell cols='6'>0.8656 0.8640 0.9672 0.8003 0.7491 0.8596</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>Tested on The Test Dataset</ns0:cell></ns0:row><ns0:row><ns0:cell>DeepRMethylSite CNN</ns0:cell><ns0:cell cols='6'>0.7808 0.7727 0.9506 0.7039 0.6063 0.7808</ns0:cell></ns0:row><ns0:row><ns0:cell>DeepRMethylSite LSTM</ns0:cell><ns0:cell cols='6'>0.8115 0.8070 0.9500 0.7382 0.6548 0.8115</ns0:cell></ns0:row><ns0:row><ns0:cell>DeepRMethylSite Merged</ns0:cell><ns0:cell cols='6'>0.8135 0.8088 0.9553 0.7390 0.6598 0.8135</ns0:cell></ns0:row><ns0:row><ns0:cell>SMLP</ns0:cell><ns0:cell cols='6'>0.7250 0.7025 1.0000 0.6452 0.5388 0.7250</ns0:cell></ns0:row><ns0:row><ns0:cell>SSMFN Merged</ns0:cell><ns0:cell cols='6'>0.8462 0.8435 0.9688 0.7744 0.7173 0.8462</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Reviewer 1
Basic reporting
1) Please attention to some word mistakes, such as “swift and effectively” and “performers” in the
abstract. Please check the words and grammar of the whole paper again.
Ans: We have revised the mentioned mistakes above. We also proofread the whole manuscript again.
Thank you for your correction.
2) The Fig. 3 can be more professional and informative drawn by specialized tools, such as
PlotNeuralNet.
Ans: Thanks for your advice. We redraw our chart in Fig. 2 'Proposed Neural Network Architecture' and
Fig. 3 'The Standard Multi-Layer Perceptron Architecture' (formerly Fig. 3 and Fig. 4) in a more
professional style based on the PlotNeuralNet library.
Experimental design
Comparative experiments are not sufficient. Please give more results compared with other methods on
more datasets.
Ans: Thank you for your suggestion. We have added the performances of several models from previous
studies to Table 6 “First Experiment, Trained on The Balanced Training Dataset” (formerly Table 5) to give
an overview of the models’ performance development through generation. The numbers show that
models’ performance was gradually improved over time. For a fair comparison to our model that is based
on neural networks, we only focused on the only model that was developed using a neural network in this
topic, DeepRMethylSite. This model is also the most recent and the best model.
For the dataset, we used the same data that was used in Kumar et al study. In almost all methylation site
prediction studies, the models' performance was compared only on one dataset.
Validity of the findings
1)I think that fusing CNN and LSTM branch with a weighted operation is more rational. Please give
reasons for simple summation.
Ans: The reason why we did not use a weighted operation like DeepRMethylSite did is that we wanted to avoid manual calibration. Since the addition operation is part of the neural network architecture, the result of the addition is propagated back into the model when the model is trained. Therefore, by merging the results from the CNN branch and the LSTM branch using simple summation inside the neural network architecture, the model automatically adjusts the weights inside the CNN and LSTM layers. This should deliver better performance, which is later confirmed by the results of our study.
2)DeepRMethylSite uses one-hot encoding, where each amino acid is defined as a 20 length vector,
with only one of the 20 bits as 1. And SSMFN uses an embedding layer with 21 neurons to encode different
amino acids. What is the difference of those two representations and advantages of SSMFN?
Ans: In recent years, embedding has gained popularity because the embedding layer shows better results for dealing with categorical data compared to one-hot encoding. Both embedding and one-hot encoding are used to make the representation of categorical data more expressive. However, by using embedding layers, the process of creating individual vectors for each amino acid resides within the neural network architecture. Thus, the vector representation for each amino acid is calibrated to gain optimal performance through backpropagation when the model is trained.
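A small PyTorch illustration of the difference described above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

ids = torch.tensor([3, 7, 3])          # integer-coded amino acids

# One-hot: fixed, non-trainable 21-dim vectors.
onehot = F.one_hot(ids, num_classes=21).float()

# Embedding: trainable 21-dim vectors, updated by backpropagation so that
# each amino acid's representation is calibrated during training.
emb = nn.Embedding(num_embeddings=21, embedding_dim=21)
learned = emb(ids)
```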
Comments for the Author
This paper proposes a neural network model which combines CNN and LSTM to extract spatial and
sequential information from amino acid sequences, respectively. The writing structure is reasonable, but
there are some problems. Please see the detailed comments above.
Reviewer 2
Basic reporting
no comment
Experimental design
no comment
Validity of the findings
no comment
Comments for the Author
1. We note that your model achieves good results, but the explanation of the structure of the model is
not sufficient, and we hope it can be improved.
Ans: We have improved the explanation of the architecture of the proposed neural network. Also, we have improved our image in Fig. 2 “Proposed Neural Network Architecture” (formerly Fig. 3) with a more professional and representative style to aid understanding.
2. For Figure2, we would like you to have more visual representations and less textual representations.
Ans: Thank you for your suggestion. We redrew our chart in Fig. 1 “Research workflow” (formerly Fig. 2) with more visual representations.
3. You can try to center the content in Table1 to make it look more beautiful.
Ans: Thank you for the suggestion. We have revised the format of the tables based on your suggestion.
4. The literature review of the manuscript need to be improved by comparing similar papers. Thus, the
readers can compare the results of various works and the main novelty of this paper can be easily
identified.
Ans: Thank you for your suggestion. We have added more previous studies in the introduction section.
We also added the comparison of the performances of the several models from the added previous
studies to Table 6 “First Experiment, Trained on The Balanced Training Dataset” (formerly Table 5).
" | Here is a paper. Please give your review comments after reading it. |
216 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background: Conventional in vivo methods for post-translational modification site prediction, such as spectrophotometry, Western Blotting, and chromatin immunoprecipitation, can be very expensive and time-consuming. Neural networks (NN) are one of the computational approaches that can effectively predict post-translational modification sites. We developed a neural network model, namely the Sequential and Spatial Methylation Fusion Network (SSMFN), to predict possible methylation sites, one location of the post-translational modification process, on protein sequences.</ns0:p><ns0:p>Method: We designed our model to be able to extract spatial and sequential information from amino acid sequences. A Convolutional Neural Network (CNN) is applied to harness spatial information, while Long Short-Term Memory (LSTM) is applied for sequential data. The latent representations of the CNN and LSTM branches are then fused. Afterwards, we compared the performance of our proposed model to state-of-the-art methylation site prediction models on balanced and imbalanced datasets.</ns0:p><ns0:p>Results: Our model appeared to be better in almost all measurements when trained on the balanced training dataset. On the imbalanced training dataset, all of the models gave better performance since they were trained on more data. In several metrics, our model also surpasses the PRMePred model, which requires a laborious effort for feature extraction and selection. Conclusion: Our model achieved the best performance across different environments in almost all measurements. Also, our results suggest that a NN model trained on a balanced training dataset and tested on an imbalanced dataset will offer high specificity and low sensitivity. Thus, the NN model for methylation site prediction should be trained on an imbalanced dataset, since in the actual application there are far more negative samples than positive samples.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Methylation is a post-translational modification (PTM) process that modifies the functional and conformational changes of a protein. The addition of a methyl group to the protein structure plays a role in the epigenetic process, especially in histones <ns0:ref type='bibr' target='#b13'>(Lee et al., 2005)</ns0:ref>. Histone methylation in Arginine (R) and Lysine (K) residues substantially affects the level of gene expression along with other PTM processes such as acetylation and phosphorylation <ns0:ref type='bibr' target='#b22'>(Schubert et al., 2006)</ns0:ref>. Moreover, methylation directly alters the regulation, transcription, and structure of chromatin <ns0:ref type='bibr' target='#b1'>(Bedford and Richard, 2005)</ns0:ref>. Genetic alterations through the methylation process induce oncogenes and tumor suppressor genes that play a crucial role in carcinogenesis and cancer metastasis <ns0:ref type='bibr' target='#b29'>(Zhang et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Currently, most of the methods for PTM site prediction were conducted by implementing in vivo methods, such as Mass Spectrophotometry, Western Blotting, and Chromatin Immune Precipitation (ChIP). However, computational (in silico) approaches are starting to become more popular for PTM site prediction, especially methylation. Computational approaches for predicting protein methylation sites can be an inexpensive, highly accurate, and fast alternative method applicable to massive data sets. The commonly used computational approaches are Support Vector Machine (SVM) <ns0:ref type='bibr' target='#b4'>(Chen et al., 2006;</ns0:ref><ns0:ref type='bibr' target='#b23'>Shao et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b25'>Shien et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b24'>Shi et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b14'>Lee et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b21'>Qiu et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b27'>Wen et al., 2016)</ns0:ref>, Group-Based Prediction System (GPS) <ns0:ref type='bibr' target='#b7'>(Deng et al., 2017)</ns0:ref>, random forest <ns0:ref type='bibr' target='#b26'>(Wei et al., 2017)</ns0:ref>, and Neural Network (NN) <ns0:ref type='bibr' target='#b5'>(Chen et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b9'>Hasan and Khatun, 2018;</ns0:ref><ns0:ref type='bibr' target='#b3'>Chaudhari et al., 2020)</ns0:ref>.</ns0:p><ns0:p>The application of the machine learning approach to predicting possible methylation sites on protein sequences has been studied in numerous previous studies. The latest and most relevant studies to our study were conducted by <ns0:ref type='bibr' target='#b5'>Chen et al. (2018)</ns0:ref> and <ns0:ref type='bibr' target='#b3'>Chaudhari et al. (2020)</ns0:ref>. <ns0:ref type='bibr' target='#b5'>Chen et al. (2018)</ns0:ref> developed MUscADEL (Multiple Scalable Accurate Deep Learner for lysine PTMs), a methylation site prediction model that was trained and tested on human and mice protein data sets. MUscADEL utilized bidirectional Long Short Term Memory (LSTM) <ns0:ref type='bibr' target='#b8'>(Graves and Schmidhuber, 2005)</ns0:ref>. Furthermore, <ns0:ref type='bibr' target='#b5'>Chen et al. (2018)</ns0:ref> hypothesized that the order of amino acids in the protein sequence has a significant influence on the location where the methylation process can occur. The other model, DeepRMethylSite, was developed by <ns0:ref type='bibr' target='#b3'>Chaudhari et al. (2020)</ns0:ref>. The model was implemented with the combination of a Convolutional Neural Network (CNN) and LSTM. The combination was expected to be able to extract the spatial and sequential information of the amino acid sequences.</ns0:p><ns0:p>Before the practical application by Chaudhari et al. to predict methylation sites, a combination of the LSTM and CNN approach had been implemented since 2015 by <ns0:ref type='bibr' target='#b28'>Xu et al. (2015)</ns0:ref> to strengthen a face recognition model. This combination was also found in the natural language processing (NLP) area. For instance, Jin <ns0:ref type='bibr'>Wang (2016)</ns0:ref> developed a Dimensional Sentiment Analysis model and suggested that a combination of LSTM and CNN is capable of capturing long-distance dependency and local information patterns. Related to NLP, Chuhan <ns0:ref type='bibr' target='#b6'>Wu (2018)</ns0:ref> developed an LSTM-CNN model with similar architecture to other previous studies, where the CNN layer and LSTM layer were implemented in a serial structure. Recently, the combination of CNN and LSTM was also applied to educational data <ns0:ref type='bibr' target='#b19'>(Prabowo et al., 2021)</ns0:ref>.</ns0:p><ns0:p>In this study, we developed the Sequential and Spatial Methylation Fusion Network (SSMFN) to predict possible methylation sites on the protein sequence. Similar to DeepRMethylSite, SSMFN also utilized CNN and LSTM. However, instead of treating them as an ensemble model, we fused the latent representations of the CNN and LSTM modules. By allowing more relaxed interaction between the CNN and LSTM modules, we hypothesized that the fusion approach can extract better features than the model with the ensemble approach.</ns0:p></ns0:div>
<ns0:div><ns0:head>METHODS</ns0:head></ns0:div>
<ns0:div><ns0:head>Dataset</ns0:head><ns0:p>The dataset in this study was obtained from the previous methylation site prediction study by <ns0:ref type='bibr' target='#b12'>Kumar et al. (2017)</ns0:ref>. The data was collected from other studies as well as from the Uniprot protein database <ns0:ref type='bibr' target='#b0'>(Apweiler et al., 2004)</ns0:ref>. The collected data was furthermore experimentally verified in vivo.</ns0:p><ns0:p>The dataset comprises sequences of 19 amino acids with Arginine in the middle of the sequence, because the possible location for methylation is on Arginine (R). These sequences are segments from the full amino acid sequences. Examples of the amino acid sequences in this dataset are shown in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Experiment</ns0:head><ns0:p>Firstly, to understand the contribution of each element in the proposed model, we carried out an ablation study on our proposed model. The elements tested and explored in this ablation study were the CNN and LSTM branches of the model. Afterward, we compared the performance of our proposed model to DeepRMethylSite <ns0:ref type='bibr' target='#b3'>(Chaudhari et al., 2020)</ns0:ref>. Additionally, we also provided a comparison to a standard multi-layer perceptron model. To measure the effect of the data distribution (balanced or imbalanced), we conducted separate experiments for the balanced and the original imbalanced datasets. Afterward, the trained models from both experiments were validated and tested on the balanced validation dataset, the imbalanced validation dataset, and the test dataset. The workflow of this study is illustrated in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. All models in the experiment were developed using the Python machine learning library PyTorch <ns0:ref type='bibr'>(Paszke et al., 2019)</ns0:ref>. To train the models, we utilized an NVIDIA Tesla P100 Graphics Processing Unit (GPU) as well as a publicly available GPU instance provided by Google Colab.</ns0:p></ns0:div>
<ns0:div><ns0:head>Spatial and Sequential Methylation Fusion Network (SSMFN)</ns0:head><ns0:p>Our proposed model, the Spatial and Sequential Methylation Fusion Network (SSMFN), was designed with the motivation that a protein sequence can be perceived as both spatial and sequential data. The view of a protein sequence as spatial data assumes that the amino acids are arranged in a one-dimensional space. On the other hand, a protein sequence can also be treated as sequential data by assuming that each amino acid is the next time step after the preceding one. When modelling protein sequences with deep learning, a CNN is applied under the spatial view, while an LSTM is applied under the sequential view. Using the information from both views has been shown to be beneficial by <ns0:ref type='bibr' target='#b3'>Chaudhari et al. (2020)</ns0:ref>. Having observed that, we constructed SSMFN as a deep learning model with an architecture that fuses the latent representations of a CNN module and an LSTM module.</ns0:p><ns0:p>To read the amino acid sequence, SSMFN applies an embedding layer with 21 neurons. This embedding layer is used to enhance the expression of each amino acid; the number of neurons in this layer matches the number of amino acid variants, so each type of amino acid can have a different vector representation. The output of this layer is then split into the LSTM and CNN branches. In the LSTM branch, we created two LSTM layers with 64 neurons each. Every LSTM layer is followed by a dropout layer with a 0.5 drop rate. The branch ends with a fully connected layer with 32 neurons. This fully connected layer serves as a latent representation generator whose output is fused with the latent representation from the CNN branch.</ns0:p><ns0:p>In contrast, the CNN branch comprises four CNN layers with 64 neurons in each layer. Unlike the LSTM layers, residual connections are utilized in the CNN branch. Each CNN layer is a 2D convolutional layer with rectified linear units (ReLU) as the activation function, followed by a 2D batch normalization layer and a dropout layer with a 0.5 drop rate. At the end of the branch, a fully connected layer with 32 neurons is installed to match the output dimension of the LSTM branch.</ns0:p><ns0:p>In the next step, the latent representations of both branches are fused with a summation operation. The fused representation is subsequently processed through a fully connected layer with 2 neurons as the last layer. This layer predicts whether or not methylation occurs at the center of the amino acid sequence. The architecture of the proposed model is illustrated in Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref> and the hyperparameter settings are listed in Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>. The code of this model can be found at the following link: https://github.com/bharuno/SSMFN-Methylation-Analysis.</ns0:p></ns0:div>
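To make the architecture concrete, here is a minimal PyTorch sketch of SSMFN as described above and in Table 3. Kernel sizes, padding, and the exact placement of the residual connections are assumptions; the authors' repository (https://github.com/bharuno/SSMFN-Methylation-Analysis) holds the definitive implementation.

```python
import torch
import torch.nn as nn

class SSMFN(nn.Module):
    def __init__(self, vocab=21, seq_len=19, emb=21):
        super().__init__()
        self.embedding = nn.Embedding(vocab, emb)        # 21 x 19 = 399 outputs

        # --- LSTM branch: two 64-unit LSTM layers, each followed by dropout ---
        self.lstm1 = nn.LSTM(emb, 64, batch_first=True)
        self.lstm2 = nn.LSTM(64, 64, batch_first=True)
        self.drop = nn.Dropout(0.5)
        self.fc_lstm = nn.Linear(64, 32)                 # latent representation

        # --- CNN branch: four 64-filter 2D conv layers with BN and dropout ---
        def block(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 64, kernel_size=3, padding=1),
                nn.BatchNorm2d(64), nn.ReLU(), nn.Dropout(0.5))
        self.conv1 = block(1)
        self.conv2, self.conv3, self.conv4 = block(64), block(64), block(64)
        self.fc_cnn = nn.Linear(64 * seq_len * emb, 32)  # latent representation

        self.out = nn.Linear(32, 2)                      # methylated or not

    def forward(self, x):                                # x: (batch, 19) int64
        e = self.embedding(x)                            # (batch, 19, 21)

        h, _ = self.lstm1(e)
        h, _ = self.lstm2(self.drop(h))
        lstm_latent = self.fc_lstm(self.drop(h[:, -1]))  # last time step

        c = self.conv1(e.unsqueeze(1))                   # (batch, 64, 19, 21)
        c = self.conv2(c) + c                            # residual (assumed from layer 2)
        c = self.conv3(c) + c
        c = self.conv4(c) + c
        cnn_latent = self.fc_cnn(c.flatten(1))

        return self.out(lstm_latent + cnn_latent)        # fusion by summation
```

A forward pass on a batch of encoded windows, e.g. `SSMFN()(torch.zeros(8, 19, dtype=torch.long))`, yields logits of shape (8, 2).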
<ns0:div><ns0:head>Comparison to DeepRMethylSite</ns0:head><ns0:p>For a fair comparison of our proposed model against the state of the art in methylation site prediction, we re-trained DeepRMethylSite <ns0:ref type='bibr' target='#b3'>(Chaudhari et al., 2020)</ns0:ref> on the same dataset used by our proposed model. To obtain optimal DeepRMethylSite performance on our dataset, we adjusted several hyperparameters. Firstly, we changed the LSTM branch optimizer from Adadelta to Adam. Secondly, we removed the recurrent dropout layers in the LSTM branch. Finally, we set the maximum number of epochs to 500.</ns0:p></ns0:div>
<ns0:div><ns0:head>Evaluation</ns0:head><ns0:p>To evaluate the performance of the proposed model and to compare it to the models from previous studies, we utilized Accuracy (Equation <ns0:ref type='formula'>1</ns0:ref>), Sensitivity (Equation <ns0:ref type='formula'>2</ns0:ref>), Specificity (Equation <ns0:ref type='formula'>3</ns0:ref>), F1 score (Equation <ns0:ref type='formula'>4</ns0:ref>), Matthews Correlation Coefficient (MCC) (Equation <ns0:ref type='formula'>5</ns0:ref>), and Area Under Curve (AUC) <ns0:ref type='bibr' target='#b2'>(Bradley, 1997)</ns0:ref>. These metrics were commonly employed in previous research on protein phosphorylation site prediction <ns0:ref type='bibr' target='#b16'>(Lumbanraja et al., 2018</ns0:ref><ns0:ref type='bibr' target='#b15'>, 2019)</ns0:ref>. The AUC was computed using the scikit-learn library from the Receiver Operating Characteristic (ROC) of the models' performance.</ns0:p></ns0:div>
<ns0:div><ns0:formula xml:id='formula_0'>
\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \quad (1)
\qquad
\mathrm{Sensitivity} = \frac{TP}{TP + FN} \quad (2)
\qquad
\mathrm{Specificity} = \frac{TN}{TN + FP} \quad (3)
\qquad
F1 = \frac{2\,TP}{2\,TP + FP + FN} \quad (4)
\qquad
\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}} \quad (5)
</ns0:formula></ns0:div>
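The sketch below computes the metrics in Equations 1-5 directly from confusion-matrix counts, with AUC obtained via scikit-learn as stated above; the function and variable names are illustrative only, not identifiers from the released code.

```python
from math import sqrt
from sklearn.metrics import roc_auc_score

def metrics(tp, tn, fp, fn):
    acc = (tp + tn) / (tp + tn + fp + fn)                  # Eq. 1
    sens = tp / (tp + fn)                                  # Eq. 2
    spec = tn / (tn + fp)                                  # Eq. 3
    f1 = 2 * tp / (2 * tp + fp + fn)                       # Eq. 4
    mcc = (tp * tn - fp * fn) / sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))     # Eq. 5
    return acc, sens, spec, f1, mcc

# AUC is computed from predicted scores rather than counts:
# auc = roc_auc_score(y_true, y_score)
```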
<ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref> and Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref> show the results obtained from our ablation study. Meanwhile, Table <ns0:ref type='table' target='#tab_7'>6 and Table 7</ns0:ref> summarize the comparative results of our model against the previous models with the balanced and imbalanced training datasets, respectively. In Table <ns0:ref type='table' target='#tab_6'>6</ns0:ref>, we also added the performance of several methylation site prediction models from previous studies, including MeMo <ns0:ref type='bibr' target='#b4'>(Chen et al., 2006)</ns0:ref>, MASA <ns0:ref type='bibr' target='#b25'>(Shien et al., 2009)</ns0:ref>, BPB-PPMS <ns0:ref type='bibr' target='#b23'>(Shao et al., 2009)</ns0:ref>, PMeS <ns0:ref type='bibr' target='#b24'>(Shi et al., 2012)</ns0:ref>, iMethylPseAAC <ns0:ref type='bibr' target='#b21'>(Qiu et al., 2014)</ns0:ref>, PSSMe <ns0:ref type='bibr' target='#b27'>(Wen et al., 2016)</ns0:ref>, MePred-RF <ns0:ref type='bibr' target='#b26'>(Wei et al., 2017)</ns0:ref>, and PRmePRed <ns0:ref type='bibr' target='#b12'>(Kumar et al., 2017)</ns0:ref>. The performances of MeMo, MASA, BPB-PPMS, PMeS, iMethylPseAAC, PSSMe, and MePred-RF were reported by <ns0:ref type='bibr' target='#b3'>Chaudhari et al. (2020)</ns0:ref>, while the performance of PRmePRed was reported by <ns0:ref type='bibr' target='#b12'>Kumar et al. (2017)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>The results of the ablation study in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref> and Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref> show that the LSTM branch and the CNN branch each achieved better performance than the merged model on at least one dataset. However, the merged models achieved better performance on most of the datasets, particularly the test dataset. This indicates that the merged model has better generalization capability than the model with only a CNN or an LSTM branch.</ns0:p><ns0:p>In the experiment on the balanced training dataset, our proposed model achieved the best performance among all NN models in every metric except sensitivity. Interestingly, the final (merged) DeepRMethylSite did not outperform its CNN and LSTM branches in all metrics. On the imbalanced validation dataset, our proposed model, SSMFN, has more than 4% higher accuracy and 6% higher MCC, the most informative metric for assessing model performance on imbalanced data, than the DeepRMethylSite model. On the balanced validation dataset and the test dataset, SSMFN has 2-4% higher accuracy than DeepRMethylSite.</ns0:p><ns0:p>In Table <ns0:ref type='table' target='#tab_6'>6</ns0:ref>, we also present the performance of other methylation site prediction models from previous studies as reported by <ns0:ref type='bibr' target='#b5'>Chen et al. (2018)</ns0:ref> and <ns0:ref type='bibr' target='#b3'>Chaudhari et al. (2020)</ns0:ref>. These models provide an overview of the performance of non-neural-network models. The best non-neural-network model, PRmePRed, has more than 5% higher accuracy than SSMFN. However, it should be noted that non-neural-network models, including PRmePRed, require heavy feature engineering. This introduces manual labor that can be avoided by using modern NN models, also known as deep learning. Interestingly, the SMLP model provided slightly better performance than DeepRMethylSite on the test dataset. This does not imply that the SMLP model outperforms DeepRMethylSite, as SMLP performs relatively poorly on both the balanced and the imbalanced validation datasets. This result suggests that, for practical purposes, methylation site prediction models should be trained on a dataset with its natural distribution rather than a balanced one.</ns0:p><ns0:p>In the second experiment, we trained the models using the imbalanced dataset with a 5:1 ratio of negative to positive samples. Overall, our model achieved better performance when trained on the imbalanced dataset than on the balanced dataset. Trained on the imbalanced dataset, SSMFN can even outperform PRmePRed in several metrics. SSMFN's accuracy is 0.36% lower than DeepRMethylSite's on the imbalanced validation dataset; however, SSMFN performs better on the balanced validation dataset and the test dataset.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>In general, our proposed model, SSMFN, provided better performance than DeepRMethylSite. Our model also performed better when trained on the imbalanced training dataset; in several metrics it even outperformed the model that relies on feature engineering. Additionally, we observed that all the NN models, including ours, achieved high specificity and low sensitivity when trained on the balanced dataset and tested on the imbalanced dataset. This suggests that future work should consider training on a dataset with the original distribution. Doing so exposes the models to the real distribution of the methylation site prediction task, which has far more negative than positive samples, leading to better performance in practice. </ns0:p></ns0:div>
<ns0:div><ns0:head>ACKNOWLEDGMENTS</ns0:head><ns0:p>This research is a collaboration between the Bioinformatics and Data Science Research Center (BDSRC), Bina Nusantara University, and the Department of Computer Science, Faculty of Mathematics and Natural Sciences, University of Lampung. The GPU Tesla P100 used to conduct the experiment was provided by NVIDIA - BINUS AIRDC.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>The dataset was split into three datasets: training, validation, and independent (test). Each dataset contains positive and negative samples, where positive samples are sequences in which methylation occurs at the middle amino acid. Because the original dataset was imbalanced, previous studies often constructed a new balanced dataset to improve the performance of their models; this practice is needed because most machine learning methods are not robust to imbalanced training data. Following the typical practice in previous studies, we also created a balanced training dataset as well as a balanced validation dataset for a fair comparison.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Research workflow. The chart shows that the data used in this research were retrieved from the Kumar et al. (2017) study and then balanced accordingly. In the first experiment, we trained our model on the balanced training dataset, then validated and tested it on the balanced and imbalanced datasets. We followed a similar workflow in the second experiment, except that we trained the model on the imbalanced training dataset.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Proposed Neural Network Architecture.</ns0:figDesc><ns0:graphic coords='6,174.17,63.78,348.69,203.67' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. The Standard Multi-Layer Perceptron Architecture.</ns0:figDesc><ns0:graphic coords='6,248.89,416.75,199.25,141.31' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>When trained on the balanced training dataset and tested on the imbalanced validation dataset, most of the models have high specificity and low sensitivity. This phenomenon is expected since the training and test datasets have different distributions. Because the distribution of methylation is naturally imbalanced, models intended for practical use should be trained on the original distribution.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Protein Sequence Dataset Example. Each sequence is 19 amino acids long with Arginine (R) at the 10th position; middle positions are abbreviated.</ns0:figDesc><ns0:table>
No | 1st | 2nd | 3rd | .. | 8th | 9th | 10th | 11th | 12th | .. | 17th | 18th | 19th
1  | V   | E   | S   | .. | V   | T   | R    | L    | H    | .. | H    | M    | N
2  | K   | N   | H   | .. | I   | S   | R    | H    | H    | .. | D    | P    | Q
3  | H   | P   | P   | .. | R   | L   | R    | G    | I    | .. | W    | D    | H
.. | ..  | ..  | ..  | .. | ..  | ..  | ..   | ..   | ..   | .. | ..   | ..   | ..
n  | R   | S   | I   | .. | A   | C   | R    | I    | R    | .. | K    | W    | Y
</ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Amino Acids Sequences Dataset List.</ns0:figDesc><ns0:table>
Data class          | Label    | n sequences
Training            | Positive | 1038
                    | Negative | 5190
Balanced Training   | Positive | 1038
                    | Negative | 1038
Validation          | Positive | 1131
                    | Negative | 3033
Balanced Validation | Positive | 1131
                    | Negative | 1131
Independent (Test)  | Positive | 260
                    | Negative | 260
</ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Hyperparameter Settings.</ns0:figDesc><ns0:table>
Parameter                        | Settings
Learning rate                    | 0.001
Epochs                           | 500
Optimizer                        | Adam
Embedding layer neurons          | 21
Embedding layer output dimension | 21 x 19 = 399
Output layer neurons             | 2
LSTM Branch
LSTM layers neurons              | 64
Dropout layers drop rate         | 0.5
Fully connected layer neurons    | 32
CNN Branch
CNN layers neurons               | 64
CNN layers activation function   | Rectified Linear Units (ReLU)
Dropout layers drop rate         | 0.5
Fully connected layer neurons    | 32
</ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The First Ablation Study, Trained on the Balanced Training Dataset.</ns0:figDesc><ns0:table>
Model        | Acc    | F1     | Sens   | Spec   | MCC    | AUC
Validated on the Imbalanced Validation Dataset
SSMFN CNN    | 0.7891 | 0.7649 | 0.5745 | 0.9368 | 0.5649 | 0.8120
SSMFN LSTM   | 0.8252 | 0.7985 | 0.6328 | 0.9354 | 0.6148 | 0.8326
SSMFN Merged | 0.8187 | 0.7943 | 0.6175 | 0.9442 | 0.6143 | 0.8359
Validated on the Balanced Validation Dataset
SSMFN CNN    | 0.8431 | 0.8427 | 0.8767 | 0.8149 | 0.6889 | 0.8120
SSMFN LSTM   | 0.8302 | 0.3020 | 0.8195 | 0.8417 | 0.6609 | 0.8326
SSMFN Merged | 0.8360 | 0.8358 | 0.8130 | 0.8626 | 0.6738 | 0.8359
Tested on the Test Dataset
SSMFN CNN    | 0.7962 | 0.7960 | 0.8105 | 0.7831 | 0.5929 | 0.7962
SSMFN LSTM   | 0.7981 | 0.7980 | 0.8063 | 0.7903 | 0.5964 | 0.7981
SSMFN Merged | 0.8115 | 0.8115 | 0.8000 | 0.8240 | 0.6235 | 0.8115
</ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>The Second Ablation Study, Trained on the Imbalanced Training Dataset.</ns0:figDesc><ns0:table>
Model        | Acc    | F1     | Sens   | Spec   | MCC    | AUC
Validated on the Imbalanced Validation Dataset
SSMFN CNN    | 0.8939 | 0.8502 | 0.9389 | 0.8834 | 0.7230 | 0.8179
SSMFN LSTM   | 0.9167 | 0.8891 | 0.9100 | 0.9186 | 0.7836 | 0.8704
SSMFN Merged | 0.9078 | 0.8774 | 0.8895 | 0.9133 | 0.7598 | 0.8596
Validated on the Balanced Validation Dataset
SSMFN CNN    | 0.7529 | 0.7372 | 0.9948 | 0.6698 | 0.5798 | 0.8179
SSMFN LSTM   | 0.8638 | 0.8624 | 0.9567 | 0.8024 | 0.7560 | 0.8704
SSMFN Merged | 0.8656 | 0.8640 | 0.9672 | 0.8003 | 0.7491 | 0.8596
Tested on the Test Dataset
SSMFN CNN    | 0.7404 | 0.7228 | 0.9845 | 0.6598 | 0.5566 | 0.7404
SSMFN LSTM   | 0.8442 | 0.8418 | 0.9590 | 0.7754 | 0.7110 | 0.8442
SSMFN Merged | 0.8462 | 0.8435 | 0.9688 | 0.7744 | 0.7173 | 0.8462
</ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>The First Experiment, Trained on the Balanced Training Dataset.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>The Second Experiment, Trained on the Imbalanced Training Dataset.</ns0:figDesc><ns0:table>
Model                  | Acc    | F1     | Sens   | Spec   | MCC    | AUC
Validated on the Imbalanced Validation Dataset
DeepRMethylSite CNN    | 0.8948 | 0.8550 | 0.9072 | 0.8916 | 0.7242 | 0.8283
DeepRMethylSite LSTM   | 0.9092 | 0.8782 | 0.9044 | 0.9106 | 0.7634 | 0.8576
DeepRMethylSite Merged | 0.9114 | 0.8808 | 0.9047 | 0.9115 | 0.7693 | 0.8589
SMLP                   | 0.9071 | 0.8670 | 0.9973 | 0.8873 | 0.7635 | 0.8295
SSMFN Merged           | 0.9078 | 0.8774 | 0.8895 | 0.9133 | 0.7598 | 0.8596
Validated on the Balanced Validation Dataset
DeepRMethylSite CNN    | 0.8289 | 0.8249 | 0.9709 | 0.7527 | 0.6899 | 0.8283
DeepRMethylSite LSTM   | 0.8576 | 0.8557 | 0.9644 | 0.7908 | 0.7350 | 0.8576
DeepRMethylSite Merged | 0.8585 | 0.8567 | 0.9645 | 0.7919 | 0.7365 | 0.8589
SMLP                   | 0.7582 | 0.7432 | 1.0000 | 0.6740 | 0.5899 | 0.8295
SSMFN Merged           | 0.8656 | 0.8640 | 0.9672 | 0.8003 | 0.7491 | 0.8596
Tested on the Test Dataset
DeepRMethylSite CNN    | 0.7808 | 0.7727 | 0.9506 | 0.7039 | 0.6063 | 0.7808
DeepRMethylSite LSTM   | 0.8115 | 0.8070 | 0.9500 | 0.7382 | 0.6548 | 0.8115
DeepRMethylSite Merged | 0.8135 | 0.8088 | 0.9553 | 0.7390 | 0.6598 | 0.8135
SMLP                   | 0.7250 | 0.7025 | 1.0000 | 0.6452 | 0.5388 | 0.7250
SSMFN Merged           | 0.8462 | 0.8435 | 0.9688 | 0.7744 | 0.7173 | 0.8462
</ns0:table></ns0:figure>
</ns0:body>
" | "Reviewer 1
All my concerns and questions have been addressed, and I recommend the acceptance of the paper.
Reviewer 2
The authors provide a manuscript presenting their work on methylation site prediction for proteins. This
is a well-written manuscript; the authors use a deep learning method to solve the problem, and the
proposed deep network architecture seems to work well on the prediction, achieving better accuracy than
peer methods. It would be helpful for further research on the functional and conformational
changes of a protein. However, I still have a few suggestions:
1. As we all know, the position-specific scoring matrix (PSSM) is widely used as an effective representation
of protein features for a variety of protein-related problems. The authors should explain why this feature
is not used.
Ans: PSSM scores the similarity within a biological sequence. Although several studies have applied
PSSM to amino acid sequences, this scoring system is mostly used for nucleotide or DNA sequences. The
location of methylation may be determined by how close a sequence is to one particular combination;
however, we believe that it is determined by the similarity of the sequence to more than one particular
combination. In addition, we followed the experimental design used in previous studies, in which PSSM
was not one of the features used to compare the performance of the models.
2. The authors have to create a web server or submit all the source code to GitHub so the results can be
reproduced.
Ans: We added information on accessing the data and code to the manuscript. The data and code are
publicly accessible at https://github.com/bharuno/SSMFN-Methylation-Analysis.
3. The LSTM layer is often followed by a self-attention mechanism; I think this can help to improve the
predictive performance.
Ans: Thank you for your suggestion. Adding a self-attention mechanism may indeed improve the
predictive performance. We will consider using it in our next work.
4. The draft needs proofreading.
Ans: Thank you for your suggestion. We proofread the manuscript again to ensure it is written properly
and correctly.
" | Here is a paper. Please give your review comments after reading it. |